Iterability of exp(x)-1
#11
jaydfox Wrote:The logarithm is "entire" for reals > 0 (not quite "entire", but you know what I mean), but its standard power series diverges outside the range (0, 2]. Analytic extension is used outside that range. So this function might only converge for x <= 1.
Don't talk nonsense. The logarithm has a singularity at 0, which is why it is not entire (and why its radius of convergence is 1 when developed at 1).
That is the whole point of entire functions: they have no singularities, and hence their radius of convergence is infinite.
Analytic continuation is necessary only if there are complex singularities.

exp, sin, and cos are entire functions; their series converge for arbitrarily large arguments.
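Concretely: developed at 1, \( \log x = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}(x-1)^n \), which converges exactly for \( 0 < x \le 2 \) (the singularity at 0 lies at distance 1), while for example \( e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \) converges for every \( x \).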

Quote:Besides, you do know what the formula equals at x=5, don't you? Well, for starters, at x=4, it's approximately 5.03481484682034616908489989276E+41. Bear in mind, this function is equal to the second iterated logarithm (base e) of my cheta function, shifted by a constant in the x direction. So it has tetrational growth.
I was talking about \( x+x^2 \); are we talking about the same thing?

Quote:It's like trying to solve sin(x) using the power series, and then saying it appears divergent because you tried to solve the power series for x=100

The sine series converges for arbitrary x; however, it does not satisfy Walker's condition, because some of the coefficients are negative.
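For reference: \( \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\, x^{2n+1} \), whose coefficients alternate in sign, which is exactly the failure of the positivity requirement.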

Quote:So no worries. Just make sure it converges for x=3, which should equal 96.0223655650268799109865292599.

Yes, it also does not converge for \( x=1.0 \), but apparently we are not talking about the same thing anyway ...
Quote: 0, 1.0, 1.500000000, 1.250000000, 1.500000000, 1.187500000, 1.609375000, 1.046875000, 1.714843750, 1.175781250, 0.8930664062, 3.502685547, -3.853393555, 10.37207032, -8.21649169

But of course it could be that it stabilizes for greater n; however, the computation is quite resource-intensive, so I cannot see beyond index 15.
#12
bo198214 Wrote:
jaydfox Wrote:The logarithm is "entire" for reals > 0 (not quite "entire", but you know what I mean), but its standard power series diverges outside the range (0, 2]. Analytic extension is used outside that range. So this function might only converge for x <= 1.
Don't talk nonsense. The logarithm has a singularity at 0, which is why it is not entire (and why its radius of convergence is 1 when developed at 1).
That is the whole point of entire functions: they have no singularities, and hence their radius of convergence is infinite.
Analytic continuation is necessary only if there are complex singularities.
"nonsense" is a bit harsh, and anyway, I made explicit I was talking about reals greater than 0. ln(5) is very well-defined, but the series defined around x=1 diverges for any x value greater than 2. My point, which should have been very clear, is that a function can be continued by analytic extension, even if its power series fails to converge for inputs that should be non-singular values. So long as a function has a non-zero radius of convergence for the Taylor series defined at a certain point, and so long as analytic extension is possible and well-defined for all non-singular points, then we can find the general solution.

And I see now where you specified that your series was for the half-iteration of x+x^2. I was reading too quickly, I guess...
~ Jay Daniel Fox
#13
But now I see my mistake. If \( f \) is entire, \( f^{-1} \) of course need not be entire (for example, exp is entire but ln is not).
So the iterate need not be entire, even though the Abel function is entire.
The paper of Erdős and Jabotinsky also mentions a Master's thesis in which, it is claimed, it is shown that \( z+z^2 \) is not iterable... but who knows ...

This also applies to the iterates of \( e^x-1 \): they need not be entire (they seem, however, to have a positive radius of convergence).
#14
jaydfox Wrote:My point, which should have been very clear, is that a function can be continued by analytic extension even if its power series fails to converge at inputs that are non-singular.

But the claim was about entirety; there is no "a bit entire" or "quite entire". Analytic extension is not needed there. If a function is entire, the power series converges at every point. If it does not, maybe we can help ourselves with analytic extension, but then it is not entire.
#15
bo198214 Wrote:If a function is entire, the power series converges at every point. If it does not, maybe we can help ourselves with analytic extension, but then it is not entire.
Hmm, my calculus professors must have glossed over that particular detail. That'd be useful to know. I didn't realize that the only reason a power series would fail to converge at a valid input is that the function is not entire. By the way, entire over the reals or entire over the complex numbers? Does it matter if the only singularities are non-real? (I realize this is a side topic.)
~ Jay Daniel Fox
#16
The point is that the radius of convergence is limited exclusively by singularities. If you take any analytic function and develop it into a power series at a certain point, then the distance to the nearest singularity is the radius of convergence. For example, if you develop log at the point 0.3, its radius of convergence is 0.3, because there is a singularity at 0.
On the real axis, however, you don't see all the singularities lurking in the complex plane. You can have an analytic function with no singularities on the real axis that is nevertheless not entire, because no matter where you develop it, some complex singularity limits the radius of convergence.
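The standard illustration is \( \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n} \): analytic on the whole real axis, yet with radius of convergence only 1 when developed at 0, because of the poles at \( \pm i \).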
#17
Okay, so the iteration function for e^z-1 might have a radius of convergence limited by complex singularities (since it seems obvious to me that it shouldn't have any real singularities). I haven't given much thought yet to complex hyperexponents. I don't see a reason that the work I've done so far can't be extended, now that I have a power series for the iteration function. So long as it has non-zero radius of convergence, I can test it out with complex iterations and see what happens...
~ Jay Daniel Fox
#18
I've been looking at some graphs of the root test \( \limsup_{k \rightarrow \infty} |c_k|^{1/k} \) for the coefficients of fractional iterates of the natural decremented exponential, and I'm starting to believe Baker over Walker, i.e., that the series does in fact diverge for non-integer iterates.

Here are the graphs (using \( DE(x) = e^x - 1 \)):
In order for these series to converge anywhere other than 0, the root test must be bounded, and as you can see the non-integer root tests appear to be unbounded, supporting Baker.
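A minimal sketch of how such a root-test plot can be produced, assuming a list coeff of Taylor coefficients \( c_0, \dots, c_n \) as polynomials in the height t (e.g. from the code in post #20 below):

Code:
rootTest[c_List] := Table[Abs[c[[k + 1]]]^(1/k), {k, 1, Length[c] - 1}];
ListLinePlot[rootTest[N[coeff /. t -> 1/2]]] (* bounded values <=> positive radius *)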

Andrew Robbins
#19
andydude Wrote:I've been looking at some graphs of the root test \( \limsup_{k \rightarrow \infty} |c_k|^{1/k} \) for the coefficients of fractional iterates of the natural decremented exponential,

How could you compute the coefficients for such large indices?
With my algorithm I barely reach index 15 in a reasonable time, let alone 50!
#20
I am able to compute them so fast because of 4 secrets:
  1. I have Mathematica.
  2. I know that (IDM+Interpolation) is faster than (PDM+Powers).
  3. I use the Double-Binomial instead of Lagrange Interpolation.
  4. I use a custom IDM method.
With (PDM+Powers), you are effectively computing the interpolation of a triangular matrix, which amounts to \( \frac{n(n+1)}{2} \) interpolations when we only need \( n \). This indicated to me that the IDM method was the better way to go. After I realized that, it was only a matter of comparing the timings of Lagrange interpolation vs. double-binomial expansion to see which was faster.
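Written out, with matrix[[k + 1, n + 1]] in the optimized code below denoting the coefficient of \( x^n \) in the k-th integer iterate \( f^{\circ k} \), the double-binomial interpolation computes (my reading of the code):

\( c_n(t) = \sum_{k=0}^{n-1} (-1)^{n-k-1} \binom{t}{k} \binom{t-k-1}{n-k-1}\, [x^n]\, f^{\circ k}(x) \)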

First, I will start with the straightforward Mathematica code:

Code:
size = 10;

(* For each k = 1..size, take the Taylor coefficients of the k-th iterate of
   Exp[x] - 1 at 0 (Part [[3]] extracts the coefficient list of the SeriesData
   object); Transpose groups them by power of x, and InterpolatingPolynomial
   interpolates each coefficient as a polynomial in the iteration height t. *)
coeff = Join[{0}, Map[InterpolatingPolynomial[#, t] &,
  Transpose@Table[Series[Nest[(Exp[#] - 1) &, x, k],
  {x, 0, size}][[3]], {k, 1, size}]]];

TableForm@Expand@coeff
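For example, substituting \( t = 1/2 \) into the interpolating polynomials gives the coefficients of the half-iterate (a usage sketch, assuming the code above has been evaluated):

Code:
halfCoeff = Expand[coeff /. t -> 1/2] (* half-iterate coefficients of Exp[x] - 1 *)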

and now for the highly optimized Mathematica code:

Code:
(* This is not built-in for some reason... *)
Unprotect[Binomial];
Binomial[t_, k_Integer?Positive] := Product[t - j, {j, 0, k - 1}]/k!;
Protect[Binomial];

(* This assumes x0 == 0. Row k+1 of the returned matrix holds the Taylor
   coefficients (constant term first) of the k-th integer iterate of expr,
   with the series re-truncated to degree 'size' after every composition. *)
ParabolicIDM[expr_, {x_, 0, size_}] := Module[{ret, ser},
  ret = {};
  ser = Series[x, {x, 0, size}]; (* start from the identity *)
  Do[
    (* compose once more, then truncate so the degree never blows up *)
    ser = Series[expr /. x -> Normal[ser], {x, 0, size}];
    AppendTo[ret, Join[{0}, ser[[3]]]],
  {size + 1}];
  Join[{UnitVector[size + 1, 2]}, ret] (* prepend the 0th iterate (identity) *)
];

(* Then use the custom IDM as follows *)
size = 50;
matrix = ParabolicIDM[Exp[x] - 1, {x, 0, size}];
(* Double-binomial interpolation in the iteration height t;
   matrix[[k + 1, n + 1]] is the coefficient of x^n in the k-th iterate. *)
coeff = Table[Sum[(-1)^(n - k - 1) matrix[[k + 1, n + 1]]
  Binomial[t, k] Binomial[t - k - 1, n - k - 1],
  {k, 0, n - 1}], {n, 0, size}];

(* Then do what you like with 'coeff', for example: *)
TableForm@Expand@coeff[[1 ;; 20]]

Using this code, I find that I can get 100 coefficients in under 25 sec, and 150 coefficients in under 2 mins. As you can see, the custom IDM method truncates the series at every step, so you never store the result of composing a 50-degree polynomial with a 50-degree polynomial, which would already be a 2500-degree polynomial after the second iterate. This "chopping" keeps the iterations within the degree you are interested in, so you aren't calculating things you don't care about.
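As a quick sanity check of that degree count (a sketch using only built-ins):

Code:
p = Normal[Series[Exp[x] - 1, {x, 0, 50}]]; (* degree-50 truncation *)
Exponent[p /. x -> p, x] (* 2500: degree after one untruncated self-composition *)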

Andrew Robbins



