Posts: 1,384
Threads: 90
Joined: Aug 2007
08/13/2007, 09:33 PM
(This post was last modified: 08/13/2007, 09:34 PM by bo198214.)
jaydfox Wrote:The logarithm is "entire" for reals > 0 (not quite "entire", but you know what I mean), but its standard power series diverges outside the range (0, 2]. Analytic extension is used outside that range. So this function might only converge for x<=1.

Don't talk nonsense. Log has a singularity at 0, which is why it is not entire (and why its radius of convergence is 1 if developed at 1).
That's the whole thing about entire functions: they have no singularities, and hence their radius of convergence is infinite.
Analytic continuation is necessary only if there are complex singularities.
exp, sin, cos are entire functions; their series converge for arbitrarily large arguments.
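This contrast is easy to check numerically. Here is a small sketch (in Python rather than the Mathematica used later in the thread) comparing the entire series for exp, which converges even at x = 100, with the log series developed at 1, whose partial sums blow up at x = 3:

```python
import math

def exp_series(x, terms):
    # Partial sum of the entire series exp(x) = sum x^n / n!.
    # Pass x as an int so the huge intermediate powers stay exact.
    return sum(x**n / math.factorial(n) for n in range(terms))

def log_series(x, terms):
    # Taylor series of log developed at 1: sum (-1)^(n+1) (x-1)^n / n.
    # Radius of convergence 1 (distance from 1 to the singularity at 0).
    return sum((-1)**(n + 1) * (x - 1)**n / n for n in range(1, terms))

# exp: the series converges even for a large argument
print(exp_series(100, 400), math.exp(100))

# log at x = 3: (x - 1) = 2 lies outside the radius, partial sums grow
print(abs(log_series(3, 50)), abs(log_series(3, 100)))
```

Inside the radius (e.g. x = 1.5) the same log series converges to ln(1.5) just fine; the failure at x = 3 is purely about the singularity at 0.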
Quote:Besides, you do know what the formula equals at x=5, don't you? Well, for starters, at x=4, it's approximately 5.03481484682034616908489989276E+41. Bear in mind, this function is equal to the second iterated logarithm (base e) of my cheta function, shifted by a constant in the x direction. So it has tetrational growth.
I was talking about the half-iterate of x + x^2; are we talking about the same thing?
Quote:It's like trying to solve sin(x) using the power series, and then saying it appears divergent because you tried to solve the power series for x=100
sin converges for arbitrary x; however, it does not satisfy Walker's condition, because some of the coefficients are negative.
Quote:So no worries. Just make sure it converges for x=3, which should equal 96.0223655650268799109865292599.
Yes, it also does not converge for x = 3, but we don't seem to be talking about the same thing anyway ...
Quote: 0, 1.0, 1.500000000, 1.250000000, 1.500000000, 1.187500000, 1.609375000, 1.046875000, 1.714843750, 1.175781250, 0.8930664062, 3.502685547, 3.853393555, 10.37207032, 8.21649169
But of course it could be that it stabilizes for greater n; however, the computation is quite resource-intensive, so I cannot see it above n = 15.
Posts: 440
Threads: 31
Joined: Aug 2007
bo198214 Wrote:jaydfox Wrote:The logarithm is "entire" for reals > 0 (not quite "entire", but you know what I mean), but its standard power series diverges outside the range (0, 2]. Analytic extension is used outside that range. So this function might only converge for x<=1.

Don't talk nonsense. Log has a singularity at 0, which is why it is not entire (and why its radius of convergence is 1 if developed at 1).
That's the whole thing about entire functions: they have no singularities, and hence their radius of convergence is infinite.
Analytic continuation is necessary only if there are complex singularities.

"Nonsense" is a bit harsh, and anyway, I made it explicit that I was talking about reals greater than 0. ln(5) is very well-defined, but the series defined around x=1 diverges for any x value greater than 2. My point, which should have been very clear, is that a function can be continued by analytic extension, even if its power series fails to converge for inputs that should be nonsingular values. So long as a function has a nonzero radius of convergence for the Taylor series developed at a certain point, and so long as analytic extension is possible and well-defined for all nonsingular points, then we can find the general solution.
And I see now where you specified that your series was for the half-iteration of x + x^2. I was reading too quickly, I guess...
~ Jay Daniel Fox
Posts: 1,384
Threads: 90
Joined: Aug 2007
But now I see my mistake. If the Abel function α is entire, its inverse α^{-1} of course does not have to be entire (for example, exp is entire but ln is not).
So the iterate f^t = α^{-1}(α(x) + t) need not be entire, although the Abel function is entire.
The paper of Erdős and Jabotinsky also mentions a Master's thesis in which it is claimed to be shown that e^x - 1 is not iterable ... but who knows ...
This also applies to the iterates of e^x - 1: they need not be entire (they seem, however, to have a positive radius of convergence).
Posts: 1,384
Threads: 90
Joined: Aug 2007
08/13/2007, 10:05 PM
(This post was last modified: 08/13/2007, 10:10 PM by bo198214.)
jaydfox Wrote:My point, which should have been very clear, is that a function can be continued by analytic extension, even if its power series fails to converge for inputs that should be nonsingular values.
But the claim was about entirety; there is no "a bit entire" or "quite entire". Analytic extension is not necessary there. If a function is entire, the power series converges at each point. If it does not, maybe we can help ourselves with analytic extension, but then it is not entire.
Posts: 440
Threads: 31
Joined: Aug 2007
bo198214 Wrote:If a function is entire, the power series converges at each point. If it does not, maybe we can help ourselves with analytic extension, but then it is not entire.

Hmm, my calculus professors must have glossed over that particular detail. That'd be useful to know. I didn't realize that the only reason a power series would fail to converge for a valid input is because the function is not entire. By the way, entire over the reals or entire over the complex numbers? Does it matter if the only singularities are non-real? (I realize this is a side topic.)
~ Jay Daniel Fox
Posts: 1,384
Threads: 90
Joined: Aug 2007
08/13/2007, 10:22 PM
(This post was last modified: 08/13/2007, 10:23 PM by bo198214.)
The thing is that the radius of convergence is limited exclusively by singularities. If you take any analytic function and develop it at a certain point into a power series, then the distance to the nearest singularity is the radius of convergence. For example, if you develop log at the point 0.3, then its radius of convergence is 0.3, because there is a singularity at 0.
On the real axis, however, you don't see all the singularities lurking in the complex plane. For example, you can have an analytic function that has no singularities on the real axis, yet it is not entire, because regardless of where you develop it, a singularity in the complex plane limits the radius of convergence. (The classic example is 1/(1+x^2), whose poles at i and -i limit every real development to a finite radius.)
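The "distance to the nearest singularity" claim can be read off directly from the coefficients via the root test. A small Python sketch, using the known Taylor coefficients c_n = (-1)^(n+1) / (n a^n) of log developed at a = 0.3 (working in logarithms to avoid floating-point underflow for large n):

```python
from math import exp, log

def radius_estimate(n, a):
    # Root-test estimate |c_n|^(-1/n) for the coefficients of log
    # developed at x = a, where |c_n| = 1 / (n * a^n).
    log_cn = -log(n) - n * log(a)
    return exp(-log_cn / n)

# The estimates should approach a = 0.3, the distance to the
# singularity of log at 0.
for n in (10, 100, 1000):
    print(n, radius_estimate(n, 0.3))
```

The estimates decrease toward 0.3 (analytically they equal 0.3 * n^(1/n)), confirming that the development point's distance to the singularity at 0 is exactly the radius of convergence.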
Posts: 440
Threads: 31
Joined: Aug 2007
08/13/2007, 10:29 PM
(This post was last modified: 08/13/2007, 10:36 PM by jaydfox.)
Okay, so the iteration function for e^z - 1 might have a radius of convergence limited by complex singularities (since it seems obvious to me that it shouldn't have any real singularities). I haven't given much thought yet to complex hyperexponents. I don't see a reason that the work I've done so far can't be extended, now that I have a power series for the iteration function. So long as it has a nonzero radius of convergence, I can test it out with complex iterations and see what happens...
~ Jay Daniel Fox
Posts: 509
Threads: 44
Joined: Aug 2007
I've been looking at some graphs of the root test |a_n|^{1/n} for the coefficients of fractional iterates of the natural decremented exponential, and I'm starting to believe Baker over Walker, i.e., that the series does in fact diverge for non-integer iterates.
Here are the graphs:
In order for these functions to converge, the root test must be bounded, and as you can see, the non-integer root tests seem to be unbounded, supporting Baker.
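For comparison, the same boundedness criterion applied to a series that is known to have zero radius of convergence, sum n! x^n; a quick Python sketch:

```python
from math import factorial

# Root test |n!|^(1/n) for the series sum n! x^n.
# Stirling gives |n!|^(1/n) ~ n/e, which is unbounded,
# so the radius of convergence is 0.
roots = [factorial(n) ** (1.0 / n) for n in range(1, 30)]
print(roots[:5])
print(roots[-1])
```

If the plotted root tests for the non-integer iterates grow without bound in the same way, the fractional-iterate series would likewise have radius of convergence 0.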
Andrew Robbins
Posts: 1,384
Threads: 90
Joined: Aug 2007
08/13/2007, 10:39 PM
(This post was last modified: 08/13/2007, 10:44 PM by bo198214.)
andydude Wrote:I've been looking at some graphs of the root test for the coefficients of fractional iterates of the natural decremented exponential,
How could you compute coefficients for such large indices?
With my algorithm I barely reach index 15 in a reasonable time, let alone 50.
Posts: 509
Threads: 44
Joined: Aug 2007
I am able to compute them so fast because of 4 secrets:
- I have Mathematica.
- I know that (IDM + Interpolation) is faster than (PDM + Powers).
- I use the DoubleBinomial expansion instead of Lagrange interpolation.
- I use a custom IDM method.

With (PDM + Powers), you are effectively computing the interpolation of a triangular matrix, which is quadratically many interpolations when we only need linearly many. This indicated to me that the IDM method was a better way to go. After I realized that, it was only a matter of comparing the timings of Lagrange interpolation vs. DoubleBinomial expansion to see which was faster.
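The binomial-interpolation idea can be sketched outside Mathematica. Assuming the integer-iterate coefficients of e^x - 1 up to x^3 (the small table below is computed by hand; it is what the series code further down produces for size 3), this Python sketch interpolates each coefficient as a polynomial in the iteration count t via Newton forward differences, where the C(t, j) factors are the generalized binomials, and evaluates at t = 1/2:

```python
from fractions import Fraction as F

def binom(t, k):
    # Generalized binomial coefficient C(t, k) for rational t.
    out = F(1)
    for j in range(k):
        out = out * (t - j) / (j + 1)
    return out

# iterates[k][n] = coefficient of x^n in the k-th iterate of e^x - 1
# (hand-computed values, k = 0..3, n = 0..3).
iterates = [
    [F(0), F(1), F(0), F(0)],        # identity
    [F(0), F(1), F(1, 2), F(1, 6)],  # e^x - 1
    [F(0), F(1), F(1), F(5, 6)],     # second iterate
    [F(0), F(1), F(3, 2), F(2)],     # third iterate
]

def newton_eval(values, t):
    # Newton forward-difference form: sum_j (Delta^j v_0) * C(t, j).
    diffs = list(values)
    result = F(0)
    for j in range(len(values)):
        result += diffs[0] * binom(t, j)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return result

# Interpolate each coefficient in t and evaluate at t = 1/2.
half = [newton_eval([iterates[k][n] for k in range(4)], F(1, 2))
        for n in range(4)]
print(half)  # [0, 1, 1/4, 1/48]
```

The result x + x^2/4 + x^3/48 + ... matches the known half-iterate series of e^x - 1, and the forward-difference form is exactly the kind of double-binomial sum that appears in the `coeff` formula below.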
First, I will start with the straightforward Mathematica code:
Code: size = 10;
coeff = Join[{0}, Map[InterpolatingPolynomial[#, t] &,
    Transpose@Table[Series[Nest[(Exp[#] - 1) &, x, k],
        {x, 0, size}][[3]], {k, 1, size}]]];
TableForm@Expand@coeff
and now for the highly optimized Mathematica code:
Code: (* This is not built-in for some reason... *)
Unprotect[Binomial];
Binomial[t_, k_Integer?Positive] := Product[t - j, {j, 0, k - 1}]/k!;
Protect[Binomial];

(* This assumes x0 == 0 *)
ParabolicIDM[expr_, {x_, 0, size_}] := Module[{ret, ser},
  ret = {};
  ser = Series[x, {x, 0, size}];
  Do[
    ser = Series[expr /. x -> Normal[ser], {x, 0, size}];
    AppendTo[ret, Join[{0}, ser[[3]]]],
    {size + 1}];
  Join[{UnitVector[size + 1, 2]}, ret]
];

(* Then use the custom IDM as follows *)
size = 50;
matrix = ParabolicIDM[Exp[x] - 1, {x, 0, size}];
coeff = Table[Sum[(-1)^(n - k - 1) matrix[[k + 1, n + 1]]
    Binomial[t, k] Binomial[t - k - 1, n - k - 1],
    {k, 0, n - 1}], {n, 0, size}];

(* Then do what you like to 'coeff', for example: *)
TableForm@Expand@coeff[[1 ;; 20]]
Using this code, I find that I can get 100 coefficients in under 25 sec, and 150 coefficients in under 2 mins. As you can see, the custom IDM method truncates the series at every step, so you never compose a 50-degree polynomial with a 50-degree polynomial, which would otherwise give a 2500-degree polynomial after only the second iterate. This "chopping" keeps the iterations within the size you are interested in, so you aren't calculating things that you don't care about.
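The chopping idea itself is language-independent. Here is a minimal Python sketch (names are illustrative, not taken from the Mathematica above) of truncated series composition: every intermediate series is cut at degree N, so repeated iteration of e^x - 1 never produces terms beyond the degree you care about:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation degree: every intermediate series is chopped at x^N

# Taylor coefficients of e^x - 1 about 0 (list index = degree).
f = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

def compose(outer, inner, N):
    """Coefficients of outer(inner(x)) truncated at degree N.
    Assumes inner[0] == 0, so inner^n contributes nothing below x^n."""
    out = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N  # inner^0 = 1
    for n in range(N + 1):
        for i in range(N + 1):
            out[i] += outer[n] * power[i]
        # power <- power * inner, chopped at degree N
        nxt = [Fraction(0)] * (N + 1)
        for i in range(N + 1):
            if power[i]:
                for j in range(1, N + 1 - i):
                    nxt[i + j] += power[i] * inner[j]
        power = nxt
    return out

# Successive integer iterates of e^x - 1, each step chopped at degree N.
g = f
for _ in range(2):
    g = compose(f, g, N)
print(g[:4])  # third iterate: x + 3/2 x^2 + 2 x^3 + ...
```

Without the chopping, the k-th iterate of a degree-N polynomial would have degree N^k; with it, every step stays at N + 1 coefficients, which is why the cost grows so slowly with the iterate count.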
Andrew Robbins
