Posts: 440
Threads: 31
Joined: Aug 2007
Since I don't have Mathematica, could you by chance post graphs of the root test to 150 or 200 terms for half-iterates (1/2 for starters, etc., if not too much trouble)? I'm curious to see if it continues what appears to be a linear climb. I've downloaded Paritty and am hoping to learn how to use it over the next few days, but it'll be a while before I'm up and running.
By the way, the finite radius of convergence for negative half-iterates is to be *expected*, because those involve partial logarithms, and those will have a finite radius of convergence (as can be seen from the iterate −1, the logarithm itself).
Also, looking at the graph for the 1/2 iterate, it would seem that the first 50 terms should behave convergently for z values less than 0.6 or so. Considering that I plan to use z values less than 0.01, this is way more than sufficient for several hundred digits of accuracy, especially if I go out to 100 terms or so.
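For readers without Mathematica, the coefficients and their root test can be reproduced with a short script. This is my own minimal sketch in Python (not code from this thread): it solves g(g(x)) = e^x − 1 term by term with exact rationals, then prints |g_n|^(1/n) so the climb can be inspected.

```python
from fractions import Fraction
from math import factorial

N = 24  # number of series terms to solve for

# f(x) = exp(x) - 1 has Taylor coefficients f_n = 1/n!
f = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

def compose(a, b, n):
    """Coefficients of x^1..x^n in a(b(x)); assumes a[0] = b[0] = 0."""
    out = [Fraction(0)] * (n + 1)
    pw = list(b[:n + 1])                 # b(x)^k, starting at k = 1
    for k in range(1, n + 1):
        for m in range(k, n + 1):
            out[m] += a[k] * pw[m]
        nxt = [Fraction(0)] * (n + 1)    # pw * b  ->  b^(k+1)
        for i in range(1, n + 1):
            if pw[i]:
                for j in range(1, n + 1 - i):
                    nxt[i + j] += pw[i] * b[j]
        pw = nxt
    return out

# Solve g(g(x)) = f(x) term by term: the coefficient of x^n in g(g(x))
# equals 2*g_n plus terms in lower coefficients, so each g_n solves linearly.
g = [Fraction(0)] * (N + 1)
g[1] = Fraction(1)
for n in range(2, N + 1):
    g[n] = (f[n] - compose(g, g, n)[n]) / 2

print(g[2], g[3], g[4], g[5])            # 1/4 1/48 0 1/3840

# Root test |g_n|^(1/n): a steady upward drift suggests radius of convergence 0.
for n in range(2, N + 1):
    if g[n]:
        print(n, abs(float(g[n])) ** (1.0 / n))
```

The incremental solve works because g_n first appears in the x^n coefficient of g(g(x)), and appears there exactly twice (once from each copy of g).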
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
andydude Wrote:I am able to compute them so fast because of 4 secrets:
 I have Mathematica.
What a pity that I neither have Mathematica nor speak Mathematica, and so cannot use your excellent code but instead have to program it myself in Maple ...
Posts: 509
Threads: 44
Joined: Aug 2007
I can port it to Maple and Sage, but it will take about a week.
I recommend Sage for poor mathematicians, because it is free and combines dozens of open-source CASs.
For more roottests for the coefficients, see:
And for more of the coefficients themselves, see:
Andrew Robbins
Posts: 440
Threads: 31
Joined: Aug 2007
Good, it looks linear. That should make it easy to write a function to compute a radius of initial convergence, as well as exactly how many terms of the power series are required to reach a desired level of precision. I plan to stay well within half the radius of initial convergence (probably a quarter, to be safe), so this is useful info.
By the way, have you looked at quarter iterates to make sure there aren't any surprises lurking? I don't think it'll matter (might cause the oscillations to shift, but shouldn't affect the overall linearity too badly), but it'd be good to know. Once I'm up and running with a good math library, I can run these tests myself, of course...
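As a rough illustration of the "how many terms" question: if the root test really climbs linearly, |a_n|^(1/n) ≈ c·n, then the terms |a_n z^n| ≈ (c·n·z)^n shrink until about n = 1/(e·c·z) and grow afterwards, so that index is the best truncation point and e^(−n) bounds the achievable error. A hedged sketch; the slope c = 0.1 below is a made-up placeholder, not a value measured from the actual plots.

```python
import math

def best_truncation(c, z):
    """For an asymptotic series with |a_n| ~ (c*n)^n, the term sizes
    t_n = (c*n*z)^n are minimized near n* = 1/(e*c*z); truncating there
    leaves an error on the order of exp(-n*)."""
    n_star = 1.0 / (math.e * c * z)
    digits = n_star / math.log(10)       # exp(-n*) = 10^(-n*/ln 10)
    return n_star, digits

# Hypothetical slope c = 0.1; z = 0.01 as planned in the post above.
n_star, digits = best_truncation(0.1, 0.01)
print(round(n_star), round(digits))      # roughly 368 terms, ~160 digits
```

Under these assumptions, several hundred digits at z < 0.01 is indeed plausible, which is consistent with the estimate above.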
By the way, how do you rate PARI/gp versus Sage? Should I just go with Sage? I'm already trying to learn PARI/gp (I'm trying out Gottfried's Paritty interface), but it's slow going.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
08/15/2007, 09:13 PM
(This post was last modified: 08/15/2007, 09:14 PM by bo198214.)
Anyway, it now looks as if there is no Baker vs. Walker contradiction after all.
As Peter Walker told me, Baker introduced Walker to the topic and also read Walker's paper. So from that point of view it is quite improbable that there is a contradiction.
But from the purely mathematical point of view there is perhaps also no contradiction: Walker showed that for fixed x the function t ↦ f°t(x) is entire. Baker, however, showed that for most fixed t the convergence radius of the function x ↦ f°t(x) (where f(x) = e^x − 1) is 0 at x = 0. I think the two results can go together.
jaydfox Wrote:By the way, how do you rate PARI/gp versus Sage? Should I just go with Sage? I'm already trying to learn PARI/gp (I'm trying out Gottfried's Paritty interface), but it's slow going.
Can you please open a new thread in the computing subforum? It is also a topic I am interested in.
Posts: 1,389
Threads: 90
Joined: Aug 2007
I just got an answer from Peter Walker:
Peter L. Walker Wrote:The difference between Noel's paper and mine is as follows. His is concerned with a particular iterate of e^x − 1, namely
(e^x − 1)^[1/2], while mine is concerned with an 'exponential of iteration' (to use Szekeres' notation, for which see the paper of mine in the Proc. AMS 1990 which I mentioned before, and the references there), i.e. a solution F of the functional equation, in this case
F(x + 1) = h(F(x)) = e^(F(x)) − 1,
so the two things are not meant to be the same.
Of course the solution F can be used to construct iterates via the definition
h_a (x) = F(G(x) + a)
where G is F inverse, provided that a suitable inverse exists.
In this case the Proc AMS paper shows that F is entire (with, however, some very bizarre properties). It does map the real axis strictly monotonically onto (0, infinity), so that there is a well-defined real-analytic inverse G on a neighbourhood of (0, infinity). The difficulty is that G has a very nasty singularity at the origin, so that the above definition for h_(1/2) makes no sense at 0, and it is not surprising that the solution obtained from equating coefficients in the power series has radius of convergence zero.
Another way of looking at this is to say that e^x − 1 can be iterated for non-integer orders, but only for x > 0.
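Walker's recipe h_a(x) = F(G(x) + a) needs the full function F, but for x > 0 the half-iterate can also be approximated numerically by a standard conjugation trick consistent with his last remark: pull x toward the fixed point 0 with the inverse f^(−1)(y) = log(1 + y), apply the first few terms of the (divergent, but asymptotic) half-iterate series there, and push back with f. A sketch of that trick, not code from this thread:

```python
import math

def half_iterate(x, n=100):
    """Numeric half-iterate of f(x) = exp(x) - 1 for x > 0, by conjugating
    the truncated half-iterate series toward the parabolic fixed point 0."""
    u = x
    for _ in range(n):
        u = math.log1p(u)                   # f^(-1), contracts toward 0
    s = u + u**2/4 + u**3/48 + u**5/3840    # first terms of the formal series
    for _ in range(n):
        s = math.expm1(s)                   # f, push back up to x-scale
    return s

h1 = half_iterate(1.0)
print(h1)                                   # ≈ 1.2710274
print(half_iterate(h1), math.e - 1)         # functional-equation check
```

Near 0 the divergent series is an excellent asymptotic approximation, so the truncation error stays tiny even after pushing back; the value at x = 1 matches the 1.2710274 reported later in this thread.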
Posts: 509
Threads: 44
Joined: Aug 2007
I would love to talk to Walker about the superlogarithm, because as far as I know he was the first to consider an Abel function approximation like mine. Also, the coefficients he gives for the natural (b=e) superlogarithm are identical (as far as approximations go) to my coefficients, even though we use vastly different methods.
Peter Walker uses an iterated function in such a way that the infinitely repeated iteration of that function represents an exact solution of the Abel equation, and at the end of Walker's paper "Infinitely Differentiable Generalized Logarithmic and Exponential Functions" he mentions that he has also tried a "matrix method" to obtain similar results, which I suspect is exactly the method that I found independently.
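For readers who want to see the flavor of such a "matrix method": as I understand Andrew's published approach, substituting a truncated series slog(x) = −1 + Σ c_k x^k into the Abel equation slog(e^x) = slog(x) + 1, expanding e^(kx) = Σ_j (kx)^j / j!, and matching powers of x yields a finite linear system. A rough sketch under those assumptions (the truncation order N = 12 is chosen arbitrarily):

```python
import math
import numpy as np

N = 12  # truncation order

# Unknowns c_1..c_N in slog(x) = -1 + sum c_k x^k (base e).
# Matching powers of x in slog(e^x) = slog(x) + 1 gives:
#   j = 0:  sum_k c_k          = 1
#   j >= 1: sum_k c_k k^j / j! = c_j
A = np.zeros((N, N))
b = np.zeros(N)
A[0, :] = 1.0
b[0] = 1.0
for j in range(1, N):
    for k in range(1, N + 1):
        A[j, k - 1] = k**j / math.factorial(j)
    A[j, j - 1] -= 1.0          # move c_j to the left-hand side

c = np.linalg.solve(A, b)
print(c[:4])                    # leading coefficients c_1..c_4
```

Increasing N and watching the leading coefficients settle is the natural convergence check for this truncated system.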
Andrew Robbins
Posts: 1,389
Threads: 90
Joined: Aug 2007
andydude Wrote:I would love to talk to Walker about the superlogarithm, because as far as I know he was the first to consider an Abel function approximation as I have.
You can do it; his email is on his university page:
http://www.aus.edu/cas/maths/staff/peter_walker.php
However, my impression is that now that he is retired he is no longer as engaged with the topic as he perhaps was earlier.
Quote:Also, the coefficients he gives for the natural (b=e) superlogarithm are identical (as far as approximations go) to my coefficients even though we use vastly different methods.
Peter Walker uses an iterated function in such a way that the infinitely repeated iteration of that function represents an exact solution of the Abel equation, and at the end of Walker's paper "Infinitely Differentiable Generalized Logarithmic and Exponential Functions" he mentions that he has also tried a "matrix method" to obtain similar results, which I suspect is exactly the method that I found independently.
He writes in this paper that the coefficients from his first method differ from the coefficients he derived with the matrix method (which he also briefly describes in that paper).
Posts: 761
Threads: 118
Joined: Aug 2007
Occasionally I reread older threads; sometimes one finds some gems in them that one was not aware of earlier. It seems to me it might be useful to give this thread a conclusive response, to help other occasional readers sort out and weigh the scattered examples, arguments and conclusions.
In the table below I document the first 64 terms of the power series for f°(1/2)(x), where f(x) = exp(x) − 1. The power series for f°(1/2)(x) was distilled using the matrix logarithm of the matrix S2 (rescaled Stirling numbers of the 2nd kind), which provides the power series for f(x) = exp(x) − 1.
The first column contains the terms in rational representation and shows the same coefficients which Henryk provided as an example. However, Henryk's example ended at some small index, while the continuation shows that the terms of the power series actually diverge from a certain index on. The rate of growth of the terms is then roughly hypergeometric (guessed by inspection).
The second column shows the same terms in real arithmetic.
The third column shows the partial sums up to <rownumber> terms; this also implies that simply x = 1 is assumed here. After a good initial approximation this sequence diverges too, so indeed f°(1/2)(1) cannot simply be summed to a limit value.
The last column shows partial sums using an extension of Euler summation. Since common Euler summation of any order can only sum series with geometric growth, I used a variant which (hopefully) compensates for this hypergeometric growth (method not documented/discussed yet) and which indeed seems able to transform the series into a convergent one, providing the result of
f°(1/2)(1)~ 1.2710274
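The matrix-logarithm route can be sketched independently of Pari. The following is a small Python illustration with exact rationals (my own sketch, not Gottfried's actual code): build the truncated Carleman matrix of f(x) = exp(x) − 1 (triangular with unit diagonal, essentially rescaled Stirling numbers of the second kind), take its exact matrix logarithm via the nilpotent series, halve it, and exponentiate; the first row of the result holds the half-iterate coefficients.

```python
from fractions import Fraction
from math import factorial

N = 8  # work with powers x^1 .. x^N

# f(x) = exp(x) - 1; B[i][j] = [x^(j+1)] f(x)^(i+1)  (truncated Carleman matrix)
f = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

def mul_series(a, b):
    out = [Fraction(0)] * (N + 1)
    for i in range(1, N + 1):
        for j in range(1, N + 1 - i):
            out[i + j] += a[i] * b[j]
    return out

B, p = [], f[:]
for _ in range(N):
    B.append(p[1:])                      # coefficients of x^1..x^N in f^i
    p = mul_series(p, f)

def matmul(A, C):
    return [[sum(A[i][k] * C[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

I = [[Fraction(int(i == j)) for j in range(N)] for i in range(N)]
Nm = [[B[i][j] - I[i][j] for j in range(N)] for i in range(N)]  # nilpotent part

# log B = sum_k (-1)^(k+1) Nm^k / k is a finite sum since Nm is nilpotent;
# the half-iterate matrix is then exp(log(B)/2), also a finite sum.
L = [[Fraction(0)] * N for _ in range(N)]
P = I
for k in range(1, N + 1):
    P = matmul(P, Nm)
    for i in range(N):
        for j in range(N):
            L[i][j] += Fraction((-1) ** (k + 1), k) * P[i][j]

halfL = [[x / 2 for x in row] for row in L]
H, P = [row[:] for row in I], I
for k in range(1, N + 1):
    P = matmul(P, halfL)
    for i in range(N):
        for j in range(N):
            H[i][j] += P[i][j] / factorial(k)

print([str(x) for x in H[0][:5]])        # ['1', '1/4', '1/48', '0', '1/3840']
```

Because the truncated matrix is triangular, the first N coefficients come out exact; the divergence only shows when many more terms are produced, as in the table below.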
Gottfried
Code:
coeff (rational)   coeff (real)       partial sum (x=1)  Euler-variant sum
0 0 0 0
1 1 1 0.45454545
1/4 0.25000000 1.2500000 0.74050542
1/48 0.020833333 1.2708333 0.92356508
0 0.E810 1.2708333 1.0421115
1/3840 0.00026041667 1.2710938 1.1195105
7/92160 0.000075954861 1.2710178 1.1703562
1/645120 0.0000015500992 1.2710193 1.2039209
53/3440640 0.000015404111 1.2710347 1.2261669
0.0000090745391 0.0000090745391 1.2710257 1.2409615
0.000000082819971 0.000000082819971 1.2710256 1.2508300
0.0000036074073 0.0000036074073 1.2710292 1.2574303
0.0000016951497 0.0000016951497 1.2710275 1.2618553
0.0000013308992 0.0000013308992 1.2710262 1.2648287
0.0000017752144 0.0000017752144 1.2710279 1.2668308
0.00000037035398 0.00000037035398 1.2710283 1.2681815
0.0000019147568 0.0000019147568 1.2710264 1.2690944
0.00000034467343 0.00000034467343 1.2710267 1.2697124
0.0000024191341 0.0000024191341 1.2710292 1.2701316
0.0000014770587 0.0000014770587 1.2710277 1.2704162
0.0000036046260 0.0000036046260 1.2710241 1.2706099
0.0000042603060 0.0000042603060 1.2710283 1.2707418
0.0000061940178 0.0000061940178 1.2710345 1.2708318
0.000012625293 0.000012625293 1.2710219 1.2708933
0.000011736089 0.000011736089 1.2710102 1.2709353
0.000041395229 0.000041395229 1.2710516 1.2709641
0.000022203030 0.000022203030 1.2710738 1.2709839
0.00015310857 0.00015310857 1.2709207 1.2709974
0.000027832787 0.000027832787 1.2708928 1.2710067
0.00064101866 0.00064101866 1.2715339 1.2710131
0.00011130752 0.00011130752 1.2714225 1.2710175
0.0030302667 0.0030302667 1.2683923 1.2710206
0.0016766696 0.0016766696 1.2700690 1.2710227
0.016095115 0.016095115 1.2861641 1.2710241
0.015708416 0.015708416 1.2704556 1.2710251
0.095480465 0.095480465 1.1749752 1.2710258
0.13948961 0.13948961 1.3144648 1.2710263
0.62852065 0.62852065 1.9429854 1.2710267
1.2769417 1.2769417 0.66604378 1.2710269
4.5595640 4.5595640 3.8935202 1.2710270
12.402772 12.402772 8.5092522 1.2710272
36.185455 36.185455 44.694707 1.2710272
129.30559 129.30559 84.610888 1.2710273
311.60844 311.60844 396.21933 1.2710273
1453.7164 1453.7164 1057.4971 1.2710274
2883.7550 2883.7550 3941.2521 1.2710274
17648.606 17648.606 13707.354 1.2710274
28323.267 28323.267 42030.620 1.2710274
231312.84 231312.84 189282.22 1.2710274
289837.71 289837.71 479119.93 1.2710274
3269336.0 3269336.0 2790216.0 1.2710274
2992168.6 2992168.6 5782384.6 1.2710274
49750634. 49750634. 43968250. 1.2710274
28980063. 28980063. 72948313. 1.2710274
8.1361647E8 8.1361647E8 7.4066816E8 1.2710274
2.0196159E8 2.0196159E8 9.4262976E8 1.2710274
1.4271686E10 1.4271686E10 1.3329057E10 1.2710274
1.3254909E9 1.3254909E9 1.2003566E10 1.2710274
2.6797851E11 2.6797851E11 2.5597494E11 1.2710274
1.1931979E11 1.1931979E11 1.3665515E11 1.2710274
5.3756367E12 5.3756367E12 5.2389815E12 1.2710274
4.3701305E12 4.3701305E12 8.6885108E11 1.2710274
1.1497780E14 1.1497780E14 1.1410895E14 1.2710274
1.3795199E14 1.3795199E14 2.3843037E13 1.2710274
Gottfried Helms, Kassel
Posts: 1,389
Threads: 90
Joined: Aug 2007
That's something!
Can you explain your modified Euler summation?
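For context while waiting for the answer: ordinary Euler summation resums an alternating series whose terms grow at most geometrically. A minimal sketch of the classical Euler transform (this is not Gottfried's hypergeometric variant, which is undocumented in this thread), demonstrated on 1 − 2 + 4 − 8 + …, whose Euler sum is 1/3:

```python
from fractions import Fraction

def euler_transform(a, K):
    """Sum_{n>=0} (-1)^n a_n via Euler's transform:
    sum_{k>=0} (-1)^k (Delta^k a)(0) / 2^(k+1), Delta the forward difference."""
    row = list(a)                        # enough leading terms of a_n
    total = Fraction(0)
    for k in range(K):
        total += Fraction((-1) ** k, 2 ** (k + 1)) * row[0]
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

a = [Fraction(2) ** n for n in range(12)]
print(euler_transform(a, 10))            # 341/1024, converging to 1/3
```

For a_n = 2^n every forward difference is again 2^n, so the transformed series is geometric with ratio −1/2 and converges; series with hypergeometric term growth defeat this transform, which is exactly why a stronger variant is needed above.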
