Half-iterate exp(z)-1: hypothese on growth of coefficients
#1
Update 22.7.22: I rearranged the post in a more "pedagogical" order and moved the original pictures to the end. Hopefully this makes the text clearer.

Hi,   

this is a bit older material, but it has not yet been discussed anywhere - moreover, I don't think today's MSE or MO would welcome such a question...

- - - - - - -

I. N. Baker proved in the 1950s that the power series of any fractional iterate of \( f(z) = \exp(z)-1 \) has convergence radius \( \rho = 0\).
From this, for instance, Erdős/Jabotinsky took the stance: "no real fractional iterate of \( f(z) \) exists". (Citation sources are on my homepage, subpage "tetdocs".)

Well, older members here might remember that I used divergent-summation procedures to actually compute approximations, for instance for the power series of the half-iterate \( g(g(z))=f(z) \quad \) ("regular iteration") - not to mention the option of treating the power series as an asymptotic one and taking the partial sum up to the best approximation.
The choice of those divergent-summation procedures was always based on inspection of partial sums of 32, 64, 128 or 256 coefficients, and thus hand-woven (well, mostly very good heuristics).

To have any chance of making this more precise, I first tried to find some estimate of the growth rate of the coefficients in \( g(z) \).

First see the coefficients \( g_k \) for \( k=0..45 \)
   
The initial coefficients are all small in absolute value, so a short inspection of the problem may be misleading about the overall tendency.
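For readers who want to reproduce those initial coefficients: since \( f'(0)=1 \), the formal power series \( g(z)=z+\sum_{k\ge 2} g_k z^k \) with \( g(g(z))=f(z) \) is determined order by order, because \( g_n \) enters the \( n \)-th coefficient of \( g(g(z)) \) linearly with factor 2. Here is a minimal Python sketch with exact rational arithmetic (my own illustration, not my original Pari/GP matrix method; the function names are made up for this example):

```python
from fractions import Fraction
from math import factorial

def compose(outer, inner, N):
    """Coefficients 0..N of outer(inner(z)); both series have zero constant term."""
    result = [Fraction(0)] * (N + 1)
    power = [Fraction(0)] * (N + 1)
    power[0] = Fraction(1)                       # inner^0 = 1
    for k in range(1, N + 1):
        new = [Fraction(0)] * (N + 1)            # power <- power * inner (truncated)
        for i in range(N + 1):
            if power[i]:
                for j in range(1, N + 1 - i):
                    new[i + j] += power[i] * inner[j]
        power = new
        for m in range(N + 1):
            result[m] += outer[k] * power[m]
    return result

def half_iterate_coeffs(N):
    """Solve g(g(z)) = exp(z)-1 order by order: the z^n coefficient of g(g(z))
    equals 2*g_n plus terms involving lower-order coefficients only."""
    f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]
    g = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)
    for n in range(2, N + 1):
        c = compose(g, g, n)                     # here g[n] is still 0
        g[n] = (f[n] - c[n]) / 2
    return g

g = half_iterate_coeffs(10)
# g_2 = 1/4, g_3 = 1/48, g_4 = 0, g_5 = 1/3840, ...
```

The first values \( g_2=1/4,\ g_3=1/48,\ g_4=0,\ g_5=1/3840 \) agree with the regular-iteration series.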

Now look at the coefficients for \( k=0..1023 \). I applied a \( \sinh^{-1}() \)-scaling to the y-axis of the picture to obtain a milder growth rate and a much better overview of the drawn curves:
   
The sequence of coefficients clearly shows hypergeometric growth, so by inspection of these first 1024 coefficients alone the series should of course have convergence radius zero - but it is not clear whether this is all that can be said.

Sometime in 2016 I found a very suggestive estimate: the curve of the coefficients \( g_k \) at index \( k \), multiplied with my reciprocal guess-function \( A(k) \), is very clearly bounded by a constant value of about \( \pm 10.01 \).
   

- - - - - - - - - - - - - -

Since the bounding guess-function \( A(k) \) has a factorial in \( k \) in the numerator, Euler summation cannot be applied; but because there is nothing else that makes the growth rate more extreme, Borel summation (and likely my experimental Nörlund summation) should indeed be sufficient to assign finite real values to the evaluation of \( g(z) \).

If this can be shown to be correct/appropriate, then P. Erdős'/E. Jabotinsky's unconditional verdict seems to be overturned.

Thus there are two aspects that I'd like to see solved (at any time in the future...):

  1. Is that bounding function "good"?
  2. What type of divergent-summation procedure can be expected to sum this (strongly) divergent series? (I simply used some Nörlund-like procedure based on heuristics & inspection.)

I don't expect an answer to question 2 here in the forum, since divergent summation is a completely different field (and I've only peeked into it, at least finding some operational versions).
But perhaps for question 1. someone might have an idea.             

- - - - - - - -
Even if not, we might let this post hang around for later ideas and future visitors' engagement. After tinkering with it, I no longer expect MO to be an appropriate place for questions of this style.


Gottfried              

Appendix:
Here is the picture where, moreover, I document the scaled coefficients in 4 separate curves according to \( k \pmod 4 \). Maybe these four curves - or better: the four functions defined by the four "multisection series" - have valuable properties of their own and might be analyzed using Fourier analysis or the like. See the image:
   


When the coefficients of the second and third multisection series change their sign, the two new common, sinusoidal curves improve their shape even further...
   
Gottfried Helms, Kassel
#2
Is there a reason for subdividing the sequence of coefficients of the (divergent) series of the fractional iterate into 4 subsequences and not, say, 3 or 6 or n subsequences?


Read at the risk of losing your time.

Let me start by saying that I'm totally a newbie with sums, and convergence, let alone evaluating divergent sums.
But the second question seems fascinating.

So I'll start with the childish feeling I got reading your post... if \(\sum d_kx^k\) diverges but somehow you can compute approximations, then, as naive as I am, I understand that you are assigning to every \(x\) a value \({\rm Gottfried}(x)=G(x)\) in such a way that, up to some large \(N\) given by your computational power, we have \(|\sum_{k=0}^Nd_kx^k-G(x)|\leq \epsilon\) for some \(\epsilon\), i.e. the partial sum is not too different from your approximation.

As if there is something disturbing convergence, causing Baker's result, but the divergence is not so wild as to make it impossible for you to compute approximations.
So can we know whether the partial sums are really wandering around the unknown real values, as if disturbed by some unknown "wind/disturbing force field"?

So, a naive person like me, without background in computation or summation, would ask: define the limit class of a sequence \(x_n\) as the set \(\mathcal E(x_n)\) of points that are limits of some subsequence \(x_{n_k}\). The worst way \(x_n\) can diverge is when the limit class is empty.
Then I'd ask for \(\mathcal E(\sum d_nx^n)\) for each \(x\)... it doesn't matter that it doesn't converge... but if it is nonempty for every \(x\in U\), then I'd look for a coherent way to choose from every \(\mathcal E_x\) a value in it.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
(07/19/2022, 09:40 PM)MphLee Wrote: There is a reason for subdividing the sequence of coefficients of the (diverging) series of the fractional iterate into 4 subsequences and not...say 3 or 6 or n subsequences?
Hi MphLee - 
 
thank you very much for considering this post!
And for the questions - no loss of time.

The reason for the separation is 99% the appearance of the unseparated plot, resp. the list of coefficients. Looking at the coefficient list, it is "obvious" (to me) that the 4-partitioning reduces the apparent chaos in the list of coefficients (on top of everything there is the effect of the periodic sign-changes - to smooth this, a 4-partition is immediately useful). So: this is only a heuristic.
However, having 4 partial sequences might moreover reflect some effect in the question of negative \(z \) and imaginary \( z \). This might then be interesting in its own right...

(07/19/2022, 09:40 PM)MphLee Wrote:
So I'll start with my childish feeling I got reading your post... if \(\sum d_kx^k\) diverges but somehow you can compute approximations, as naive as I can be, I understant that you are assigning to every \(x\) a value \({\rm Gottfried}(x)=G(x)\) in a way that up to some large \(N\), given by your computational power, we have \(|\sum_{k=0}^Nd_kx^k-G(x)|\leq \epsilon\) for some \(\epsilon\), i.e. the partial sum is not too much different from your approxximation.

Well, the process is a bit different for such summations.

The simplest case is that of the alternating geometric series, for instance \( f(x)= 1-x + x^2-x^3+\cdots \). This series has radius of convergence \( \rho = 1 \) because for \( | x| <1 \) it converges (possibly very slowly) and for \( |x| \ge 1 \) it diverges (again possibly very slowly).
On the other hand, the series can be understood as the expansion of \( f^*(x) = 1/(1+x) \), and there is no problem inserting, say, \( x=2 \) and obtaining the result \( f^*(2) = 1/3 \). Perhaps this is the axiomatic anchor for all the more sophisticated summations of divergent series.
Anyway, this is surely a "singular" observation and, at least since Euler, it has been used to extend the range of functions that can be summed to meaningful values, even if the power series for some \( x \) is divergent.
Well-known names (with simple procedures) here are Hölder and Cesàro summation, which evaluate partial sums up to some index \(n \), average those partial-sum values, and check whether this tends to some fixed limit. Iteration improves the "power": more strongly diverging series can be summed.
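As a toy sketch of this averaging idea (Python here instead of my Pari/GP, and only the simplest one-round case; the function name is made up for the example):

```python
def holder_sum(terms, iterations=1):
    """Iterated averaging (Hoelder-type) of the partial sums of sum(terms)."""
    seq, s = [], 0.0
    for t in terms:
        s += t
        seq.append(s)                       # partial sums
    for _ in range(iterations):
        # replace the sequence by its running averages
        acc, avg = 0.0, []
        for n, v in enumerate(seq):
            acc += v
            avg.append(acc / (n + 1))
        seq = avg
    return seq[-1]

# Grandi's series 1 - 1 + 1 - 1 + ...: the partial sums oscillate 1, 0, 1, 0, ...
# and already one round of averaging settles at 1/2
val = holder_sum([(-1) ** k for k in range(40)], iterations=1)
```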

However: only up to certain divergence criteria. For instance, if the coefficients of a power series grow geometrically, neither the Hölder nor (I think) the Cesàro sum converges, no matter how many iterations you apply. For such series, Euler summation is sufficient, possibly also iterated.
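A sketch of the Euler transform at work on the alternating geometric series at \( x=2 \), where the partial sums themselves explode (again just illustrative Python; the function name is my own for this example):

```python
from fractions import Fraction

def euler_transform_sum(a, terms):
    """Partial sum of the Euler transform of the alternating series
    sum_k (-1)^k a[k]:  sum_n (D^n a)_0 / 2^(n+1), with (Da)_k = a_k - a_(k+1)."""
    row = [Fraction(x) for x in a]
    total = Fraction(0)
    for n in range(terms):
        total += row[0] / Fraction(2) ** (n + 1)
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return total

# coefficients of 1 - 2 + 4 - 8 + ... , i.e. 1/(1+x) at x = 2:
a = [2 ** k for k in range(16)]
s = euler_transform_sum(a, 15)
# the transformed partial sums converge geometrically to 1/3
```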

For alternating factorial series, L. Euler found an evaluation scheme, but the best-known "classical" method for them is perhaps Borel summation.
This is remarkable, since \(f(x) = 0! - 1!x + 2!x^2 - 3!x^3 \pm \cdots \) has convergence radius \( \rho =0 \): for no \(x\), however small, does the series converge. And still a meaningful evaluation to a finite value is known!
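For illustration: the Borel sum of \( 0!-1!+2!-3!\pm\cdots \) (i.e. \( x=1 \)) replaces \( k! \) by \( \int_0^\infty t^k e^{-t}\,dt \) and interchanges sum and integral, which gives \( \int_0^\infty e^{-t}/(1+t)\,dt \approx 0.596347 \) (the Gompertz constant). A numerically naive sketch (my own code, plain trapezoidal rule):

```python
import math

def borel_sum_alt_factorials(upper=50.0, steps=500000):
    """Borel sum of sum_k (-1)^k k!: integrate e^(-t)/(1+t) over [0, upper]
    with the composite trapezoidal rule (the tail beyond `upper` is negligible)."""
    def f(t):
        return math.exp(-t) / (1.0 + t)
    h = upper / steps
    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

val = borel_sum_alt_factorials()
# val is approximately 0.596347, the Gompertz constant e*E1(1)
```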

There are many more, and more modern, summation methods known, but I couldn't specialize in this subject and just fiddled with homebrewn procedures of my own - often simply generalizations that make a summation procedure "stronger" while keeping its basic logic - methods to approximate meaningful values.
It is a subject that I love very much, and if there is interest I might prepare some Zoom or Skype meeting to show it at work using Pari/GP and my matrices... :-)

Well, there are series which provably cannot be summed to a meaningful finite value that way at all; one example is, if I recall correctly, a series with coefficients of this growth: \( f(x) = a + a^2 x + a^{2^2} x^2 + a^{2^3} x^3 + \cdots \text{ with } \; a\gt 1 \), whether with alternating or non-alternating signs. (But don't take me at my word on this; it is from years-old memory....)

So the statement of Erdős/Jabotinsky that there is "no real fractional iterate of \( \exp(z)-1 \)", due to the zero radius of convergence, has been a challenge to me: can I find some summation procedure for that series? (See my early, more-or-less newbie discussion at https://go.helms-net.de/math/tetdocs/htm...ration.htm) Possibly there is indeed none; but my approximation suggests to me that the growth of the coefficients is somehow factorial ("hypergeometric"), and then the series should be summable by Borel or similar procedures (see the formula for the \( a_k \): there are only factorial and geometric terms in it). The best summation procedure so far can be seen at http://go.helms-net.de/math/tetdocs/Coef...ration.htm where I show a handful of summations of the integer and half-integer power series (with \( x=1 \); see the pictures with blue background and yellow numbers).

Buttttt - most books are really difficult to read: not very pedagogical, exotic jargon, lacking redundancy etc., except for instance those of K. Knopp, G. Hardy and the like, all such "classical" texts. Modern texts on this, as far as I have met them, are usually too hard for me to chew, so I've settled to dwell as an amateur... :-)

Gottfried

------
Ah, and a very low-level introduction to the summation of divergent series may be this one (however, again: I was rather a newbie at this): https://go.helms-net.de/math/summation/pmatrix.pdf chap. 5
Gottfried Helms, Kassel
#4
Okay, so I'm going to go out on a limb, and assume you mean that:

\[
f(z) = \exp(z) -1\\
\]

is expanded about \(z =0\), right?


I do have a long history of working with divergent sums and umbral calculus and sum and series speed-ups, so I'm very, very happy to look into this. But I apologize, I'm a tad confused by what the bounding function is.

I'd suggest an abel summation personally. And I'll explain how to do it.

Let's let \(\mathbb{E} = \{z \in \mathbb{C}\,|\, |z+\delta| < \delta\}\), so that \(0\) is on the boundary of \(\mathbb{E}\). This is just a small delta sized disk right next to \(0\), choose \(\delta\) as small as you want.

Then we know there is a holomorphic function \(g(z)\) which is holomorphic on \(\mathbb{E}\) and satisfies:

\[
g(g(z)) = e^{z} - 1\\
\]

Additionally we know that \(\lim_{z\to 0} g(z) = 0\), which is honestly just the statement that \(\eta\uparrow \uparrow z \to e\) as \(z \to \infty\), up to conjugation.

Then Abel's theorem should tell you that the power series of \(g(z)\) about \(z+\delta = 0\) is abel summable; which is confusing because it has nothing to do with how this forum uses abel functions. Abel's theorem is a very different beast. Nonetheless:

\[
\begin{align}
g(z) &= \sum_{k=0}^\infty g_k(\delta)\frac{(z+\delta)^k}{k!}\,\,\text{for}\,\, |z+\delta| < \delta\\
0 &= \lim_{z\to 0}\sum_{k=0}^\infty g_k(\delta) \frac{(z+\delta)^k}{k!}\\
\end{align}
\]

Because there is a petal about \(0\) where \(g \to 0\) which encircles \(\mathbb{E}\) for small enough \(\delta\) (We might have to finesse this a bit).

It also tells you that summing the divergent series should be very doable: as you let \(\delta \to 0\) you approach the coefficients you are bounding. (This has something to do with the Stolz region about the fixed point; it should definitely be doable, but Stolz has a specific condition that I'd have to reread to describe.)

Certainly you can use Abel sums to handle a convergent/divergent series here (again, nothing to do with the Abel function - another analysis idea by Abel). It'll be different from what Nörlund does, but it should work and give equivalent results. And intrinsic to that, it should give a bound on the coefficients themselves. I'll try to think about this more...
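To be clear what I mean by Abel summation here (the classical theorem): evaluate the power series strictly inside the radius of convergence and let \( x \to 1^- \). A quick Python sketch on Grandi's series (illustrative only; the function name is made up):

```python
def abel_value(coeffs, x):
    """Truncated evaluation of sum coeffs[k] * x^k, valid for |x| < 1,
    where the series converges and enough terms give the analytic value."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Grandi's series 1 - 1 + 1 - ...: inside the disk it sums to 1/(1+x),
# so the Abel sum is lim_{x -> 1-} 1/(1+x) = 1/2
coeffs = [(-1) ** k for k in range(8000)]
values = [abel_value(coeffs, x) for x in (0.9, 0.99, 0.999)]
# the values approach 0.5 as x -> 1-
```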

This is an interesting problem, I'll have to think about it...




I apologize for all the edits, but I keep thinking of a clearer way of saying the things I'm saying. So EDIT!!!!:::::::

Thought I'd point out that you can equivalently consider \(\mathbb{E}^- = \{ z \in \mathbb{C}\,|\,|z-\delta| < \delta\}\) and the function \(g^{-1}(z)\) such that:

\[
g^{-1}(g^{-1}(z)) = \log(1+z)\\
\]

And now the Abel summation works from \(z \to 0\) from the right to left direction. Same principle though.
#5
(07/20/2022, 10:49 PM)JmsNxn Wrote: Okay, so I'm going to go out on a limb, and assume you mean that:

\[
f(z) = \exp(z) -1\\
\]
is expanded about \(z =0\), right?

I do have a long history of working with Divergent sums and Umbral Calculus and sum and series speed ups. So I'm very very happy to look into this. But I'm confused by what the bounding function is? I apologize, but I'm a tad confused.

I'd suggest an abel summation personally. And I'll explain how to do it. (...)

Hmm, I think there must be an error somewhere: for the power series of fractional iterates of \(f(z)=\exp(z)-1 \), the convergence radius is not a small \( \delta \)-environment around zero but exactly zero - that's not my discovery but I. N. Baker's (and I have not yet fully understood his proof); so I think an Abel summation should not work.

I learned the term "limiting a divergent series" or "bounding a divergent series" from the handbook of Konrad Knopp; there he described this as a prerequisite for finding an appropriate, powerful-enough summation procedure at all - and by his argument, the growth rate of the coefficients of \( f°^h(z) \) must be estimated.
That was the problem I set myself up to solve: find a function \( A(k) \) (where \(k \) is the index of the coefficients in \( f°^h(z) \)) which is always greater than the \( k \)'th coefficient of \( f°^h(z) \).
After this, decide on the appropriate summation technique: if \( A(k) \) has "geometric" growth (in \(k \)), then Euler summation is appropriate/powerful enough; if it has "hypergeometric" growth, then Borel or other powerful summation techniques are needed.
This is a (sloppy) paraphrase of Knopp's explanations in chapters 13 and 14 - and he didn't (if I recall correctly) show any summation "stronger" than Borel summation; so if the coefficients of \( f°^h(z) \) should grow more than factorially with their index, then there is no classical summation in his book to handle this (except perhaps his chapter on Euler-MacLaurin summation and the use of asymptotic series, the latter of which has explicitly been applied in this forum ...).

The function \( A(k) \) that I describe here has factorial growth (plus some geometric growth component), so every \( k \)'th coefficient of \( f°^h(z) \) lies inside the bounds given by \( A(k) \). What I've documented in my picture are the values \( a_k \) (and their structural expression), and then - as a sanity check - the quotient of the \( k \)'th coefficient of \( f°^h(z) \) by \( a_k \). The amazing thing is that my proposal for the \( a_k \) seems to lead to an upper limit in the same way a sine curve has one.
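To make Knopp's classification concrete, one can probe the computed coefficients empirically: \( |g_k|^{1/k} \) being unbounded indicates radius zero, while \( (|g_k|/k!)^{1/k} \) staying bounded would point toward Borel-type summability. A Python sketch (my own illustration with exact rationals; with only a couple dozen coefficients the trend is of course not yet conclusive - my plots use about 1024 of them):

```python
from fractions import Fraction
from math import factorial

def half_iterate_coeffs(N):
    """Order-by-order solution of g(g(z)) = exp(z) - 1 with g(z) = z + ..."""
    g = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)
    for n in range(2, N + 1):
        power = [Fraction(0)] * (n + 1)
        power[0] = Fraction(1)
        c = Fraction(0)
        for k in range(1, n + 1):            # build g(z)^k and read off z^n
            new = [Fraction(0)] * (n + 1)
            for i in range(n + 1):
                if power[i]:
                    for j in range(1, n + 1 - i):
                        new[i + j] += power[i] * g[j]
            power = new
            c += g[k] * power[n]             # g[n] is still 0, so c misses 2*g_n
        g[n] = (Fraction(1, factorial(n)) - c) / 2
    return g

N = 24
g = half_iterate_coeffs(N)
probe = []
for k in range(2, N + 1):
    v = float(abs(g[k]))
    if v > 0:                                # skip the vanishing coefficients
        # (k, |g_k|^(1/k), (|g_k|/k!)^(1/k)): the first column diverging with k
        # signals radius 0; the second staying bounded would suit Borel summation
        probe.append((k, v ** (1.0 / k), (v / factorial(k)) ** (1.0 / k)))
```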

Well - maybe this can be expressed far better, or even more simply. The literature I had was only Euler, Hardy, Knopp and then some diverse articles, but no coherent or modern course... so don't hesitate to correct me if I've got something basically wrong; I'd like it very much if I could someday settle my experimental results into an appropriate form :-) ...

Gottfried
Gottfried Helms, Kassel
#6
@Gottfried, thanks for your great answer.

(07/19/2022, 10:19 PM)Gottfried Wrote: However, having 4 partial sequences, this might moreover reflect some effect in the question of negative \(z \) and imaginary \( z \). This might then be interesting in its own...
Can we derive from this regular behavior modulo 4 that the coefficients themselves, even if they produce divergence, follow some kind of law, have meaning?
Is that what you are pointing to?


Quote:Well, the process is a bit different for such summations.

The most simple case is that of alternating geometric series, for instance \( f(x)= 1-x + x^2-x^3+... \). This series has radius of convergence \( \rho = 1 \) because for \( | x| <1 \) this converges (possibly veeeery slow ) and for \( |x| \ge 1 \) diverges (again possibly veeeery slow).   
On the other hand, the series can be understood to be result of \( f^*(x) = 1/(1+x) \) and there is no problem to insert then \( x=2 \) for instance and have the result \( f^*(2) = 1/3 \) . Perhaps this is some axiomatic anchor for all the more sophisticated summations of divergent series.           
Anyway, this is surely a "singular" observation and, at least since Euler, it has been used to extend the range of functions which can be summed to meaningful values, even if the powerseries for some \( x \) is divergent.         
Known names (with simple procedures) here are Hölder- and Cesaro summation, which apply evaluation of partial series up to some index \(n \), and average then that partial-series-values and see, whether this goes to some fixed limit. Iteration improves the "power": stronger diverging series can be summed.

However: only up to certain divergence criteria.  For instance, if the coefficients of some powerseries grow geometrically, neither Hölder- nor Cesaro sum (the latter: I think) do not converge - how many iterations you may apply. For such series the Euler-summation is sufficient, as well possibly iterated.  

For alternating factorial series, L. Euler found some evaluation-scheme, but the best known "(classical)" method for them is perhaps the Borel-summation.
This is remarkable, since \(f(x) = 0! - 1!x + 2!x^2 - 3!x^3 \pm \cdots \) has convergence radius \( \rho =0 \): for no \(x\), however small, does the series converge. And still a meaningful evaluation to a finite value is known!
Oh, so I was not totally off... averaging the truncations is something pretty intuitive that even a kid could come up with, and similar to what I was suggesting (averaging the accumulation points of subsequences of the truncations).

Anyways... it is a very rich zoo of summation methods...


Quote:There are many more, and more modern summation-methods known, but I couldn't specialize on this subject, and just fiddled with that homebrewn procedure of my own -often simply generalizations: to make the summation-procedure "stronger" while keeping its basic logic - methods to approximate meaningful values.        
It is a subject that I love much, and if interested I might prepare some zoom- or skype-meeting to show that at work using Pari/GP and my matrices... :-)
I'd be happy, but I'm not sure I have the prerequisites, or the time needed to commit to this. Life is short and time is a tyrant... I hate this, since it sounds like a great topic... and now that you described it, many things popped into my head, things that are linked to it... like zeta functions and combinatorics (generating-functionology) and quantum theory stuff...

But again, no time... and I fear I'll never be able to catch up with all the prerequisites. It is already an odyssey to try to make a little sense of dynamics and category theory as an autodidact... I'd need a university and a way to go there... damn... maybe when I'm 70 I'll get my PhD. xD


Quote:Buttttt - most books are really difficult to read, not much paedagogic, exotic jargons, lacking redundancies etc, except for instance that of K. Knopp, that of G. Hardy and the like, all such "classical" texts. Modern texts about this, as far as I met them, are usually too hard to chew for me, so I've settled to dwell as amateur... :-)

Gottfried   

------
An, and a very lowlevel introduction into summation of divergent series maybe this one, however again: I've been rather newbie with this): https://go.helms-net.de/math/summation/pmatrix.pdf chap. 5

I'll try to skim Knopp in the free time.
Btw... chapter 5 is the "Online res(S)ources" part... and I suspect you are pointing me to the German-language bibliography items... are you?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#7
Well, @MphLee - thank you as well for your kind answer!

(07/21/2022, 12:15 PM)MphLee Wrote:
(07/19/2022, 10:19 PM)Gottfried Wrote: However, having 4 partial sequences, this might moreover reflect some effect in the question of negative \(z \) and imaginary \( z \). This might then be interesting in its own...   
Can we derive from this regular behavior modulo 4 that modulo 4 and the coefficient themselves, even if they produce divergence, are following some kind of law, have meaning?
It is that that you are pointing to?               

Well, I just put forward a vague idea... If we separate the power series of exp() into two partial series, then the partial series are themselves interesting functions (sinh() and cosh()), and if in the two partial series we change every second coefficient's sign, we get sin() and cos() instead, which relate to sinh() and cosh() by using imaginary instead of real arguments. As said: just a vague idea; perhaps something similarly interesting might come out... if researched effectively...
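That multisection idea can be made concrete in a few lines (illustrative Python; `multisection` is just a name I pick here):

```python
from math import factorial, cosh, sinh, cos

def multisection(coeffs, m, r):
    """Keep only the coefficients with index congruent to r mod m, zero the rest."""
    return [c if k % m == r else 0.0 for k, c in enumerate(coeffs)]

def evaluate(coeffs, x):
    """Evaluate the truncated power series at x."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

N = 14
exp_c = [1.0 / factorial(k) for k in range(N)]
cosh_c = multisection(exp_c, 2, 0)          # even part of exp -> cosh
sinh_c = multisection(exp_c, 2, 1)          # odd part of exp  -> sinh
# flipping every second surviving sign turns the even part into cos:
cos_c = [(-1.0) ** (k // 2) * c for k, c in enumerate(cosh_c)]
```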
But the central topic is: 1) Do we know a functional bound for the coefficients depending on the index \( k\) at all? (I. N. Baker said in his 1958 article: "we have no a priori knowledge about their growth rate", and I've never seen any estimate of this.) 2) Can we use a known summation method, for instance Borel summation, such that we also have a formally proved, arbitrarily-approximating procedure - or can we build a new one based on our knowledge of that specific growth rate? (Btw.: the separation into four partial series is not really important here; the bounding is of course valid for the unsegmented series as well.) A big plan, likely too big for me, but my heuristic is simply too nice...

(07/21/2022, 12:15 PM)MphLee Wrote: Anyways... it is a very rich zoo of summation methods...
:-) Yes, sure. It has been too much for me to investigate seriously while I also had my job to do, my family to accompany, and my child at home to be papa for. :-)


Quote:
Gottfried Wrote:(...) and if interested I might prepare some zoom- or skype-meeting to show that at work using Pari/GP and my matrices... :-)
I'd be happy but I'm not sure I have the prerequisites and also the time needed to apply on this (...)

Nothing to problematize. I thought only of something for a complete beginner: how such summations work - in principle, with examples. If someday someone likes that idea, he/she might come back to this :-)

Quote:Btw... chapter 5 is the "Online res(S)ources" part... and I suspect you are pointing me to the geman languange bibliography items...are you?

Sorry, I meant chap. 3 (in my article).

And while scanning my old hobby treatises, I think there are some even better workouts; however, I never completed a stand-alone text on this subject.
This subject can really make one dizzy. I even thought of finding a summation for \( su= 10 - 10^{10} + 10^{10^{10}} - \,^4 10 + \,^5 10 \pm \cdots \) to a finite value - maybe a completely dizzy procedure, having googol- and googolplex-like numbers already in the very first terms of the series...   :-)
Ehhmm - btw., the Knopp book also exists in English translation, and maybe it is even accessible online, I don't know. (The German version of his book was/is accessible via the digitizing center of the University of Göttingen; maybe that is a hint...)


- - - - -


To make the matter a bit less prominent now: having had a strong loss of energy for several months, I'm only skimming through my early elaborations and Word docs, and if I find something worth looking at which I did not present earlier here in the forum (but which is related to our matter), I think I'll add it "to the pipeline" for someone who might be interested by chance (or has a knack for historical matters)... Hmm, maybe a subforum for such thoughts might even be more appropriate...

Kind regards -

Gottfried
Gottfried Helms, Kassel
#8
(07/21/2022, 01:13 PM)Gottfried Wrote: Well, I just put a vague idea... If we separate the powerseries of the exp() in two partial series, then the partial series are as well interesting functions (sinh() and cosh() )  and if in the two partial series we change each second coefficient's sign we have sin() and cos() instead, which relate to sinh() and cosh() by using imaginary instead real arguments. As said: just a vague idea, perhaps there might something similar interesting come out ... if researched effectively...   
Wow, this sounds suggestive. Like tetrational trig functions or something like that, since we can use \(f\) to recover a tetration.
Quote:But the central topic is: 1) do we know a functional bound for the coefficients depending on the index \( k\) at all? (I.N. Baker said in his 1958 article: "we have no apriori knowledge about their growthrate" and I've never seen some estimate of this) 2) can we use a known summation method, for instance Borel-summation, such that we have also a formally proved arbitrarily-approximatable procedure, or that we can build a new one based on our knowledge of that specific growth-rate. - - -  (Btw.: the separation into four partial series is not really important here: the limiting is of course valid for the unsegmented series as well)  A big plan, likely too big for me, but my heuristic is simply too nice...
Maybe it is important because it allows one to express the overall chaos as a sum/superposition of waves with different phases...? It is the ignorant part of me talking here.

Quote::-) Yes, sure. Has been too much for me to investigate seriously when I had also to do my job. And to accompany my family. And to be pappa for my child in home. :-)
The important things first... even if sometimes math can lure us into thinking she is our Beatrice... bring us to hell...

Quote:Nothing to problematize. I thought only about something like for a complete beginner, how such summations work - in principle, with examples. If sometimes someone liked that idea he/she might come back to this :-)

Ok, if more people are interested, count me in, theoretically... in practice I'll be on vacation very soon, so I'll be back in September... and then I don't know what my situation will be regarding weekends and days off from work...

Quote:And while I'm scanning my old hobby-treatizes I think there are some even better workouts, however I never completed a stand-alone text on this subject.
[...]
To make the matter now a bit less prominent: having strong loss of energy since several months now I'm only skimming through my early elaborations and word-docs and if I find something worth to be looked at, which I did not present earlier here in the forum (but which is related to our matter), I think I'll add it "to the pipeline" for someone who might be interested by chance (or has a knack for historical matters)... Hmm, maybe a subforum for such thoughts might even be more appropriate...

Given my lack of time for serious research, I also started during the last year or so skimming through my old stuff... trying to save things and making archives of interesting material. For example, I have a complete archive of Romerio's, Rubtsov's, Trappmann's and JmsNxn's papers. I was also able to save most of the literature of this forum about hyperoperations and iteration. These weeks I was trying to complete my archive with all your works (e.g. on matrices) and most of my forum contributions.

Some questions arise... I'm saving all this stuff... will I ever be able to read/use it? xD But I like the idea that I'm saving knowledge.
Another question is: were my contribution and my body of notes worth something? Hahah, damn... I'm not old (not young anymore either), yet I feel the pressure of time.

All of this to say... idk what you have in mind.. I love historical matters because I think the historical perspective is often the best order to impose on complex matters, the one that makes things comprehensible.
So just open a new thread about that subforum idea, let's talk about it.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#9
(07/21/2022, 07:30 AM)Gottfried Wrote:
(07/20/2022, 10:49 PM)JmsNxn Wrote: Okay, so I'm going to go out on a limb, and assume you mean that:

\[
f(z) = \exp(z) -1\\
\]
is expanded about \(z =0\), right?

I do have a long history of working with Divergent sums and Umbral Calculus and sum and series speed ups. So I'm very very happy to look into this. But I'm confused by what the bounding function is? I apologize, but I'm a tad confused.

I'd suggest an abel summation personally. And I'll explain how to do it. (...)

Hmm, I think there must be an error somewhere; for the powerseries for fractional iteration of \(f(z)=\exp(z)-1 \) the convergence-radius is not a small \( \delta \)-environment around zero, but exactly zero - that's not my discovery but that of I.N.Baker (and I could not yet understand fully his proof); so I think an Abel-summation should not work.    

I learned the idea of "limiting a divergent series" or "bounding a divergent series" from Konrad Knopp's handbook; there he describes it as a prerequisite for finding an appropriate, powerful-enough summation procedure at all, and by his argument the growth rate of the coefficients of \( f°^h(z) \) must be estimated.
That was the problem I set myself to solve: find a function \( A(k) \) (where \(k \) indexes the coefficients of \( f°^h(z) \)) which is always greater than the \( k \)'th coefficient of \( f°^h(z) \).
After that, decide on the appropriate summation technique. If \( A(k) \) has "geometric" growth (in \(k \)) then Euler summation is appropriate and powerful enough; if it has "hypergeometric" growth then Borel or other more powerful summation techniques are needed.
This is a (sloppy) paraphrase of Knopp's explanations in chapters 13 and 14. He didn't (if I recall correctly) show any summation "stronger" than Borel summation; so if the coefficients of \( f°^h(z) \) grow more than factorially with their index, there is no classical summation in his book to manage this (except perhaps his chapter on Euler-MacLaurin summation and the use of asymptotic series, the latter of which has explicitly been applied in this forum ...).

The function \( A(k) \) that I describe here has factorial growth (plus some geometric growth component), so every \( k \)'th coefficient of \( f°^h(z) \) lies inside the bounds given by \( A(k) \). What I've simply documented in my picture are the values \( a_k \) (and their structural expression), and then, as a sanity check, the quotient of the \( k \)'th coefficient of \( f°^h(z) \) by \( a_k \). The amazing thing is that my proposal for the \(a_k\) seems to lead to an upper limit in the same way a sine curve has one.

Well - maybe this can be expressed far better, or even more simply. The literature I had was only Euler, Hardy, Knopp and then some scattered articles, but no coherent or modern course... so don't hesitate to correct me if I've got something basically wrong; I'd very much like to settle my experimental results into an appropriate form someday :-) ...

Gottfried
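The coefficients Gottfried refers to can be reproduced by solving \(g(g(z)) = e^z - 1\) term by term in exact rational arithmetic: the coefficient \(a_n\) enters the composition linearly with factor 2, so each order determines one new coefficient. A minimal sketch (function and variable names are mine, not from the thread):

```python
import math
from fractions import Fraction

def half_iterate_coeffs(N):
    """Solve g(g(z)) = exp(z) - 1 as a formal power series,
    g(z) = z + a[2]*z^2 + ... ; returns the list a (a[0] unused, a[1] = 1)."""
    # Taylor coefficients of f(z) = exp(z) - 1 are f_k = 1/k!
    f = [Fraction(0)] + [Fraction(1, math.factorial(k)) for k in range(1, N + 1)]
    a = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)

    def self_compose_coeff(n):
        # coefficient of z^n in g(g(z)), using the currently known a's
        out = Fraction(0)
        bpow = [Fraction(0)] * (n + 1)
        bpow[0] = Fraction(1)                      # g(z)^0 = 1
        for m in range(1, n + 1):
            new = [Fraction(0)] * (n + 1)          # bpow * g(z), truncated at z^n
            for i in range(n + 1):
                if bpow[i]:
                    for j in range(1, n + 1 - i):
                        new[i + j] += bpow[i] * a[j]
            bpow = new
            out += a[m] * bpow[n]
        return out

    for n in range(2, N + 1):
        c = self_compose_coeff(n)                  # computed while a[n] is still 0
        a[n] = (f[n] - c) / 2                      # a[n] enters linearly with factor 2
    return a

g = half_iterate_coeffs(6)
print(g[2], g[3], g[4])  # -> 1/4 1/48 0
```

The first nonzero values \(g_2 = 1/4\), \(g_3 = 1/48\), \(g_4 = 0\) match the regular-iteration solution discussed in the thread; the growth phenomenon under discussion only sets in at much higher index.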

Oh no I'm well aware it's exactly zero. Otherwise the series wouldn't be a divergent series!!!!


What I'm saying is that if you take a disk \(\mathbb{E}\) centered at \(-\delta\) of radius \(\delta\), then there is a function \(g(z)\) that is a holomorphic functional square root. Abel sum this at \(z = 0 \)!

This should be equivalent to everything you are doing. Then you have more freedom to bound the Taylor terms...

So you would have a holomorphic function \(g(z)\) on \(\mathbb{E} = \{z \in \mathbb{C}\,|\,|z+\delta| < \delta\}\), such that:

\[
g(z) = \sum_{k=0}^\infty g_k(\delta) (z+\delta)^k\\
\]

such that \(g(g(z)) = e^z-1\).

Then if you expand this series about \(z=0\) (you can Abel sum it), you should get the same divergent series you are talking about. That's what I meant. You should be able to get a \(k!\) bound from that. You'd have to check:

\[
g^N(z) = \sum_{k=0}^N g_k(\delta) (z+\delta)^k = \sum_{k=0}^N d_k z^k\\
\]

But this should definitely happen. Then all you have to do is find out how to bound the \(g_k\)--which translates to a bound on \(d_k\). You can instantly guess that \(g_k = O(1/\delta^k)\), then a \(k!\) will pull out from the binomial expansion.
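For the truncated polynomial \(g^N\) the re-expansion is an exact identity, \(d_k = \sum_{n \ge k} g_n \binom{n}{k}\delta^{n-k}\), and easy to mechanize. A small sketch; the sample values \(g_n = 2^{-n}\) are purely illustrative placeholders, not the actual half-iterate coefficients:

```python
from math import comb

def reexpand(g, delta):
    """Given coefficients g[n] of the polynomial sum_n g[n]*(z+delta)**n,
    return coefficients d[k] of the same polynomial in powers of z:
    d[k] = sum_{n>=k} g[n] * C(n,k) * delta**(n-k)."""
    N = len(g) - 1
    return [sum(g[n] * comb(n, k) * delta ** (n - k) for n in range(k, N + 1))
            for k in range(N + 1)]

# sanity check: both expansions evaluate to the same polynomial value
delta = 0.5
g = [1 / 2 ** n for n in range(11)]                 # placeholder g_n(delta)
d = reexpand(g, delta)
z = 0.2
lhs = sum(gn * (z + delta) ** n for n, gn in enumerate(g))
rhs = sum(dk * z ** k for k, dk in enumerate(d))
print(abs(lhs - rhs) < 1e-12)  # -> True
```

The heuristic in the post is then that a geometric bound \(|g_n| = O(\delta^{-n})\) on the left-hand coefficients turns, via the binomial factors, into a factorial-type bound on the \(d_k\).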




EDIT:

Also, I think you should be able to get a similar integral transform as Euler's for this. I'd have to double check some literature.

But just as:

\[
e\int_0^1 \frac{e^{-1/x}}{x}\,dx = 0! - 1! + 2! - 3! + \cdots = \sum_{k=0}^\infty (-1)^{k} k!\\
\]
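Numerically this is easy to check: substituting \(t = 1/x\) shows the left side equals \(e\,E_1(1) \approx 0.5963\), which is also the Borel value \(\int_0^\infty e^{-t}/(1+t)\,dt\) of the alternating factorial series. A stdlib-only sketch (truncation limits are my choices):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Euler's integral: e * int_0^1 exp(-1/x)/x dx  (integrand vanishes as x -> 0+)
lhs = math.e * simpson(lambda x: math.exp(-1 / x) / x, 1e-9, 1.0)
# Borel integral for sum_k (-1)^k k! : int_0^inf e^{-t}/(1+t) dt, tail truncated
rhs = simpson(lambda t: math.exp(-t) / (1 + t), 0.0, 50.0)
print(round(lhs, 4), round(rhs, 4))  # -> 0.5963 0.5963
```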

You should be able to make an integral-transform summation. I believe we call this Borel summation now, but I could be mistaken; I forget the names of people a lot, lol. This is largely because Borel (?) summation is a direct continuation of Abel summation--and these expansions always exist. They basically just use the same tricks as Euler, but with some added finesse. Give me a bit, I'll get it...

EDIT2:

OKAY! YES! It's Borel summation!

Okay, I'm too sleepy now to do it, but I can bound \(d_k = O(k!)\). I'm busy tomorrow, but I believe I can pull that out for you by Sunday. I'll double check everything. But this shouldn't be too hard.

Essentially you just take:

\[
\mathcal{B} g(z) = \sum_{k=0}^\infty g_k(\delta)\frac{(z+\delta)^k}{k!}\\
\]

Then you would get some thing close to:

\[
\sum_{k=1}^\infty d_k\frac{z^k}{k!}\\
\]

And you can show that this has a non-trivial radius of convergence (it'll probably be about \(1\)), and therefore \(d_k = \mathcal{O}(k!)\). It might be a bit different, though; it might be closer to \(\mathcal{O}(b^k k!)\), or something like that. I have to run the numbers... too damn lazy and sleepy rn.

But you should be able to get an expansion that looks like:

\[
\int_0^\infty \frac{e^{-t}}{1+tz}\,dt = \sum_{k=0}^\infty (-1)^k k! z^k\\
\]

And that expression defining \(g(z)\) (whatever it may be) should converge for \(\Re(z) < 0\). Jesus it's been a while since I've done a lot of this stuff, so I may be missing some dumb constants and stuff. Give me til sunday.
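The asymptotic character of that expansion is easy to observe numerically: for a small positive \(z\), the partial sums of \( \sum_k (-1)^k k! z^k \) first approach the integral and then blow up, with best accuracy near the smallest term. A sketch (the truncation point and tolerances are my choices):

```python
import math

z = 0.1

def borel_integral(T=60.0, n=60000):
    # composite Simpson rule for int_0^T e^{-t}/(1+t*z) dt (tail beyond T ~ e^{-T})
    h = T / n
    f = lambda t: math.exp(-t) / (1 + t * z)
    s = f(0.0) + f(T) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return s * h / 3

I = borel_integral()
partial, errors = 0.0, []
for k in range(16):
    partial += (-1) ** k * math.factorial(k) * z ** k
    errors.append(abs(partial - I))
best = min(range(16), key=errors.__getitem__)
# the minimum error occurs near k ~ 1/z = 10, where the terms k! z^k are smallest
print(round(I, 5), best, errors[best] < 1e-3, errors[15] > errors[best])
```

This "optimal truncation" behaviour is exactly the asymptotic-series usage mentioned earlier in the thread; Borel summation removes the need to truncate at all.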

Regards, I'm going to bed.
#10
Yes, for factorial-type series we have Borel summation, or even the iterated one, as K. Knopp writes it. Knopp distinguishes between Borel's integral-based method and the "simple" method (under the same name Borel).
My problem so far was that I never saw a reliable estimate of the coefficients' growth rate anywhere, so I had to explore this myself, arriving at the guess for a limiting function A(k) as mentioned in my first post in this thread - which of course would be a basis for that Borel (or my experimental) method. (I don't remember why, but at some point in the meantime I had the impression that the growth rate was more than hypergeometric, and thus Erdös's statement would have made sense. But even after years I could not reproduce the reason why I thought so.)

I would like to have your (or someone else's) derivation, to settle the case for the sufficiency of Borel summation in some citable statement - or even if it is only for a better statement in my webspace on the tet-docs... :-)

Hope you've had nice dreams - read you on Sunday -

Gottfried
Gottfried Helms, Kassel

