# Tetration Forum

Full Version: U-tetration or "decremented exponential iteration"
Hi -

concerning the iteration of the function f(x) = exp(x) - 1, I came across the remark of Erdős & Jabotinsky that, according to I. N. Baker, a fractional iterate f°h(x) would not exist. This puzzled me, since I had just computed lots of nice values for those iterates... In the referenced article, Baker actually proves only that the integer iterates have a positive radius of convergence.
Well - phew - with this I can live better than with the Erdös/Jabotinsky-statement.

I've just uploaded some tables, which may illustrate the problem as stated by Baker and give some more specific insight.
Nothing major, again, but it may lead to deeper considerations.
see http://go.helms-net.de/math/tetdocs/html...ration.htm

[update]
Compare also my earlier article with a consideration for the half-iterate at http://go.helms-net.de/math/tetdocs/Coef...ration.htm
[/update]

Gottfried

P.S. Baker's article is online at the digitization center of the University of Göttingen.
Yes, there are other posts about this, because it is common to slip on this aspect. I think Henryk said it best in this post particularly. I think I have seen something similar in tetration itself as opposed to iter-dec-exp, but I'm not sure how to phrase it.

I first saw it when I was following Galidakis' research with his Puiseux series expansions about log(b). Galidakis uses series about the base b, whereas natural iteration uses series expansions about the iterator t in $\exp_b^t(x)$. With the linear approximation to tetration, as well as higher approximations, the table that Galidakis gives for the Puiseux series expansions of ${}^{t}(b)$ shows a discontinuity when the order of differentiation equals t. In other words, the function:
$(t, k) \mapsto \frac{d}{dt} \left(\frac{d^k}{du^k}\left({}^{t}(e^u)\right)|_{u=0}\right)$
is continuous and differentiable for all $t \ne k$. This can only be seen if you already have a continuous extension of tetration to non-integer heights; otherwise it's just a bunch of points. This has been bothering me for quite some time now, but I think it may be related to the iterability of $e^x - 1$. Since tetration and $b^x - 1$ are topologically conjugate, this may help explain this as well...

Andrew Robbins
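As a rough illustration of the linear approximation Andrew mentions (a minimal sketch under assumed naming, not Galidakis' own code): with the linear piece on (-1, 0], the map t ↦ b^^t is continuous, but its derivative in t jumps at integer heights whenever b ≠ e; for b = e the kink only moves into higher derivatives. That is the kind of non-smoothness described above.

```python
import math

def tet_linear(b, t):
    """Tetration b^^t for real t, using the linear approximation t + 1 on (-1, 0]."""
    if t <= -1:
        return math.log(tet_linear(b, t + 1), b)   # extend downward via log_b
    if t <= 0:
        return t + 1                               # the linear piece
    return b ** tet_linear(b, t - 1)               # extend upward via b^(...)

print(tet_linear(math.e, 0.5))   # e^^0.5 lies between e^^0 = 1 and e^^1 = e
```

For example, `tet_linear(math.e, 0.5)` evaluates to $e^{0.5} \approx 1.6487$, since the half-height falls back onto the linear piece after one upward step.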
andydude Wrote:Yes, there are other posts about this, because it is common to slip on this aspect. ...

Indeed - I missed that thread. (I was away for a few days without internet access; also, the discussion took place in August, when I was completely absorbed by my eigensystem analyses.) But now I remember that I did read that thread (and even wrote a reply to Jay...)

Great resource, our forum!

I see two approaches to construct power series for the iterates f°h(x) of f(x) = b^x - 1 (setting u = log(b)):

1) determine the coefficients as polynomials in h (recursive insertion of the basic power series of f(x) for x, expanding & collecting terms, then interpolation, or via the matrix logarithm), if u = 1, so that we have
f°h(x) = g(x,h)

2) determine the coefficients as polynomials in u^h via eigensystem decomposition, if u ≠ 1
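Approach 1) can be sketched as follows for u = 1, i.e. f(x) = e^x - 1 (a minimal sketch with hypothetical helper names, not Gottfried's actual code): build the truncated series of the integer iterates by repeated composition, then interpolate each coefficient, which is a polynomial in the height, at a fractional height h.

```python
from fractions import Fraction
from math import factorial

N = 6  # truncation order in x

def compose(a, b):
    """Coefficients of a(b(x)) mod x^(N+1); b must have zero constant term."""
    out = [Fraction(0)] * (N + 1)
    pw = [Fraction(0)] * (N + 1)
    pw[0] = Fraction(1)                      # running series of b(x)^k
    for k in range(N + 1):
        for i in range(N + 1):
            out[i] += a[k] * pw[i]
        new = [Fraction(0)] * (N + 1)        # multiply pw by b
        for i in range(N + 1):
            for j in range(N + 1 - i):
                new[i + j] += pw[i] * b[j]
        pw = new
    return out

# series of f(x) = e^x - 1
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]

# integer iterates f°0, f°1, ..., f°(N-1)
iterates = [[Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)]
for _ in range(N - 1):
    iterates.append(compose(f, iterates[-1]))

def coeff_at_height(k, h):
    """Lagrange-interpolate the k-th series coefficient at fractional height h."""
    s = Fraction(0)
    for i in range(N):
        li = Fraction(1)
        for j in range(N):
            if j != i:
                li *= Fraction(h) - j
                li /= i - j
        s += iterates[i][k] * li
    return s

print(coeff_at_height(3, Fraction(1, 2)))   # -> 1/48
```

The interpolation is exact because the k-th coefficient of f°n is a polynomial in n of degree k - 1, so N sample heights suffice for k ≤ N.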

The open question was: what *is* the growth rate of the coefficients? I possibly have an answer now from the symbolic eigensystem analysis.

If the power series in x is expressed according to 1), we get polynomials in h as coefficients, like $(a_k h^{k-1} + b_k h^{k-2} + \dots + j_k)\,x^k$, dominated by the leading term $a_k h^{k-1}$, where k is the power of x; the (numerical) coefficients $a_k$ also increase - again at an unknown rate.
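For illustration, the well-known first terms of the fractional iterate of $e^x - 1$ (each coefficient a polynomial of degree $k-1$ in the height $h$):

```latex
f^{\circ h}(x) \;=\; x \;+\; \frac{h}{2}\,x^2 \;+\; \left(\frac{h^2}{4} - \frac{h}{12}\right) x^3 \;+\; \cdots
```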

If it is expressed according to 2), I find that the coefficients of the powers of x grow like $\sim A_k\,u^{kh}/k!$, where again k is the power of x and $u^{kh}$ is the highest power of u in the coefficient; the $A_k$ also grow, but far more slowly than the $a_k$ of method 1).

Since the eigensystem analysis is impossible if b = exp(1) and u = log(b) = 1, because of singularities from the vanishing denominators, it can be useful for that case only in the sense of an approximation.
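Method 2) can be sketched numerically via the Schröder function at the fixed point 0 (a minimal sketch with assumed helper names, shown for b = 2, so u = log 2 ≠ 1): one computes $\sigma$ with $\sigma(f(x)) = u\,\sigma(x)$ and then $f^{\circ h} = \sigma^{-1}(u^h\,\sigma(x))$.

```python
import math

N = 8                       # truncation order in x
u = math.log(2.0)           # f'(0) for f(x) = 2^x - 1, hyperbolic case u != 1
f = [0.0] + [u ** k / math.factorial(k) for k in range(1, N + 1)]  # 2^x - 1

def mul(a, b):
    """Product of two truncated power series."""
    out = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            out[i + j] += a[i] * b[j]
    return out

def compose(a, b):
    """a(b(x)) truncated to order N; b must have zero constant term."""
    out = [0.0] * (N + 1)
    pw = [0.0] * (N + 1)
    pw[0] = 1.0
    for k in range(N + 1):
        for i in range(N + 1):
            out[i] += a[k] * pw[i]
        pw = mul(pw, b)
    return out

# Schröder function sigma: sigma(f(x)) = u * sigma(x), normalized sigma'(0) = 1.
s = [0.0, 1.0] + [0.0] * (N - 1)
pows = [None, f[:]]                       # pows[k] = series of f(x)^k
for k in range(2, N + 1):
    pows.append(mul(pows[-1], f))
for n in range(2, N + 1):
    rhs = sum(s[k] * pows[k][n] for k in range(1, n))
    s[n] = rhs / (u - u ** n)             # triangular solve, denominators u - u^n

def reversion(a):
    """Series inverse t with a(t(x)) = x."""
    t = [0.0, 1.0 / a[1]] + [0.0] * (N - 1)
    for n in range(2, N + 1):
        t[n] -= compose(a, t)[n] / a[1]
    return t

def iterate(h):
    """Series of f^h(x) = sigma^{-1}(u^h * sigma(x))."""
    scaled = [0.0] + [u ** h * s[k] for k in range(1, N + 1)]
    return compose(reversion(s), scaled)

print(iterate(0.5)[:4])   # first coefficients of the half-iterate of 2^x - 1
```

Composing $\sigma^{-1}$ with $u^h \sigma$ makes the coefficient of each $x^k$ a polynomial in $u^h$, in line with 2) above; the vanishing denominators $u - u^n$ are exactly what breaks down at u = 1.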

Anyway - I don't have much more at the moment; I'll try to describe the terms of 2) soon.

Gottfried
Gottfried Wrote:1) determine the coefficients as polynomials in h ...
2) determine the coefficients as polynomials in u^h ...

Yes, these two approaches have names as well. Daniel Geisler calls them:
1. parabolic iteration $(f'(x_0) = 1)$ and
2. hyperbolic iteration $(f'(x_0) \ne 1)$ (which I never made a post about).
Just wanted to hyper-link these posts

Andrew Robbins
andydude Wrote:Just wanted to hyper-link these posts

<G>
I remember posting graphs of the root test of $e^x - 1$ in this thread, but now I wonder if it is any different for other bases. For example, I wonder if the root test is any different for $2^x - 1$ which should be representative of all bases between 1 and e.
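One way to look at this numerically (a sketch with assumed naming, not the original computation): solve $g(g(x)) = f(x)$ coefficient by coefficient for the half-iterate of $f(x) = 2^x - 1$, then print the root test $|g_k|^{1/k}$.

```python
import math

N = 12
u = math.log(2.0)
f = [0.0] + [u ** k / math.factorial(k) for k in range(1, N + 1)]  # 2^x - 1

def compose(a, b):
    """a(b(x)) truncated to order N; b must have zero constant term."""
    out = [0.0] * (N + 1)
    pw = [0.0] * (N + 1)
    pw[0] = 1.0
    for k in range(N + 1):
        for i in range(N + 1):
            out[i] += a[k] * pw[i]
        new = [0.0] * (N + 1)
        for i in range(N + 1):
            for j in range(N + 1 - i):
                new[i + j] += pw[i] * b[j]
        pw = new
    return out

# Solve g(g(x)) = f(x) coefficient by coefficient, with g_1 = sqrt(u) > 0.
g = [0.0, math.sqrt(u)] + [0.0] * (N - 1)
for n in range(2, N + 1):
    c = compose(g, g)[n]                  # contributions not involving g[n]
    g[n] = (f[n] - c) / (g[1] + g[1] ** n)

for k in range(1, N + 1):
    print(k, abs(g[k]) ** (1.0 / k))      # root test of the half-iterate
```

The update step works because $g_n$ enters the $x^n$ coefficient of $g(g(x))$ only linearly, with factor $g_1 + g_1^n$; since u = log 2 < 1, these denominators never vanish.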

Andrew Robbins