# Tetration Forum

Full Version: Uniqueness summary and idea
Let $F(x)=b^x$ for some base $b$.
Then we demand that any tetration $f(x)={}^x b$ is a solution of the functional equation
$f(x+1)=F(f(x))$ with $f(1)=b$ (equivalently, $f^{-1}$ satisfies the Abel equation $f^{-1}(F(x))=f^{-1}(x)+1$).

Such a solution $f$ (even if analytic and strictly increasing) is generally not unique, because for example $g(x):=f(x+\frac{1}{2\pi}\sin(2\pi x))$ is also an analytic, strictly increasing solution:
$g(x+1)=f(x+1+\frac{1}{2\pi}\sin(2\pi(x+1)))=F(f(x+\frac{1}{2\pi}\sin(2\pi x)))=F(g(x))$ and
$g'(x)=f'(x+\frac{1}{2\pi}\sin(2\pi x))(1+\frac{1}{2\pi}\cos(2\pi x)\cdot 2\pi)=\underbrace{f'(x+\frac{1}{2\pi}\sin(2\pi x))}_{>0}\underbrace{(1+\cos(2\pi x))}_{\ge 0}\ge 0$,
where $g'$ vanishes only at the isolated points with $\cos(2\pi x)=-1$, so $g$ is indeed strictly increasing.
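This perturbation argument is easy to check numerically. Here is a minimal sketch with a hypothetical base map chosen only so that $f$ has a closed form (not the $b^x$ case of the post): take $F(y)=2y$, for which $f(x)=2^x$ solves $f(x+1)=F(f(x))$; the perturbed $g$ satisfies the same equation and is still strictly increasing.

```python
import math

# Hypothetical toy case (not the b^x map of the post): F(y) = 2*y,
# for which f(x) = 2**x solves f(x+1) = F(f(x)).
def f(x):
    return 2.0 ** x

def g(x):
    # the perturbed solution g(x) = f(x + sin(2*pi*x)/(2*pi))
    return f(x + math.sin(2 * math.pi * x) / (2 * math.pi))

# g satisfies the same recurrence g(x+1) = F(g(x)) = 2*g(x) ...
for i in range(200):
    x = -3.0 + 0.03 * i
    assert abs(g(x + 1) - 2 * g(x)) < 1e-9 * g(x)

# ... and is still strictly increasing (checked on a fine grid)
xs = [-3.0 + 0.001 * i for i in range(6001)]
assert all(g(a) < g(b) for a, b in zip(xs, xs[1:]))
```

The same computation goes through verbatim for $F(x)=b^x$; the linear $F$ is used only to make $f$ explicit.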

For uniqueness it was Daniel's idea to consider the continuous iteration at a fixed point $x_0$ of $F(x)$. The continuous iteration of $F$ is derived from $f$ by
(0) $F^{\circ t}(x)=f(f^{-1}(x)+t)$

For such an iteration to be unique it suffices to demand the existence of the limit
(1) $\lim_{x\to x_0}\frac{F^{\circ t}(x)-x_0}{x-x_0}$ at a hyperbolic fixed point, or
(2) $\lim_{x\to x_0}\frac{F^{\circ t}(x)-x}{(x-x_0)^q}$ at a parabolic fixed point (where $q>1$ is the index of the first non-zero coefficient after the linear term in the development of $F$ at $x_0$).

So (2) is our uniqueness condition for $b=e^{1/e}$ and (1) is our uniqueness condition for $1<b<e^{1/e}$.
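To make (1) concrete, here is a hedged numerical sketch of regular iteration at a hyperbolic fixed point for one specific base, $b=\sqrt{2}$ (so $F(x)=\sqrt{2}^x$, fixed point $x_0=2$, multiplier $\lambda=F'(2)=\ln 2$). The Koenigs-function route and all names (`koenigs`, `F_iter`, the depth $n=50$) are my own illustration, not code from this thread.

```python
import math

# Assumed concrete setting: b = sqrt(2), F(x) = b**x, hyperbolic fixed
# point x0 = 2 with multiplier lam = F'(2) = ln 2, and 0 < lam < 1.
b = math.sqrt(2.0)
x0 = 2.0
lam = math.log(b) * b ** x0            # F'(x0) = ln(b) * b**x0 = ln 2

def F(x):
    return b ** x

def F_inv(x):
    return math.log(x) / math.log(b)

def koenigs(x, n=50):
    """chi(x) = lim lam**-n * (F^n(x) - x0): linearizes F at x0."""
    for _ in range(n):
        x = F(x)
    return (x - x0) / lam ** n

def koenigs_inv(y, n=50):
    """chi^{-1}(y) = lim F^{-n}(x0 + lam**n * y)."""
    x = x0 + lam ** n * y
    for _ in range(n):
        x = F_inv(x)
    return x

def F_iter(t, x):
    """Regular iterate F^{o t}(x) = chi^{-1}(lam**t * chi(x))."""
    return koenigs_inv(lam ** t * koenigs(x))

# the half-iterate composes to F itself (up to the numerical cutoffs)
assert abs(F_iter(0.5, F_iter(0.5, 1.0)) - F(1.0)) < 1e-5
```

For this iterate the limit (1) exists and equals $\lambda^t$: near $x_0$ one has $F^{\circ t}(x)\approx x_0+\lambda^t(x-x_0)$, which the code reproduces.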

Now I was thinking further: if $f_b(x)={}^xb$ is also analytic in $b$, i.e. if $x\mapsto {}^t x=\exp_x^{\circ t}(1)$ is an analytic function on $(1,e^{1/e})$ (which has to be proved, but is quite plausible), then this function can be analytically extended to $x\ge e^{1/e}$, provided there is no singularity at $x=e^{1/e}$.

So this would mean we had a unique tetration ${}^y x$, analytic in both $x$ and $y$, under conditions (1) and (2), where $F^{\circ t}$ is defined by (0).

Note that we don't need a converging development of $F^{\circ t}(x)$ at the fixed point $x_0$ (which for $b=e^{1/e}$, $x_0=e$, is equivalent to the existence of a converging development of $(e^x-1)^{\circ t}$ at 0). The analytic function $F^{\circ t}(x)$ is uniquely determined at least for $1<x<e$, and this suffices for our $F^{\circ t}(1)$.
Nice, I'm definitely learning about this uniqueness/convergence/analyticity thing. I guess the series expansion of a flow based at a fixed point does not need to converge for the flow to be analytic elsewhere. If this is true, and an analytic iterate can be found by some other means, then can we perhaps derive an accuracy function?

An accuracy function would be a function E(n) associated with the flow of a particular function f(x) such that:

$E(n) > \left| f^{[t]}(x) - \sum_{k=1}^{n}f_k(t) x^k \right|$
for all $|t|<1$, $|x|<1$, or something like that.

This would allow us to use the series to compute the flow of a function instead of whatever other methods we might use to compute the more "accurate" version. We could use this even when the series for the flow does not converge (which appears likely for many functions). Since it is stated in terms of a function of t instead of a polynomial of t, this method should work for hyperbolic iteration series as well as parabolic iteration series.

Andrew Robbins
andydude Wrote:Nice, I'm definitely learning about this uniqueness/convergence/analyticity thing. I guess the series expansion of a flow based at a fixed point does not need to converge for the flow to be analytic elsewhere.

Exactly. A well-known example is $\sqrt{x}$: it is an analytic function on $\mathbb{R}_{>0}$ with a fixed point at 0, but it has no series expansion at this fixed point. The reason, however, is that the first derivative is infinite at 0.
There are also functions analytic on $\mathbb{R}_{>0}$ all of whose derivatives converge at 0, so that we can write down a series expansion there, but one with convergence radius 0. Taking this a step further (in analogy to the step from $C^\infty$ to analyticity), there are asymptotically developable functions that still have convergence radius 0 at 0.
And such a case is $(e^x-1)^{[t]}$: the series at 0 does not converge (i.e. has convergence radius 0), but for $x>0$ the function is analytic.
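A way to see these coefficients concretely (a sketch of mine, not code from this thread): the power-series coefficients of $(e^x-1)^{[t]}$ can be computed exactly from the truncated Carleman matrix of $e^x-1$. Since $e^x-1$ is parabolic, that matrix is unipotent upper triangular, so its matrix logarithm and the exponential $\exp(t\log C)$ are finite sums of nilpotent powers. Pushing the truncation order $N$ higher and applying the root test to the $t=1/2$ coefficients exhibits the convergence-radius-0 growth; the small example below only checks the first coefficients.

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order; everything below is exact rational arithmetic

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B, c):
    n = len(A)
    return [[A[i][j] + c * B[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

# Carleman matrix of f(x) = e^x - 1: row i holds the coefficients of f(x)^i,
# so composition of power series corresponds to matrix multiplication.
f = [Fraction(0)] + [Fraction(1, factorial(m)) for m in range(1, N + 1)]
C = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
C[0][0] = Fraction(1)
row = [Fraction(1)] + [Fraction(0)] * N        # f^0 = 1
for i in range(1, N + 1):
    row = [sum(row[k] * f[j - k] for k in range(j + 1)) for j in range(N + 1)]
    C[i] = row

# f is parabolic (f'(0) = 1), so C is unipotent upper triangular and both
# log(C) and exp(t*log(C)) are FINITE sums of powers of nilpotent matrices.
def mat_log(C):
    n = len(C)
    Nil = mat_add(C, identity(n), Fraction(-1))   # C - I is nilpotent
    L = [[Fraction(0)] * n for _ in range(n)]
    P = Nil
    for k in range(1, n):
        L = mat_add(L, P, Fraction((-1) ** (k + 1), k))
        P = mat_mul(P, Nil)
    return L

def iterate_coeffs(t):
    """Exact power-series coefficients of f^{[t]}: row 1 of exp(t*log C)."""
    L, n = mat_log(C), len(C)
    E, P = identity(n), identity(n)
    for k in range(1, n):
        P = mat_mul(P, L)
        E = mat_add(E, P, Fraction(t) ** k / factorial(k))
    return E[1]

half = iterate_coeffs(Fraction(1, 2))
# half[:5] == [0, 1, 1/4, 1/48, 0]: the half-iterate starts
# x + x^2/4 + x^3/48 + 0*x^4 + ...
```

With larger $N$ the same routine produces the terms whose root test Jay and Andrew discuss further down in the thread.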

Quote: If this is true, and an analytic iterate can be found by some other means, then can we perhaps derive an accuracy function?

These are two separate points.
1. Practical mathematicians have developed a quite interesting theory of how one can nonetheless use non-converging power series successfully. I am not familiar with the theory, but the main point is that most non-converging series have a certain index k at which the partial sum is quite close to the actual value of the function. If one can determine this k, one has an ultrafast approximation to the function.
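A classical instance of this optimal-truncation idea (my example, not from the thread) is the divergent asymptotic series for the complementary error function, $\mathrm{erfc}(x)\sim \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_k (-1)^k \frac{(2k-1)!!}{(2x^2)^k}$: it has convergence radius 0, yet truncated at the right index it approximates $\mathrm{erfc}$ to several digits.

```python
import math

# Divergent asymptotic series for erfc(x), truncated at the "right" index:
#   erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_k (-1)^k (2k-1)!! / (2x^2)^k
def erfc_partial(x, n):
    """Partial sum of the asymptotic series using terms k = 0..n."""
    pref = math.exp(-x * x) / (x * math.sqrt(math.pi))
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term
        term *= -(2 * k + 1) / (2 * x * x)  # builds (-1)^k (2k-1)!!/(2x^2)^k
    return pref * s

x = 3.0
errors = [abs(erfc_partial(x, n) - math.erfc(x)) for n in range(30)]
best = min(range(30), key=errors.__getitem__)
# The error shrinks until roughly k ~ x^2 terms, then grows without bound;
# the optimally truncated sum matches math.erfc(3.0) to several digits.
```

The index `best` found here plays exactly the role of the "certain index k" above: beyond it, adding terms makes the approximation worse.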
Quote:An accuracy function would be a function E(n) associated with the flow of a particular function f(x) such that:

The accuracy function is probably a means to determine this k. (However, I don't know whether such a function is known for $(e^x-1)^{[t]}$; probably not.)

2. Yes, the function can be found by other means, as is written in Szekeres and, in more generality, in Ecalle; mainly as the limit of a sequence of functions. However, the computations involved are quite complicated and I cannot reliably describe them at the moment. (I also have the excuse of never having learned French.)

Quote:This would allow us to use the series to compute the flow of a function instead of whatever other methods we might use to compute the more "accurate" version. We could use this even when the series for the flow does not converge (which appears likely for many functions).
Yes.

Quote: Since it is stated in terms of a function of t instead of a polynomial of t, this method should work for hyperbolic iteration series as well as parabolic iteration series.

1. AFAIK the hyperbolic iterate always has convergence radius > 0 (for a base function with convergence radius > 0). 2. Convergence radius 0 for the iterate can only happen in the parabolic case, i.e. only there do we need the other approximation formula, or the error function for the non-convergent series. 3. As functions of t, the iterates of $e^x-1$ are entire, but as functions of x at 0 they have convergence radius 0 for non-integer t.
bo198214 Wrote:These are two separate points.
1. Practical mathematicians have developed a quite interesting theory of how one can nonetheless use non-converging power series successfully. I am not familiar with the theory, but the main point is that most non-converging series have a certain index k at which the partial sum is quite close to the actual value of the function. If one can determine this k, one has an ultrafast approximation to the function.
Hope I've got the right focus: in chapter 14 of his monograph on divergent series, Konrad Knopp describes this problem (p. 537 ff.)
http://dz-srv1.sub.uni-goettingen.de/sub...id=D264077

It is in German, though ...

Gottfried
Yes, the keyword is asymptotic expansion.
The topic "asymptotic expansion" focuses on truncating divergent series to obtain a good approximation, while "asymptotic development" generally refers to what we can do with analytic functions that have a power-series development with convergence radius 0 at a boundary point.
bo198214 Wrote:
Quote: If this is true, and an analytic iterate can be found by some other means, then can we perhaps derive an accuracy function?

These are two separate points.
1. Practical mathematicians have developed a quite interesting theory of how one can nonetheless use non-converging power series successfully. I am not familiar with the theory, but the main point is that most non-converging series have a certain index k at which the partial sum is quite close to the actual value of the function. If one can determine this k, one has an ultrafast approximation to the function.
I already alluded to this, but I'm apparently not very good at expressing myself. The index k depends on how close to the fixed point you are. As you get closer and closer to the fixed point, the index k increases without an upper bound. This can be seen by looking at the graph of the root tests that Andrew posted. Assuming the graph of the root tests continues linearly, or even exponentially, so long as it never actually reaches infinity, there is a radius within which the series initially converges, for every k. We need only prove that the root test never goes to infinity, and that doesn't seem like such a tall order, given how the coefficients are defined.

An exact solution can be found using a limit, with the radius going to 0, the index k going to infinity (as a function of the radius), and then using integer iteration counts (which we know are convergent) to analytically extend the radius back out to infinity. In a theoretical sense, as the limits are taken to their respective ends (0 and infinity), the solution is exact. From a computational/practical standpoint, you can get arbitrary precision with a finite index k and a relatively large radius (e.g., 0.001). If you can find the function that gives the correct index k for a given radius, you can explicitly compute what radius and what k are necessary to achieve a desired degree of precision.
Yes, this may be. However, the difference between a converging and a non-converging series is that with a converging series you can compute $f(x)$ for any x in the convergence disc to arbitrary precision, while with a non-converging series only up to a certain precision (depending on the radius).
Same effect, however, so there's no practical difference, just different implementation methods. For the converging series, to get some level of precision you need to compute the first m terms of the series, where m depends on the precision desired and the input x. For the non-converging series, to get some level of precision you need to compute the first k terms within some radius, then use j integer iterations using i_j terms to get back to the desired input. In either case, a given level of precision at a given input requires a determinable number of calculations, and exact precision in either case requires an infinite number of calculations (unless the power series is finite).
jaydfox Wrote:then use j integer iterations using i_j terms to get back to the desired input.
What do you mean by this?
bo198214 Wrote:
jaydfox Wrote:then use j integer iterations using i_j terms to get back to the desired input.
What do you mean by this?

I suppose it depends on how you plan to do the fractional iteration.

For example:

$\text{Given: } f(z)=e^z-1$

$f^{\circ 0.3}(z)\ =\ f^{\circ 20.3}\left(f^{\circ -20}(z)\right)$

In that case, the statement about using j integer iterations would not apply.

On the other hand, if you attempted to solve:
$f^{\circ 0.3}(z)\ =\ f^{\circ 20}\left(f^{\circ 0.3}\left(f^{\circ -20}(z)\right)\right)$

then you would use one integer iteration (with the integer being 20). And if you attempted to solve:

$f^{\circ 0.3}(z)\ =\ f^{\circ 4}\left(f^{\circ 16}\left(f^{\circ 0.3}\left(f^{\circ -20}(z)\right)\right)\right)$

then you would use two integer iterations (16 and 4, in that order). The same could be said about the negative iterations for the innermost term. It depends on whether you use the IDM to reduce several integer iterations to a single power series, or whether you actually apply the power series of a single iteration an integer number of times.

Just depends on how you think it's best to implement. In the theoretic sense, I suppose the first version, using no supplemental integer iterations, is best. I'm not sure which I'll use long term, and they're equivalent, so I left the door open by including the additional integer iterations to step the number back out.
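Here is a hedged sketch of this shrink-then-step-back-out scheme for $f(z)=e^z-1$ (the depth $n=200$ and the series order are my own choices; the coefficients $a_2=t/2$, $a_3=t^2/4-t/12$, $a_4=t^3/8-5t^2/48+t/48$ follow from the commutation equation $g\circ f=f\circ g$ together with $a_2(n)=n/2$ for integer iterates): pull $z$ toward the parabolic fixed point with $n$ exact inverse steps $f^{-1}(z)=\log(1+z)$, apply the short truncated series for the fractional iterate inside the small radius, then push back out with $n$ integer forward iterations.

```python
import math

# f(z) = e^z - 1 has a parabolic fixed point at 0.  Scheme: pull z > 0
# toward 0 with n exact inverse steps f^{-1}(z) = log(1+z), apply a
# truncated series for the fractional iterate inside the small radius
# (~ 2/n), then push back out with n integer forward iterations.
def f(z):
    return math.expm1(z)

def f_iter(t, z, n=200):
    for _ in range(n):              # f^{o -n}: shrinks z > 0 to about 2/n
        z = math.log1p(z)
    a2 = t / 2                                    # regular-iteration
    a3 = t * t / 4 - t / 12                       # coefficients of the
    a4 = t ** 3 / 8 - 5 * t * t / 48 + t / 48     # formal iterate at 0
    z = z + a2 * z ** 2 + a3 * z ** 3 + a4 * z ** 4
    for _ in range(n):              # f^{o n}: integer iterations back out
        z = math.expm1(z)
    return z

# the half-iterate composed with itself reproduces f, up to the
# truncation error of the series step near 0
half_twice = f_iter(0.5, f_iter(0.5, 0.5))
```

Exactly as described above, increasing $n$ shrinks the radius at which the truncated series is applied, and the integer iterations carry the result back to the original input.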