# Tetration Forum

Full Version: Continuous iteration
After writing my first paper on defining complex tetration in 1990, I realized that my technique had nothing to do with tetration specifically and could be extended to continuously iterating differentiable functions. Two techniques were published in the mid nineties, one using Bell matrices and the other using a second matrix method, the Carleman linearization technique. So there are three different techniques for continuously iterating functions dating from the nineties. Their main limitation is that they require a hyperbolic fixed point; the other types of fixed points are exceptional cases. Since two of these techniques are published, the most common type of complex tetration and the complex Ackermann function follow naturally.
Daniel Wrote:Two techniques were published in the mid nineties, one using Bell matrices and the other using another matrix technique, the Carleman linearization technique. So there are three different techniques for continuously iterating functions from the nineties.

What do you mean by techniques?
Uniqueness and existence for solutions in the hyperbolic case are quite old: publications reach back to the beginning of the 20th century, while the best-known results, including the parabolic case, were found in the 1960s. This means that for most types of differentiable functions with a parabolic or hyperbolic fixed point there is "the" (i.e. the regular) solution of continuous iteration.
The equivalence of power series composition with Bell matrix multiplication, and hence the derivation of continuous iteration via matrix exponentiation, was already known to Jabotinsky (1961), especially for the parabolic case, where you have a non-limit formula for the coefficients as opposed to the hyperbolic case.

Quote:Their main limitation is that they require a hyperbolic fixed point.

For me the limitation is that they require a fixed point at all.
That's why it is so interesting to consider the real iteration of $e^x$, or tetration for bases greater than $e^{1/e}$: it has no real fixed point, and we know from Kneser (1950) that the development at a complex fixed point yields complex values for real arguments. Is there any proof that developing at different complex fixed points always yields the same continuous iteration?
bo198214 Wrote:
Daniel Wrote:Two techniques were published in the mid nineties, one using Bell matrices and the other using another matrix technique, the Carleman linearization technique. So there are three different techniques for continuously iterating functions from the nineties.

What do you mean by techniques?
Uniqueness and existence for solutions in the hyperbolic case are quite old: publications reach back to the beginning of the 20th century, while the best-known results, including the parabolic case, were found in the 1960s. This means that for most types of differentiable functions with a parabolic or hyperbolic fixed point there is "the" (i.e. the regular) solution of continuous iteration.
The equivalence of power series composition with Bell matrix multiplication, and hence the derivation of continuous iteration via matrix exponentiation, was already known to Jabotinsky (1961), especially for the parabolic case, where you have a non-limit formula for the coefficients as opposed to the hyperbolic case.
I spoke with Stephen Wolfram in 1986, who assured me that no solution for a continuously iterated function that displayed chaotic behavior was known at the time. I would love to see references to published material. Special cases of the logistic equation were solved in the nineties. Poincaré knew about heteroclinic tangles, but little was known about chaos in the sixties.

bo198214 Wrote:
Quote:Their main limitation is that they require a hyperbolic fixed point.

For me the limitation is that they require a fixed point at all.
That's why it is so interesting to consider the real iteration of $e^x$, or tetration for bases greater than $e^{1/e}$: it has no real fixed point, and we know from Kneser (1950) that the development at a complex fixed point yields complex values for real arguments. Is there any proof that developing at different complex fixed points always yields the same continuous iteration?
Requiring a fixed point is not much of a requirement. Sure, the fixed points may be complex and lead to odd-looking solutions, but so what? Cris Moore asked about the compatibility of solutions from different fixed points. By using a fractal with low entropy I was able to experimentally show the correct logarithmic spiral of a neighboring fixed point, but I lost the Mathematica notebook. Very good question.
Daniel Wrote:I spoke with Stephen Wolfram in 1986 who assured me that no solution for a continuously iterated function that displayed chaotic behavior was known at the time.
What do you mean by "chaotic behaviour"? In our case the functions to be iterated are $b^x$, which are rather well-behaved functions, AFAIK.

Quote:Requiring a fixed point is not much of a requirement. Sure the fixed points may be complex and lead to odd looking solutions, but so what?
Not exactly odd-looking, but simply complex for real arguments.
That's simply not what we want. All the basic functions you learn in analysis yield real values for real arguments.

Quote: Cris Moore asked about the compatibility of solutions from different fixed points. By using a fractal with low entropy I was able to experimentally show the correct logarithmic spiral of a neighboring fixed point ...
So does that mean the solutions are equal? I'd rather have guessed that they are different for different fixed points. Did you compute the fixed points of $e^x$ and their derivatives? Are they all attracting?
bo198214 Wrote:
Daniel Wrote:I spoke with Stephen Wolfram in 1986 who assured me that no solution for a continuously iterated function that displayed chaotic behavior was known at the time.
What do you mean by "chaotic behaviour"? In our case the functions to be iterated are $b^x$, which are rather well-behaved functions, AFAIK.

Daniel Wrote:Requiring a fixed point is not much of a requirement. Sure the fixed points may be complex and lead to odd looking solutions, but so what?
Not exactly odd-looking, but simply complex for real arguments.
That's simply not what we want. All the basic functions you learn in analysis yield real values for real arguments.
To quote the Rolling Stones, "you can't always get what you want". I didn't care for the results myself when I obtained them. Tetration is not like the functions I studied in analysis either. I now believe that these odd solutions are in fact correct, even though they take real values for integer tetration and are complex-valued otherwise.

As to fixed points being a problem: the fixed points determine the dynamics of a map, depending on the type of fixed point.

bo198214 Wrote:
Daniel Wrote:Cris Moore asked about the compatibility of solutions from different fixed points. By using a fractal with low entropy I was able to experimentally show the correct logarithmic spiral of a neighboring fixed point ...
So does that mean the solutions are equal? I'd rather have guessed that they are different for different fixed points. Did you compute the fixed points of $e^x$ and their derivatives? Are they all attracting?
I think the first fixed point I used was attracting and the neighboring fixed point was repelling. In general the hyperbolic fixed points are repelling and are found using iterated logarithms with a given branch.

Numerically the difficulty is that the dynamics are simple at the two fixed points but complicated in the chaotic area between them. I used something like $1.1^x$ so as to minimize the chaotic areas of the Julia set between the two fixed points. This doesn't prove that the solutions are equal; it just provides numerical evidence that they are equal.
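The "iterated logs with a given branch" procedure mentioned above can be sketched numerically. This is not code from the thread, just a minimal illustration for the base-$e$ case: the fixed points of $e^z$ are repelling, so the corresponding branch of the logarithm is locally attracting and plain iteration of $z \mapsto \operatorname{Log}(z) + 2\pi i k$ converges to the fixed point on branch $k$.

```python
import cmath

def exp_fixed_point(k, z0=1 + 1j, iters=500):
    # Iterate the k-th branch of the logarithm: z -> Log(z) + 2*pi*i*k.
    # Since the fixed points of e^z are repelling, this inverse map is
    # locally attracting, and the iteration converges to the fixed
    # point associated with branch k.
    z = z0
    for _ in range(iters):
        z = cmath.log(z) + 2j * cmath.pi * k
    return z

for k in (0, 1, 2):
    z = exp_fixed_point(k)
    # Each z satisfies exp(z) = z; the multiplier exp(z) = z has
    # modulus > 1, confirming the fixed point is repelling.
    print(k, z, abs(cmath.exp(z) - z), abs(z))
```

For $k = 0$ this converges to the principal fixed point near $0.318 + 1.337i$; larger $k$ gives fixed points further up the imaginary axis, all repelling.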
Daniel Wrote:To quote the Rolling Stones, "you can't always get what you want".

Not always; however, in this case you can see on the basis of our discussions that we are quite close to "the" real solution. It even seems that this solution coincides with the solution obtained from regular iteration at the fixed points for the base $b$. However, lots of proofs are still lacking.

Quote:I think the first fixed point I used was attracting and the neighboring fixed point was repelling. In general the hyperbolic fixed points are repelling and are found using iterated logarithms with a given branch.

For each fixed point the conjugate is also a fixed point. I would imagine that such a pair yields the same solution, though different from the solutions obtained at other fixed-point pairs.
Daniel Wrote:
bo198214 Wrote:
Daniel Wrote:Cris Moore asked about the compatibility of solutions from different fixed points. By using a fractal with low entropy I was able to experimentally show the correct logarithmic spiral of a neighboring fixed point ...
So does that mean the solutions are equal? I'd rather have guessed that they are different for different fixed points. Did you compute the fixed points of $e^x$ and their derivatives? Are they all attracting?
I think the first fixed point I used was attracting and the neighboring fixed point was repelling. In general the hyperbolic fixed points are repelling and are found using iterated logarithms with a given branch.

Numerically the difficulty is that the dynamics are simple at the two fixed points but complicated in the chaotic area between them. I used something like $1.1^x$ so as to minimize the chaotic areas of the Julia set between the two fixed points. This doesn't prove that the solutions are equal; it just provides numerical evidence that they are equal.
I stumbled upon this while studying fixed points for bases less than 1. I didn't realize that I was observing the same fixed points as you.

If we take the natural logarithm of the real interval $(0, 1)$, we get the curve $(-\infty, 0)$, and the next iteration gives a curve with endpoints $+\infty+\pi i$ and $-\infty + \pi i$. Notice that the modulus of each endpoint is infinite. Therefore, all iterates thereafter will have endpoints with positive infinite real part and (assuming the principal branch) imaginary part between $0$ and $\pi$.

Each successive iteration of the natural logarithm of the real interval (0,1) will drill deeper towards the fixed point, but because the endpoints are at infinity, each successive iterate will have to snake its way between outer iterates to get out. The curves are non-intersecting.
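A small numerical check of this picture (not from the thread, just a sketch): sample points of $(0,1)$, apply the principal logarithm repeatedly, and observe that every iterate keeps its imaginary part in $[0, \pi]$ while spiraling in toward the fixed point of $e^z$.

```python
import cmath

# Fixed point of e^z, obtained as the limit of iterated principal logs.
fp = complex(0.5)
for _ in range(200):
    fp = cmath.log(fp)

# Apply the principal log repeatedly to sample points of (0, 1):
# every iterate stays in the closed upper half plane with imaginary
# part in [0, pi], and the points drill in toward the fixed point.
points = [complex(x) for x in (0.1, 0.3, 0.5, 0.7, 0.9)]
for n in range(1, 9):
    points = [cmath.log(z) for z in points]
    assert all(0 <= z.imag <= cmath.pi for z in points)
    print(n, max(abs(z - fp) for z in points))
```

The printed maximum distance to the fixed point shrinks as the iterates nest deeper, matching the "drilling" described above; the endpoints at infinity are of course not visible in a finite sample.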

Therefore there is no direct path between iterates that wouldn't cut through an infinite number of such curves. There would be singularities where the nested curves bunch up. Therefore, the chaos will completely overwhelm any attempt at a solution that tries to iterate naively in this manner.

Continuous iteration would therefore have to stick to real values for real iteration counts (for the unit interval), to avoid these singularities. I don't see another way to avoid the singularities. This means that continuous iteration near the fixed point would have to follow a particular curve. For those reading this, you'd have to see what these curves look like to get my meaning here.

I've only been studying base e, so it may be possible for other bases greater than eta to avoid these singularities, but I kind of doubt it.
jaydfox Wrote:Therefore there is no direct path between iterates that wouldn't cut through an infinite number of such curves. There would be singularities where the nested curves bunch up. Therefore, the chaos will completely overwhelm any attempt at a solution that tries to iterate naively in this manner.
Hmm, I think I may need to retract that claim about the singularities. While we would have to cut through an infinite number of deeper iterates, they wouldn't necessarily be far off the mark. The nesting behavior has to do with logarithms of numbers with very large real parts and small non-zero imaginary parts, where the logarithm would end up with a moderate-sized real part and an imaginary part that is extremely close to zero. The deeper into the nesting we go, the closer to zero these imaginary parts become, so they should converge towards real values. Therefore, it's not the nested iterates of the logarithm of real numbers we need to worry about, it's all the space in between. If those regions behave nicely as well, then perhaps the chaos will not be a problem. The question remains for me: what does the solution look like, then?

I'll try to have an answer in the next few days, depending on my work schedule.
jaydfox Wrote:
jaydfox Wrote:Therefore there is no direct path between iterates that wouldn't cut through an infinite number of such curves. There would be singularities where the nested curves bunch up. Therefore, the chaos will completely overwhelm any attempt at a solution that tries to iterate naively in this manner.
Hmm, I think I may need to retract that claim about the singularities. While we would have to cut through an infinite number of deeper iterates, they wouldn't necessarily be far off the mark. The nesting behavior has to do with logarithms of numbers with very large real parts and small non-zero imaginary parts, where the logarithm would end up with a moderate-sized real part and an imaginary part that is extremely close to zero. The deeper into the nesting we go, the closer to zero these imaginary parts become, so they should converge towards real values. Therefore, it's not the nested iterates of the logarithm of real numbers we need to worry about, it's all the space in between. If those regions behave nicely as well, then perhaps the chaos will not be a problem. The question remains for me: what does the solution look like, then?

I'll try to have an answer in the next few days, depending on my work schedule.

I was partially right. Because of the fractal nature of the iterated logarithms of the real interval (0, 1), there isn't a "correct" choice for where to start the logarithmic spiral around the fixed point. (Actually, there does seem to be one point which is tangent to a logarithmic spiral: 0.4890195613601270345796506844621354...)

If you choose a number close to 0, especially one very close to 0, then indeed the singularities I described do pop up. In fact, I tried using $\exp\left(\exp({}^{3} e + \pi i)\right)$, i.e., $e^{-({}^{4} e)}$, and the singularities just smeared the results over most of the interval, giving numbers too large for SAGE to handle, several iterations short of exponentiating my way back out from the fixed point.

However, for numbers close to 0.5, I get a very smooth complex curve. Different starting points give different curves. So the method does seem to have some validity, but it typically only returns two real numbers per unit interval (only 1 real number in the case of the 0.48901956... value).
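The experiment described here seems to amount to regular iteration at the complex fixed point: pull the starting value in toward the fixed point with repeated logarithms, rotate the deviation along the logarithmic spiral by a power of the multiplier, then exponentiate back out. This is not the poster's code, just a minimal sketch of that procedure for base $e$; the multiplier of $e^z$ at a fixed point $z^*$ is $e^{z^*} = z^*$ itself.

```python
import cmath

# Fixed point of exp, as the limit of iterated principal logs; its
# multiplier is exp(fp) = fp itself.
fp = complex(0.5)
for _ in range(200):
    fp = cmath.log(fp)
lam = fp  # multiplier of exp at the fixed point

def frac_iter_exp(z, t, n=60):
    # Regular iteration at the complex fixed point: pull z in toward
    # the fixed point with n logs, scale the deviation by lam**t
    # (a rotation-and-stretch along the logarithmic spiral), then
    # push back out with n exponentials.
    w = complex(z)
    for _ in range(n):
        w = cmath.log(w)
    w = fp + lam ** t * (w - fp)
    for _ in range(n):
        w = cmath.exp(w)
    return w

print(frac_iter_exp(0.5, 1.0), cmath.exp(0.5))  # t = 1 recovers exp
print(frac_iter_exp(0.5, 0.5))  # half-iterate: generally complex
```

Consistent with the observations above, noninteger $t$ generally yields complex values for real starting points, while $t = 0$ and $t = 1$ recover the identity and $e^x$.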

I'll provide graphs when I can, if anyone's interested anyway.
Daniel Wrote:I spoke with Stephen Wolfram in 1986 who assured me that no solution for a continuously iterated function that displayed chaotic behavior was known at the time. I would love to see references to published material.

I think this is the oldest reference to continuously iterating functions:

G. Koenigs, Recherches sur les intégrales de certaines équations fonctionnelles, Annales sci. de l'École Normale Supérieure (3) 1 (1884), Supp. 3-41.

Koenigs showed in 1884 that if we have a power series
$f(z)=\sum_{n=1}^\infty a_n z^n$ with $0<|a_1|<1$, convergent for $|z|<r$ for some $r>0$, then the function
$\chi(z)=\lim_{n\to\infty} a_1^{-n} f^{\circ n}(z)$ exists, is analytic in a suitable neighborhood of $0$, and is a solution of the Schroeder equation $\chi(f(z))=a_1 \chi(z)$.

Given this we easily derive continuous iterates by
$f^{\circ t}(z)=\chi^{-1}(a_1^t \chi(z))$.
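This construction is easy to carry out numerically. A minimal sketch, where $f(x) = x/2 + x^2/8$ (with $a_1 = 1/2$, fixed point $0$) is an arbitrary illustrative choice and $\chi^{-1}$ is obtained by simple bisection:

```python
def koenigs(f, a1, z, n=60):
    # chi(z) = lim_{n -> inf} a1^{-n} f^{on}(z)   (Koenigs, 1884)
    for _ in range(n):
        z = f(z)
    return z / a1 ** n

def koenigs_inv(f, a1, w, lo=0.0, hi=1.0, iters=100):
    # Invert chi by bisection; chi is increasing on [0, 1] for this f.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if koenigs(f, a1, mid) < w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a1 = 0.5
def f(x):
    return a1 * x + x * x / 8  # sample hyperbolic map, f'(0) = 1/2

def f_t(x, t):
    # Continuous iterate: f^{ot}(x) = chi^{-1}(a1^t * chi(x)).
    return koenigs_inv(f, a1, a1 ** t * koenigs(f, a1, x))

x = 0.3
half = f_t(x, 0.5)
print(f_t(half, 0.5), f(x))  # half-iterate applied twice ≈ f(x)
```

Applying the half-iterate twice reproduces $f$ to machine precision, and $\chi(f(z)) = a_1\,\chi(z)$ can be verified directly, so the Schroeder equation really does deliver the continuous iterates.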

So I don't know what you mean by a "continuously iterated function that displayed chaotic behavior", but surely methods for continuously iterating analytic functions with a hyperbolic fixed point were known already around 1884.