09/20/2007, 01:22 AM

There are two subjects which have caused me a great deal of confusion, so I will try and make sense of them to myself, and if I get anything wrong, hopefully someone will correct me of my error. The subjects are Classification of fixed points / periodic points / cycles, and the Lyapunov characteristic / number / exponent. Since both of these subjects are properties of self-maps, they will vary with the base of the exponential in question. The reason why I called this Intervals of Tetration is because of this dependance on the intervals which the base is in.

Classification of fixed points

For the classification of fixed points, I could only find 3 good sources:

- A. A. Bennett, The Iteration of Functions of One Variable.

- Robert L. Devaney, Complex Dynamical Systems: The Mathematics Behind the Mandelbrot and Julia Sets.

- Daniel Geisler, http://tetration.org.

- [interval lost] -- (attracting 3-cycle)

- [interval lost] -- unknown

- [interval lost] -- (attracting 3-cycle)

- [interval lost] -- unknown

- [interval lost] -- (attracting 2-cycle)

- b = 0 -- no fixed points (super-attracting 2-cycle: {0, 1})

- 0 < b < e^(-e) -- (real) repelling fixed point (attracting 2-cycle)

- b = e^(-e) -- rationally neutral fixed point (neutral 1-cycle)

- e^(-e) < b < 1 -- attracting fixed point (attracting 1-cycle)

- b = 1 -- super-attracting fixed point (super-attracting 1-cycle)

- 1 < b < e^(1/e) -- hyperbolic, attracting fixed point (attracting 1-cycle)

- b = e^(1/e) -- parabolic, rationally neutral fixed point (neutral 1-cycle)

- b > e^(1/e) -- elliptic, (complex) repelling fixed point (repelling 1-cycle)
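The real-base part of this classification is easy to check numerically: iterate f(x) = b^x to its fixed point p and look at the multiplier |f'(p)| = |ln(b) * b^p|. A value less than 1 means attracting, equal to 1 means neutral, and 0 means super-attracting. A minimal sketch (the function name and iteration counts are my own choices):

```python
import math

def fixed_point_multiplier(b, x0=0.0, iters=5000):
    """Iterate f(x) = b**x from x0 toward a fixed point p,
    then return (p, |f'(p)|) where f'(x) = ln(b) * b**x."""
    x = x0
    for _ in range(iters):
        x = b ** x
    return x, abs(math.log(b) * b ** x)

# b = sqrt(2): attracting fixed point at 2, multiplier 2*ln(sqrt(2)) = ln(2) < 1
p, m = fixed_point_multiplier(math.sqrt(2))

# b = e^(1/e): parabolic case, fixed point e, multiplier exactly 1
# (convergence here is very slow, so the numbers are only approximate)
q, n = fixed_point_multiplier(math.e ** (1.0 / math.e))
```

Note that the iteration itself only finds the attracting and neutral fixed points; the repelling ones (bases below e^(-e) or above e^(1/e)) have to be found by other means, such as iterating the inverse map.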

One of the confusing aspects of these terms is that "hyperbolic" and "parabolic" are only used by Geisler, so it is hard to say whether he uses them as synonyms for attracting and rationally neutral fixed points respectively, or whether he means only the bases above one. Perhaps Daniel Geisler can shed some light on this.

The advantage of knowing the fixed points lies in the methods that depend on them: currently Koch/Bell/Carleman matrices, parabolic iteration, and hyperbolic iteration. These methods are useless if you don't know what the fixed points are.

Lyapunov numbers and exponents

I have had the toughest time finding a good definition of the Lyapunov number, because most sources actually give the definition of the Lyapunov exponent, and refer to it as the Lyapunov characteristic, making it even more confusing.

For the Lyapunov number of a map f(x), I will use the notation L(f, x). The definition that I found the easiest to understand (although technically incorrect for non-exponential functions) is as follows:

L(f, x) = lim_{n -> oo} |D f^n(x)|^(1/n)

and the Lyapunov exponent is its logarithm:

ln L(f, x) = lim_{n -> oo} (1/n) ln |D f^n(x)|

where f^n is the n-th iterate of f. Part of the confusion was in seeing different notations for similar things, like (lim, sup, max, etc.) for the limit and (', D, Jacobian, etc.) for the derivative; and sometimes the definition is written as a product of the derivatives at each iterate rather than as the derivative of one iterate, which I think is easier to read (by the chain rule the two forms agree, since D f^n(x) is exactly the product of f'(x_k) along the orbit).

- [the Lyapunov numbers for the individual base intervals were lost]

The advantage of knowing the Lyapunov characteristic number is that it is a measure of how well-behaved the iterations of a function are. And as you can see, for bases above e^(1/e) it is not even finite! So I guess this means that these bases are the worst-behaved in terms of convergence to a fixed point, which makes sense because there is no real fixed point for these bases.
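This is easy to see numerically using the product form of the definition, averaging ln|f'| along an orbit. A minimal sketch for f(x) = b^x (the function name and sample values are my own):

```python
import math

def lyapunov_exponent(b, x0=0.5, burn=200, n=5000):
    """Estimate the Lyapunov exponent of f(x) = b**x along the orbit of x0
    by averaging ln|f'(x_k)| (the product form of the definition);
    here f'(x) = ln(b) * b**x, i.e. ln(b) times the next orbit point."""
    lnb = math.log(b)
    x = x0
    for _ in range(burn):              # discard the transient
        x = b ** x
    total = 0.0
    for _ in range(n):
        x = b ** x                     # x_{k+1} = f(x_k)
        total += math.log(abs(lnb * x))
    return total / n

# b = sqrt(2): the orbit converges to the fixed point 2, so the exponent
# tends to ln|f'(2)| = ln(2 * ln(sqrt(2))) = ln(ln 2) < 0 (attracting)
h = lyapunov_exponent(math.sqrt(2))
lam = math.exp(h)                      # the Lyapunov number, here ln(2)
```

For bases above e^(1/e) the orbit of b^x runs off to infinity, so the running average grows without bound (and quickly overflows in floating point), which is the non-finite behavior described above.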

Methods of Tetration

- Koch/Bell/Carleman matrices (includes parabolic and hyperbolic iteration) -- works for [interval lost]; although it works outside this range, it produces complex values.

- Daniel Geisler's parabolic tetration series -- first 3 derivatives of parabolic iteration.

- Daniel Geisler's hyperbolic tetration series -- first 3 derivatives of hyperbolic iteration.

- Eri Jabotinsky's double-binomial expansion -- a simplification of parabolic iteration.

- Helms' exp(t log(M)) method -- should work in the same interval.

- Iteration-based solution of the Abel FE (Peter Walker's) -- only given for [interval lost].

- Matrix-based solution of the Abel FE (Andrew Robbins') -- works for [interval lost], although it converges faster for [interval lost].

- S. C. Woon's series (w=1) -- quickly converges for [interval lost] (but gives a straight line), may converge for [interval lost], and diverges for [interval lost].

- Ioannis Galidakis' solution -- does not satisfy the tetration functional equation for all x (only for integer x).

- Cliff Nelson's hyper-logarithms -- not continuous, but defines all hyper-(n+1)-logarithms in terms of hyper-n-logarithms.

- Robert Munafo's solution -- seems to be C^n, but is defined by a nested exponential, so it is hard to determine analyticity.

- Jay Fox's change-of-base -- theoretically speaking, should work for all [interval lost].

- Ingolf Dahl's solution -- based on fractional iteration (interval unknown).
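To illustrate the first entry above: for f(x) = b^x the truncated Carleman matrix has the closed form M[i][j] = (i*ln b)^j / j! (row i holds the Taylor coefficients of f(x)^i), composition of functions becomes matrix multiplication, and so integer iteration becomes matrix powers, which is exactly why exp(t log(M)) gives fractional iterates. A minimal sketch (truncation size N = 16 and base sqrt(2) are arbitrary choices), checking that M*M reproduces the Taylor series of b^(b^x):

```python
import math

def carleman_exp(b, N=16):
    """Truncated Carleman matrix of f(x) = b**x:
    row i holds the Taylor coefficients of f(x)**i = e^(i*x*ln b),
    so M[i][j] = (i*ln b)**j / j!."""
    lnb = math.log(b)
    return [[(i * lnb) ** j / math.factorial(j) for j in range(N)]
            for i in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

b = math.sqrt(2)
M = carleman_exp(b)
M2 = matmul(M, M)      # the Carleman matrix of f(f(x)) = b**(b**x)

# Row 1 of M2 holds the Taylor coefficients of b**(b**x); check at x = 0.1:
x = 0.1
series = sum(c * x ** j for j, c in enumerate(M2[1]))
exact = b ** (b ** x)
```

Replacing the matrix square M*M with a matrix square root (or, more generally, exp(t log(M))) is what turns this composition rule into half-iterates and continuous iteration.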

Of all of these, I think the methods that are most "natural" are: parabolic iteration, hyperbolic iteration, fractional iteration, and the two Abel FE solutions. However, I think the one with the most potential is Nelson's hyper-logarithms. They are defined almost like a continued fraction, but since they are not exactly a continued fraction, they are difficult to analyze. I think that if they were defined slightly differently, it would be possible to make them continuous, and possibly to make them correspond to other methods as well.
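To make the matrix-based Abel FE solution concrete: write the super-logarithm as a truncated power series A(x) = a_1 x + ... + a_N x^N, substitute it into the Abel equation A(b^x) = A(x) + 1, match Taylor coefficients on both sides, and solve the resulting N-by-N linear system. A minimal sketch (truncation size N = 14 and test base sqrt(2) are arbitrary choices here):

```python
import math

def slog_coeffs(b, N=14):
    """Truncated power-series solution of the Abel equation
    A(b**x) = A(x) + 1 with A(x) = sum_{n=1..N} a_n x^n.
    Since (b**x)**n = e^(n*x*ln b), matching Taylor coefficients gives:
      row m = 0:      sum_n a_n = 1                         (x^0 terms)
      row m = 1..N-1: sum_n a_n*(n*ln b)**m / m! - a_m = 0  (x^m terms)
    Solved by Gaussian elimination with partial pivoting (no numpy)."""
    lnb = math.log(b)
    A = [[1.0] * N]                       # row m = 0
    rhs = [1.0] + [0.0] * (N - 1)
    for m in range(1, N):
        row = [(n * lnb) ** m / math.factorial(m) for n in range(1, N + 1)]
        row[m - 1] -= 1.0                 # the -a_m term
        A.append(row)
    for col in range(N):                  # forward elimination
        piv = max(range(col, N), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, N):
            f = A[r][col] / A[col][col]
            for c in range(col, N):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    a = [0.0] * N                         # back substitution
    for r in range(N - 1, -1, -1):
        a[r] = (rhs[r] - sum(A[r][c] * a[c] for c in range(r + 1, N))) / A[r][r]
    return a                              # a[n-1] is the coefficient of x^n

b = math.sqrt(2)
a = slog_coeffs(b)
slog = lambda x: sum(c * x ** (n + 1) for n, c in enumerate(a))

# The Abel equation should hold approximately on the series' domain:
delta = slog(b ** 0.1) - slog(0.1)
```

The truncated solution only satisfies the Abel equation approximately, but the residual shrinks as N grows, and the inverse of the resulting super-logarithm is the tetration function.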

Andrew Robbins