Intervals of Tetration
#1
There are two subjects which have caused me a great deal of confusion, so I will try to make sense of them for myself, and if I get anything wrong, hopefully someone will correct me. The subjects are the classification of fixed points / periodic points / cycles, and the Lyapunov characteristic / number / exponent. Since both of these subjects are properties of self-maps, they will vary with the base of the exponential in question. The reason why I called this "Intervals of Tetration" is because of this dependence on the interval which the base lies in.

Classification of fixed points

For the classification of fixed points, I could only find 3 good sources:
  • A. A. Bennett, The Iteration of Functions of one Variable.
  • Robert L. Devaney, Complex Dynamical Systems: The Mathematics Behind the Mandelbrot and Julia Sets.
  • Daniel Geisler, http://tetration.org.
Bennett doesn't use names (he uses Type II, etc.), and Devaney's book (he uses attracting, repelling, etc.) doesn't mention all of Geisler's names (he uses parabolic, hyperbolic), so this is a list of fixed points I've been able to piece together from the sources I've been able to get hold of. I will be using $b$ as in $f(x) = b^x$; a small numerical check of the real-base classifications is sketched just after the list.
  • -- (attracting 3-cycle: )
  • -- unknown
  • -- (attracting 3-cycle)
  • -- unknown
  • -- (attracting 2-cycle)
  • $b = 0$ -- no fixed points (super-attracting 2-cycle: {0,1})
  • $0 < b < e^{-e}$ -- (real) repelling fixed point (attracting 2-cycle)
  • $b = e^{-e}$ -- rationally neutral fixed point (neutral 1-cycle)
  • $e^{-e} < b < 1$ -- attracting fixed point (attracting 1-cycle)
  • $b = 1$ -- super-attracting fixed point (super-attracting 1-cycle)
  • $1 < b < e^{1/e}$ -- hyperbolic, attracting fixed point (attracting 1-cycle)
  • $b = e^{1/e}$ -- parabolic, rationally neutral fixed point (neutral 1-cycle)
  • $b > e^{1/e}$ -- elliptic (complex) repelling fixed point (repelling 1-cycle)
where $e^{-e} \approx 0.0660$ and $e^{1/e} = \eta \approx 1.4447$.
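To sanity-check the real-base rows above, here is a minimal numerical sketch (my own illustration, not taken from Bennett, Devaney, or Geisler; the function names and sample bases are arbitrary choices). It brackets the real fixed point of $b^x$ by bisection and classifies it by the size of the multiplier $x_0 \ln b$. The parabolic base $b = e^{1/e}$ is a tangency of $b^x$ with the diagonal, so the sign-change test deliberately skips it.

    import math

    def real_fixed_point(b):
        """Real fixed point of f(x) = b**x, found by bisecting g(x) = b**x - x.
        Returns None when no sign change is found on the bracket (in particular
        for b > e**(1/e), where there is no real fixed point at all)."""
        def g(x):
            return b**x - x
        lo, hi = 1e-12, math.e              # bracket that works for 0 < b < e**(1/e)
        if g(lo) * g(hi) > 0:
            return None
        for _ in range(200):                # plain bisection
            mid = (lo + hi) / 2
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    def classify(b):
        x0 = real_fixed_point(b)
        if x0 is None:
            return "no real fixed point (only complex, repelling ones)"
        m = abs(x0 * math.log(b))           # |f'(x0)| = |x0 * ln(b)|, the multiplier
        if m < 1e-9:
            kind = "super-attracting"
        elif m < 1 - 1e-9:
            kind = "attracting"
        elif m <= 1 + 1e-9:
            kind = "rationally neutral"
        else:
            kind = "repelling (an attracting 2-cycle takes over)"
        return f"fixed point ~ {x0:.6f}, |multiplier| ~ {m:.6f}: {kind}"

    for base in (0.03, math.exp(-math.e), 0.5, 1.0, 1.3, 2.0):
        print(round(base, 4), "->", classify(base))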

One of the confusing aspects of these terms is that hyperbolic and parabolic are only used by Geisler, and as such it is hard to say whether he uses them as synonyms for attracting fixed point and rationally neutral fixed point respectively, or whether he means only the bases above one. Perhaps Daniel Geisler can shed some light on this.

The advantage of knowing the fixed points lies in the methods that depend on them; right now these are Koch/Bell/Carleman matrices, parabolic iteration, and hyperbolic iteration. These methods are useless if you don't know what the fixed points are.

Lyapunov numbers and exponents

I have had the toughest time finding a good definition of the Lyapunov number, because most sources actually give the definition of the Lyapunov exponent and refer to it as the Lyapunov characteristic, which makes it even more confusing.

For the Lyapunov number of a map f(x), I will use the notation $\mathrm{Lya}[f](x)$. The definition that I found the easiest to understand (although technically incorrect for non-exponential functions) is as follows:
$\mathrm{Lya}[f](x) = \lim_{n\to\infty} \left| \tfrac{d}{dx} f^{\circ n}(x) \right|^{1/n}$, and the Lyapunov exponent is $\ln \mathrm{Lya}[f](x)$. Part of the confusion was in seeing different notations for similar things, like (lim, sup, max, etc.) for the limiting process and (', D, Jacobian, etc.) for derivatives, and sometimes the definition is written as a product of derivatives at each iterate rather than as the derivative of one iterate, which I think is easier to read.

  • for

  • for

  • for

  • for

The advantage of knowing the Lyapunov characteristic number is that it is a measure of how well-behaved iterations of a function are. And as you can see, for bases above $e^{1/e}$ it is not even finite! So I guess this means that these bases are the worst-behaved in terms of convergence to a fixed point, which makes sense because there is no real fixed point for these bases.
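As a sanity check on the definition above, here is a minimal numerical sketch (my own illustration; the function name and the sample bases are arbitrary). It averages $\ln|f'|$ along the orbit of $f(x) = b^x$, which by the chain rule is the same as taking the $n$-th root of the derivative of one iterate. Bases above $e^{1/e}$ are left out because their orbits diverge and the running average grows without bound, matching the "not even finite" remark.

    import math

    def lyapunov_exponent(b, x0=0.5, n=5000):
        """Average of ln|f'(x_k)| along the orbit x_{k+1} = b**x_k of f(x) = b**x.
        The Lyapunov number is exp() of the returned value."""
        lnb = math.log(b)
        x = x0
        total = 0.0
        for _ in range(n):
            total += math.log(abs(lnb * b**x))   # f'(x) = ln(b) * b**x
            x = b**x                             # step to the next iterate
        return total / n

    for b in (0.5, 1.3):                         # bases with an attracting real fixed point
        lam = lyapunov_exponent(b)
        print(b, "exponent:", round(lam, 4), "number:", round(math.exp(lam), 4))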

Methods of Tetration
  • Koch/Bell/Carleman matrices (includes parabolic and hyperbolic iteration) -- works for (although it works outside this range, it produces complex values).
    • Daniel Geisler's parabolic tetration series -- first 3 derivatives of parabolic iteration.
    • Daniel Geisler's hyperbolic tetration series -- first 3 derivatives of hyperbolic iteration.
    • Eri Jabotinsky's double-binomial expansion -- a simplification of parabolic iteration.
    • Helms' exp(t log(M)) method -- should work in the same interval.
  • Iteration-based solution of Abel FE (Peter Walker's) -- only given for .
  • Matrix-based solution of Abel FE (Andrew Robbins') -- works for (although it converges faster for ).
  • S. C. Woon's series (w=1) -- quickly converges for (but straight line), may converge for , diverges for .
  • Ioannis Galidakis' solution -- does not satisfy for all x (only for integer x).
  • Cliff Nelson's hyper-logarithms -- not continuous, but defines all hyper-(n+1)-logarithms in terms of hyper-n-logarithms.
  • Robert Munafo's solution -- seems C^n, but defined by a nested exponential, so hard to determine analyticity.
  • Jay Fox's change-of-base -- theoretically speaking, should work for all .
  • Ingolf Dahl's solution -- based on fractional iteration (interval unknown).

Of all of these, I think the methods that are most "natural" are parabolic iteration, hyperbolic iteration, fractional iteration, and the two Abel FE solutions. However, I think the one with the most potential is Nelson's hyper-logarithms. They are defined almost like a continued fraction, but since they are not exactly a continued fraction they are difficult to analyze; I think that if they were defined slightly differently, it would be possible to make them continuous, and possibly to make them correspond to other methods as well.

Andrew Robbins
#2
andydude Wrote:Methods of Tetration
  • Koch/Bell/Carleman matrices (includes parabolic and hyperbolic iteration) -- works for (although it works outside this range, it produces complex values).
    • Daniel Geisler's parabolic tetration series -- first 3 derivatives of parabolic iteration.
    • Daniel Geisler's hyperbolic tetration series -- first 3 derivatives of hyperbolic iteration.
    • Eri Jabotinsky's double-binomial expansion -- a simplification of parabolic iteration.
    • Helms' exp(t log(M)) method -- should work in the same interval.
  • Iteration-based solution of Abel FE (Peter Walker's)-- only given for .
  • Matrix-based solution of Abel FE (Andrew Robbins')-- works for (although it converges faster for ).
  • S.C.Woon's series (w=1) -- quickly converges for (but straight line), may converge for , diverges for .
  • Ioannis Galidakis' solution -- does not satisfy for all x (only for integer x).
  • Cliff Nelson's hyper-logarithms -- not continuous, but defines all hyper-(n+1)-logarithms in terms of hyper-n-logarithms.
  • Robert Munafo's solution -- seems C^n, but defined by a nested exponential, so hard to determine analyticity.
  • Jay Fox's change-of-base -- theoretically speaking, should work for all .
  • Ingolf Dahl's solution -- based on fractional iteration (interval unknown).

Don't forget Peter Walker's solution in his paper Infinitely Differentiable Generalized Logarithmic and Exponential Functions.

I've gone back and played with his solution, and I think it gives the same results as mine, at least for converting from base eta to base e. My change of base formula is essentially the same as his "h" function (though his "h" function is specific for converting between base e and base eta, using a double-logarithmic scale), and his "g" function is a painfully slow way of calculating the parabolic continuous iteration of e^z-1, which is the equivalent of the double-logarithm of my cheta function.

My change of base formula requires a proper superexponential function, which my cheta function is. The parabolic continuous iteration of the decremented natural logarithm is not. However, I work in double-logarithmic math to use the change of base formula, and the double logarithm of my cheta function is the parabolic iteration of e^z-1. So both our solutions use the same underlying math, but calculate the results in different ways.

I've already noted that my solution and Andrew's are different, and Peter himself noted that his (h/g) solution appears to be different from some "matrix method", which is likely similar to or the same as Andrew's slog.
~ Jay Daniel Fox
#3
jaydfox Wrote:
andydude Wrote:Iteration-based solution of Abel FE (Peter Walker's)-- only given for .

I didn't forget it. :)

Andrew Robbins
#4
andydude Wrote:
jaydfox Wrote:
andydude Wrote:Iteration-based solution of Abel FE (Peter Walker's)-- only given for .

I didn't forget it. :)

Andrew Robbins

Sorry, guess I was skimming and stopped at "Abel". :rolleyes:
~ Jay Daniel Fox
#5
jaydfox Wrote:I've already noted that my solution and Andrew's are different, and Peter himself noted that his (h/g) solution appears to be different from some "matrix method", which is likely similar to or the same as Andrew's slog.

Jay,

Triggered by some discussion in the sci.math newsgroup, I updated my "operators (update 4b)" article, which addresses the problems of (and with) the matrix method in some more detail than the previous draft. Maybe these considerations are not helpful (or even misleading) in the current discussion, due to some problems of understanding on my side, but possibly it clarifies how the matrix method as proposed by me works.

Gottfried
Gottfried Helms, Kassel
#6
andydude Wrote:There are two subjects which have caused me a great deal of confusion, so I will try and make sense of them to myself, and if I get anything wrong, hopefully someone will correct me of my error. The subjects are Classification of fixed points / periodic points / cycles, and the Lyapunov characteristic / number / exponent. Since both of these subjects are properties of self-maps, they will vary with the base of the exponential in question.

I recommend A History of Complex Dynamics: From Schröder to Fatou and Julia by Daniel S. Alexander and Complex Dynamics by Lennart Carleson and Theodore W. Gamelin for an explanation of the classification of fixed points.

The Lyapunov characteristic is the most important concept here. There are linearization theorems in both complex dynamics and higher-dimensional dynamics that show that the Lyapunov characteristic number, or multiplier, controls the dynamics of the map. Historically it was found that certain cases of dynamical systems obeyed functional equations, but that understanding all cases of a given dynamical system like tetration involves understanding several different functional equations. Furthermore, the dynamics of these cases had more in common with the cases of other functions obeying the same functional equation than with the different cases of the same function. The classification of fixed points is about proving that certain Lyapunov characteristic numbers are associated with certain functional equations.

The Lyapunov characteristic number is the first derivative of the function being mapped, taken at the function's fixed point. Let $x_0$ be a fixed point of $f$, so $f(x_0) = x_0$; then $\lambda = f'(x_0)$. The Lyapunov characteristic numbers are also known as the multipliers because $\lambda$ is the multiplier in Schröder's functional equation $\Psi(f(x)) = \lambda \Psi(x)$. The Lyapunov exponent is just the log of the Lyapunov characteristic number and is handy because if its real value is negative then the fixed point is an attractor, and if the real value is positive then the fixed point is a repellor. The Lyapunov exponent nicely generalizes to matrices: if the Lyapunov exponent is a positive semidefinite matrix, then the fixed point is a repellor. A number of approaches to extending the definition of tetration are attempts to linearize exponential dynamics.

The Lyapunov characteristic number in tetration is just the log of the fixed point therefore is the location of rationally neutral fixed points for rational values of while has rationally neutral fixed points for rational values of .
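As a one-line check of the first claim, using nothing beyond the fixed-point relation $b^{x_0} = x_0$ for $f(x) = b^x$:
\[
  \lambda \;=\; f'(x_0) \;=\; b^{x_0}\ln b \;=\; x_0 \ln b \;=\; \ln\!\bigl(b^{x_0}\bigr) \;=\; \ln x_0 ,
\]
so the multiplier at a fixed point of $b^x$ is indeed the log of that fixed point.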

Most fixed points are hyperbolic fixed points; superattracting, rationally neutral, and parabolic rationally neutral fixed points are the exceptions. I haven't seen the term elliptic fixed point widely used in dynamics; instead the terms rationally neutral and irrationally neutral are used. The fact that the exponential function is periodic results in $b^x$ having a countably infinite number of fixed points for a specific base $b$.

As to the classification of intervals in tetration: in general the different intervals may have different behaviors when exponentially iterated, but they have fixed points in the complex plane under logarithmic iteration. For $0 < b < e^{-e}$, iteration of $b^x$ cycles through two values, but iteration of $\log_b(x)$ has a fixed point. The map $b^x$ for $b > e^{1/e}$ does have fixed points: an infinite number of hyperbolic repelling fixed points in the complex plane.
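A quick numerical illustration of this last point (just a sketch; the base $b = 0.04 < e^{-e}$ and the seed $0.5$ are arbitrary choices): iterating $b^x$ settles into a 2-cycle, while iterating the inverse map $\log_b x$ from the same seed settles onto the real fixed point.

    import math

    b = 0.04                                     # a base below e**(-e) ~ 0.0660

    x = 0.5
    for _ in range(200):
        x = b**x                                 # exponential iteration
    print("b**x iteration ends on the 2-cycle:", round(x, 6), "<->", round(b**x, 6))

    y = 0.5
    for _ in range(200):
        y = math.log(y) / math.log(b)            # logarithmic iteration, base b
    print("log_b iteration settles at:", round(y, 6), "with b**y =", round(b**y, 6))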
#7
andydude Wrote:Methods of Tetration
  • Koch/Bell/Carleman matrices (includes parabolic and hyperbolic iteration) -- works for (although it works outside this range, it produces complex values).
    • Daniel Geisler's parabolic tetration series -- first 3 derivatives of parabolic iteration.
    • Daniel Geisler's hyperbolic tetration series -- first 3 derivatives of hyperbolic iteration.
    • Eri Jabotinsky's double-binomial expansion -- a simplification of parabolic iteration.
    • Helms' exp(t log(M)) method -- should work in the same interval.
  • Iteration-based solution of Abel FE (Peter Walker's)-- only given for .
  • Matrix-based solution of Abel FE (Andrew Robbins')-- works for (although it converges faster for ).
  • S.C.Woon's series (w=1) -- quickly converges for (but straight line), may converge for , diverges for .
  • Ioannis Galidakis' solution -- does not satisfy for all x (only for integer x).
  • Cliff Nelson's hyper-logarithms -- not continuous, but defines all hyper-(n+1)-logarithms in terms of hyper-n-logarithms.
  • Robert Munafo's solution -- seems C^n, but defined by a nested exponential, so hard to determine analyticity.
  • Jay Fox's change-of-base -- theoretically speaking, should work for all .
  • Ingolf Dahl's solution -- based on fractional iteration (interval unknown).

This is a nice list of different approaches to tetration. While I would like to extend my work to matrices, it currently doesn't involve matrices, although I did find that I obtained results experimentally that were consistent with Bell matrices. I use Faà di Bruno's formula for the derivatives of the iterated function, set $x$ to the fixed point, and solve. My Mathematica software has computed the first 8 derivatives of the continuous iterate. This then simplifies for the cases of parabolic and hyperbolic iteration, where I have only listed the first three derivatives for the sake of brevity.
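For reference, the first two of those derivatives take the following shape in the hyperbolic case (a standard result, quoted here only as a sketch rather than as the output of my software; the fixed point is shifted to $0$ and $\lambda = f'(0)$ with $\lambda \neq 0, 1$):
\[
  \bigl(f^{\circ t}\bigr)'(0) = \lambda^{t},
  \qquad
  \bigl(f^{\circ t}\bigr)''(0) = f''(0)\,\lambda^{\,t-1}\,\frac{\lambda^{t}-1}{\lambda-1},
\]
which reduce to the derivatives of the identity map at $t = 0$ and to those of $f$ itself at $t = 1$; each higher order brings in all the lower-order coefficients of $f$ in the same way.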

Daniel Geisler
#8
Daniel Wrote:
andydude Wrote:Methods of Tetration
  • Koch/Bell/Carleman matrices (includes parabolic and hyperbolic iteration) -- works for (although it works outside this range, it produces complex values).
    • Daniel Geisler's parabolic tetration series -- first 3 derivatives of parabolic iteration.
    • Daniel Geisler's hyperbolic tetration series -- first 3 derivatives of hyperbolic iteration.
    • Eri Jabotinsky's double-binomial expansion -- a simplification of parabolic iteration.
    • Helms' exp(t log(M)) method -- should work in the same interval.
  • Iteration-based solution of Abel FE (Peter Walker's)-- only given for .
  • Matrix-based solution of Abel FE (Andrew Robbins')-- works for (although it converges faster for ).
  • S.C.Woon's series (w=1) -- quickly converges for (but straight line), may converge for , diverges for .
  • Ioannis Galidakis' solution -- does not satisfy for all x (only for integer x).
  • Cliff Nelson's hyper-logarithms -- not continuous, but defines all hyper-(n+1)-logarithms in terms of hyper-n-logarithms.
  • Robert Munafo's solution -- seems C^n, but defined by a nested exponential, so hard to determine analyticity.
  • Jay Fox's change-of-base -- theoretically speaking, should work for all .
  • Ingolf Dahl's solution -- based on fractional iteration (interval unknown).

This is a nice list of different approaches to tetration.

Though it contains a heap of errors:
  • parabolic iteration works merely for the case $b = e^{1/e}$. Jabotinsky's double binomial expansion works only in the parabolic case.
  • Helms' method works generally for and yields real values. For it obtains complex values.
  • Woon's series was not designed for functions but for operators. I think it is equal to the parabolic case for and using a function instead of the operator.
  • I think Jay's method works merely for .
  • I think there are two solutions by Ioannis, one that is merely continuous but satisfies and one that is and satisfies the condition only for integer .
#9
bo198214 Wrote:Though it contains a heap of errors:
  • parabolic iteration works merely for the case $b = e^{1/e}$. Jabotinsky's double binomial expansion works only in the parabolic case.
  • Helms' method works generally for and yields real values. For it obtains complex values.
  • Woon's series was not designed for functions but for operators. I think it is equal to the parabolic case for and using a function instead of the operator.
  • I think Jay's method works merely for .
  • I think there are two solutions by Ioannis, one that is merely continuous but satisfies and one that is and satisfies the condition only for integer .

One quick observation concerning some of the alternate approaches (like my first solution) considered above: I *think* (but I am not absolutely sure about it) that any method which defines for reasonable , for could conceivably be extended to a , by using Andrew's method. For example, on my method for the first solution I define for . One could follow Andrew's impositions by requiring that . I haven't played around with my first solution to see if this works, but I suspect that it might for . I don't think this approach will work if is linear in [0,1], because higher order derivatives will vanish.
#10
UVIR Wrote:I *think* (but I am not absolutely sure about it) that any method which defines for reasonable , for could conceivably be extended to a

I don't get this. If I define it on $[0,1]$ and I demand that the tetration recurrence holds, then the function is already determined for all real arguments; there is no place left for further manipulations (i.e. to make it differentiable).

Andrew's approach was to choose it as an analytic function there, and he determines the coefficients of the series expansion at 0 by demanding that it is infinitely differentiable (or even analytic). However, this approach works computably only for the slog, the inverse of the superexponential.
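For concreteness, here is my reading of that computation as a minimal sketch (assuming numpy is available; the normalization $\mathrm{slog}_b(0) = -1$, the truncation size, and all names are my own choices, not necessarily Andrew's exact setup). Expand $\mathrm{slog}_b$ as a power series at $0$, impose $\mathrm{slog}_b(b^z) = \mathrm{slog}_b(z) + 1$ coefficient by coefficient up to order $n$, and solve the resulting $n \times n$ linear system.

    import math
    import numpy as np

    def slog_coefficients(b, n=20):
        """Approximate coefficients c_1..c_n with slog_b(z) ~ -1 + sum_k c_k z**k,
        obtained by truncating slog_b(b**z) = slog_b(z) + 1 and matching the
        coefficients of z**0 .. z**(n-1)."""
        lnb = math.log(b)
        A = np.zeros((n, n))
        rhs = np.zeros(n)
        rhs[0] = 1.0                             # the "+1" sits in the z**0 equation
        for j in range(n):                       # equation: coefficient of z**j
            for k in range(1, n + 1):            # unknown c_k
                A[j, k - 1] = (k * lnb) ** j / math.factorial(j)   # [z**j] of (b**z)**k
                if j == k:
                    A[j, k - 1] -= 1.0           # minus [z**j] of z**k
        return np.linalg.solve(A, rhs)

    def slog(b, z, c):
        return -1.0 + sum(ck * z ** (k + 1) for k, ck in enumerate(c))

    b = math.sqrt(2)
    c = slog_coefficients(b)
    z = 0.3
    print(slog(b, b ** z, c) - slog(b, z, c))    # should be close to 1

How quickly that printed value approaches 1 as $n$ grows is essentially the convergence question for this kind of truncation.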



