The "cheta" function
#11
(08/06/2009, 10:02 PM)jaydfox Wrote: Hmm, that sparked a memory of a long-forgotten conversation we had:
http://math.eretrandre.org/tetrationforu...ght=entire

It's amazing how much more sense all of that makes now (and it's clear from my posts where my misunderstandings at the time were, as well as my gaps in understanding complex analysis). And yes, looking at just the descriptions in that old discussion, it would appear that cheta is nothing new.

Yes, things are also much clearer for me now. As I already suggested, Milnor's book deals exactly with (the dynamics of) these cases; it is a must-read.

E.g. non-parabolic fixed points, \( |f'(0)|\neq 0,1 \), always have a neighborhood that is either attracting ( \( |f'(0)|<1 \) ) or repelling ( \( |f'(0)|>1 \) ). There the regular-iteration power series has non-zero convergence radius.

Parabolic fixed points, however, have no such neighborhood but instead alternating attracting and repelling petals (the Leau-Fatou flower). If the power series is of the form
\( z+a_m z^m + a_{m+1} z^{m+1} + \dots \), \( a_m\neq 0 \), one says it is of multiplicity \( m \).
E.g. exp(z)-1 at 0, or \( \eta^z \) at \( e \), is of multiplicity 2.
The Leau-Fatou flower then has m-1 attracting and m-1 repelling petals.
In our case e^x-1 the single repelling petal covers \( x>0 \) and the single attracting petal covers \( x<0 \).
Each petal has an associated (regular) Abel function (and hence a regular iteration).
The regular iterations of the different petals have the same asymptotic power series at the fixed point. But this asymptotic power series mostly has convergence radius 0 (for non-integer iterates).
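For concreteness, here is a minimal numerical sketch of the two petals of e^x - 1 at 0 (my own illustration, not part of the original discussion; the starting points and the escape threshold are arbitrary choices): an orbit started at a negative point creeps toward the fixed point, while an orbit started at a positive point escapes.

```python
# Sketch: the attracting (x < 0) and repelling (x > 0) petals of f(x) = e^x - 1
# at its parabolic fixed point 0.  Starting points and the escape bound are
# arbitrary illustrative choices.
import math

def f(x):
    return math.exp(x) - 1.0

for x0 in (-0.5, 0.5):
    x = x0
    for n in range(1, 1001):
        x = f(x)
        if abs(x) > 1e6:    # the orbit has escaped: we started in the repelling petal
            print(f"x0 = {x0:+}: escaped after {n} iterations (x ~ {x:.3g})")
            break
    else:                   # never escaped: the orbit creeps toward 0 (roughly like -2/n)
        print(f"x0 = {x0:+}: after 1000 iterations x = {x:.6g}")
```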

I hope this makes the matter clearer for our forum members.

Quote:The bright side is, this saves me the work of having to rigorously prove various properties, as they apparently have been proven for nearly 20 years. I need only work on getting good numerical approximations (several thousands of bits of accuracy), to use in my change-of-base formula.

Haha, don't you dare! You have to show that your algorithm indeed reproduces the regular iteration!
And whether the corresponding superfunction for base \( b>\eta \) is holomorphic is also not settled yet; we only know about infinite differentiability from Walker.
That is, if his construction uses your change of base at all; finding that out is now your task.
#12
(08/06/2009, 10:26 PM)bo198214 Wrote:
Quote:The bright side is, this saves me the work of having to rigorously prove various properties, as they apparently have been proven for nearly 20 years. I need only work on getting good numerical approximations (several thousands of bits of accuracy), to use in my change-of-base formula.

Haha, don't you dare! You have to show that your algorithm indeed reproduces the regular iteration!
And whether the corresponding superfunction for base \( b>\eta \) is holomorphic is also not settled yet; we only know about infinite differentiability from Walker.
That is, if his construction uses your change of base at all; finding that out is now your task.

I recall he used something similar to my change of base formula; indeed, when I first read [1], it was a comment at the bottom of page 729 (the page headed "3. Values of Generalized Logarithms"), where it mentioned the h(x) function has a sufficient approximation after at most 5 iterations.

This statement seemed odd, so I worked out what he meant. It was then that I realized that for any two bases of tetration, a and b, each real and greater than eta, and a sufficiently large real x, the value \( \lim_{k \to \infty} \log_a^{\circ k} \left( \exp_b^{\circ k} (x) \right) \) is well-defined, and furthermore relatively easy to calculate to full machine precision with, usually, a very small k (or to arbitrarily high precision with some math libraries, limited only by hardware). (Note: by "sufficiently large real x", I mean that x must be large enough that it does not become negative while logarithms remain to be taken, which would lead to non-unique complex results.)
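For illustration, here is a minimal numerical sketch of this limit (my own code, not Jay's; the bases and the starting value are arbitrary sample choices with both bases above eta). The successive approximants stabilize very quickly, although computing the inner tower naively is only feasible for small k.

```python
# Numerical sketch of the change-of-base limit
#   lim_{k->oo} log_a^[k]( exp_b^[k](x) )   for real bases a, b > eta = e^(1/e).
# mpmath is used because the inner tower exp_b^[k](x) becomes astronomically
# large; beyond k ~ 3 even this fails, and one would peel outer logarithms off
# analytically via log_a(b^y) = y * log_a(b).
from mpmath import mp, mpf, log, power

mp.prec = 100                      # working precision in bits

def approximant(a, b, x, k):
    """k-th approximant  log_a^[k]( exp_b^[k](x) )."""
    t = mpf(x)
    for _ in range(k):             # build the inner tower exp_b^[k](x)
        t = power(b, t)
    for _ in range(k):             # strip it back down with base-a logarithms
        t = log(t) / log(a)
    return t

a, b, x = 2, 3, 3                  # arbitrary sample values, both bases > eta
for k in range(4):
    print(k, approximant(a, b, x, k))
```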

I had already deduced that tetration in base eta was solvable exactly, without bizarre matrix inversions or whatever, but this would correspond to "heta", the solution approaching the fixed point from the negative real direction. In other words, a simple, elegant formula existed. Furthermore, the formula would be provably the unique solution, be infinitely differentiable, etc. Up to that point, everything I'd read about tetration suggested that tetration was impossible to solve uniquely or with infinite differentiability, but the literature I had access to was sparse and outdated. So I was excited to be able to solve a base, even if it wasn't e or 2 or 10 or something more useful.

What I needed was a way to get a solution for bases larger than eta, and armed with my change of base formula, I decided to approach the fixed point from the positive real direction, which gave me the cheta function. I actually didn't realize the connection with exp(z)-1 at the time, which is kind of a shame because I got the idea from having read Walker's paper! I didn't see the connection at first because I was approaching the problem from the point of view of "tetration", as opposed to thinking in terms of Abel functions and parabolic fixed points and what-have-you.

[1] Walker, P. (1991). Infinitely differentiable generalized logarithmic and exponential functions. Math. Comput., 57(196), 723–733.
~ Jay Daniel Fox
#13
(08/06/2009, 11:05 PM)jaydfox Wrote: I recall he used something similar to my change of base formula; indeed, when I first read [1], it was a comment at the bottom of page 729 (the page headed "3. Values of Generalized Logarithms"), where it mentioned the h(x) function has a sufficient approximation after at most 5 iterations.

Walker constructs an (infinitely differentiable) auxiliary function
\( h(x)=\lim_{n\to\infty} l^{[n]}(\exp^{[n]}(x)) \), where \( l(x)=\ln(x+1) \),
which satisfies
\( h(e^x)=e^{h(x)}-1 \)

Then he composes this function with a regular Abel function \( g \) of \( e^x-1 \), i.e. one satisfying \( g(e^x-1)=g(x)+1 \).

The resulting function \( G(x)=g(h(x)) \) satisfies
\( G(e^x)=g(h(e^x))=g(e^{h(x)}-1)=g(h(x))+1=G(x)+1 \)
i.e. is an Abel function of e^x.
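As a quick numerical sanity check (my own sketch, not code from Walker's paper; the sample point and truncation depths are arbitrary), one can approximate h by its finite truncations \( h_n(x)=l^{[n]}(\exp^{[n]}(x)) \) and watch both the convergence and the functional equation emerge:

```python
# Sketch of Walker's auxiliary function h(x) = lim l^[n](exp^[n](x)), l(y) = ln(y+1),
# via its finite truncations, plus a check of h(e^x) = e^{h(x)} - 1.
from mpmath import mp, mpf, exp, log

mp.prec = 100

def h(x, n):
    """Truncation l^[n](exp^[n](x)) of Walker's auxiliary function."""
    t = mpf(x)
    for _ in range(n):
        t = exp(t)
    for _ in range(n):
        t = log(t + 1)
    return t

x = mpf('0.5')
for n in range(1, 5):        # the inner exp-tower overflows for much larger n
    print(n, h(x, n))

# functional-equation check: l^[n](exp^[n](e^x)) equals e^{h_{n+1}(x)} - 1 exactly,
# so the two printed values should agree to working precision
print(h(exp(x), 4), exp(h(x, 5)) - 1)
```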

So where is it similar to your change of base?
#14
For comparison let me restate your formula from an earlier thread.
I focus only on the superfunction, not so much on the specific constants that make it satisfy \( 0\mapsto 1 \). Your double limit can be split into two limits, first:

\( T_b(x) = x + b^{x-1+b^{x-2 + b^{\dots}}} \) and then
the superexponential to base b:
\( \operatorname{sexp}_b(x) = \lim_{n\to \infty} \log_b^{[n]} ( T_b(x_0+x+n) ) \)
for a suitable \( x_0 \)

\( \operatorname{sexp}_b \) indeed satisfies the required equality:
\( \begin{align*}\operatorname{sexp}_b(x+1)&=\lim_{n\to \infty} \log_b^{[n]} ( T_b(x_0+x+1+n) )\\
&=\lim_{n\to \infty} \exp_b\{\log_b^{[n+1]} ( T_b(x_0+x+(1+n)) )\} \\
&=\exp_b\{\lim_{n\to \infty}\log_b^{[n+1]} ( T_b(x_0+x+(n+1)) )\}\\
&=b^{\operatorname{sexp}_b(x)}\end{align*} \)

Interestingly this does not depend that much on \( T_b \).
We can choose any function \( T_b \) as long as the limit for \( \operatorname{sexp}_b \) exists.
#15
Now let us refocus on Walker's function. He took the Abel function \( G \) for base \( b=e \), so we generalize to arbitrary \( b \) by:
\( h_b(x)=\lim_{n\to\infty} l^{[n]} (\exp_b^{[n]}(x)) \) (though I am not sure whether it still converges for b>e. Jay, Sheldon?)
It then satisfies \( h_b(b^x)=e^{h_b(x)}-1 \).

Also we want to have the inverse \( G_b^{-1} = h_b^{-1} \circ g^{-1} \), where
\( h_b^{-1}(x)=\lim_{n\to\infty} \log_b^{[n]}(e^{[n]}(x)) \), \( e(x)=e^x-1 \)
and \( g^{-1} \) is a regular superfunction of \( e(x)=\exp(x)-1 \), i.e. \( g^{-1}(x+1)=e(g^{-1}(x)) \).

Together this yields
\( \operatorname{sexp}_b(x)=G_b^{-1}(x) = \lim_{n\to\infty} \log_b^{[n]}(e^{[n]}(g^{-1}(x))) = \lim_{n\to\infty} \log_b^{[n]}(g^{-1}(x+n)) \)

So we have a similar case as in the previous post, but with \( T_b(x) = g^{-1}(x) \).

By modifying \( T_b \) we should get tons of variants of \( \operatorname{sexp}_b \).
#16
Summarizing, I would say that your sexp is not the same as Walker's, because
your \( T_b \) satisfies \( T_b(x+1)=x+1+b^{T_b(x)} \),
while Walker's \( T_b \) satisfies \( T_b(x+1)=e^{T_b(x)}-1 \) (at least in the case b=e).
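As a small numerical cross-check (my own sketch, not code from the thread; the truncation depth and the sample values of b and x are arbitrary), one can evaluate Jay's tower by truncating it at a finite depth and verify the recurrence \( T_b(x+1)=x+1+b^{T_b(x)} \):

```python
# Sketch: evaluate T_b(x) = x + b^{(x-1) + b^{(x-2) + ...}} by truncating the
# tower at a finite depth, then check the recurrence T_b(x+1) = x+1 + b^{T_b(x)}.
from mpmath import mp, mpf, power

mp.prec = 100

def T(b, x, depth=200):
    """Truncated tower x + b^{(x-1) + b^{(x-2) + ...}}."""
    t = mpf(0)                      # approximate the deep tail by 0
    for j in range(depth, 0, -1):   # build the tower from the inside out
        t = power(b, (x - j) + t)
    return x + t

b, x = mpf(2), mpf(2)
lhs = T(b, x + 1)
rhs = x + 1 + power(b, T(b, x))
print("T_b(x+1)         =", lhs)
print("x+1 + b^T_b(x)   =", rhs)
print("difference       =", lhs - rhs)   # small: only truncation/rounding error
```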
#17
(08/07/2009, 08:15 AM)bo198214 Wrote: For comparison let me restate your formula from an earlier thread.
I focus only on the superfunction, not so much on the specific constants that make it satisfy \( 0\mapsto 1 \). Your double limit can be split into two limits, first:

\( T_b(x) = x + b^{x-1+b^{x-2 + b^{\dots}}} \) and then
the superexponential to base b:
\( \operatorname{sexp}_b(x) = \lim_{n\to \infty} \log_b^{[n]} ( T_b(x_0+x+n) ) \)
for a suitable \( x_0 \)
Ah, I remember that one... I stopped pursuing that particular approach to tetration because its answers differed from those of the cheta + base-change approach and from Andrew's solution (the "intuitive" solution?). (Not saying it's not at least partially relevant here, just pointing out that it was my first attempt at a solution, and I dropped it in favor of cheta with base-change, which was my second attempt.)
~ Jay Daniel Fox
#18
(08/07/2009, 07:26 AM)bo198214 Wrote:
(08/06/2009, 11:05 PM)jaydfox Wrote: I recall he used something similar to my change of base formula; indeed, when I first read [1], it was a comment at the bottom of page 729 (the page headed "3. Values of Generalized Logarithms"), where it mentioned the h(x) function has a sufficient approximation after at most 5 iterations.

Walker constructs an (infinitely differentiable) auxiliary function
\( h(x)=\lim_{n\to\infty} l^{[n]}(\exp^{[n]}(x)) \), where \( l(x)=\ln(x+1) \),
which satisfies
\( h(e^x)=e^{h(x)}-1 \)

Then he composes this function with a regular Abel function \( g \) of \( e^x-1 \), i.e. one satisfying \( g(e^x-1)=g(x)+1 \).

The resulting function \( G(x)=g(h(x)) \) satisfies
\( G(e^x)=g(h(e^x))=g(e^{h(x)}-1)=g(h(x))+1=G(x)+1 \)
i.e. is an Abel function of e^x.

So where is it similar to your change of base?

My change of base formula relies on the following:
\( \lim_{k \to \infty} \log_a^{\circ k} \left( \exp_b^{\circ k} (x) \right) \).

I had found that it was computationally more accurate to work with the double logarithm of x. Please review this post:

http://math.eretrandre.org/tetrationforu...306#pid306

It's post #36 in that thread, if the link doesn't take you right to it. I remembered last night that I had discussed the double logarithmic approach before, so this morning I searched until I found it.

As you'll see at the bottom of that post, I had previously made the connection between cheta and the iteration of e^x-1.

I want to work out the maths a little so that I post something fairly coherent, but essentially, after reviewing things a bit, I believe that my cheta with base-change is equivalent to the inverse of Walker's approach (i.e., equivalent to \( \hat{G}^{-1}(x) \)).
~ Jay Daniel Fox
#19
(08/07/2009, 05:14 PM)jaydfox Wrote: My change of base formula relies on the following:
\( \lim_{k \to \infty} \log_a^{\circ k} \left( \exp_b^{\circ k} (x) \right) \).

Just for the record: there was a discussion about this recently in sci.math. See Logarithm of repeated exponential. Also of interest is a message from Robert Munafo, who used his "hypercalc" for an approximation with higher iterates.

Gottfried
Gottfried Helms, Kassel
#20
(08/07/2009, 04:18 PM)jaydfox Wrote: Ah, I remember that one... I stopped pursuing that particular approach to tetration because its answers differed from those of the cheta + base-change approach and from Andrew's solution (the "intuitive" solution?). (Not saying it's not at least partially relevant here, just pointing out that it was my first attempt at a solution, and I dropped it in favor of cheta with base-change, which was my second attempt.)

But I think Sheldon posted somewhere that cheta + base-change also differs from Andrew's (intuitive) solution, or was it from Dmitrii's? It has anyway not been investigated yet how Dmitrii's differs from Andrew's (they could be equal). I hope we can do some comparisons in the complex plane (because on the real axis the differences are mostly too small) in the overview paper. Are you interested in participating? (I still didn't get an answer to that question.)

