The "cheta" function
#1
I have posts strewn about the forum, discussing a "cheta" function. If, as is likely unless you are one of the handful of original members of this forum, you have no idea what the "cheta" function is, allow me to briefly explain. (If you are impatient, I get to the details in the second post, this first being background.)

The number \( e^{1/e} \) makes a particularly interesting base when discussing tetration. I nicknamed it \( \eta \), the greek letter eta, a letter which is similar in function to the letter e. Just as the constant e is a (perhaps the most) useful base for discussing and analyzing exponentiation, eta is a (and I once thought, the most) interesting base for discussing and analyzing tetration.

The name has stuck, and you will see \( e^{1/e} \) called eta throughout the forum.

When I first began studying tetration, I looked at it from the point of view of continuously iterating the exponentiation of a base. It made sense to me, then, to restrict myself to looking at \( {}^{n}b = \exp_b^{\circ n}(1) \), that is, starting from 1 and exponentiating n times, where the result is well-defined for non-negative integers n, and can even be fairly well defined for integers n >= -2. The problem, then, was to extend the solution to real (or at least rational) n.

For base eta, the tetrates as n goes to infinity approach e. That is, \( \lim_{k \to \infty} \left( {}^{k}\eta \right) = e \). (Note here that I used k instead of n, due to eta's unfortunate resemblance to the letter n.)
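Here is a quick numerical sketch (my own toy Python, with hypothetical helper names, nothing rigorous) of both statements: integer tetration as repeated exponentiation starting from 1, and, for base eta, the tetrates creeping up toward e.

    import math

    def tet(b, n):
        # {}^n b: start at 1 and exponentiate in base b, n times
        x = 1.0
        for _ in range(n):
            x = b ** x
        return x

    eta = math.e ** (1 / math.e)
    for k in (1, 2, 5, 10, 100, 1000):
        print(k, tet(eta, k))   # approaches e ~ 2.71828 from below, quite slowly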

So what about "cheta"? Well, borrowing an idea from a 1991 paper by Peter Walker (referenced elsewhere on this site, and a valuable read), I developed a change of base formula for tetration. Unfortunately, this change of base formula works from the "top down", as opposed to working from the "bottom up".

By this, I mean that, rather than starting at 1 and exponentiating up, I would start at positive infinity and iteratively take logarithms down to 1 (or another suitable finite constant). Doing this in two different bases, I found an (almost) linear relationship, which was the basis of the change-of-base formula. Of course, this involves using limits to approximate the infinity, but it's provably valid (though I'm not sure I formalized the proof to Henryk's liking).
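To get a feel for the "top down" direction (this is only a rough illustration of the idea, not the actual change-of-base formula, and it uses mpmath because the towers overflow ordinary floats almost immediately): build a short power tower in one base, then take the same number of logarithms in another base, and notice how the result seems to settle down as the tower grows.

    from mpmath import mp, mpf, log

    mp.dps = 30                     # working precision, in decimal digits
    a, b = mpf(10), mpf(2)          # two bases, both greater than eta

    x = mpf(1)
    for n in range(1, 4):           # heights 1..3 (height 4 is already unmanageable)
        x = a ** x                  # one more exponentiation in base a
        y = x
        for _ in range(n):          # the same number of logarithms in base b
            y = log(y, b)
        print(n, y)                 # roughly 3.32, 5.05, 5.13, ... creeping toward a limit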

The problem for base eta is that, just as the infinitely iterated exponentials, starting at 1, asymptotically approach e, so do the infinitely iterated logarithms, starting at infinity. This meant that my "change of base" formula was not for tetration per se, but rather for a specific variant of iterated exponentiation.

I realized that there were two separate "graphs", if you will, of iterated exponentiation base eta, so I called the upper one cheta, for "checked eta", written \( \check{\eta} \). (I suppose the lower one would be "heta", for "hatted eta", written \( \hat{\eta} \)?)

Hopefully it's clear what the check (and hat) mean: it indicates the direction of concavity/convexity, and hence which variant of superexponentiation we are dealing with.

And that, in a nutshell, is what cheta is.
~ Jay Daniel Fox
#2
There is the nagging detail of picking a particular "starting point", a real constant greater than e. With further investigation of this function (and yes, I'm fairly certain it is a complex function, with no branch cuts, and I'm fairly certain without any singularities), I might someday find an intuitively "correct" starting point.

However, for the time being, I use \( e^{2} \) as my starting point. This has the useful property that -1, 0, and 1 give very nice values:

\( \check{\eta}(-1) = e+e = 2e = e^{[2]}2 \)
\( \check{\eta}(0) = e \times e = e^2 = e^{[3]}2 \)
\( \check{\eta}(1) = e^e = {}^{2}e = e^{[4]}2 \)

(Note: the bracketed notation is a special operator, the name of which I forget, but which is fairly well discussed elsewhere in the forum.)

I suppose 2e would also make a good starting point (shifting the above special values to 0, 1, and 2), but I like e^2. It allows me to check that my first positive and negative iterates are correct, using easily remembered values.

Thus, without further ado, the cheta function is defined as:

\( \check{\eta}(z) = \exp_{\eta}^{\circ z}(e^2) \)
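A quick sanity check of the definition and of the three "nice values" above (a minimal sketch in Python, with my own helper names; exp_eta and log_eta are one forward and one backward step in base eta):

    import math

    E = math.e
    def exp_eta(x): return math.exp(x / E)   # eta^x = e^(x/e)
    def log_eta(x): return E * math.log(x)   # its inverse

    x0 = E * E                               # the chosen starting point, e^2
    print(log_eta(x0), 2 * E)                # cheta(-1): both ~ 5.4366 (= 2e)
    print(x0,          E ** 2)               # cheta(0):  e^2 ~ 7.3891
    print(exp_eta(x0), E ** E)               # cheta(1):  e^e ~ 15.1543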
~ Jay Daniel Fox
#3
The next nagging point is that we need a uniqueness criterion (this is not the same as the "uniqueness" criteria that we often discuss on this forum). After all, the negative iteration of exponentiation (i.e., the logarithm) has branches. This is a problem in general when dealing with discussions of superexponentiation in any base.

Luckily, it is possible to further rewrite the definition of the cheta function so that branches are an impossibility. This can be done by only allowing "forward" iterations, so to speak:

\( \check{\eta}(z) = \lim_{k \to \infty} \exp_{\eta}^{[\circ k+z]} \left( \log_{\eta}^{[\circ k]} \left( e^{2} \right) \right) \)

In doing this, k+z will always have a positive real part, so we needn't ever worry about logarithms, and hence about branches or singularities. (What about the imaginary part? Couldn't that lead to singularities? Well, let's wait and see...)

For each of the k iterations of the logarithm (in the limit), we can choose any branch we want, and each choice gives us a different variant of the cheta function. Interestingly, all these variations are related to each other, but more on that later.

Intuitively, then, we would simply choose all iterations of the logarithm to use the principal branch.
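A small check of the "forward iterations only" form, for integer z (again just a sketch of my own): the k iterated logarithms of e^2 all stay on the real axis above e, so only the principal branch ever comes into play, and the k+z forward steps then reproduce the values from the previous post.

    import math

    E = math.e
    def exp_eta(x): return math.exp(x / E)
    def log_eta(x): return E * math.log(x)

    k = 50
    x = E * E
    for _ in range(k):
        x = log_eta(x)            # every iterate stays in (e, e^2], above e
    for z in (-1, 0, 1):
        y = x
        for _ in range(k + z):    # only forward (exponential) steps from here on
            y = exp_eta(y)
        print(z, y)               # ~ 2e, e^2, e^e respectively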


** Note: the brackets in the "functional iteration" operator are there for clarity: compare \( \exp_{\eta}^{[\circ k+z]}(\ldots) \) and \( \exp_{\eta}^{\circ k+z}(\ldots) \)

On an amusing sidenote, the \( \circ k \) can be read as "OK" (oh kay) or "Circle K", both of which have meanings in the USA (the former is an idiomatic term which most English speakers, even non-native ones, would know; the latter is the name of a gas station chain). I prefer "circle K", as I'm less likely to think somebody means a gas station when discussing math.
~ Jay Daniel Fox
#4
You are probably wondering what the point of all this has been. Well, my interest in the cheta function began with the base-change formula, but another area of interest for me is the problem of parabolic iteration at the fixed point for exponentiation in base eta.

I will begin to examine both problems at various times, and I may branch this out into multiple threads as I get results that I want to focus on.
~ Jay Daniel Fox
#5
(08/05/2009, 09:37 PM)jaydfox Wrote: There is the nagging detail of picking a particular "starting point", a real constant greater than e. With further investigation of this function (and yes, I'm fairly certain it is a complex function, with no branch cuts, and I'm fairly certain without any singularities), I might someday find an intuitively "correct" starting point.

However, for the time being, I use \( e^{2} \) as my starting point. This has the useful property that -1, 0, and 1 give very nice values:

\( \check{\eta}(-1) = e+e = 2e = e^{[2]}2 \)
\( \check{\eta}(0) = e \times e = e^2 = e^{[3]}2 \)
\( \check{\eta}(1) = e^e = {}^{2}e = e^{[4]}2 \)
I suppose that e^2 isn't a completely arbitrary starting point. In addition to having the nice properties from above, both 2e and e^2 have other reasons to recommend them as "preferred" starting points.

As Henryk pointed out a few days ago, there is a simple linear transform to get from the cheta function to the continuous iteration of \( e^{x}-1 \):

bo198214 Wrote:
tommy1729 Wrote:so tetration base e^1/e is linked to iterations of e^x - 1.

Yes, the link is a linear transformation \( \tau \).
If you set \( \tau(x)=e(x+1) \), and \( f(x)=e^{x/e} \) and \( g(x)=e^x-1 \) you have the relation:
\( g = \tau^{-1} \circ f \circ \tau \).

If you consider the iterator of e^x - 1, then x=1 seems a very logical starting point (since 0, being the fixed point of e^x - 1, makes no sense). Next, apply Henryk's tau function to x=1, and you get 2e.
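A numerical spot-check of Henryk's relation (a sketch of my own, just to see the conjugation in action):

    import math

    E = math.e
    tau     = lambda x: E * (x + 1)
    tau_inv = lambda y: y / E - 1
    f       = lambda x: math.exp(x / E)      # exponentiation base eta
    g       = lambda x: math.exp(x) - 1

    for x in (0.0, 0.5, 1.0, 2.0):
        print(x, g(x), tau_inv(f(tau(x))))   # the last two columns agree
    print(tau(1), 2 * E)                     # and tau(1) is indeed 2e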

Interestingly, there is another function we can use, one I prefer for reasons related to computing the base-change formula I mentioned. It is:

\( G(x) = e^{(x+1)} \)

If you apply this G function to x=1, you get e^2. Apply it to ln(2), the first negative (or "backward") iteration of exp(x)-1, and you get 2e. Apply it to e-1, the first positive (or "forward") iteration of exp(x)-1, and you get e^e.
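And the same kind of spot-check for G (again, just my own sketch): it carries 1, ln(2), and e-1 to e^2, 2e, and e^e, matching the special values of cheta at 0, -1, and 1.

    import math

    E = math.e
    G = lambda x: math.exp(x + 1)

    print(G(1),           E ** 2)    # ~ 7.389
    print(G(math.log(2)), 2 * E)     # ~ 5.437
    print(G(E - 1),       E ** E)    # ~ 15.154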

So as I said, 2e and e^2 are both better candidates for a starting point than most other purely arbitrary numbers, but I don't see a strong reason to choose one over the other. I simply prefer e^2. It's useful computationally, and it lets me verify in my head that forward and backward iterations are correct.
~ Jay Daniel Fox
#6
(08/05/2009, 09:37 PM)jaydfox Wrote: With further investigation of this function (and yes, I'm fairly certain it is a complex function, with no branch cuts, and I'm fairly certain without any singularities), ...
I mentioned before that the cheta function is indeed a true "function", i.e., it does not take on multiple possible values, depending on which branch of the function you are in. There are no branches!

Indeed, as far as I can surmise, it is an entire function, much like simple exponentiation. If that doesn't surprise you, consider for a moment that for all bases greater than eta, the superexponentiation function branches, much like a logarithm, and is therefore a multi-valued function.

To see that the cheta function is indeed an entire function, we need merely perform a simple thought experiment. Start with any complex number, excepting the tetrates of eta between 0 and e, inclusive (0, 1, eta, eta^eta, ..., e). Call this number \( x_0 \). (For clarity, that is "x naught" or "x sub zero", as opposed to the tetration of base zero).

Starting with \( x_0 \), begin taking logarithms, base eta. For simplicity, we will only use the principal branch, but we will generalize thereafter.

As we perform these iterated logarithms, something remarkable will eventually happen. The sequence of iterated logarithms will begin to converge to e, from the positive real direction. This might not be immediately obvious, and it is key to understanding the entirety of the cheta function. Take a moment to convince yourself that this is indeed the case.
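If you'd rather watch it happen than convince yourself abstractly, here is a quick experiment (my own, with an arbitrarily chosen seed): iterate the principal-branch logarithm base eta, \( \log_{\eta}(z) = e \cdot \mathrm{Log}(z) \), on a complex number and watch the iterates drift toward e from the positive real direction. The approach is very slow, as discussed a bit further down.

    import cmath, math

    E = math.e
    def log_eta(z): return E * cmath.log(z)   # principal branch only

    z = 5 + 3j                                # an arbitrary nonzero seed
    for k in range(1, 2001):
        z = log_eta(z)
        if k in (1, 10, 100, 1000, 2000):
            print(k, z, abs(z - E))           # the distance to e keeps shrinking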

We can now generalize to include using various branches of logarithms. Pick any complex number, except 0. Perform iterated logarithms, choosing arbitrary branches. Note: If e or an integer tetrate of eta was chosen, such as 1 or eta, etc., then a branch must be chosen that avoids 0 or e during the iterations!

Then, at some point, stop. You now have a new complex number, which will still comply with the original exceptions I noted. At this point, follow the principal branch, and as before, convergence on e from the positive real direction is assured.

Considering that all complex numbers (except 0) will eventually converge on e (from the same direction), regardless of whatever branching set we chose for the first few iterated logarithms (except where forced in cases to avoid 0 or e), we can now define cheta.

You see, as the iterated logarithms approach e, they will do so at a rate that seems to slow to a crawl. This is good, because it essentially guarantees that successive iterates will behave almost linearly near e. For practical purposes, this convergence is too slow to be useful, but for theoretical purposes it suffices. In the limit as the number of iterated logarithms goes to infinity, the relation becomes linear, and we can then use linear interpolation to define a function based on iteration count. This allows us to leave the domain of integer iterations and move into the domain of complex iterations. We then iteratively exponentiate our way back "out of the well", so to speak, until we get back to the starting point.
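Here is a minimal numerical sketch of that "down the well and back out" procedure (my own rough Python, not rigorous and not my actual working code): k principal-branch logarithms of e^2, a linear interpolation between the last two iterates to take the fractional step, then k plus the integer part of z forward exponentiations. Because of the slow convergence just described, this only gives a few digits of accuracy for reasonable k.

    import math

    E = math.e
    def exp_eta(x): return math.exp(x / E)    # eta^x
    def log_eta(x): return E * math.log(x)    # its inverse

    def cheta(z, k=2000):
        m = math.floor(z)                     # integer part of the iteration count
        t = z - m                             # fractional part, handled by interpolation
        down = [E * E]                        # walk down the well from e^2
        for _ in range(k):
            down.append(log_eta(down[-1]))    # all iterates stay above e
        # near e the iterates are nearly equally spaced, so a linear
        # interpolation between the last two approximates a fractional step
        y = (1 - t) * down[k] + t * down[k - 1]
        for _ in range(k + m):                # walk back out: forward steps only
            y = exp_eta(y)
        return y

    print(cheta(-1))    # ~ 2e   ~ 5.437
    print(cheta(0))     # ~ e^2  ~ 7.389
    print(cheta(1))     # ~ e^e  ~ 15.154
    print(cheta(0.5))   # somewhere between e^2 and e^e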

Take a moment to convince yourself that in so doing, we have guaranteed that we can continuously iterate exponentiation in base eta, sufficient to reach all complex numbers (except 0) from any starting point (except 0), using any particular branching system we want (except where a 0 or e would be generated by a logarithm).

So, it should also be clear, hopefully, that there aren't "multiple" cheta functions, based on different starting points. They are related by a simple linear transformation on the iteration count (adding an appropriate constant, in fact), regardless of the starting point or branching system desired.

And I should be careful, because when I say branching system, I don't mean branches in cheta. I mean, for example, that we might choose a starting point like e^2 (as I do), but desire that cheta(-1) be 2e + 2e*pi*i. I'll call this variant chmeta. Of course, chmeta(1) would still be e^e, but we would rightly expect that continuous iteration between chmeta(0) and chmeta(1) would stray off the real line. However, there is a complex constant k, such that chmeta(z+k) would be equal to cheta(z). Thus, it's not a different branch, just in a different part of the domain, which might not be immediately obvious.
~ Jay Daniel Fox
#7
Where is the definition? Until now I only saw formulas that use non-integer iteration.
Then it seems cheta is just a superfunction of \( e^{x/e} \).
The question suggests itself, whether cheta and heta are just the two regular iterations of \( e^{x/e} \), i.e. the ones that Walker describes in his article; as far as I remember they are entire.

edit: yes, now I remember he showed in a different article that \( e^z-1 \) has an entire superfunction, while in the mentioned article he describes the two Abel functions (i.e. inverse superfunctions). That's why I mentioned these articles in reply to your reboarding, so that you would make sure that your method is not just that of Walker.
#8
(08/06/2009, 08:55 PM)bo198214 Wrote: Where is the definition? Until now I only saw formulas that use non-integer iteration.
Then it seems cheta is just a superfunction of \( e^{x/e} \).
The question suggests itself, whether cheta and heta are just the two regular iterations of \( e^{x/e} \), i.e. the ones that Walker describes in his article; as far as I remember they are entire.

edit: yes, now I remember he showed in a different article that \( e^z-1 \) has an entire superfunction, while in the mentioned article he describes the two Abel functions (i.e. inverse superfunctions). That's why I mentioned these articles in reply to your reboarding, so that you would make sure that your method is not just that of Walker.
Hmm, I've only read the one paper, and that was before this forum started. I see you referenced a couple others in another thread. You also mentioned having all three as PDFs. Would you be able to email those to me, by chance?
~ Jay Daniel Fox
#9
(08/06/2009, 09:33 PM)jaydfox Wrote: Would you be able to email those to me, by chance?

Done.
#10
(08/06/2009, 08:55 PM)bo198214 Wrote: Where is the definition? Until now I only saw formulas that use non-integer iteration.
Then it seems cheta is just a superfunction of \( e^{x/e} \).
The question suggests itself, whether cheta and heta are just the two regular iterations of \( e^{x/e} \), i.e. the ones that Walker describes in his article; as far as I remember they are entire.
Hmm, that sparked a memory of a long-forgotten conversation we had:
http://math.eretrandre.org/tetrationforu...ght=entire

It's amazing how much more sense all of that makes now (and it's clear from my posts where my misunderstandings at the time were, as well as my gaps in understanding of complex analysis). And yes, looking at just the descriptions in that old discussion, it would appear that cheta is nothing new.

The bright side is, this saves me the work of having to rigorously prove various properties, as they apparently have been proven for nearly 20 years. I need only work on getting good numerical approximations (several thousand bits of accuracy) to use in my change-of-base formula.
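For what it's worth, the same "down and back out" sketch from earlier can be run at arbitrary precision with mpmath; only the working precision and the backend functions change (actually reaching thousands of bits is, of course, a separate problem from simply setting the precision). A minimal placeholder:

    from mpmath import mp, exp, log

    mp.prec = 4096                    # binary precision, in bits
    E = exp(1)
    exp_eta = lambda x: exp(x / E)    # eta^x at high precision
    log_eta = lambda x: E * log(x)

    print(exp_eta(E * E))             # e^e = cheta(1), to well over a thousand decimal digits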
~ Jay Daniel Fox

