# Tetration Forum

Dmytro Taranovsky asks in his MathOverflow post, https://mathoverflow.net/questions/28381...rent-bases :
... Let a and b be real numbers above e^{1/e}, and c<d be real numbers.  Do we have
$\lim_{x \to \infty}\;\frac{\exp_a^c(x)} {\exp_b^d(x)} \;= 0$
... The relation with this question is that if the above limit holds, it gives evidence that fractional exponentiation provides a natural growth rate intermediate between (essentially) quasipolynomial and quasiexponential.

This question seems to have lots of room for discussion so I thought I would post it here.  Here are my conjectures related to this question.

(1) If you use Peter Walker's tetration solution ( http://eretrandre.org/rb/files/Walker1991_111.pdf ), appropriately extended to all real bases > exp(1/e), then my conjecture is that the limit holds.  Unfortunately, Peter Walker's solution is also conjectured to be nowhere analytic; see https://math.stackexchange.com/questions...lytic-slog

(2) For Kneser's solution, there are counterexamples and the limit does not hold.  Even with the restriction that c<d, and even with the additional restriction that a<b, there are cases where x is arbitrarily large and
$\exp_a^c(x)\;>\;\exp_b^d(x)$

(3) Given any analytic tetration solution for base a, if we desire a tetration solution for base b with the desired property, then I conjecture that the tetration for base b is nowhere analytic!  The conjecture is also that the slog for base b would be given by a modification of Peter Walker's h function.
(03/28/2018, 07:57 PM)sheldonison Wrote: [ -> ]... Let a and b be real numbers above e^{1/e}, and c<d be real numbers.  Do we have
$\lim_{x \to \infty}\;\frac{\exp_a^c(x)} {\exp_b^d(x)} \;= 0$
Another observation is that if c=1 and c<d, then the limit holds for all versions of tetration, even if a<b, assuming the derivative is strictly increasing.  If c=d=1 and a<b, then we would have
$\exp_a(x)<\exp_b(x)$
but if $d=1+\delta$, so that d is a little bit bigger than c=1, then even though a<b, for large enough values of x
$\exp_a^{(1+\delta)}(x)\;>\;\exp_b(x)$
I think I could come up with an approximation for how big x has to be for this equation to hold.
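As a rough numerical sanity check (not a proof, and with the caveat that genuinely fractional iteration needs a full tetration solution, so I take the integer stand-in $\delta=1$, i.e. d=2): with a=1.5 and b=10, both above $\eta=e^{1/e}\approx 1.4447$, two iterations of the smaller base overtake one iteration of the larger base once x is big enough.  Comparing logarithms keeps everything in floating-point range.

```python
import math

# Stand-in check with d = 2 (delta = 1), since fractional iteration would
# require a full tetration solution.  Compare exp_A^2(x) = A**(A**x)
# against exp_B(x) = B**x for A < B, working in log space:
#   log(A**(A**x)) = A**x * log(A)   vs   log(B**x) = x * log(B)
A, B = 1.5, 10.0   # both above e**(1/e) ~= 1.4447, and A < B

def two_iterations_win(x):
    return (A ** x) * math.log(A) > x * math.log(B)

# first integer x where the twice-iterated smaller base overtakes
crossover = next(x for x in range(1, 200) if two_iterations_win(x))
print(crossover)   # prints 10
```

The same log-space trick lets you probe much larger x (or bases much closer to $\eta$) without overflow.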

This line of reasoning sheds some light on the general question, which includes the case $c\neq 1$, and helps one understand why Walker's solution works.
It seems upsetting that the only tetrations that could satisfy this are non-analytic. I'm not prone to believe this, only because it doesn't look nice...

I'm wondering if we can look at it the following way

$\exp_b^{d}(x) = \exp_b^{c}(\exp_b^{\delta}(x))$

Then the question boils down to whether, for all $\delta, \delta' >0$,

$\exp^c_{b+\delta}(x) = o(\exp_b^{c+\delta'}(x))$

I think we can show this is true when $c \in \mathbb{N}$, by induction.  First, the base case $c = 0$ is obvious: namely, $x = o(\exp_b^{\delta'}(x))$.  Suppose the result holds for $c = n$; then (taking $<$ to mean asymptotically less than):

$\exp_{b+\delta}^{n+1}(x) =\exp^{n}_{b+\delta}(\exp_{b+\delta}(x))<\exp_{b}^{n+\delta'/2}(\exp_{b+\delta}(x)) < \exp_{b}^{n+\delta'/2}(\exp_b^{1+\delta'/2}(x)) = \exp_b^{n+\delta'+1}(x)$
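For what it's worth, a quick numeric check of the integer case is easy (a sketch with my own hypothetical choices b=2, $\delta=0.5$, $\delta'=1$, n=2): the log of $\exp_{2.5}^2(x)$ falls ever further behind the log of $\exp_2^3(x)$, consistent with the little-o claim.

```python
import math

# Integer-case check: is exp_{2.5}^2(x) = o(exp_2^3(x))?  Equivalently,
# does  log(exp_{2.5}^2(x)) - log(exp_2^3(x))  head to -infinity?
#   log(2.5**(2.5**x))  = 2.5**x * log(2.5)
#   log(2**(2**(2**x))) = 2**(2**x) * log(2)
def log_gap(x):
    return 2.5 ** x * math.log(2.5) - 2 ** (2 ** x) * math.log(2)

gaps = [log_gap(x) for x in range(2, 8)]   # increasingly negative
```

Of course this says nothing yet about fractional $c$, which is the hard part.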

Sadly, I can't think of any way to generalize this to non-integral $c$....

I'm thinking a nice way to look at it from here is to look at root functions of the $\exp$ function.  But then we'd need an implication $f^{\circ n}(x) = o(g^{\circ n}(x)) \Rightarrow f(x) = o(g(x))$, which looks like it could be true for monotonically growing unbounded functions.  But that's probably too easy; we'd probably need a nice condition on the root functions for that to be true.

EDIT:

It appears I made a fruitful mistake in the above proof.  The base induction step would have to be (1) $\exp_{b+\delta}(x) < \exp_b^{1+\delta'}(x)$, not the obvious one $x < \exp_b^{\delta'}(x)$.  This is the base step I should have used.  The proof then says that if this base step (1) holds, the result holds for all natural $c$, namely $\exp_{b+\delta}^n(x) < \exp_b^{n+\delta'}(x)$.  And I think with some finesse we can show that this implies it's true for root functions of $\exp_b$, which should yield a proof for $c \in \mathbb{Q}$.  Then perhaps a density argument may work on non-rational $c$.  I'll work on this more later, but I think maybe we can reduce this entire problem to the condition that if for all $\delta,\delta'>0$ we have $\exp_{b+\delta}(x) < \exp_b^{1+\delta'}(x)$, then it follows that $\exp_{b+\delta}^c(x) < \exp_b^{c+\delta'}(x)$.

...We'll probably have to assume that $\exp_b^c(x)$ is monotone non-decreasing in $x$ and unbounded, or at least, eventually monotone non-decreasing.
(03/30/2018, 07:37 PM)JmsNxn Wrote: [ -> ]It seems upsetting that the only tetrations that could satisfy this are non-analytic. I'm not prone to believe this, only because it doesn't look nice...

I started the thread in 2009, before I had written generic programs for analytic tetration for any base, and I was using an Excel spreadsheet to approximate analytic tetration.  I estimated that as x gets arbitrarily large,
$\text{slog}_2(x)-\text{slog}_e(x)\approx1.1282$
The 2009 thread continues on to discuss what I called "the wobble"...
Credit needs to go to William Paulsen and Samuel Cowgill, whose upcoming paper discusses these issues more rigorously than I can.  But it was quickly clear in the 2009 thread that there is an inherent wobble when comparing tetration bases; this was apparent for bases a little bigger than eta=exp(1/e), using straightforward techniques in an Excel spreadsheet.  The limit as x gets arbitrarily large does not converge to a simple number like the 1.1282 estimate, but instead converges to a 1-cyclic function near that value.  For base(2) and base(e), using Kneser's construction, the 1-cyclic limit graphed below is
$\text{slog}_e(\text{tet}_2(x))-x$
[attachment=1303]

On this forum, other ideas like "the base change function" were discussed, where you use Peter Walker's idea to define tetration base(a) from tetration base(b).  For example, you could define tetration base(2) from Kneser's tetration base(e).  The relevant equations might look something like this.  But the "h" function below is conjectured to be nowhere analytic, even though Walker proved it is $C^{\infty}$ for the case in his paper.  Walker defined the base(e) slog from the Abel function for iterating $x\mapsto\exp(x)-1$, which is conjugate (exactly equivalent) to iterating base eta: $y\mapsto\eta^y$, where $\eta=\exp(1/e)$ and $x=\frac{y}{e}-1$.

$h_n (x)=\ln_b^{[n]}(\exp_a^{[n]}(x))$

$h(x)=\lim_{n\to\infty}h_n (x)$

$\text{slog}_b(x)=\text{slog}_a(h(x))-\text{slog}_a(h(1))$ /* constant to guarantee slog_b(1)=0 */
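As a minimal numeric sketch of the $h_n$ limit above (taking a=e, b=2; the overflow-dodging cancellation is my own bookkeeping, not anything in Walker's paper), even plain double precision shows the first few $h_n(1)$ settling down:

```python
import math

def h(n, x, a=math.e, b=2.0):
    """n-th base-change approximation h_n(x) = log_b^{[n]}(exp_a^{[n]}(x)).

    The innermost log_b is cancelled against the outermost exp_a
    analytically -- log_b(a**y) = y*log_b(a) -- so plain floats survive
    one more level of iteration before overflowing."""
    y = x
    for _ in range(n - 1):        # exp_a, applied n-1 times
        y = a ** y
    y = y * math.log(a, b)        # the combined log_b(exp_a(.)) step
    for _ in range(n - 1):        # log_b, applied n-1 times
        y = math.log(y, b)
    return y

values = [h(n, 1.0) for n in range(1, 5)]   # h_1(1) ... h_4(1)
```

Here $h_3(1)\approx 2.154$ and $h_4(1)\approx 2.165$; floats give out around n=5, which is where arbitrary-precision arithmetic would have to take over.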
Okay, so the question and your intuition relate to the old base change formula and the fact that it failed to be analytic.  That makes sense, but it is disappointing to think we're going to lose this property if we choose an analytic solution.

So what this slog limit is saying is that for "good" analytic tetrations, $f(x)=\exp_{b+\delta}^{c}(x) - \exp^{c+\delta'}_b(x)$ changes sign infinitely often (given $\delta,\delta'<\epsilon$)?

This reminds me of something.

I've dealt with those limits before and felt discouraged by my inability to prove uniform convergence.  Given holomorphic $f,g : \mathbb{D} \to \mathbb{D}$ with $f(0) = g(0) = 0$, when trying to find a function $\Psi:\mathbb{D}\to\mathbb{D}$ such that $\Psi(f(z)) = g(\Psi(z))$, the natural choice is $\Psi(z) =\lim_{n\to\infty} g^{-n}(f^{n}(z))$ (which never seems to work).  But it sure does look nice.

The only way this works, I found, is to assume $f'(0) = g'(0) = \lambda$, take the Schroder functions $h_0, h_1$ of the two maps, where $h_0(f(z)) = \lambda h_0(z)$ and $h_1(g(z)) = \lambda h_1(z)$, and then set $\Psi(z) = h_1^{-1}(h_0(z))$, which works locally.  Then the above limit for $\Psi$ is convergent.  But we had to sacrifice a lot to get there.
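A toy illustration of that construction (the map here is my own pick, not anything from the thread): for a map with multiplier $\lambda=1/2$ at the fixed point 0, Koenigs' limit $\varphi(z)=\lim_n f^{n}(z)/\lambda^n$ converges and satisfies the Schroder equation numerically.

```python
LAM = 0.5                       # multiplier lambda at the fixed point 0

def f(z):
    return LAM * z + z ** 2     # toy map with f(0) = 0, f'(0) = LAM

def koenigs(func, z, n=60):
    # Koenigs' limit: iterate n times, then renormalize by lambda**n
    for _ in range(n):
        z = func(z)
    return z / LAM ** n

z0 = 0.1
phi = koenigs(f, z0)
# Schroder equation check: phi(f(z)) should equal lambda * phi(z)
residual = abs(koenigs(f, f(z0)) - LAM * phi)
```

With a second map g of the same multiplier, $\Psi = \varphi_g^{-1}\circ \varphi_f$ gives the local conjugation; inverting $\varphi_g$ numerically (say, by Newton's method) is the only extra work.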

Of course, if we're working on a non-simply-connected set $H$ instead of $\mathbb{D}$, and we assume that $f,g$ have no fixed points on this set, this could work.  But tetration takes $\mathbb{C}\setminus(-\infty,-2)\to \mathbb{C}$, so it probably has fixed points (maybe this is provable), which should guarantee that a base change function $h$ is not extendable to $\mathbb{C}\setminus(-\infty,-2)$.

This is kinda' helping me understand why these functions fail to be analytic.  No conjugation can change the multiplier value, and clearly $^ze$ will have a different multiplier at its fixed point than $^z2$ has at its fixed point.

I'll have to read Walker's paper.  The only workaround I had was working with Schroder functions, and when dealing with the real line, where there are no fixed points, I can't imagine a manner of getting a nice uniform convergence.

I'm still wondering if I can prove that if

$\exp_{b+\delta}(x) < \exp_b^{1+\delta'}(x)$

then

$\exp_{b+\delta}^c(x) < \exp_b^{c + \delta'}(x)$

which could then be a condition for tetration to be non-analytic.

A lot of this still seems up in the air, though.  I apologize if this has me a bit scatter-brained.