# Tetration Forum

Hi.

I was wondering about the tetration of bases in the range $0 < b < e^{-e}$ via the continuum sum formula.

Here's how it goes. I would expect that for at least some towers/heights/superexponents $x$, $^{x + n} b$ (presumably a complex-valued function of a real number) should, for increasing integer $n$, approach the two oscillation points $M_{high}$ and $M_{low}$ (with $M_{high} > M_{low}$), due to the recurrence $\mathrm{tet}_b(z+1) = b^{\mathrm{tet}_b(z)}$. These points solve $b^{M_{high}} = M_{low}$ and $b^{M_{low}} = M_{high}$.
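As a sanity check on this picture, the two oscillation points are easy to find numerically: for $b = 0.04$ (comfortably below $e^{-e} \approx 0.0660$) the second iterate $x \mapsto b^{b^x}$ is attracting on the 2-cycle, so fixed-point iteration of it converges to one cycle point, and applying $b^x$ once gives the other. A minimal Python sketch (the base and the starting value 0.5 are just illustrative choices):

```python
def two_cycle(b, x0=0.5, tol=1e-13, max_iter=100000):
    """Find the 2-cycle {M_low, M_high} of x -> b^x by iterating the
    second iterate x -> b^(b^x), which is attracting on the cycle."""
    x = x0
    for _ in range(max_iter):
        nxt = b ** (b ** x)
        if abs(nxt - x) < tol:
            break
        x = nxt
    m_high = max(x, b ** x)   # upper oscillation point
    m_low = b ** m_high       # its partner: b^M_high = M_low
    return m_low, m_high

b = 0.04
m_low, m_high = two_cycle(b)
# The pair should satisfy b^M_high = M_low and b^M_low = M_high
print(m_low, m_high)
```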

So I'd propose that for all $x$ within a sufficiently small interval $(n - \epsilon, n + \epsilon)$ about an integer $n$, $^x b\ \rightarrow\ ^n b$ as $n \rightarrow \infty$, and also that $\frac{d}{dx}\, {}^x b \rightarrow 0$.

If this is so, then we can take the difference $^x b - Sq(x)$, where $Sq(x)$ is the "square wave function"

$
Sq(x) = \begin{cases}
M_{high}, & \text{if } \mathrm{nint}(x) \text{ is even} \\
M_{low}, & \text{if } \mathrm{nint}(x) \text{ is odd}
\end{cases}
$

where "nint" is the nearest-integer or round function. Thus for $x$ a sufficiently small distance from an integer, $^x b \rightarrow Sq(x)$ as $x \rightarrow \infty$, and so $^x b - Sq(x) \rightarrow 0$. This means we can continuum-sum $f(x)\ =\ ^x b - Sq(x)$ via Mueller's formula

$\sum_{n=0}^{x-1} f(n) = f(0) + \sum_{n=1}^{\infty} \left(f(n) - f(n + x - 1)\right)$.

Note that this only requires the values at integers and at $x$ stepped by an integer, so all evaluation points involved lie within the necessary small intervals around integers, where the approximation is assumed to hold. Then if we add the continuum sum of $Sq(x)$ to the result, we have continuum-summed the tetration and we can apply the Ansus continuum sum formula. There is in fact such a continuum sum for $Sq(x)$: we derive it from the Fourier series of the square wave and sum that termwise, applying Faulhaber's formula to the trigonometric terms. It yields a discontinuous function that jumps between two linear functions (just as the square wave jumps between two constant functions), though I don't have the formula on hand right now.
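To see the mechanics on something computable, here's a toy stand-in for the tetration (not the real thing): a function that tends to a 2-cycle plus a decaying term, with $M_{high} = 0.7$, $M_{low} = 0.1$ chosen arbitrarily. Subtracting $Sq$, Mueller-summing the remainder, and adding back the sum of $Sq$ reproduces the direct partial sum at an integer height:

```python
M_HIGH, M_LOW = 0.7, 0.1   # arbitrary illustrative cycle values

def sq(x):
    """Square wave: M_HIGH if nint(x) is even, M_LOW if odd."""
    return M_HIGH if round(x) % 2 == 0 else M_LOW

def f(x):
    """Toy stand-in for ^x b: square wave plus a decaying deviation."""
    return sq(x) + 2.0 ** (-x)

def mueller_sum(g, x, terms=200):
    """Continuum sum of g over n = 0 .. x-1 via Mueller's formula."""
    return g(0) + sum(g(n) - g(n + x - 1) for n in range(1, terms + 1))

def sq_sum(x):
    """Sum of sq(n) for n = 0 .. x-1 at integer x (count evens and odds)."""
    avg = (M_HIGH + M_LOW) / 2
    return avg * x + ((M_HIGH - M_LOW) / 2 if x % 2 == 1 else 0.0)

x = 7
total = mueller_sum(lambda t: f(t) - sq(t), x) + sq_sum(x)
direct = sum(f(n) for n in range(x))
print(total, direct)
```

At non-integer $x$ the same code runs, but then one needs the genuine (Fourier-derived) continuum sum of $Sq$ in place of the integer-only `sq_sum` here.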

The only problem here is that we can only work in a small interval (in order for the approximation to hold): this means we cannot evaluate at, say, both $-1$ and $0$, as we would need to in order to find the correct normalization constants and the definite integral for the formula. If, however, the function is analytic, we should be able to reach those places via analytic continuation; yet analytically continuing an arbitrary Taylor series would probably require something like, you guessed it, the Mittag-Leffler formula. (Plus we need the continuation anyway to fill a whole length-1 interval.) And why are the coefficients so gosh darned difficult to find for that puppy?!

What do you think of my theory? Also, can you give a graph of the regular superexponential of such a base, developed at the repelling real fixed point? Even though it's not the tetrational we want, it might nonetheless help to get a general idea of what the behavior may be like. Helms posted a graph for $b = 0.04$ once, but it was a parametric plot, not a plot along the x-axis.
(12/10/2009, 02:30 AM)mike3 Wrote: [ -> ]What do you think of my theory?

Hm, well, for me it sounds quite "constructed".
I think you should point out what criteria your super-exponential should satisfy.
I mean we already have a solution (see the predecessor thread), but somehow you don't find it satisfactory. So what kind of solution do you want? Please specify!

Quote: Also, can you give a graph of the regular superexponential of such a base, developed at the repelling real fixed point? Even though it's not the tetrational we want, it might nonetheless help to get a general idea of what the behavior may be like. Helms posted a graph for $b = 0.04$ once, but it was a parametric plot, not a plot along the x-axis.

What plot do you want? He posted the super-exponential $f(x)={\exp_{0.04}}^{\circ x}(0.5)$ for real $x$.
(12/10/2009, 02:06 PM)bo198214 Wrote: [ -> ]
(12/10/2009, 02:30 AM)mike3 Wrote: [ -> ]What do you think of my theory?

Hm, well, for me it sounds quite "constructed".
I think you should point out what criteria your super-exponential should satisfy.
I mean we already have a solution (see the predecessor thread), but somehow you don't find it satisfactory. So what kind of solution do you want? Please specify!

Well, the one that solves

$\log_b\left(\frac{\mathrm{tet}'_b(x)}{\mathrm{tet}'_b(0) \log(b)^x}\right) = \sum_{n=0}^{x-1} \mathrm{tet}_b(n)$

when a suitably "natural" definition of continuum sum is used, and the conditions

$\mathrm{tet}_b(-1) = 0$
$\mathrm{tet}_b(0) = 1$

so $\mathrm{tet}_b(x) = \exp^x_b(1)$. If by "already have a solution" you mean the regular iteration: no, I don't consider it satisfactory, as it cannot satisfy the two other restrictions mentioned above, even though it should, considering the behavior of tetration elsewhere.

For the "natural definition of continuum sum", since I don't yet have a working method for defining it on a power series, one has to try something else. The idea was based on a paper about fractional sums by someone named Mueller, who mentioned the following "natural" formula one can obtain from certain sum identities:

$\sum_{n=1}^{x} f(n) = \sum_{n=1}^{\infty} \left(f(n) - f(n + x)\right)$

provided the sums $\sum_{n=1}^{\infty} f(n)$ and $\sum_{n=1}^{\infty} f(n+x)$ converge, which I rewrote for the offset case:

$\sum_{n=0}^{x-1} f(n) = f(0) + \sum_{n=1}^{\infty} \left(f(n) - f(n + x - 1)\right)$.

Thus I suggested a way to get the continuum sum of this $\mathrm{tet}_b$ using the above "Mueller formula", based on the expected behavior, using it to relate the tetration to the square wave. It turns out that Faulhaber's formula can be obtained from the above using analytic continuation from the case $f(x) = \frac{1}{x^k}$ with $k > 1$. Note the Mueller formula depends only on the behavior at $n$ and $x + n$ for $n = 0, 1, 2, \ldots$ and nowhere else. Then I use the idea that if we know $\sum_{n=0}^{x-1} \left(f(n) - g(n)\right)$ and $\sum_{n=0}^{x-1} g(n)$ at the given point, then $\sum_{n=0}^{x-1} f(n) = \sum_{n=0}^{x-1} \left(f(n) - g(n)\right) + \sum_{n=0}^{x-1} g(n)$, even if $\sum_{n=0}^{x-1} f(n)$ cannot be obtained directly.
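One reassuring check that the Mueller formula picks out the "natural" interpolation off the integers: for $f(n) = q^n$ with $0 < q < 1$, the natural continuum sum is $\frac{1 - q^x}{1 - q}$, and the formula reproduces it at non-integer $x$. A short Python sketch:

```python
def mueller_sum(f, x, terms=400):
    """Continuum sum of f over n = 0 .. x-1 (the offset Mueller form)."""
    return f(0) + sum(f(n) - f(n + x - 1) for n in range(1, terms + 1))

q = 0.5
x = 2.5   # deliberately non-integer
approx = mueller_sum(lambda t: q ** t, x)
exact = (1 - q ** x) / (1 - q)   # natural continuum sum of the geometric
print(approx, exact)
```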

Quote:
Quote: Also, can you give a graph of the regular superexponential of such a base developed the repelling real fixed point? Even though it's not the tetrational we want, it might nonetheless help to get a general idea of what the behavior may be like. Helms posted a graph for $b = 0.04$ once, but it was a parametric plot, not a plot along the x-axis.

What plot do you want? He posted the super-exponential $f(x)={\exp_{0.04}}^{\circ x}(0.5)$ for real $x$.

The plot for that regular iteration, but not as a parametric plot; rather an x-y type plot (two graphs, I'd presume, for the real and imaginary parts). This makes it easy to see what happens with, e.g., a unit increment starting at a point $x$, to see if my continuum sum idea would make any sense. The plot should range over the same x-values as Helms's, and the x-axis should be ticked at every unit increment from 0 (i.e. marked at 0, 1, 2, 3, ...).

Actually, I already tried graphing this once, but I'd like to see if yours looks like the one I got (I used a limit formula that is probably equivalent to the regular iteration, but I'm not sure whether it is).
About that bit about the graph of the regular again: I tried computing it with the limit formula again, and it seems the computer overflows for $x > 12$. So try just $0 \le x \le 12$; I'd like to compare the result with what I got from the limit formula. Also, my grapher doesn't seem to let me put ticks at only the integer values of $x$!
(12/11/2009, 06:27 AM)Ansus Wrote: [ -> ]
(12/10/2009, 08:48 PM)mike3 Wrote: [ -> ]$\sum_{n=0}^{x-1} f(n) = f(0) + \sum_{n=1}^{\infty} \left(f(n) - f(n + x - 1)\right)$.

Or

$\sum_{n=0}^{x-1} f(n) = -f(0) + \sum_{n=0}^{\infty} \left(f(n) - f(n + x)\right)$.

$\sum_{n=0}^{x-1} f(n) = f(0) - \sum_{n=0}^{\infty} \left(f(n + x) - f(n)\right)$.

But this only works for functions which go to 0 at infinity.

If it does not go to 0 but to a fixpoint: can we subtract that constant/fixpoint and compensate by introducing $+\,\zeta(0)\cdot\text{fixpoint}$? (In some fiddling with other examples weeks ago such a compensation seemed meaningful; I don't have it at hand at the moment, though.)

Gottfried
(12/11/2009, 08:16 AM)Gottfried Wrote: [ -> ]If it does not go to 0 but to a fixpoint: can we subtract that constant/fixpoint and compensate by introducing $+\,\zeta(0)\cdot\text{fixpoint}$? (In some fiddling with other examples weeks ago such a compensation seemed meaningful; I don't have it at hand at the moment, though.)

Gottfried

Yes, that was the idea in the original post. Here, though, we're talking about bases like $b = 0.04$, for which you drew a graph of the regular iteration. I presume the tetration (that is, with $F(0) = 1$) would look something like that, so $F(x + n)$ ($n = 0, 1, 2, 3, \ldots$) would converge to a 2-cycle as $n \rightarrow \infty$ for most $x$, but diverge for a few others. For summing at the x-values where there is a 2-cycle, we compensate by subtracting a square wave and then adding back its continuum sum as obtained via the Fourier series.
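For integer heights, at least, the compensating sum of the square wave is elementary (just count even- and odd-indexed terms):

$
\sum_{n=0}^{x-1} Sq(n) = \frac{M_{high}+M_{low}}{2}\,x + \begin{cases}
0, & x\ \text{even} \\
\frac{M_{high}-M_{low}}{2}, & x\ \text{odd}
\end{cases}
$

which is at least consistent with the Fourier-derived continuum sum jumping between two parallel linear functions of slope $\frac{M_{high}+M_{low}}{2}$.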
I just managed to get code going for the regular iteration of $b = 0.04$ using the limit formula. I thought this would be a good test bed, or proof of concept, for this continuum sum method.

Namely, I'm testing the summing of $\mathrm{reg}_{0.04}(z)$ where $\mathrm{reg}_{0.04}(0) = 0.5$.

The graph of this function, $F(x) = \mathrm{reg}_{0.04}(x)$ for $x$ from 0 to 12, is given below.

Real part:
[attachment=674]

Imag part:
[attachment=675]

Note the very large-amplitude oscillation spikes in the graph. The captions at the bottom show the apparent maximum complex magnitude of the function. This appears to be an actual characteristic of the function, not a numerical error, and the peak amplitude seems to grow at least tetrationally -- the next oscillation is too intense for the computer to handle, presumably for lack of sufficient dynamic range in the floating-point math (and we're using bignums here; apparently they don't have bignums in the exponent! Aside: could this be an application for a tetrational representation of extremely gigantic numbers?). This spiking phenomenon is interesting, as it reminds me of the Gibbs phenomenon in Fourier approximations of the discontinuous square wave, though with unlimited magnitude and possibly nonzero width in the asymptotic.

This is why I said that the function converges to the 2-cycle only for "most" points in the interval. If we exclude an interval $[n + 0.4, n + 0.6]$ around each integer $n$ (this might actually be too big an interval, but it's enough for illustrative purposes), we get the graph below for $x$ from 0 to 30 (painstakingly prepared by hand from Pari/GP grapher output: since you can't exclude intervals, I just set the function to $\pm 3$ on those intervals and then removed the resulting up/down spikes by hand). This graph shows the real part only; the imaginary part decays to zero, so the real part better illustrates the square wave behavior.

[attachment=676]

Interestingly, the convergence to the square wave seems more rapid toward $M_{low}$ than toward $M_{high}$. Why is this? Using the square-wave approximation with the discontinuity placed at $n + 1/2$ (the exact placement in the interval doesn't affect the value), we get an idea of the appearance of the continuum sum $\sum_{n=0}^{x-1} \mathrm{reg}_{0.04}(n)$ (using 256 Mueller terms):

Real part:
[attachment=677]

Imag part:
[attachment=678]

Note how it seems we could connect through the gaps at the leftmost parts with a smooth curve (I bet it spikes up in the gaps toward the right, though there, of course, the square wave approximation ceases to be valid; I'm unsure of the spiking behavior, if any, in the first gap or two). It would seem that in order to use this method for the full tetration $\mathrm{tet}_{0.04}(x)$, we'll need a way to analytically continue from a limited interval to a bigger one. Mittag-Leffler expansions are one possibility, but their convergence is very slow, as was demonstrated in the thread on that subject here. Any ideas about this?
(12/20/2009, 06:27 AM)Ansus Wrote: [ -> ]Did you try the Mueller formula with conventional bases $1 < b < e^{1/e}$?

Yes. It converges to a value that appears to agree with the regular iteration at the attracting fixpoint, though the convergence (of the iterating method) is really slow at $b = e^{1/e}$. The upshot, however, is that it doesn't seem to have the strongly escalating precision requirements that the regular iteration limit formula does.

For Mueller's formula with a fixed point we have

$\sum_{n=0}^{z-1} f(n) = (z-1)L + f(0) + \sum_{n=1}^{\infty} \left(f(n) - f(n + z - 1)\right)$

where $\lim_{z \rightarrow \infty} f(z) = L$ in the right half-plane. This allows us to run the tetration for those bases via a Taylor series expanded at $z = 0$. For computation, I compute $f(n + z - 1)$ recursively by power-series exponentiation.
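Without the power-series machinery, the idea can still be checked numerically at integer heights (my own bookkeeping here: subtract the limit $L$ so the remainder decays, Mueller-sum the remainder, add back the continuum sum $zL$ of the constant). A toy sketch for $b = \sqrt{2}$, whose attracting fixed point is $L = 2$:

```python
import math

B = math.sqrt(2)
L = 2.0   # attracting fixed point of B^x, since B^2 = 2

def tet(n):
    """Integer-height tower: tet(0) = 1, tet(k+1) = B^tet(k); tends to L."""
    t = 1.0
    for _ in range(n):
        t = B ** t
    return t

def continuum_sum_with_limit(f, limit, z, terms=200):
    """Mueller-sum f over n = 0 .. z-1 when f -> limit at infinity:
    sum the decaying remainder f - limit, then add back z * limit."""
    g = lambda t: f(t) - limit
    return z * limit + g(0) + sum(g(n) - g(n + z - 1)
                                  for n in range(1, terms + 1))

z = 5   # integer height, so tet(n + z - 1) stays an integer-height tower
approx = continuum_sum_with_limit(tet, L, z)
direct = sum(tet(n) for n in range(z))
print(approx, direct)
```

At non-integer $z$ one would need $f$ at non-integer heights, which is exactly what the Taylor-series iteration above supplies.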

Using 64 terms of the power series, 38 digits of precision, up to 256 Mueller terms, and 128 iterations starting from an initial guess of 1 for the series, I get $^{1/2} \sqrt{2} \approx 1.24362162766852180429$. (I discard digits beyond where the residual (the difference between $\sqrt{2}^{F(-0.5)}$ and $F(0.5)$) is of the order of the first discarded digit; the residual gives an idea of the accuracy of the approximation.) Bumping the power series up to 128 terms, with no further increase of the numerical precision, yields $^{1/2} \sqrt{2} \approx 1.2436216276685218042950989836094029317$, which agrees completely with the regular iteration limit formula when rounded to 38 digits of precision, suggesting that the regular iteration and the Mueller sum approach yield the same function. And the best part is that it looks more efficient for computation: it gives us a Taylor expansion we can reuse (so that once we "initialize", we can do any real height without so many expensive computations), and it requires no increase in precision beyond the level we need (I think even the series formula for regular iteration requires ever more "slop" precision, as the limit formula does).

And if you want a graph...
[attachment=679]
(that is $y = {}^{x} \sqrt{2}$ obtained from the Mueller-summed continuum sum formula)