# Tetration Forum

Full Version: Comparing the Known Tetration Solutions
I couldn't wait for Andrew, so I just computed his (analytic) solution ${}^xe$, also on the interval $[-248/128, 256/128]$, with Maple (based on his Maple code):
[attachment=8]

It would be quite the discovery if both turned out to be equal ...

I attach the numeric argument value pairs for comparison.
Unfortunately, my method and Andrew's would appear to give different results. The differences are fairly small, at least close to the origin. Obviously, even a tiny discrepancy gets magnified tremendously after a few exponentiations. The data points you provided weren't precise enough for me to calculate a usable fifth derivative, which is where the differences between our methods really seem to show up. And to be honest, Andrew's fourth derivative looks a little better than mine (aesthetically, if that makes sense). I'd like to see graphs of the fifth derivative of Andrew's version.

For comparison, here is my graph, along with the first five derivatives. Blue, light-blue, and sea green are the 0th, 2nd, and 4th derivatives, while red, orange, and yellow-orange are the 1st, 3rd, and 5th derivatives, respectively.

[attachment=11]

To be honest, I'm less than satisfied with the results. It's not numerical inaccuracy: my numbers are "precise" to about 100 decimal digits. By precise I mean that higher numerical precision in the calculations won't change the results if we only look at the first 100 digits. Accuracy might be another matter altogether.

Anyway, could someone generate a dataset for Andrew's numbers, sufficient to extract the 5th derivative? More data points, and at least 13-15 digits of precision?
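(For what it's worth, a generic central-difference stencil shows what such a dataset is up against. The little Python sketch below is not either of our actual codes, just an illustration of why the $h^5$ in the denominator eats so many digits of the input data.)

```python
# Generic O(h^2) central-difference estimate of the 5th derivative.
# Illustration only: dividing by h**5 amplifies any noise in the
# sampled values, which is why 13-15 digits of input are needed.

def fifth_derivative(f, x, h):
    """Seven-point central stencil for the 5th derivative at x."""
    return (-f(x - 3*h) + 4*f(x - 2*h) - 5*f(x - h)
            + 5*f(x + h) - 4*f(x + 2*h) + f(x + 3*h)) / (2 * h**5)

# Sanity check: the 5th derivative of x^5 is the constant 120.
print(fifth_derivative(lambda x: x**5, 1.0, 0.1))
```

The same stencil works on tabulated data by replacing the function calls with neighboring table entries on an equally spaced grid.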

And until I figure out how to output the results to a text file, here are a few data points of interest:

0.5, 1.64515080754212070699721
e, 2058.05985438912517767154
pi, 36940638694.936515311638
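(Until the text-file output is sorted out, here is a minimal sketch of one way to dump such value pairs from a Python/SAGE session. `tet` below is only a placeholder for whichever routine actually produced the numbers, not a real tetration.)

```python
import math

def tet(x):
    # Placeholder only -- NOT a tetration; stands in for the real routine.
    return math.exp(x)

xs = [0.5, 1.0, 1.5]
with open("tetration_points.txt", "w") as out:
    for x in xs:
        # %.25g keeps about 25 significant digits of each value
        out.write("%.25g, %.25g\n" % (x, tet(x)))
```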
jaydfox Wrote:0.5, 1.64515080754212070699721

Hm, the slog power series converges quite slowly, so I computed Andrew's slog of 1.64515080754212070699721 to an estimated 7 digits of precision (that took 120 terms of the series for this value; I also checked how many digits change when the number of terms is increased).
And indeed they differ:
$\operatorname{slog}(1.64515080754212070699721)=0.49923571\neq 0.5$
Now I compared Daniel's method with Andrew's method for the base $b=\sqrt{2}<\eta$ (hyperbolic case).
Daniel's approach is to consider the fixed point of $f(x)=b^x$, to determine the unique hyperbolic iterate $f^{\circ t}$ there, and then to set ${}^tb=f^{\circ t}(1)$. The lower fixed point of $f$ for $b=\sqrt{2}$ is 2 (and the upper fixed point 4 is not reachable from 1 by iteration, so it is of no importance for our method ${}^tb=\exp_b^{\circ t}(1)$).

Because I have no actual formula to compute the series expansion of the hyperbolic iterate, I used an iterative formula (which can be found in [1] and bears quite some similarity to Jay's approach):
$f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ -n}(c^t f^{\circ n}(x))$
where $f$ is assumed to have its fixed point at 0 and $c$ is the derivative at the fixed point, which in our case is
$c=f'(2)=\log(b)\, b^2 = \log(2^{1/2})\cdot 2=\log(2)$. Of course there are some demands on the function $f$ for the formula to be valid, but they are satisfied by our $f$, in particular $0<c<1$.

In the usual way we can move the fixed point to 0 by conjugating and after iteration move it back to its original place by inverse conjugating. Resulting in this case in the formula
$f_2(x)=b^{x+2}-2, g_2(x)=\log_b(x+2)-2$ and
$f^{\circ t}(x)= \lim_{n\to\infty} g_2^{\circ n}\left(c^t f_2^{\circ n}(x-2)\right)+2$, i.e.
${}^t\left(\sqrt{2}\right) = \lim_{n\to\infty} g_2^{\circ n}\left(\log(2)^t f_2^{\circ n}(-1)\right)+2$

And now guess how Andrew's and Daniel's slog compare! (At least in the picture; I didn't start more exact numerical computations.)

[attachment=12]

[1] M. C. Zdun, Regular fractional iterations, Aequationes Mathematicae 28 (1985), 73–79.
Here's a set of data points I calculated. I couldn't get the file format much better, with my limited knowledge of SAGE. I essentially saved the matrix to a file, removed the brackets and manually inserted carriage returns, then imported into Excel and rounded to 50 digits precision, then output a .csv file and renamed to .txt. I'll work on a better long-term system.
Using gp, I computed Andrew's solution for base e, using a 50x50 matrix. (Side question: Andrew, how much faster are other libraries at solving these large matrices?)

Even with such a short truncation, only 50 terms, it's pretty clear that at least his first 6 odd derivatives (1, 3, 5, 7, 9, and 11) are convex. If this pattern continues, it would appear that maybe even all the odd derivatives will be convex as the limit of the number of terms goes to infinity.

While not strictly necessary to satisfy the basic constraints (iterated exponential property, infinitely differentiable (hopefully)), having all the odd derivatives be convex basically ensures uniqueness. That is, conceptually, I'm pretty sure that there can be only one solution that has all its odd derivatives convex. If we try to tweak the curve in the slightest (with a cyclic function, e.g., a Fourier series), somewhere, maybe in the 7th derivative, or the 25th, the disturbance will cause a loss of convexity, which shows up two derivatives later as a negative value for that derivative.
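The convexity test described here is mechanical to run on any truncated series: derivative $j$ is convex on an interval exactly when derivative $j+2$ is nonnegative there. Below is a small Python sketch of the bookkeeping, with stand-in coefficients for $e^x$ (whose derivatives are all positive), not the actual tetration coefficients:

```python
import math

def deriv_coeffs(coeffs, j):
    """Coefficients of the j-th derivative of sum(coeffs[k] * x^k)."""
    out = list(coeffs)
    for _ in range(j):
        out = [k * out[k] for k in range(1, len(out))]
    return out

def nonneg_on(coeffs, xs):
    """Check the polynomial is >= 0 at every sample point."""
    return all(sum(c * x ** k for k, c in enumerate(coeffs)) >= 0.0
               for x in xs)

# Stand-in series: exp(x) truncated to 30 terms.
coeffs = [1.0 / math.factorial(k) for k in range(30)]
xs = [i / 10.0 for i in range(11)]          # sample grid on [0, 1]

# Derivative j is convex iff derivative j + 2 is nonnegative.
odd_convex = all(nonneg_on(deriv_coeffs(coeffs, j + 2), xs)
                 for j in (1, 3, 5, 7, 9, 11))
print(odd_convex)
```

Swapping in the truncated slog or tetration coefficients would make this the check described above; sampling on a grid is of course only evidence, not a proof of nonnegativity.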

So, while my solution is the base conversion from the unique solution at base eta (in the limiting case), it would seem that base conversion is only valid for integer increments of the superexponent. I suppose this is possible, since my formula can only be explicitly proven for integer increments, and I made the mistake of assuming (Occam's razor and all) that if it worked for all integer increments, it would work for fractional increments as well. Oops.
Well, I've prepared some graphs in the hope that I can address the question about my super-logarithm extension. The question was "are you sure?" about my super-logarithm definition not working for bases $1<b<\eta$. My only answer is to look at the graphs.

These are some graphs of the first three coefficients of the super-logarithm, i.e. the Abel function of $b^x$. In each graph there are multiple curves; each curve corresponds to a specific approximation. The approximations shown are roughly $n=\{2, 3, 4, ..., 10\}$, and the independent axis represents the base $1 < b < 4$. What follows are graphs of $A_k$ as functions of $b$ for $k=\{1, 2, 3\}$ for the functional equation $A(b^x) = A(x) + 1$ with $A(0) = -1$.

As you can see, when the base is in the range $1<b<\eta$, although the matrix equation has a solution, the solutions (coefficients) do not seem to converge. This is what leads me to believe that my super-logarithm definition will only work for $b>\eta$.

Andrew Robbins
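For anyone who wants to experiment with these coefficients, here is my reconstruction of the matrix equation as I read the description above: write $A(x)=-1+\sum_{k=1}^{n} a_k x^k$, expand $A(b^x)-A(x)=1$ as a power series using $[x^j]\,b^{kx}=(k\ln b)^j/j!$, and equate coefficients of $x^j$ for $j=0,\dots,n-1$. This is a sketch in Python/numpy, not Andrew's actual code, so check it against his before trusting any digits:

```python
import math
import numpy as np

def slog_coeffs(b, n):
    """Solve the truncated n x n system for the slog coefficients a_1..a_n."""
    lnb = math.log(b)
    M = np.zeros((n, n))
    for j in range(n):
        for k in range(1, n + 1):
            M[j, k - 1] = (k * lnb) ** j / math.factorial(j)
            if j == k:                 # subtract the A(x) term
                M[j, k - 1] -= 1.0
    rhs = np.zeros(n)
    rhs[0] = 1.0                       # constant term of A(b^x) - A(x) = 1
    return np.linalg.solve(M, rhs)

def slog(x, a):
    """A(x) = -1 + sum a_k x^k (only inside the radius of convergence)."""
    return -1.0 + sum(ak * x ** (k + 1) for k, ak in enumerate(a))

a = slog_coeffs(math.e, 20)
print(slog(math.exp(0.1), a) - slog(0.1, a))   # should be close to 1
```

On the side question about speed: numpy hands the solve to LAPACK in double precision, which is far faster than exact or symbolic solvers at these sizes, at the cost of the extended precision this problem eventually needs.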
andydude Wrote:As you can see, when the base is in the range $1<b<\eta$, although the matrix equation has a solution, the solutions (coefficients) do not seem to converge. This is what leads me to believe that my super-logarithm definition will only work for $b>\eta$.

For me they look pretty convergent; they are divergent only at 1. But for $b>1$ they always become "dense in a line", which represents the limit. The only thing is that the closer $b$ approaches 1, the longer it takes until it stabilizes, so for some small $b$ it hasn't stabilized for $n<10$, but the tendency to build up a dense line is recognizable, I think.
I finally got around to comparing my solution with Andrew's. Here's a graph of $S(T(x))$, where $S(x)$ is Andrew's slog for base e, and $T(x)$ is my tetration solution for base e:

[attachment=30]

At first blush, it looks like we're giving the same results. However, if we look at a graph of $S(T(x))-x$, we can see the discrepancies:

[attachment=31]

As you can see, the peak error occurs near x=0.54+k, k an integer, and it peaks at about 0.00078 or so. That's an error on the input to Andrew's $\text{slog}^{-1}(x)$ function, so it gets magnified on the output as we move away from the critical interval. Since the function is essentially linear on this interval (to within a few percent), we can basically say that the error between our two solutions is about 0.1% or less on the critical interval.

My main interest now is to figure out if that cyclic function is indeed a simple sine wave, or if it has a more complex structure. If it's a pure sine wave, and if we can deduce the amplitude and offset, then we could use my solution (which can easily generate hundreds of digits of precision) to calculate Andrew's. I suspect it isn't a pure sine wave, because that would make this just too easy.
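One cheap way to probe that question: least-squares fit $a_0+a_1\sin(2\pi x)+a_2\cos(2\pi x)$ to samples of $S(T(x))-x$ over one period and look at what is left over; a residual well above the data's noise floor would rule out a pure sine. Here is a sketch in Python/numpy, with a synthetic placeholder signal standing in for the real error data:

```python
import numpy as np

# Placeholder signal: amplitude and phase here are made up,
# NOT the measured S(T(x)) - x data.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
err = 7.8e-4 * np.sin(2.0 * np.pi * (x - 0.29))

# Design matrix for a0 + a1*sin(2 pi x) + a2*cos(2 pi x).
A = np.column_stack([np.ones_like(x),
                     np.sin(2.0 * np.pi * x),
                     np.cos(2.0 * np.pi * x)])
coef, _, _, _ = np.linalg.lstsq(A, err, rcond=None)
residual = err - A @ coef

print("fitted amplitude:", np.hypot(coef[1], coef[2]))
print("max residual:    ", np.abs(residual).max())
```

If the residual is large, higher harmonics could be fitted the same way by adding $\sin(4\pi x)$, $\cos(4\pi x)$ columns to the design matrix.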

At any rate, a difficulty here is that I can only estimate the amplitude and offset based on solutions to relatively small systems with Andrew's method. I say "relatively" small, because 560 terms seems like a lot (it took 10.5 hours in SAGE, which seems to be using the maxima engine), and yet given the convergence behavior, I still don't have enough information to understand it. I would need a much larger solution, possibly a system with thousands of terms, and that moves us into supercomputer territory. Any chance we can convince someone with a supercomputer to calculate a relatively large system, say 2000x2000?
For comparison, here's a graph of the very generic solution of T(x) = x+1 for the critical interval (-1, 0]:

[attachment=32]

And the detailed view, S(T(x))-x:

[attachment=33]

As you can see, even this generic, low-tech solution is accurate to within plus or minus 1%. In fact, the error in my solution is only about 10 times smaller, so perhaps mine is not so good after all.

And in this case, the cyclic curve is clearly not a sine curve.