# Tetration Forum

Full Version: Andrew Robbins' Tetration Extension
I just read Andrew Robbins' solution to the tetration problem, which I find very convincing, and want to take the opportunity to present and discuss it here.

The solution ${}^x b$ satisfies the two natural conditions
1. ${}^1b=b$ and ${}^{x+1}b=b^{{}^xb}$
2. $x\mapsto {}^x b$ is infinitely differentiable.
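For integer heights, condition 1 already pins tetration down completely; a minimal sketch (the function name `tetrate` is mine):

```python
def tetrate(b, n):
    """Integer tetration: b^^1 = b, b^^(n+1) = b ** (b^^n)."""
    result = b
    for _ in range(n - 1):
        result = b ** result
    return result

print(tetrate(2, 4))  # 2^(2^(2^2)) = 65536
```

The whole problem, of course, is extending this to non-integer heights.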

However, to avoid difficulties with a later expansion, he instead solves for the tetration logarithm tlog (which he calls the super-logarithm, but I find "tetration logarithm" somewhat more specific), which is the inverse of $x\mapsto {}^x b$, i.e. ${}^{\text{tlog}_b(x)}b=x$.

The first condition is then translated into
1'. $\text{tlog}_b b=1$ and $\text{tlog}_b(x)=\text{tlog}_b(b^x)-1$
while the second condition translates into
2'. $x\mapsto \text{tlog}_b(x)$ is infinitely differentiable.

By 1' we merely need to consider the tlog on the interval $(0,1]$ and can then derive the values on the other intervals $(1,b]$, $(b,b^b]$, $(b^b,b^{b^b}]$, etc. above and on $(-\infty,0]$ below.

The idea is now to define a smooth (infinitely differentiable) function t on $(0,1]$ and to ensure that the function resulting via 1' is also smooth at the joining points of the intervals. It suffices to ensure this at the joining point 0; by 1' smoothness there carries over to every other joining point.
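To illustrate how 1' propagates values from $(0,1]$ to the rest of the real line, here is a small Python sketch; the placeholder `t` used below is just the crude choice $t(x)=x-1$ (it satisfies $t(1)=0$ but is of course not the smooth solution sought here):

```python
import math

def extend_tlog(t, b):
    """Extend t, given on (0,1], to all reals via condition 1':
    tlog_b(x) = tlog_b(log_b(x)) + 1  for x > 1,
    tlog_b(x) = tlog_b(b**x) - 1      for x <= 0."""
    def tlog(x):
        if x > 1:
            return tlog(math.log(x, b)) + 1
        if x <= 0:
            return tlog(b ** x) - 1
        return t(x)  # base case: x in (0, 1]
    return tlog

tlog = extend_tlog(lambda x: x - 1, math.e)
print(tlog(math.e))  # 1.0: tlog_e(e) = tlog_e(1) + 1 = 0 + 1
```

Any choice of t on $(0,1]$ with $t(1)=0$ extends this way; smoothness at the joining points is what singles out the sought solution.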

We simply expand t into a power series at 0 and then try to determine the coefficients such that the resulting function is smooth at 0
$t(x) = \sum_{i=0}^\infty \nu_i \frac{x^i}{i!}$
$\nu_i$ is the i-th derivative of t at 0.
The resulting function on $(-\infty,0]$ is
$s(x)=t(b^x)-1$

We have to ensure that $s^{(k)}(0)=t^{(k)}(0)=\nu_k$ for each $k\ge 0$. What, then, is the $k$-th derivative of $s$ at 0?

For $k=0$ we get $\nu_0=s(0)=t(b^0)-1=t(1)-1=\text{tlog}_b(b^1)-2=\text{tlog}_b(b)-2=1-2=-1$.
For $k\ge 1$ the constant $-1$ vanishes and we compute:
$s^{(k)}(x)=(t(b^x))^{(k)}=\left(\sum_{i=0}^\infty \nu_i \frac{b^{xi}}{i!}\right)^{(k)}=\sum_{i=0}^\infty\frac{\nu_i}{i!}(b^{xi})^{(k)}$
The derivative of $b^{xi}$ is easily determined to be
$(b^{xi})'=b^{xi}\ln(b) i$, and so the $k$-th derivative is $(b^{xi})^{(k)} = b^{xi}(\ln(b)i)^k$, which gives us in turn
$\nu_k=s^{(k)}(0)= \ln(b)^k\sum_{i=0}^\infty\nu_i\frac{i^k}{i!}$ for $k\ge 1$.

This is an infinite linear equation system in the variables $\nu_1,\nu_2,\dots$.
Andrew Robbins' approach is to approximate a solution by considering finite linear equation systems of n equations in the n variables $\nu_{i,n}$, obtained by setting $\nu_{k,n}=0$ for $k>n$.
One can show that these finite systems have a unique solution for $b>1$; numerical evidence then indicates that $\nu_k=\lim_{n\to\infty}\nu_{k,n}$ exists and that the resulting $\nu_k$ solve the infinite equation system.
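The truncation scheme can be sketched in a few lines of pure Python (the code and names are mine, not Andrew's). Row $k=0$ encodes the normalization $\sum_{i\ge 1}\nu_i/i!=1$, which is equivalent to $\nu_0=-1$; rows $k=1,\dots,n-1$ encode the equations above:

```python
import math

def slog_coeffs(b, n):
    """Solve the truncated n x n system for nu_1..nu_n (base b > 1)."""
    lnb = math.log(b)
    # A[k][i-1] = ln(b)^k * i^k / i!, minus the identity shifted onto nu_k
    A = [[lnb ** k * i ** k / math.factorial(i) - (1.0 if i == k else 0.0)
          for i in range(1, n + 1)] for k in range(n)]
    rhs = [1.0] + [0.0] * (n - 1)
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            rhs[r] -= f * rhs[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    # back substitution
    nu = [0.0] * n
    for r in range(n - 1, -1, -1):
        nu[r] = (rhs[r] - sum(A[r][c] * nu[c] for c in range(r + 1, n))) / A[r][r]
    return nu  # nu[0] = nu_1, nu[1] = nu_2, ...

# watch nu_1 settle as n grows (slowly, as discussed below)
for n in (10, 20, 30):
    print(n, slog_coeffs(math.e, n)[0])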

Further numerical evidence shows that the infinite sum in the definition of t converges for the $\nu_i$ so obtained.

However, I would guess that the claimed uniqueness of a solution satisfying 1' and 2' is not guaranteed. We can use different approximations: for a given constant c we can consider the equation systems obtained by setting $\nu_k=c$ for $k>n$. Interestingly, the sum $\sum_{i=0}^\infty \frac{i^k}{i!}$ converges to $eB_k$, where $e$ is Euler's number and $B_k$ are the Bell numbers. So after setting $\nu_k=c$ for $k>n$, the remaining sum $\sum_{i=n+1}^\infty c\frac{i^k}{i!}$ converges and merely introduces an additive constant into the linear equation system. The solutions obtained this way differ from the solution obtained with c=0.
However, I haven't yet verified the convergence properties of these alternative solutions.
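The identity $\sum_{i\ge 0} i^k/i! = eB_k$ is Dobinski's formula; a quick numerical check (helper names are mine):

```python
import math

def bell(k):
    """k-th Bell number via the Bell triangle: 1, 1, 2, 5, 15, 52, ..."""
    row = [1]
    for _ in range(k):
        nxt = [row[-1]]          # new row starts with the last entry of the old
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[0]

for k in range(8):
    partial = sum(i ** k / math.factorial(i) for i in range(80))
    print(k, partial, math.e * bell(k))
```

The 80-term partial sums already agree with $eB_k$ to machine precision for small k.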
I am also curious whether your $C^{\infty}$ slog solution is even analytic.
This could be supported numerically by evaluating the series $t_b(x)$ developed at 0 (originally meant to be defined on $(0,1)$) at some $x\in(1,2)$ (if it converges there; what is the convergence radius of your slog series, anyway?) and comparing the result with $t_b(\log_b(x))+1$. Or, vice versa, by comparing $t_b(x)$ for some $x\in(-\infty,0)$ with $t_b(b^x)-1$. If they all agree, it is quite probably analytic.
I just verified numerically that the superlog critical function (originally defined on $(0,1)$) for base $e$ satisfies
$t(x)=t(\log(x))+1$ for all $x\in(1,2)$.

So it is quite likely that the piecewise defined slog is also analytic.
Congratulations, Andrew!

(However, someone still has to prove this rigorously and also compute the convergence radius.)
As I just found out, Andrew's idea was already considered by P. L. Walker and others around 1990. I don't know yet exactly where it first appeared, but he mentions it in [1]. Even Galidakis' method was mentioned there. (Though everything is done for base e.)

However, to date there seems to be no proof of the convergence (radius) of the slog power series.

[1] P. L. Walker, Infinitely differentiable generalized logarithmic and exponential functions, Mathematics of computation, 57 (1991), No 196, 723-733.
[2] C. W. Clenshaw, D. W. Lozier, F. W. J. Olver and P. R. Turner, Generalized exponential and logarithmic functions, Comput. Math. Appl. 12B (5/6) (1986), 1091-1101.
[3] P. L. Walker, On the solutions of an Abelian functional equation, J. Math. Anal. Appl. 155 (1991), 93-110.
bo198214 Wrote:However, I would guess that the claimed uniqueness of a solution satisfying 1' and 2' is not guaranteed. We can use different approximations: for a given constant c we can consider the equation systems obtained by setting $\nu_k=c$ for $k>n$. Interestingly, the sum $\sum_{i=0}^\infty \frac{i^k}{i!}$ converges to $eB_k$, where $e$ is Euler's number and $B_k$ are the Bell numbers. So after setting $\nu_k=c$ for $k>n$, the remaining sum $\sum_{i=n+1}^\infty c\frac{i^k}{i!}$ converges and merely introduces an additive constant into the linear equation system. The solutions obtained this way differ from the solution obtained with c=0.
However, I haven't yet verified the convergence properties of these alternative solutions.

I will do some testing, but my hunch is that the series will still converge to the "correct" solution, despite your attempt to break it. Because the modification you made is everywhere convergent, it has no finite radius of convergence, and hence it does not introduce a "false" singularity. Therefore, the solutions as n grows should still converge to the correct solution (which does have a finite radius of convergence, and whose terms will hence be much larger in magnitude), even if that convergence is slower.

If you made a modification that introduced a false singularity, I think that could possibly break the convergence, especially if the false singularity were closer to the origin than the true singularity, i.e., if it gave a root test (for terms k>n) greater than abs(1/c_0).

I already have code in place to remove the "correct" singularity, and I could just modify it to instead introduce a false set of coefficients for k>n, and see what happens. As I think more about what a partial series (only terms k>n) of a singularity would look like as a function, i.e., an increasingly insignificant singularity, I'm actually going to guess that the solution will still converge on the correct one, even if only very, very slowly. (And convergence with v_k=0, k>n, is already very slow.)
jaydfox Wrote:I already have code in place to remove the "correct" singularity, and I could just modify it to instead introduce a false set of coefficients for k>n, and see what happens. As I think more about what a partial series (only terms k>n) of a singularity would look like as a function, i.e., an increasingly insignificant singularity, I'm actually going to guess that the solution will still converge on the correct one, even if only very, very slowly. (And convergence with v_k=0, k>n, is already very slow.)

Hmm, just thinking about practicalities, however: the root test of any false function must be less than 1, as is trivially obvious from the first row, but also from the fact that we're solving across the interval (0,1), so we need a root test of at most 1 to reach the end of the interval. But I can try introducing false singularities just outside this radius and see what happens. I'll start with Henryk's suggestion, then try a singularity outside the correct radius of convergence, then one at the radius but at the wrong location, then one inside the correct radius (if there's even a need to go that far).

Update: I'm going to venture further and hypothesize that any series we use for k>n must have a radius of convergence greater than that of the true solution. If it has the same radius of convergence, then that power series must converge on the location(s) of the true singularity(ies), perhaps even on the form of the singularity as well (e.g., logarithmic versus a simple pole).

I came to this idea while thinking about cyclic perturbations of the correct series (e.g., $G(z) = F(z) + \frac{\sin\left(2\pi F(z)\right)}{200\pi}$). These should still satisfy the Abel equation, and hence I wouldn't expect the series, as n grows, to change from that series to the correct one: as n grows, we can just plug in the modified series and get a solution, and the finite matrices should have unique solutions.
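That such a cyclic modification still satisfies the Abel equation can be checked with any Abel function; a toy example with $F=\log_2$, which solves the Abel equation for $x\mapsto 2x$ (this stand-in is mine; the slog case is formally identical):

```python
import math

def F(x):
    return math.log(x, 2)  # Abel function of x -> 2x:  F(2x) = F(x) + 1

def G(x):
    # cyclic modification: since sin(2*pi*(F+1)) = sin(2*pi*F),
    # G also satisfies G(2x) = G(x) + 1
    return F(x) + math.sin(2 * math.pi * F(x)) / (200 * math.pi)

for x in (0.3, 1.0, 5.7):
    print(x, G(2 * x) - G(x))  # 1.0 up to rounding
```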

On the other hand, for any singularity outside the radius of convergence, as n grows the terms would eventually become too small relative to those of the correct sequence, and hence increasingly irrelevant. Therefore, I would still expect the series to converge to the correct sequence in those cases.
jaydfox Wrote:I will do some testing, but my hunch is that the series will still converge on the "correct" solution, despite your attempt to break the solution. Because the modification you made is convergent, it has no radius of convergence, and hence it does not introduce a "false" singularity.
I am not sure what you mean by "false singularity" and "false function", and how you would introduce a singularity somewhere. Nevertheless, I am very interested in the results you will come up with.

I would also suggest that we stop calling one solution "correct" (as it doesn't fit the meaning of the word); rather, I would suggest calling the solution obtained by Andrew's method the "natural" one.
bo198214 Wrote:
jaydfox Wrote:I will do some testing, but my hunch is that the series will still converge on the "correct" solution, despite your attempt to break the solution. Because the modification you made is convergent, it has no radius of convergence, and hence it does not introduce a "false" singularity.
I am not sure what you mean by "false singularity" and "false function", and how you would introduce a singularity somewhere. Nevertheless, I am very interested in the results you will come up with.
Sorry, it's how I picture it in my head and hence how I describe it. If you substituted the terms of the power series of $1/(z-3)$ for $k>n$, then you would have introduced a singularity into the solution. The system you solve would NOT consist only of the n terms you calculated, but of an infinite number of terms, the first n of which are determined by the solution of the system, the rest for k>n being given as an initial condition. Therefore, it would have a singularity at z=3, in fact a simple pole. This is a "false singularity" because it was introduced artificially, i.e., it was not predicted by the convergence of the finite systems. In fact, I would predict that convergence would nonetheless be to a series with singularities at the upper and lower fixed points (for base e).
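To make the construction concrete: prescribing the tail from $1/(z-3) = -\sum_k z^k/3^{k+1}$ means setting $\nu_i = -i!/3^{i+1}$ for $i>n$, and those known terms then move to the right-hand side of each equation. A sketch (names and truncation are mine):

```python
import math

def false_tail_rhs(k, n, b, terms=300):
    """Contribution of the prescribed tail nu_i = -i!/3**(i+1) (i > n)
    to equation k: ln(b)^k * sum_{i>n} nu_i * i^k / i! = -ln(b)^k * sum_{i>n} i^k / 3**(i+1)."""
    return math.log(b) ** k * sum(-(i ** k) / 3 ** (i + 1)
                                  for i in range(n + 1, n + terms))
```

These values would be subtracted from the right-hand side $(1,0,\dots)^T$ before solving the $n\times n$ system; the tail sums converge geometrically since $i^k/3^i\to 0$.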

Quote:I also would suggest to stop calling one solution as being "correct" (as it doesnt fit the meaning of the word), I rather would suggest to call the solution gotten by Andrew's method as being "natural".
I call it correct in the sense that the system should converge to the same solution regardless of whatever initial conditions we specify for k>n, so long as those initial conditions do not reduce the radius of convergence below that of the system solved with 0's for k>n. But we can call it "natural" as well.
Perhaps you are right that this kind of modification still converges to the natural solution. However, there must be ways to solve the infinite equation system that yield different solutions.
What if, for example, we form the matrix $E_{|N}$ for which $(E_{|N}-I)^{-1}(1,0,\dots)^T=\vec{\text{slog}}_{|N}$ not by simply truncating the infinite matrix $E$ (after stripping the first column) to $N\times N$, but by some other method of selecting the entries of an $N\times N$ square matrix?

But I am not sure in which other ways one can choose this matrix so that
1. the solution converges for $N\to\infty$, and
2. the resulting solution indeed satisfies the infinite system.

Edit: I just watched a very interesting video about solutions of infinite linear equation systems, very relevant to our case. See here; watch the video, not just the slides.
I may have a proof for the convergence of the base-1 and base-infinity super-logarithms, but I'm not sure how rigorous it is. It relies on a detailed investigation of the coefficients, and I'm not prepared to formalize it yet.

Andrew Robbins