# Tetration Forum

Full Version: Status of proofs
Henryk brings up the issue of proofs being needed for extending the definition of tetration to the real numbers. I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers. A number of people have asserted that they have extended tetration. I worry that this is a race without a commonly accepted finish line.
Daniel Wrote:I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers.

Well, for me the following assertions need to be proved.
1. Do the coefficients in Andrew's solution converge?
2. Do the series made up of these coefficients converge? What is the radius of convergence?
3. Does the natural development at every point define the same analytic function? Is the piecewise defined function analytic?
4. Is Andrew's solution for bases $b\le e^{1/e}$ identical with the solution obtained by regular development at the lower fixed point?
5. Is the solution analytic with respect to the base as argument (especially at the base $b=e^{1/e}$)?
I think each question can be answered with a yes (and there are strong numerical hints for it), however each proof is at least not obvious.
I call it Andrew's solution merely to give it a common name, though the idea was published before him by Peter Walker.
Apart from that, I would also be interested in how Jay's solution behaves for bases $1<b<e^{1/e}$, whether it is analytic, and generally how it compares with Andrew's solution. Jay's solution seems never to have been considered before.
[edit 25.8, minor textual corrections]

Hmm, my thoughts may be amateurish, but I currently have time for a bit of discussion.

What I called "my matrix method" is nothing more than a concise notation for the manipulations of exponential
series which constitute the iterates x, s^x, s^s^x, ...

The integer-iterates.

It turns out that the coefficients of the series which has to be summed to get the value of
$\hspace{24} T_s^{(m)}\left(x\right) = s^{s^{\dots^{s^x}}}
\quad \text{with } m\text{-fold occurrence of } s$

have a complicated structure, which is nevertheless iterative with a simple scheme.

It is a multisum which is difficult to write down, so I indicate it for the first three iterates m=1, m=2, m=3:

m=1:
$\hspace{24} T^{\tiny(1)}_s(x)= \sum_{k=0}^{\infty} x^k * \frac{log(s)^k}{k!}$

m=2:
$\hspace{24} T^{\tiny(2)}_s(x)=
\sum_{k_2=0}^{\infty}
\sum_{k_1=0}^{\infty} x^{k_2} * k_1^{k_2}
* \begin{pmatrix} k_1+k_2 \\ k_2 \end{pmatrix}
* \frac{ log(s)^{ k_1+k_2}}{(k_1+k_2)!}
$

m=3:
$\hspace{24} T^{\tiny(3)}_s(x)=
\sum_{k_3=0}^{\infty}
\sum_{k_2=0}^{\infty}
\sum_{k_1=0}^{\infty}
x^{k_3}
* k_2^{k_3} k_1^{k_2}
* \begin{pmatrix} k_1+k_2 +k_3 \\ k_2,k_3 \end{pmatrix}
* \frac{ log(s)^{ k_1+k_2+k_3}}{(k_1+k_2+k_3)!}
$

For m > 2 the binomial term expands into a multinomial term.

When denoting
$\hspace{24} n = k_1 + k_2 + ... + k_m$
then the notation for the general case m>1 can be shortened:

m arbitrary, integer > 0:
$\hspace{24} T^{\tiny(m)}_s(x)=
\sum_{k_1,\dots,k_m=0}^{\infty}
x^{k_m}
* \left( k_{m-1}^{k_m} \cdots k_2^{k_3} k_1^{k_2} \right)
* \begin{pmatrix} n \\ k_2,k_3,\dots,k_m \end{pmatrix}
* \frac{ log(s)^n}{n!}
$

(hope I didn't make an index-error).
This formula can also be found in wrong reference removed (update)
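The multisum can be cross-checked numerically against direct evaluation of the tower. The sketch below (my own illustration, not part of the original derivation) truncates every index at an arbitrary bound N and uses the identity multinomial(n; k_2,...,k_m)/n! = 1/(k_1!·...·k_m!); the names `tower` and `T` are chosen just for this example:

```python
import itertools
import math

def tower(m, s, x):
    """Direct evaluation of the m-fold tower s^s^...^s^x."""
    for _ in range(m):
        x = s ** x
    return x

def T(m, s, x, N=20):
    """The multisum T^(m)_s(x), truncating every index k_i at N.
    Uses multinomial(n; k_2..k_m) / n! = 1 / (k_1! * ... * k_m!)."""
    ls = math.log(s)
    total = 0.0
    for ks in itertools.product(range(N), repeat=m):
        term = x ** ks[-1] * ls ** sum(ks)
        for a, b in zip(ks, ks[1:]):   # k_1^{k_2} * ... * k_{m-1}^{k_m}
            term *= a ** b             # note 0^0 = 1 in Python, as required here
        for k in ks:
            term /= math.factorial(k)
        total += term
    return total

s, x = 1.3, 0.5
for m in (1, 2, 3):
    print(m, abs(T(m, s, x) - tower(m, s, x)))   # differences are negligibly small
```

With N = 20 the truncated sums already agree with the directly computed towers to within floating-point noise, thanks to the factorials in the denominators.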

Convergence:
Because we know that power towers of finite height are computable by
exponential series, we also know that in the finite case this sum
expression converges.
This is due to the weight that the factorials in the denominators contribute.

Bound for the base-parameter s:
However, in the case of infinite height we already know
that log(s) is limited to -e < log(s) < 1/e.
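This bound can be illustrated numerically: the iteration x → s^x (starting from 1) converges exactly when log(s) stays in that range. A small sketch (the function name `h` follows Galidakis' h-function mentioned later in the thread):

```python
import math

def h(s, iters=1000):
    """Iterate x -> s^x from x = 1; this converges to the infinite
    power tower h(s) when e^(-e) <= s <= e^(1/e)."""
    x = 1.0
    for _ in range(iters):
        x = s ** x
    return x

print(h(math.sqrt(2)))   # ~ 2.0, since sqrt(2)^2 = 2
print(h(1.01))           # a value slightly above 1
# For s > e^(1/e) ~ 1.4447 the iterates instead diverge to infinity.
```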

Differentiability:
In the general formula above the parameter s occurs as the n-th power
of its logarithm (divided by n!). This immediately provides
infinite differentiability with respect to s.
Differentiation with respect to the height-parameter m could be described
completely once the eigensystem-decomposition is used (see below) -
I have kept away from this discussion due to my lack
of experience with matrix-differentiation :-(

All in all, I think that this description is undoubtedly the unique definition
of tetration of integer heights in terms of power series in s and x.

------------------------------------------------

For the continuous case my idea was that it may be determined
either by eigenvalue-decomposition of the involved matrices or,
where this is impossible due to inadmissible parameters s,
by the matrix-logarithm.

The eigenvalue-decomposition has the disadvantage that its
properties are not yet analytically known, while the matrix-logarithm
is sometimes an option when the eigenvalue-decomposition
fails, since it seems that the resulting coefficients can be
determined more easily.
An example is the U-iteration U_s^(m)(x) = s^(U_s^(m-1)(x)) - 1
with s=e, which we discuss here as the "exp(x)-1" version. For this
case Daniel has already provided an array of coefficients for a series
which allows one to determine the U-tetration U_s^(m)(x) for fractional,
real and even complex parameters m.
These coefficients seem to be analytically derived/derivable; only
the generating scheme was not documented. However, I provided a method
to produce these coefficients for finitely, but arbitrarily, many
terms of the series for U_s^(m)(x) by symbolic exponentiation of
the parametrized matrix-logarithm, so at least that is a *basis* for
the analysis of convergence and summability of the occurring series.
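For the e^x - 1 case this can be made completely concrete: the truncated Carleman matrix U of x → e^x - 1 has ones on the diagonal, so U - I is nilpotent and the binomial series for U^(1/2) terminates after finitely many terms. The sketch below (my own illustration in exact rational arithmetic, not Daniel's actual scheme) recovers the first half-iterate coefficients and verifies the result by squaring:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation size (arbitrary)

# Stirling numbers of the second kind: S[i][j] = S(i, j)
S = [[0] * N for _ in range(N)]
S[0][0] = 1
for i in range(1, N):
    for j in range(1, N):
        S[i][j] = S[i - 1][j - 1] + j * S[i - 1][j]

# Carleman matrix of f(x) = e^x - 1: column j holds the coefficients of
# f(x)^j = sum_i j! * S(i, j) * x^i / i!, so that V(x)~ * U = V(f(x))~.
U = [[Fraction(factorial(j) * S[i][j], factorial(i)) for j in range(N)]
     for i in range(N)]
I = [[Fraction(i == j) for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def matpow_binomial(t):
    """U^t = sum_n binom(t, n) (U - I)^n; exact here, because U - I is
    strictly triangular and hence nilpotent: (U - I)^N = 0."""
    D = [[U[i][j] - I[i][j] for j in range(N)] for i in range(N)]
    acc, result, c = I, [[Fraction(0)] * N for _ in range(N)], Fraction(1)
    for n in range(N):
        for i in range(N):
            for j in range(N):
                result[i][j] += c * acc[i][j]
        acc = matmul(acc, D)
        c = c * (Fraction(t) - n) / (n + 1)   # binom(t, n+1) from binom(t, n)
    return result

H = matpow_binomial(Fraction(1, 2))
# Column 1 of H carries the power-series coefficients of the half-iterate:
print([str(H[i][1]) for i in range(5)])   # -> ['0', '1', '1/4', '1/48', '0']
assert matmul(H, H) == U                  # H is an exact square root of U
```

The coefficients x + x²/4 + x³/48 + 0·x⁴ + ... match what one obtains by solving g(g(x)) = e^x - 1 order by order, which is a useful consistency check on any such coefficient scheme.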

Except for such borderline cases I would prefer to define the
continuous tetration based on the eigenanalysis of the involved
matrix Bs. From the convergent cases -e < log(s) < 1/e we have good
numerical evidence that the eigenvalues are the powers of log(h(s)),
where h(s) is Ioannis Galidakis' h-function.
If that hypothesis can be verified, then differentiation with respect
to the height-parameter m reduces to the fact that m occurs as a power
in the exponent of the set of eigenvalues, and that this is then also
infinitely often differentiable.
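That hypothesis is at least consistent with regular iteration at the fixed point: the multiplier of x → s^x at its lower fixed point h = h(s) is h·log(s), and since log(h) = h·log(s) this is exactly log(h(s)); the k-th eigenvalue would then be log(h(s))^k. A small numerical check at s = sqrt(2) (my own sketch):

```python
import math

s = math.sqrt(2)

# lower fixed point h of s^x = x, found by iterating x -> s^x from 1
h = 1.0
for _ in range(1000):
    h = s ** h          # converges to h(sqrt(2)) = 2

multiplier = h * math.log(s)        # derivative of s^x at the fixed point
print(h, multiplier, math.log(h))   # the last two both equal log(2) = 0.693...
```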

The structure of the eigenvector-matrices is also not yet analytically known,
although the numerical approximations are already usable, so that the
approximations for real or complex parameters m (for s in the admissible
range) can simply be computed from the real or complex powers of the
set of eigenvalues.
Well, I have already found the structure of two eigenvectors - they are
solutions of
[ edited 2.time ]
V(x)~ * Bs = V(x)~ * 1 // where V(x) is of the type [1,x,x^2,x^3,...]

A(x)~ * Bs = A(x)~ * eigenvalue_2 // where A(x) is of the type [0,1*x,2*x^2,3*x^3,...]
A(x) may be written as A(x) = x * V(x)' = x * dV(x)/dx, so
V(x)'~ * Bs = V(x)'~ (w.r.t. x)

X ~ * Bs = X ~ * eigenvalue_k // structure of X unknown yet

and as such the eigenvector for eigenvalue_1 = 1 seems simply to reflect
the concept of "fixpoints" (a fixpoint for the eigenvalue_k = 1).
For s = sqrt(2) we interestingly have 2 possible eigenvectors for the same eigenvalue 1
(which actually occurs only once):

V(2)~ * Bs = V(2)~ * 1
V(4)~ * Bs = V(4)~ * 1

so this is another point for further discussion...
The eigenvector for the second eigenvalue has a simple structure,
but for higher indices I have not been successful yet.
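The two fixed-point eigenvectors can be checked numerically with a truncated Bs. Here I assume the concrete realization B[i][j] = (j·log s)^i / i!, so that V(x)~ * B = V(s^x)~ (an assumption about the intended matrix, but consistent with the eigenvector equations above); both V(2) and V(4) then act as left eigenvectors for eigenvalue 1, up to truncation error:

```python
import math

N = 40                    # truncation size (arbitrary)
s = math.sqrt(2)
ls = math.log(s)

# B[i][j] = (j*log s)^i / i! realizes V(x)~ * B = V(s^x)~ ,
# since sum_i x^i (j*ls)^i / i! = e^(j*x*ls) = (s^x)^j.
B = [[(j * ls) ** i / math.factorial(i) for j in range(N)] for i in range(N)]

def left_apply(x, cols=6):
    """First `cols` components of V(x)~ * B (truncated at N terms)."""
    return [sum(x ** i * B[i][j] for i in range(N)) for j in range(cols)]

# At a fixed point (s^x = x) we expect V(x)~ * B ~= V(x)~ = [1, x, x^2, ...].
for fp in (2.0, 4.0):
    print([round(c, 6) for c in left_apply(fp)])   # ~ [1, fp, fp^2, ...]
```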

So far my opinions/thoughts

Gottfried
Great feedback, but I'm asking about proofs for an arbitrary approach. For example, I have my own approach and don't want to have to prove that my approach is equivalent to someone else's. An exception is something like -

R. Aldrovandi and L. P. Freitas,
Continuous iteration of dynamical maps,
J. Math. Phys. 39, 5324 (1998)

where I have experimental results indicating that my work is consistent with Aldrovandi and Freitas's work.
bo198214 Wrote:Is Andrew's solution for bases $b\le e^{1/e}$ identical with the solution obtained by regular development at the lower fixed point?
Can you clarify this comment please?
Daniel Wrote:
bo198214 Wrote:Is Andrew's solution for bases $b\le e^{1/e}$ identical with the solution obtained by regular development at the lower fixed point?
Can you clarify this comment please?

If I understood you correctly, then your approach was to develop the function $b^x=\exp_b(x)$ at a fixed point, regularly iterate the function there, $\exp_b^{\circ t}(x)$, and then define tetration as ${}^tb=\exp_b^{\circ t}(1)$. It quite looks (as I numerically verified in this post) as if this yields the same result as Andrew's approach (of course only for bases $b\le\eta=e^{1/e}$, because only there does a real fixed point exist at all. The solution can also only be applied to the lower fixed point (of the two fixed points of $b^x$ for $b<\eta$), because the other fixed point is not reachable from 1.)
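This regular-iteration recipe can be sketched numerically: push x toward the attracting lower fixed point a by iterating b^x, multiply the (linearized) deviation by λ^t with multiplier λ = a·log b, then pull back with the inverse map. A rough illustration for b = sqrt(2), where a = 2 (the number of conjugation steps k and the linearization are both approximations):

```python
import math

def exp_b_iter_t(x, b, t, k=40):
    """Approximate regular iteration exp_b^(t)(x) of f(y) = b^y at its
    lower attracting fixed point a = 2 (valid here for b = sqrt(2)):
    k forward steps, scale the deviation by lambda^t, k inverse steps."""
    a = 2.0                       # lower fixed point of sqrt(2)^y
    lam = a * math.log(b)         # multiplier log(2) ~ 0.693 < 1: attracting
    for _ in range(k):
        x = b ** x                # push x toward the fixed point
    x = a + lam ** t * (x - a)    # fractional step in linearized coordinates
    for _ in range(k):
        x = math.log(x) / math.log(b)   # pull back with the inverse map
    return x

b = math.sqrt(2)
half = exp_b_iter_t(1.0, b, 0.5)        # half-iterate of exp_b applied to 1
print(exp_b_iter_t(half, b, 0.5), b)    # applying it twice ~ b^1 = sqrt(2)
```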
So it indeed seems that Gottfried's method also leads to the same result as Andrew's approach. So I add to the list of lacking proofs

Let $E_b$ be the power derivation matrix of $x\to b^x$ at 0; show that $E_b^t := \sum_{n=0}^\infty \binom{t}{n} (E_b-I)^n$ exists (for each $b>1$ and each $m\times m$ truncation of $E_b$) and that the first row are the coefficients of a power series $e_b^t$ such that $\text{slog}_b(e_b^t(1))=t$ for Andrew's slog.

An easier step in that direction could be to show that $e_b^t(x)$ is equal to the regular iteration at the lower fixed point of $b^x$ for $1<b<e^{1/e}$.
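As a small consistency check of the definition (not a proof of convergence, which is exactly what is lacking): for integer t the binomial series terminates, so an m×m truncation must reproduce the ordinary matrix power exactly. A sketch with an assumed Carleman-type realization E[i][j] = (j·log b)^i / i! of the truncated $E_b$:

```python
import math

N = 8                  # truncation size (arbitrary)
b = 1.5
lb = math.log(b)

# Assumed truncation of E_b: column j holds the Taylor coefficients
# of (b^x)^j at 0, i.e. E[i][j] = (j*log b)^i / i!.
E = [[(j * lb) ** i / math.factorial(i) for j in range(N)] for i in range(N)]
I = [[float(i == j) for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def binom_power(t, terms=10):
    """E^t := sum_n binom(t, n) (E - I)^n, truncated after `terms` terms.
    For integer t, binom(t, n) = 0 once n > t, so the series is finite."""
    D = [[E[i][j] - I[i][j] for j in range(N)] for i in range(N)]
    acc, out, c = I, [[0.0] * N for _ in range(N)], 1.0
    for n in range(terms):
        for i in range(N):
            for j in range(N):
                out[i][j] += c * acc[i][j]
        acc = matmul(acc, D)
        c *= (t - n) / (n + 1)    # binom(t, n+1) from binom(t, n)
    return out

E2 = binom_power(2.0)    # I + 2(E-I) + (E-I)^2 = E^2, exactly
ref = matmul(E, E)
err = max(abs(E2[i][j] - ref[i][j]) for i in range(N) for j in range(N))
print(err)               # tiny: pure floating-point roundoff
```

For fractional t the same code runs, but whether the series converges as the truncation size and the number of terms grow is precisely the open question stated above.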
Well, I have given this a lot of thought, and I believe that it is easier to prove real-analytic tetration than complex-analytic tetration. The reason is that real-valued tetration has a smaller domain than complex-valued tetration. The first realization is that tetration over the real numbers can produce complex numbers. After this realization we can eliminate a great deal of the domain over which real-analyticity must fail (because it is not continuous, and if it's not continuous it can't be analytic). To show what I mean by this I have included a color-coded plot of log(abs(b^^x)) where gray is a finite real number, blue is a complex-number output, and red is indeterminate.

I have also included a prettier version of the domain where the circles indicate indeterminate outputs. The dark-gray quarter-plane is the largest domain over which real-analytic tetration can be defined. The medium-gray regions have real outputs, but the dotted line indicates a discontinuity, which would make real-analyticity fail if it were in the domain of real tetration. The light-gray region is probably not real-valued, but it is real-valued with a first-approximation tetration (linear critical).

A mathematical definition of the dark-gray domain is:

$
D = \left\{ (b, x) \text{ where } b > 0 \text{ and } \begin{cases}
x > -1 & \text{ if } b = 1, \\
x > -2 & \text{ otherwise}.
\end{cases}
\right\}
$
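For concreteness, the domain D can be restated as a small predicate (just a direct transcription of the definition above; the name `in_D` is mine):

```python
def in_D(b, x):
    """Membership test for the dark-gray domain D: b > 0, with
    x > -1 when b = 1 and x > -2 otherwise."""
    if b <= 0:
        return False
    return x > -1 if b == 1 else x > -2

print(in_D(2.0, -1.5))   # True: b != 1, so x > -2 suffices
print(in_D(1.0, -1.5))   # False: at b = 1 we need x > -1
print(in_D(0.0, 0.0))    # False: b must be positive
```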

So to summarize, I believe that what needs to be proven is that tetration whose domain is the dark-gray region given above is real-analytic in both b and x. Once it is proven that real-analytic tetration exists over this domain, then we can worry about its uniqueness.

Andrew Robbins
Daniel Wrote:Great feedback, but I'm asking about proofs for an arbitrary approach. For example, I have my own approach and don't want to have to prove that my approach is equivalent to someone else's. An exception is something like -

R. Aldrovandi and L. P. Freitas,
Continuous iteration of dynamical maps,
J. Math. Phys. 39, 5324 (1998)

where I have experimental results indicating that my work is consistent with Aldrovandi and Freitas's work.

Daniel -

that's exactly what I felt in my case, too. Luckily Henryk established some connections for my approach, so I feel more at home now. I just read through your pages again, but unfortunately I still could not follow your derivations in detail. If there were some more examples - actual examples of how you computed the terms of your series, etc. - it would be easier for me to see whether, at a certain stage of reading, I'm still on the right track with understanding. It's a sort of redundancy that I need (and that I miss in many articles: often they go in medias res very fast and use a very concise notation with a lot of notational implications, where I cannot judge whether or not I've already lost the thread - so after such a point I decide to switch to the "skimming" mode of reading, at best).
So a complete example of how you compute your results would also help me to understand the rationale behind them (but well, this doesn't say much; I'm surely not the most relevant case for this, and my difficulties are then not the most representative...).

Just a short feedback -

Gottfried