# Tetration Forum

Note: the following is a fairly long statement which I also want to post in sci.math, so please forgive that I did not put it into TeX format.

For the computation of tetration to fractional heights (iterates) I employ the diagonalization of operator matrices. This implements well-known manipulations of the coefficients of formal power series; in fact, if the base b for tetration is b = exp(exp(-1)) this can be done by the matrix logarithm, and if 1 < b < exp(exp(-1)) we can apply diagonalization directly.
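For illustration, the diagonalization step can be sketched numerically. This is a minimal sketch only: the truncation size N = 24, the base b = sqrt(2), and the helper names are illustrative choices for this example, not the actual setup used in the thread.

```python
import math
import numpy as np

def carleman(b, N):
    """Truncated Carleman/operator matrix M of f(x) = b^x in the monomial
    basis: M[i, j] = coefficient of x^j in f(x)^i = exp(i*ln(b)*x), so that
    M @ (1, x, x^2, ...) ~ (1, f(x), f(x)^2, ...)."""
    lb = math.log(b)
    return np.array([[(i * lb) ** j / math.factorial(j) for j in range(N)]
                     for i in range(N)])

def frac_iterate(b, h, x, N=24):
    """h-th (fractional) iterate of f(x) = b^x via diagonalization:
    M^h = P diag(lam)^h P^-1; row 1 applied to (1, x, x^2, ...) gives f^[h](x)."""
    M = carleman(b, N)
    lam, P = np.linalg.eig(M)
    Mh = (P * lam.astype(complex) ** h) @ np.linalg.inv(P)
    return (Mh[1] @ x ** np.arange(N)).real

b = 2 ** 0.5
half = frac_iterate(b, 0.5, 0.5)    # half-iterate of b^x at x = 0.5
print(half, frac_iterate(b, 0.5, half), b ** 0.5)  # h(h(x)) should be close to b^x
```

Composing the half-iterate with itself approximately recovers b^x, which is the basic consistency check for fractional iteration by matrix powers.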

But because of the notorious difference between the solutions when the series are developed around different fixpoints, I'm still not confident that this method is the final or the best solution.

In an earlier article I discussed a simple interpolation approach, intended as a replacement for diagonalization for difficult (for instance complex) bases, and instead found that it agrees with diagonalization down to the level of identity of the coefficients of the occurring power series.

This interpolation follows the common idea of polynomial interpolation and its generalization to the case of infinite order / power series, or the use of the (finite-case) Vandermonde matrix. Here factors like (x-1), (x^2-1), (x^3-1) etc. typically and essentially occur in numerators and denominators.

I had earlier brought the "false logarithmic series" to the readers' attention (see the note below and this link), and this time I applied that interpolation technique to the problem of re-engineering the series for the logarithm, to see whether we get the correct series.

Now let's say better: "a" logarithm, because we find that this interpolation gets correct results at integer arguments but systematically wrong results at fractional arguments. This reflects the observation in tetration, where the different fixpoints give identical results at integer heights but differing results at fractional heights, so that fractional heights are not reflected optimally by *any* such series developed around some fixpoint.

Let's look at a simple example for "a false logarithmic series".

We want to find a power series for the logarithm to base 2, such that this series evaluated at x gives the base-2 logarithm of x. Propose it with the initially unknown coefficients a, b, c, d, ...
Code:
log_2(x) = a + b*x + c*x^2 + d*x^3 + ...    // unknown coefficients a,b,c,d to be determined
and let's approach this problem stepwise from finite polynomials to a final generalization to a powerseries.

First we set up a system of equations to find the unknown coefficients a, b, c, d of a cubic polynomial.
We write
Code:
x=2^0:   a + b 2^0 + c (2^0)^2 + d (2^0)^3  = 0
x=2^1:   a + b 2^1 + c (2^1)^2 + d (2^1)^3  = 1
x=2^2:   a + b 2^2 + c (2^2)^2 + d (2^2)^3  = 2
x=2^3:   a + b 2^3 + c (2^3)^2 + d (2^3)^3  = 3
and solve by the Vandermonde method. Let's write this as a matrix equation.

First we write the matrix of coefficients VV_3 (the index 3 gives the dimension):
Code:
VV_3 = [1  1   1    1]
       [1  2   4    8]
       [1  4  16   64]
       [1  8  64  512]

C = columnvector [a,b,c,d],  the C-oefficients
L = columnvector [0,1,2,3],  the L-og values

and VV_3 * C = L
and solve C = VV_3^-1 * L
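This solve can be reproduced exactly with rational arithmetic; a minimal sketch in Python (the exact Gauss-Jordan helper and its names are mine, not part of the original computation):

```python
from fractions import Fraction

# nodes x_k = 2^k with target values log_2(x_k) = k, for k = 0..3
nodes  = [Fraction(2) ** k for k in range(4)]
values = [Fraction(k) for k in range(4)]

# Vandermonde matrix: VV[i][j] = nodes[i]^j
VV = [[x ** j for j in range(4)] for x in nodes]

def solve(A, rhs):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    A = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        pivval = A[col][col]
        A[col] = [a / pivval for a in A[col]]
        for r in range(n):
            if r != col:
                fac = A[r][col]
                A[r] = [a - fac * p for a, p in zip(A[r], A[col])]
    return [row[-1] for row in A]

C = solve(VV, values)
print(C)  # [Fraction(-31, 21), Fraction(7, 4), Fraction(-7, 24), Fraction(1, 56)]
```

The exact rational result reproduces the cubic f_3 given below.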

We get a polynomial in x
Code:
f_3(x) = -31/21 + 7/4*x - 7/24*x^2 + 1/56*x^3
For the integer exponents we get
Code:
f_3(2^0) = 0   = log_2(1)
f_3(2^1) = 1   = log_2(2)
f_3(2^2) = 2   = log_2(4)
f_3(2^3) = 3   = log_2(8)
and for the interpolation to some fractional exponent we get, for instance
Code:
f_3(2^0.5) = 0.465857551857   =/= 0.5 = log_2(2^0.5)

From the computation scheme it is obvious how this can be generalized to higher-order polynomials and higher-order approximants. However, this does not give a procedure that approximates the true logarithm at fractional exponents, however high the dimension (and thus the order of the polynomials) is.
Code:
f_12(2^0.5) = 0.473784748806
f_24(2^0.5) = 0.473811031008
f_48(2^0.5) = 0.473811037422
f_96(2^0.5) = 0.473811037422
with a deviation from the correct value of about

f(2^0.5) = 0.5 - 0.0261889625777

As the dimension/order goes to infinity, the series converges to
Code:
lim n->oo  f_n(x) = -1.60669515242 + 2*x - 4/9*x^2 + 8/147*x^3 - 16/4725*x^4 + 32/302715*x^5 + O(x^6)
which will give correct results for natural exponents but will be false with fractional exponents.

The method of interpolation uses the paradigm of polynomial interpolation, which, even when generalized to infinite polynomial order, continues to give false results for fractional exponents.
The matrix method for tetration employs either the same interpolation method directly (see my discussion of "exponential polynomial interpolation", an ugly term, but I did not find a better one) or the same method in disguised form (one can express an identity between diagonalization and this interpolation method).

So, possibly, just as we need to move away from this interpolation paradigm to arrive at a meaningful series for the logarithm, we need a similar move to arrive at a more meaningful interpolation for fractional tetration.

What do you think?

Gottfried Helms

Note: The original idea of the "false logarithm" was triggered by the article "How Euler did it - a false logarithm series" by Ed Sandifer in MAA Online, where he presents a similar analysis by L. Euler:
http://www.maa.org/editorial/euler/How%2...series.pdf
I didn't check the exact relation between the interpolation method here and the Euler series, but I note that the first coefficient -1.60669... also occurs in that article.
For the Euler-paper see:
Eneström-index E190. "Consideratio quarumdam serierum quae singularibus proprietatibus sunt praeditae"
(“Consideration of some series which are distinguished by special properties”).
The interpolation you describe is what Ansus also uses as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or Lagrange formula or compute it directly by solving the matrix equation as you do.)
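By that uniqueness, the Lagrange form through the same nodes gives the identical cubic as the direct matrix solve; a quick sketch (the function name is mine):

```python
nodes, vals = [1, 2, 4, 8], [0, 1, 2, 3]

def lagrange(x):
    """Lagrange interpolation through (2^k, k), k = 0..3; by uniqueness this is
    the same cubic f_3 as obtained from the Vandermonde solve."""
    total = 0.0
    for i, xi in enumerate(nodes):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += vals[i] * li
    return total

print(lagrange(4))         # 2.0, exact at the nodes
print(lagrange(2 ** 0.5))  # ~0.4658..., the same "false" value as f_3(2^0.5)
```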

I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to regular iteration at the lower (attracting) fixed point.

In so far the interpolation is "correct".

There are two problems that occur in your description with the logarithm.
1. the logarithm has no power series at 0 (perhaps better to try interpolating log(x+1))
2. the interpolation method seems to converge only if $f^{[n]}$ tends to a limit, which the logarithm does not satisfy.
(09/16/2009, 11:32 AM)bo198214 Wrote: [ -> ]The interpolation you describe is what Ansus also uses as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or Lagrange formula or compute it directly by solving the matrix equation as you do.)

I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to regular iteration at the lower (attracting) fixed point.

Yes, and the wider our "unification" project reaches, the more methods will be addressed by this consideration...

Quote:In so far the interpolation is "correct".

Yes, maybe my wording was not explanatory enough. I didn't want to say it is incorrect in that respect. The "false" logarithm series is also "correct so far" within its own set of conditions/requirements; it is completely legitimate in that setting, namely as an interpolation of some function in x. (We have had this argument often: an interpolation is only unique modulo some 1-periodic function.)

But in the wider view, that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely satisfying log(a*b) = log(a) + log(b) and something more.
Since regular iteration gives us some correct interpolated results, they satisfy many conditions correctly. But as I understood our search for uniqueness, there is something more, for which the described method of interpolation is not sufficient. (I hoped Dmitrii's integral approach would answer such wider conditions; perhaps you both get it working soon.)

Quote:There are two problems that occur in your description with the logarithm.
1. the logarithm has no power series at 0 (perhaps better to try interpolating log(x+1))

Quote:2. the interpolation method seems to converge only if $f^{[n]}$ tends to a limit, which the logarithm does not satisfy.

Hmmm. How far can this argument be applied to that interpolation method in more generality?
Understand me correctly: I'm not searching for a new interpolation method for the logarithm, but I want to make things explicit down to the last "quark". Then I hope to find a key to how such a method could be adapted/corrected/transformed to lead to the "correct" result ("correct" here in the sense of the wider scene), first in the easier example (the logarithm), and then to see what such a correction/transformation would do with our regular iteration.

Hmm. I hope I did not create more obfuscation than clarity...

[update] ... and that I'm not just chasing some crazy idea... [/update]

Gottfried
(09/16/2009, 11:32 AM)bo198214 Wrote: [ -> ]There are two problems that occur in your description with the logarithm.
1. the logarithm has no power series at 0 (perhaps better to try interpolating log(x+1))

I've got
Code:
f_32(x) = 1.26135017559*x - 0.300463998555*x^2 + 0.0419013176151*x^3 - 0.00288201496505*x^4 + 0.0000960702658021*x^5 + O(x^6)
I don't see anything in the structure of the coefficients (rational numbers, multiples of log(2) and the like).
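For reference, this interpolation of log_2(x+1) can be reproduced with exact rational arithmetic; a sketch (the node scheme x_k = 2^k - 1 with values k is my assumption, and I use degree 16 instead of 32 to keep the sketch fast):

```python
from fractions import Fraction

def solve(A, rhs):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    A = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        pivval = A[col][col]
        A[col] = [a / pivval for a in A[col]]
        for r in range(n):
            if r != col:
                fac = A[r][col]
                A[r] = [a - fac * p for a, p in zip(A[r], A[col])]
    return [row[-1] for row in A]

# interpolate log_2(x+1): nodes x_k = 2^k - 1 with values k, for k = 0..n
n = 16   # assumed convention; the post uses order 32
nodes = [Fraction(2) ** k - 1 for k in range(n + 1)]
VV = [[x ** j for j in range(n + 1)] for x in nodes]
C = solve(VV, [Fraction(k) for k in range(n + 1)])

print(C[0])         # exactly 0: the node (0, 0) kills the constant term
print(float(C[1]))  # close to the first coefficient 1.26135... reported for f_32
```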

Gottfried
(09/16/2009, 01:59 PM)Gottfried Wrote: [ -> ]
Quote:In so far the interpolation is "correct".

Yes, maybe my wording was not explanatory enough. I didn't want to say it is incorrect in that respect. The "false" logarithm series is also "correct so far" within its own set of conditions/requirements; it is completely legitimate in that setting, namely as an interpolation of some function in x. (We have had this argument often: an interpolation is only unique modulo some 1-periodic function.)

But in the wider view, that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely satisfying log(a*b) = log(a) + log(b) and something more.

With "correct" I meant here satisfaction of the superfunction equation $f(x+1)=b^{f(x)}$. Indeed it is a surprise, as I wouldn't expect satisfaction of this equation from some interpolation. But of course it does not suffice for uniqueness, while your logarithmic equation does.
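This can be checked numerically: the regular iteration at the attracting fixed point satisfies the superfunction equation up to the accuracy of the limit. A sketch only, assuming base b = sqrt(2) with lower fixed point t = 2 and multiplier s = ln 2; the Koenigs-type limit construction and the cutoff n = 40 are my illustrative choices:

```python
import math

b = 2 ** 0.5                 # base with 1 < b < e^(1/e)
t = 2.0                      # lower (attracting) fixed point of x -> b^x
s = math.log(b) * b ** t     # multiplier at t: ln(b)*b^t = ln(2) ~ 0.693

def F(x, n=40):
    """Regular superfunction at t via a Koenigs-type limit:
    F(x) = lim_n log_b^[n](t - s^(x+n)), approaching t from below."""
    z = t - s ** (x + n)
    for _ in range(n):
        z = math.log(z) / math.log(b)   # apply log_b once
    return z

# F satisfies the superfunction equation F(x+1) = b^F(x) up to the cutoff error
for x in (0.5, 1.0, 1.5):
    print(x, F(x + 1) - b ** F(x))   # the differences are tiny
```

The defect of the equation shrinks with the cutoff n, which is the sense in which the interpolation/regular iteration is "correct" here.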

Quote:
Quote:2. the interpolation method seems to converge only if $f^{[n]}$ tends to a limit, which the logarithm does not satisfy.

Hmmm. How far can this argument be applied to that interpolation method in more generality?

My understanding is the following: you know there is a theorem that an analytic function is uniquely determined by its values at infinitely many arguments that have an accumulation point in the domain of holomorphy.

In our case of $\exp_b^{[n]}$ having a limit, i.e. $1<b\le e^{1/e}$, I guess the regular super-exponential is analytic at $\infty$. So under this condition the sequence of natural numbers as arguments has the limit point $\infty$ inside the domain of holomorphy, and hence the function is uniquely determined by its values on the natural numbers.

The difficult case is where it is not analytic at infinity. However, all methods seem to converge to the same thing in this case, and these methods are all non-interpolative. Here it seems that the function is bounded when approaching infinity within a certain angle/sector: a kind of analyticity not in a whole vicinity of infinity but only when approaching through this sector.