Hi friends - I've reached the point of asking "where does the matrix method stand?", so I'll take the occasion to provide a broader summary.


I'll use the following two definitions for the related iterations; I like the Big-T notation and also use Big-U:

Code:

T(b,x,h) = b^T(b,x,h-1)       T(b,x,0) = x
U(b,x,h) = b^U(b,x,h-1) - 1   U(b,x,0) = x

Also I call 1/e^e < b < e^(1/e) the "safe range" for b in the following.
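For concreteness, the two recursions can be written as a small sketch (plain Python; the names T and U are chosen to match the notation above):

```python
def T(b, x, h):
    # T(b,x,h) = b^T(b,x,h-1), T(b,x,0) = x  (h-fold iterated exponential)
    for _ in range(h):
        x = b ** x
    return x

def U(b, x, h):
    # U(b,x,h) = b^U(b,x,h-1) - 1, U(b,x,0) = x  (shifted variant)
    for _ in range(h):
        x = b ** x - 1
    return x

print(T(2, 1, 3))   # 2^2^2 = 16
```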

------------- Integer tetration --------------------------------

(I assume my matrix notation is known here; note that I switched to the parameter b for the base, which is more common here, where I had earlier used s.)

Implementation of T(b,x,1); the value appears in the second column of the result:

Code:

V(x)~ * dV(log(b)) * B = V(y)~ = V(b^x)~

The matrix approach is just an elementary reformulation here. The coefficients needed for the transformation x --> b^x, i.e. T(b,x,0) --> T(b,x,1), are those of the appropriate exponential series, simply collected in a matrix scheme.
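A minimal numerical sketch of this reformulation, under the assumption that the operator B carries the exponential-series coefficients B[k,j] = j^k/k! (my reading of the construction; the dimension n is a finite truncation):

```python
import numpy as np
from math import factorial, log

n = 24                 # truncation dimension (finite approximation)
b, x = 1.3, 0.5        # b inside the "safe range"

# B collects the exponential-series coefficients: B[k,j] = j^k / k!,
# so that sum_k (x*log(b))^k * B[k,j] = exp(j*x*log(b)) = (b^x)^j
B = np.array([[j**k / factorial(k) for j in range(n)] for k in range(n)])

V = lambda z: np.array([z**k for k in range(n)])   # V(z)~ = (1, z, z^2, ...)
y = V(x) @ np.diag([log(b)**k for k in range(n)]) @ B

print(y[1], b**x)      # the second column reproduces b^x
```

The remaining columns hold the powers (b^x)^j, which is exactly what makes the scheme iterable.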

This is obviously iterable:

Code:

V(x)~ * dV(log(b)) * B = V(b^x)~
V(b^x)~ * dV(log(b)) * B = V(b^b^x)~

Writing Bb for the combined operator, the h-fold iteration becomes a matrix power:

Code:

Bb = dV(log(b)) * B
V(x)~ * Bb^h = V({b,x}^^h)~
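The claim that integer iteration and matrix powers are interchangeable can be checked numerically; a sketch using the same assumed truncated Bb (entries j^k/k! scaled by dV(log b), an assumption of this sketch):

```python
import numpy as np
from math import factorial, log

n, b, x, h = 32, 1.3, 1.0, 3

lb = log(b)
B  = np.array([[j**k / factorial(k) for j in range(n)] for k in range(n)])
Bb = np.diag([lb**k for k in range(n)]) @ B      # Bb = dV(log(b)) * B

# left: second column of V(x)~ * Bb^h
y = (np.array([x**k for k in range(n)]) @ np.linalg.matrix_power(Bb, h))[1]

# right: h-fold scalar iteration b^b^b^x
t = x
for _ in range(h):
    t = b ** t

print(y, t)
```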

For the easier version U() one finds a discussion of this for instance in Aldrovandi/Freitas [AF], which refers to the triangular form of the required operator matrix; the useful property of row-finiteness is also addressed, for instance in Berkolaiko [B], who proves the existence of the matrix operator for any similar transformation.

I'm not sure whether I can use Berkolaiko's arguments for square matrices (and thus for the original tetration iteration T()) as well, so I must leave open whether the collection of coefficients can indeed be used as a matrix.

But the numerical results, which are always approximations based on finite truncation, suggest that integer iteration and matrix powers are interchangeable for the T()- and U()-transformations as well:

a) numerical results by iteration and by matrix-powers coincide.

b) linear combinations of different V(x)-vectors result in the corresponding linear combinations of the V(y)-vectors, as expected

c) infinite sums of various V(x)-vectors give the expected results (tetra-geometric series). For a subset of parameters the result can be cross-checked by conventional scalar evaluation (possibly requiring Euler summation) and agrees with those results.

d) even infinite sums of powers of Bb give results which are compatible with conventional scalar evaluation when the parameter b is in the safe range, and this can even be extended to parameters b outside this range (b > e^(1/e)) (analytical arguments backing that observation are based on the eigensystem hypothesis, see below)
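Points c) and d) can be illustrated with a sketch of the alternating tetra-geometric series, again with the assumed truncated Bb. On the matrix side the alternating sum of powers has the closed form (I+Bb)^(-1); on the scalar side the series sum_h (-1)^h b^^h diverges (its terms tend to the fixpoint), but a single averaging of consecutive partial sums yields its Euler/Abel value:

```python
import numpy as np
from math import factorial, log

n, b, x = 32, 1.3, 1.0
lb = log(b)
B  = np.array([[j**k / factorial(k) for j in range(n)] for k in range(n)])
Bb = np.diag([lb**k for k in range(n)]) @ B

# matrix side: sum_h (-1)^h Bb^h has the closed form (I + Bb)^(-1)
lhs = (np.array([x**k for k in range(n)]) @ np.linalg.inv(np.eye(n) + Bb))[1]

# scalar side: partial sums of sum_h (-1)^h * T(b,x,h), then one averaging
terms, t = [], x
for _ in range(60):
    terms.append(t)
    t = b ** t
S = np.cumsum([(-1)**h * a for h, a in enumerate(terms)])
rhs = (S[-1] + S[-2]) / 2

print(lhs, rhs)
```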

------ continuous tetration ------------------------

The continuous version then depends entirely on the possibility of interpreting the collection of coefficients used in the integer version as a genuine matrix, including the option of taking matrix logarithms and a diagonalization (eigensystem decomposition), since we need the concept of fractional iteration and thus of fractional matrix powers.

Numerically

Again the numerical results agree with the expected results for the safe range of the base-parameter b and manageable height-parameters h.

Numerical approximations for the matrix logarithm and for the diagonalization, using finite dimensions up to dim=32 and dim=64, already show the expected behaviour in a certain range of the parameters, although it is unknown how well the empirical eigenvalues approximate the "true" eigenvalues, which are themselves unknown.

Anyway, approximate results can be found even for some b outside the "safe range" (if not too far) and for h not too high.

Just use your favorite Eigen-solver and apply fractional powers...
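In that spirit, a sketch of the "favorite eigensolver" route (same assumed truncated Bb; the sanity check below only verifies that the half-power composes back to Bb, which holds up to floating-point conditioning):

```python
import numpy as np
from math import factorial, log

n, b = 16, 1.3
lb = log(b)
B  = np.array([[j**k / factorial(k) for j in range(n)] for k in range(n)])
Bb = np.diag([lb**k for k in range(n)]) @ B

# diagonalize the truncated matrix and take fractional powers of the eigenvalues
w, W = np.linalg.eig(Bb)
half = (W * w.astype(complex)**0.5) @ np.linalg.inv(W)   # approximate Bb^(1/2)

# sanity check: the half-power composed with itself recovers Bb
err = np.max(np.abs(half @ half - Bb))
print(err)
```

Note that this only tests internal consistency of the truncated matrix, not agreement with the "true" fractional iterate.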

Analytically

Based on a hypothesis about the structure of the eigenvalues, a formal, analytical solution for the eigenvectors was found for parameters b in the "safe range". Again, the results for these parameters agree with the expected values when approximated by scalar operations, and fractional iterates were computed for some examples.

It was also possible to apply the hypothesis about the eigensystem for values b > e^(1/e) when the required parameters were taken from the set of complex fixpoints for those b-parameters. The matrices for some example parameters, for instance b=3 and b=7 with h=1, could be reproduced perfectly, matching the matrices given by the simple integer approach.

Also integer powers were correctly reproduced.
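The complex fixpoints used here can be located numerically; a minimal Newton sketch (the start value z0 is an assumption of this sketch and selects which of the infinitely many fixpoints is found):

```python
import cmath

def complex_fixpoint(b, z0=0.5 + 1.5j, iters=60):
    # Newton iteration for f(z) = b^z - z = 0
    lb = cmath.log(b)
    z = z0
    for _ in range(iters):
        f = cmath.exp(lb * z) - z
        z -= f / (lb * cmath.exp(lb * z) - 1)
    return z

t = complex_fixpoint(3)    # a complex fixpoint t with 3^t = t
u = cmath.log(t)           # the associated u = log(t)
print(t, abs(3**t - t))
```

Since b = 3 > e^(1/e), no real fixpoint exists and the iteration necessarily lands on a complex one.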

Still one problem: fractional powers for b>eta

A problem occurred with non-integer powers here. The fractional powers of the matrices, when constructed from the eigensystem hypothesis, differed from the matrices computed by a numerical eigensystem solver. Possibly the results are just complex rotations of each other (but I haven't confirmed this yet), or the hypothesis must be modified to use complex conjugates or the like.

--------------------------------------------------------

The formula for the computation of the eigensystem makes use of the following decomposition. Let D be the diagonal matrix containing the eigenvalues, dV(u), and W the matrix of eigenvectors, such that

Code:

`Bb = W^-1 * D * W`

then a further decomposition, where b = t^(1/t) (t possibly complex) and u = log(t),

Code:

`W = X * P~ * dV(t)`

leads to the full decomposition

Code:

`Bb = (dV(t^-1) * P^-1~ * X^-1) * dV(u) * (X * P~ * dV(t))`

where arbitrary fractional or complex powers of Bb are expressed by those powers of the scalar elements of the diagonal eigenvalue matrix

Code:

`D = dV(u)`

Here X and P are triangular (and thus invertible); X, dV(t) and dV(u) depend on t, but only dV(u) is modified by the tetration height parameter h, so that we must insert D^h = dV(u^h).

The coefficients in X are finite polynomials in t and u, with denominators that are products of (1-u), (1-u^2), (1-u^3), ..., which indicates singularities inherent to this method (see also Daniel's recent post; I'll add the reference later, as I'm writing this in a notepad without the http references at hand).

While all coefficients of the individual matrices are then finitely computable, the analytical computation of W^-1 still involves the evaluation of infinite series, because P^-1~ is not row-finite (for numerical computation it may, however, simply be approximated by numerical inversion of W).

In practice this means that there will be competing methods for the numerical evaluation of the complete matrix product, in the sense of choosing the best evaluation order by exploiting the associativity of the matrix products.
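The point about evaluation order is generic: a vector times a chain of matrices, evaluated left to right, needs only O(n^2) work per factor, while forming the matrix product first costs O(n^3). A sketch with hypothetical random stand-ins for the factors:

```python
import numpy as np

n = 300
rng = np.random.default_rng(1)
v = rng.standard_normal(n)                               # stand-in for V(x)~
A, Dm, Winv = (rng.standard_normal((n, n)) for _ in range(3))

# same product, two association orders
y_matrix_first = v @ (Winv @ Dm @ A)      # two O(n^3) matrix-matrix products
y_vector_first = ((v @ Winv) @ Dm) @ A    # three O(n^2) vector-matrix products

print(np.max(np.abs(y_matrix_first - y_vector_first)))
```

The vector-first order is the cheap one whenever only one row of the result is needed, as in the evaluations above.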

-----------------------------------------

So everything is still based on heuristics and hypotheses, which should now be proven in order to close the case.

For me there is now hardly any doubt that this method - with its implicit, underlying definition of tetration - will turn out to be a formally coherent/consistent method, at least as a formal skeleton/description. But besides the need for formal proofs, I still have two problems:

1) the described matrix approach allows dimensions of 24, 32, 64 and thus only that many terms for the final series. This is far too few for a general implementation; even for tests of approximations it is often too few. Jay announced series with up to 700 terms - since this seems to be possible, the method should be reviewed in this respect.

2) fractional powers of the Bb-matrices constructed analytically via the eigensystem, for b > eta.

I could try exploratively to find out what goes wrong with the result and to find a remedy that way. But I would first like to have a hypothesis for the reason for (and the remedy of) this problem. It surely has to do with the different branches of the complex logarithm, but I don't see how precisely this is involved in the current case. The integer powers come out fine...

The U()-problem seems simpler than the T()-problem, since we deal only with triangular matrices with a more obvious eigensystem. Possibly the "closing of the case" will be easier if first attempted via an analysis of the U()-transformation.

Anyway, if it cannot be done in the near future I think I'll take a longer break - I'm already feeling a bit exhausted and tired of the subject.

Gottfried

[AF] R. Aldrovandi and L.P. Freitas, "Continuous Iteration of Dynamical Maps", physics/9712026, 16 Dec 1997.

[B] G. Berkolaiko, "Analysis of Carleman Representation of Analytical Recursions", Journal of Mathematical Analysis and Applications 224, 81-90 (1998), Article No. AY985986.

Gottfried Helms, Kassel