*** I just tried my first latex-code in the hope to improve readability ***

Matrix method for tetration

Well - there is not much special here.

1) -----------------------------------------------------

Assume you denote the summation of a powerseries as a vector-product:

Code:

f(x) = a0 + a1*x + a2*x^2 + ... = V(x)~ * A

Then it may be useful for the further analysis to give names to such vectors. I say

Code:

V(x) = colvector(1, x, x^2, x^3, ...)

Let the other vector be denoted as A = colvector(a0, a1, a2, ...); then

Code:

f(x) = V(x)~ * A

Example: if

Code:

A = colvector(1/0!, 1/1!, 1/2!, 1/3!, ...)

then V(x)~ * A represents simply the exponential-series in x, so

Code:

V(x)~ * A = sum(k=0..inf) x^k/k! = e^x
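A truncated version of this vector-product is easy to check numerically. A minimal Python sketch (the truncation length n = 24 and the test value x = 0.7 are arbitrary choices, not from the original post):

```python
import math

n = 24                                  # truncation length (arbitrary)
x = 0.7

# V(x)~ : row vector of powers of x
V = [x**k for k in range(n)]
# A : coefficients of the exponential series, 1/k!
A = [1 / math.factorial(k) for k in range(n)]

# V(x)~ * A = sum of x^k/k!, a truncated exponential series
dot = sum(v * a for v, a in zip(V, A))
print(dot, math.exp(x))
```

For moderate x the truncated sum agrees with e^x to machine precision.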

2) -------------------------------------------------------

Now, if we want iteration, to get e^(e^x), that means we simply need to set e^x = y and use y as the parameter for the "powerseries"-vector (or better: "Vandermonde"-vector, hence the letter "V" in V(x)).

But what is V(y) now? It contains the powers of y, that is, the powers of e^x. But by the previous formula we only have the "first power" of e^x. How do we obtain the other powers too?

To put it in one formula, we should (instead of a single vector-product) have a full matrix-product, which provides all required powers of e^x as well, in the same one shot.

Something like

Code:

V(y)~ = V(x)~ * [A0, A1, A2, A3, ... ]
      = [1, e^x, (e^x)^2, (e^x)^3, ... ]

where A1 is our already known vector A.

A2, for instance, is then simply

Code:

A2 = colvector(2^0/0!, 2^1/1!, 2^2/2!, ...)

since

Code:

(e^x)^2 = e^(2x) = sum(k=0..inf) (2x)^k / k!
                 = sum(k=0..inf) 2^k * x^k / k!
                 = sum(k=0..inf) 2^k/k! * x^k

and written as a vector-product it is

Code:

(e^x)^2 = V(x)~ * A2

This is completely analogous for all powers of e^x.
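The coefficient manipulation above can be checked in a few lines of Python (the truncation length n = 24 and the test value x = 0.5 are arbitrary choices):

```python
import math

n, x = 24, 0.5

# (e^x)^2 = e^(2x) as a truncated series in (2x)
direct = sum((2 * x)**k / math.factorial(k) for k in range(n))
# the same series with the factor 2^k pulled into the coefficients;
# the entries 2^k/k! are exactly the entries of A2
via_A2 = sum(2**k / math.factorial(k) * x**k for k in range(n))

print(direct, via_A2, math.exp(2 * x))
```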

So

Code:

A0  = colvector(0^0/0!, 0^1/1!, 0^2/2!, ...)
A1  = colvector(1^0/0!, 1^1/1!, 1^2/2!, ...)
A2  = colvector(2^0/0!, 2^1/1!, 2^2/2!, ...)
A3  = colvector(3^0/0!, 3^1/1!, 3^2/2!, ...)
...
A_k = colvector(k^0/0!, k^1/1!, k^2/2!, ...)

Call this collection of vectors B
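A finite truncation of this matrix makes the formula checkable. A Python sketch (n = 24 and x = 0.5 are arbitrary choices; only the leading output entries are accurate at a fixed truncation, since entry j needs the series for e^(jx) to have converged):

```python
import math

n = 24
# B[i][j] = j^i / i!  -- column j is the vector A_j
# (note: 0**0 == 1 in Python, matching the convention 0^0/0! = 1 in A0)
B = [[j**i / math.factorial(i) for j in range(n)] for i in range(n)]

x = 0.5
V = [x**i for i in range(n)]          # V(x)~ as a row vector

# first few entries of V(x)~ * B; entry j should approximate (e^x)^j
row = [sum(V[i] * B[i][j] for i in range(n)) for j in range(4)]
print(row)
```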

3) ----------------------------------------------------------

If we collect all the A_k into the matrix B, then we may also extract the common factorial denominators into a diagonal "factorial"-matrix F

Code:

F    = diagonal( 0!,   1!,   2!,   3!,   ... )
F^-1 = diagonal( 1/0!, 1/1!, 1/2!, 1/3!, ... )

then

Code:

A0  = F^-1 * V(0)
A1  = F^-1 * V(1)
A2  = F^-1 * V(2)
A3  = F^-1 * V(3)
...
A_k = F^-1 * V(k)
...

so we have

Code:

V(y)~ = V(x)~ * F^-1 * [V(0), V(1), V(2), V(3), ... ]
      = [1, e^x, (e^x)^2, (e^x)^3, ... ]

and for obvious reasons I denote the above collection of V-vectors of subsequent parameters by VZ, so we have:

Code:

V(y)~ = V(x)~ * F^-1 * VZ
      = V(e^x)~
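The factorization of B into F^-1 * VZ is easy to verify on a truncation (n = 6 is an arbitrary choice):

```python
import math

n = 6
# F^-1 : diagonal matrix with entries 1/i!
Finv = [[1 / math.factorial(i) if i == j else 0.0 for j in range(n)]
        for i in range(n)]
# VZ : column j is the Vandermonde vector V(j), i.e. VZ[i][j] = j^i
VZ = [[j**i for j in range(n)] for i in range(n)]

# matrix product F^-1 * VZ
prod = [[sum(Finv[i][k] * VZ[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
# B built directly: B[i][j] = j^i / i!
B = [[j**i / math.factorial(i) for j in range(n)] for i in range(n)]

ok = all(abs(prod[i][j] - B[i][j]) < 1e-9 for i in range(n) for j in range(n))
print(ok)
```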

and for more brevity I call the product F^-1 * VZ simply B, and we have the iterable matrix-operator:

Code:

V(x)~ * B = V(e^x)~

4) -------------------------------------------------------------

Introducing another diagonal Vandermonde-matrix, with the powers of the log of a parameter s, parametrizes this for s; I call the resulting matrix Bs:

Code:

dV(log(s)) = diagonal( 1, log(s), log(s)^2, log(s)^3, ... )
Bs         = dV(log(s)) * B

and

Code:

V(x)~ * Bs = V(s^x)~

is the iterable matrix-expression for obtaining s^x. I call the matrix B a "constant", since it is independent of the parameter s.
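A truncated Bs behaves as described, at least in its leading output entries. A Python sketch (the base s = sqrt(2) and truncation n = 24 are arbitrary sample choices, not from the original post):

```python
import math

n = 24
s = 2 ** 0.5
ls = math.log(s)

# Bs = dV(log(s)) * B, i.e. row i of B scaled by log(s)^i
Bs = [[ls**i * j**i / math.factorial(i) for j in range(n)] for i in range(n)]

x = 1.0
V = [x**i for i in range(n)]          # V(x)~
# first entries of V(x)~ * Bs; entry j should approximate (s^x)^j
row = [sum(V[i] * Bs[i][j] for i in range(n)) for j in range(4)]
print(row)
```

Here entry 1 approximates s^x = sqrt(2) and entry 2 approximates (s^x)^2 = 2.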

====================================================================

The rationale in short:

We need a powerseries-vector in x and an exponential-series-vector to get one scalar result for s^x.

But to make it iterable, the result should not be a scalar alone, but again a full vector of powers of s^x; so we need not only one exponential-series-vector, but one for each power, thus a full matrix of such vectors.

That matrix is B (resp. Bs) in the tetration case.

-------------------

Once we have introduced a matrix as an operator for tetration (the analogous holds for other iterated operations and functions as well; just take the appropriate matrices), we are in a position to discuss iteration in terms of powers of Bs. And if Bs has an accessible eigensystem, we are also in a position to define fractional iteration by fractional, and even complex, powers of Bs, thus the continuous version of tetration.
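Integer iteration, i.e. plain powers of the truncated Bs, can be checked in pure Python (base s = sqrt(2) and n = 32 are arbitrary sample choices; fractional powers would additionally need the eigensystem and are not attempted here):

```python
import math

n = 32
s = 2 ** 0.5
ls = math.log(s)
Bs = [[ls**i * j**i / math.factorial(i) for j in range(n)] for i in range(n)]

x = 1.0
row = [x**i for i in range(n)]         # V(x)~
for _ in range(2):                     # apply Bs twice: V(x)~ * Bs^2
    row = [sum(row[i] * Bs[i][j] for i in range(n)) for j in range(n)]

# entry 1 should approximate s^(s^x), here s^s for x = 1
print(row[1], s ** s)
```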

In my first heuristic approach I used the matrix-logarithm of Bs instead, and defined arbitrary powers by

Code:

Bs^h = Exp( h * Log(Bs) )

I found that this also provides numerically stable approximations (with the same results, of course), but I didn't investigate this path deeper, since I thought that the eigensystem-decomposition is the more general or fundamental approach.
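That matrix-logarithm route can be sketched with SciPy's `logm`/`expm` (the truncation n = 8 and base sqrt(2) are arbitrary choices; this only illustrates the formula, not the author's original computation):

```python
import math
import numpy as np
from scipy.linalg import expm, logm

n = 8
s = 2 ** 0.5
ls = math.log(s)
Bs = np.array([[ls**i * j**i / math.factorial(i) for j in range(n)]
               for i in range(n)])

L = logm(Bs)                  # matrix logarithm of the truncated Bs
half = expm(0.5 * L)          # Bs^(1/2) = Exp(0.5 * Log(Bs))

# the half power composed with itself should recover Bs (up to numerical noise)
print(np.max(np.abs(half @ half - Bs)))
```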

-------------------

Hope I made it readable/understandable...

Gottfried

Gottfried Helms, Kassel