# Tetration Forum

Full Version: Matrix Operator Method
I'm surprised.
For this version of tetration (x --> exp(x)-1), the matrix-operator method promises even better behaviour than general tetration itself.
Assume the constant matrix operator (for parameter s = exp(1))

Code:
C =
  1.0000000   .              .             .            .           .           .           .
  0           1.0000000      .             .            .           .           .           .
  0           0.50000000     1.0000000     .            .           .           .           .
  0           0.16666667     1.0000000     1.0000000    .           .           .           .
  0           0.041666667    0.58333333    1.5000000    1.0000000   .           .           .
  0           0.0083333333   0.25000000    1.2500000    2.0000000   1.0000000   .           .
  0           0.0013888889   0.086111111   0.75000000   2.1666667   2.5000000   1.0000000   .
  0           0.00019841270  0.025000000   0.35833333   1.6666667   3.3333333   3.0000000   1.0000000
  ...

or

  1   .       .       .       .     .     .  .
  0   1       .       .       .     .     .  .
  0   1/2     1       .       .     .     .  .
  0   1/6     1       1       .     .     .  .
  0   1/24    7/12    3/2     1     .     .  .
  0   1/120   1/4     5/4     2     1     .  .
  0   1/720   31/360  3/4     13/6  5/2   1  .
  0   1/5040  1/40    43/120  5/3   10/3  3  1

which is just a factorial scaling of the matrix of Stirling numbers of the second kind:

Code:
St2 =
  1   .   .    .    .    .    .   .
  0   1   .    .    .    .    .   .
  0   1   1    .    .    .    .   .
  0   1   3    1    .    .    .   .
  0   1   7    6    1    .    .   .
  0   1   15   25   10   1    .   .
  0   1   31   90   65   15   1   .
  0   1   63   301  350  140  21  1
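Both tables can be reproduced in a few lines. Here is a small Python sketch (the function names are ad hoc; the indexing C[r][k] = k!*S2(r,k)/r!, with rows and columns counted from zero, is inferred from the tables above):

```python
import math
from fractions import Fraction

def stirling2(n):
    """n x n lower-triangular matrix of Stirling numbers of the second kind,
    via the recurrence S2(r,k) = k*S2(r-1,k) + S2(r-1,k-1)."""
    S = [[0] * n for _ in range(n)]
    S[0][0] = 1
    for r in range(1, n):
        for k in range(1, r + 1):
            S[r][k] = k * S[r - 1][k] + S[r - 1][k - 1]
    return S

def scaled_matrix(n):
    """C[r][k] = k! * S2(r,k) / r!  -- the factorial scaling shown above."""
    S = stirling2(n)
    return [[Fraction(math.factorial(k) * S[r][k], math.factorial(r))
             for k in range(n)] for r in range(n)]

C = scaled_matrix(8)
# row r=4 of C is 0, 1/24, 7/12, 3/2, 1, 0, 0, 0 -- matching the table above
```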

Now parametrize C with s by premultiplying it with the diagonal matrix
of powers of log(s):

Code:
Cs = diag(1,log(s),log(s)^2,...) * C

Denote the entries of the second column of Cs as c_r (with index r beginning at zero).

Then

sum{r=0..inf} c_r = s - 1

Example:

s = sqrt(2)

Code:
Cs =
  1.0000000   .                 .               .              .             .              .              .
  0           0.34657359        .               .              .             .              .              .
  0           0.060056627       0.12011325      .              .             .              .              .
  0           0.0069380136      0.041628081     0.041628081    .             .              .              .
  0           0.00060113307     0.0084158630    0.021640790    0.014427194   .              .              .
  0           0.000041667369    0.0012500211    0.0062501054   0.010000169   0.0050000843   .              .
  0           0.0000024068016   0.00014922170   0.0012996729   0.0037546105  0.0043322429   0.0017328972   .
  0           0.00000011916198  0.000015014410  0.00021520654  0.0010009607  0.0020019213   0.0018017292   0.00060057639

The partial sums of V(1)~ * Cs converge very well; only the first few partial sums are given:

Code:
  1.0000000   .            .            .             .             .
  1.0000000   0.34657359   .            .             .             .
  1.0000000   0.40663022   0.12011325   .             .             .
  1.0000000   0.41356823   0.16174133   0.041628081   .             .
  1.0000000   0.41416936   0.17015720   0.063268872   0.014427194   .
  1.0000000   0.41421103   0.17140722   0.069518977   0.024427362   0.0050000843
  1.0000000   0.41421344   0.17155644   0.070818650   0.028181973   0.0093323272
  1.0000000   0.41421356   0.17157146   0.071033857   0.029182933   0.011334249
  1.0000000   0.41421356   0.17157277   0.071063777   0.029393679   0.011984698
  1.0000000   0.41421356   0.17157287   0.071067386   0.029430750   0.012150514
  1.0000000   0.41421356   0.17157287   0.071067771   0.029436389   0.012185671
     ...          ...          ...           ...           ...           ...
  = (s-1)^0    (s-1)^1      (s-1)^2      (s-1)^3       (s-1)^4       (s-1)^5
so
s - 1 ~ 0.41421356
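This column sum is easy to check numerically. A minimal Python sketch (assuming the construction described above, truncated at dimension 24; function names are ad hoc):

```python
import math

def stirling2(n):
    # Stirling numbers of the second kind, S2(r,k)
    S = [[0] * n for _ in range(n)]
    S[0][0] = 1
    for r in range(1, n):
        for k in range(1, r + 1):
            S[r][k] = k * S[r - 1][k] + S[r - 1][k - 1]
    return S

def Cs_matrix(s, n):
    """Cs[r][k] = log(s)^r * k! * S2(r,k) / r!, i.e. diag(V(log s)) * C."""
    L, S = math.log(s), stirling2(n)
    return [[L ** r * math.factorial(k) * S[r][k] / math.factorial(r)
             for k in range(n)] for r in range(n)]

s = math.sqrt(2)
Cs = Cs_matrix(s, 24)
col_sum = sum(row[1] for row in Cs)   # sum over r of the second column
# col_sum approximates s - 1 = 0.41421356...
```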
---------------------------------------------------------------------------

Take Cs to any integral power to perform iteration, say Cs^y, and denote the
entries of the second column as cy_r (they are of course different for
different y!)

y=2
sum{r=0..inf} cy_r = s^(s - 1)-1

Code:
Cs^2 =
  1.0000000   .                 .               .               .               .                .                 .
  0           0.12011325        .               .               .               .                .                 .
  0           0.028027638       0.014427194     .               .               .                .                 .
  0           0.0051933906      0.0067329815    0.0017328972    .               .                .                 .
  0           0.00087258195     0.0020331386    0.0012130805    0.00020814392   .                .                 .
  0           0.00013909595     0.00050073425   0.00050784250   0.00019427606   0.000025000843   .                 .
  0           0.000021254738    0.00010929866   0.00016468480   0.00010399801   0.000029168911   0.0000030029326   .
  0           0.0000031256524   0.000021966331  0.000045603341  0.000041826549  0.000019017611   0.0000042042874   0.00000036069200
The partial sums of V(1)~ * Cs^2 still converge very well; the first few partial sums:
Code:
  1.0000000   .            .             .              .               .
  1.0000000   0.12011325   .             .              .               .
  1.0000000   0.14814089   0.014427194   .              .               .
  1.0000000   0.15333428   0.021160175   0.0017328972   .               .
  1.0000000   0.15420686   0.023193314   0.0029459776   0.00020814392   .
  1.0000000   0.15434596   0.023694048   0.0034538201   0.00040241997   0.000025000843
  1.0000000   0.15436721   0.023803347   0.0036185049   0.00050641798   0.000054169754
  1.0000000   0.15437034   0.023825313   0.0036641083   0.00054824453   0.000073187365
  1.0000000   0.15437078   0.023829461   0.0036754279   0.00056227479   0.000082316680
  1.0000000   0.15437085   0.023830207   0.0036780174   0.00056641656   0.000085912777
  1.0000000   0.15437085   0.023830335   0.0036785730   0.00056752723   0.000087142908
  1.0000000   0.15437086   0.023830357   0.0036786861   0.00056780338   0.000087521005
  1.0000000   0.15437086   0.023830360   0.0036787081   0.00056786794   0.000087627785
  1.0000000   0.15437086   0.023830361   0.0036787123   0.00056788228   0.000087655927
     ...          ...          ...            ...            ...             ...
  =           s^(s-1)-1    (s^(s-1)-1)^2      ...

and the result is in the second column:

s^(s-1)-1 = 0.15437086
-----------------------------------------------------------------------------------------
y=3

sum{r=0..inf} cy_r = s^(s^(s - 1)-1)-1

and so on.
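The iterates for integral y can be sketched with plain matrix powers. The following Python fragment (ad-hoc names, truncation dimension 24, Cs built as described above) checks the column sums for y = 2 and y = 3:

```python
import math

def Cs_matrix(s, n):
    # Cs[r][k] = log(s)^r * k! * S2(r,k) / r!  (S2 = Stirling numbers, 2nd kind)
    S = [[0] * n for _ in range(n)]
    S[0][0] = 1
    for r in range(1, n):
        for k in range(1, r + 1):
            S[r][k] = k * S[r - 1][k] + S[r - 1][k - 1]
    L = math.log(s)
    return [[L ** r * math.factorial(k) * S[r][k] / math.factorial(r)
             for k in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

s = math.sqrt(2)
Cs = Cs_matrix(s, 24)
Cs2 = matmul(Cs, Cs)              # y = 2
Cs3 = matmul(Cs2, Cs)             # y = 3
h2 = sum(row[1] for row in Cs2)   # approximates s^(s-1) - 1
h3 = sum(row[1] for row in Cs3)   # approximates s^(s^(s-1)-1) - 1
```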

==========================================================================
Continuous version.

Since Cs has triangular structure, its eigensystem is very simple.
The eigenvalues are just the powers of log(s), and the matrices of
eigenvectors are triangular. It also appears that their entries
do not change with increasing dimension.

Let
Cs = Q * D * Q^-1

then for the example s=sqrt(2)

Code:
D = [ 1.0000000   0.34657359   0.12011325   0.041628081   ... ]

Q =
  1.0000000   .              .             .
  0           1.0000000      .             .
  0           0.26519711     1.0000000     .
  0           0.058953682    0.53039422    1.0000000
  0           0.012370448    0.18823687    0.79559133
  0           0.0025334009   0.056009589   0.38784957
  0           0.00051044437  0.015103553   0.14956860
  0           0.00010163175  0.0038231569  0.050149006
where D and Q are stable for higher dimensions.

Now compute the half-iterate of Cs by simply using the square root
of D:

Cs^0.5 = Q * D^0.5 * Q^-1

Code:
Cs^0.5 =
  1.0000000   0                 0               0             0             0
  0           0.58870501        0               0             0             0
  0           0.064212553       0.34657359      0             0             0
  0           0.0026279354      0.075604503     0.20402961    0             0
  0          -0.000053277975    0.0072174095    0.066763125   0.12011325    0
  0           0.00000075647862  0.00027476287   0.010014456   0.052405048   ...

And the half-iterate of the function is
Code:
  1.0000000   .            .            .            .            .
  1.0000000   0.58870501   .            .            .            .
  1.0000000   0.65291756   0.34657359   .            .            .
  1.0000000   0.65554550   0.42217809   0.20402961   .            .
  1.0000000   0.65549222   0.42939550   0.27079273   0.12011325   .
  1.0000000   0.65549298   0.42967027   0.28080719   0.17251830   0.070711274
  1.0000000   0.65549420   0.42967122   0.28161261   0.18323707   0.10927517
  1.0000000   0.65549417   0.42967248   0.28164602   0.18451886   0.11926607
  1.0000000   0.65549414   0.42967260   0.28164764   0.18461316   0.12084026
  1.0000000   0.65549414   0.42967258   0.28164786   0.18461814   0.12100355
  1.0000000   0.65549414   0.42967257   0.28164786   0.18461850   0.12101552
  1.0000000   0.65549414   0.42967257   0.28164786   0.18461852   0.12101631
     ...          ...          ...          ...          ...          ...
and the interesting result is in the second column:

half-iterate = 0.65549414

The next half-iterate is then, as expected:
Code:
1.0000000   0.41421356   0.17157288   0.071067812   0.029437252    0.012193309

next(half-iterate) = 0.41421356
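Here is a Python sketch of the whole eigensystem route (forward substitution for the triangular eigenvectors; the names and the truncation dimension 16 are arbitrary choices, not the original computation):

```python
import math

def Cs_matrix(s, n):
    # Cs[r][k] = log(s)^r * k! * S2(r,k) / r!  (Carleman-type matrix of x -> s^x - 1)
    S = [[0] * n for _ in range(n)]
    S[0][0] = 1
    for r in range(1, n):
        for k in range(1, r + 1):
            S[r][k] = k * S[r - 1][k] + S[r - 1][k - 1]
    L = math.log(s)
    return [[L ** r * math.factorial(k) * S[r][k] / math.factorial(r)
             for k in range(n)] for r in range(n)]

def frac_power(M, h):
    """M^h = Q * D^h * Q^-1 for a lower-triangular M with distinct diagonal.
    The eigenvector matrix Q (unit diagonal) comes from forward substitution,
    and its inverse likewise."""
    n = len(M)
    lam = [M[k][k] for k in range(n)]          # eigenvalues: powers of log(s)
    Q = [[0.0] * n for _ in range(n)]
    for k in range(n):                         # eigenvector for lam[k]
        Q[k][k] = 1.0
        for i in range(k + 1, n):
            Q[i][k] = sum(M[i][j] * Q[j][k] for j in range(k, i)) / (lam[k] - M[i][i])
    Qi = [[0.0] * n for _ in range(n)]         # invert Q (unit lower triangular)
    for k in range(n):
        Qi[k][k] = 1.0
        for i in range(k + 1, n):
            Qi[i][k] = -sum(Q[i][j] * Qi[j][k] for j in range(k, i))
    D = [lam[k] ** h for k in range(n)]
    return [[sum(Q[i][k] * D[k] * Qi[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s = math.sqrt(2)
M = frac_power(Cs_matrix(s, 16), 0.5)
half = sum(row[1] for row in M)   # the half-iterate value at x = 1
# half is close to 0.65549414, and squaring M recovers Cs,
# whose second column sums back to s - 1 = 0.41421356...
```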

---------------------------------------------------------------

I don't know for what reason the continuous version is assumed to be impossible?

Gottfried

(edit of the introductory remark)
(update 2: edit of the introductory tables; 2nd (now third) table corrected (I forgot the column scaling))
I think you should start with introducing matrix operators, which seem to be the approach of your solution to the tetration problem:
for example, terms like the "constant matrix-operator" or "factorial scaling of the matrix of Stirling numbers of the second kind".

PS: I split this post off from "iterability of exp(x)-1".
Yes, I'd be interested in seeing the math behind these matrix operators, so I can understand their relationship to my solution. Now that I'm working on a power series for my cheta function, I should soon be able to provide answers to very high precision (50-100 decimal places) with very low iteration counts (5-100 iterations, depending on the convergence radius).

As such, I'm hoping to be able to calculate the first few dozen derivatives of ${}^x e$ with high precision, and I'm even hopeful that a formula for the power series will present itself, probably related to the power series of the ${}^x \check \eta$.
*** I just tried my first latex-code in the hope of improving readability ***

Matrix method for tetration

Well, there is not much special here.

1) -----------------------------------------------------

Assume you denote the summation of a power series

$\hspace{24} s(x) = \sum_{k=0}^\infty x^k * a_k$

as a vector-product:

$\hspace{24} s(x) = rowvector(1,x,x^2,x^3,...) * colvector(a_0,a_1,a_2,a_3,...)$

then it may be useful for the further analysis to give names
to such vectors. I write

$\hspace{24} V(x) =colvector(1,x,x^2,x^3,...)$

Let the other vector be denoted as

$\hspace{24} A = colvector(a_0,a_1,a_2,a_3,...)$

then

$\hspace{24} s(x) = V(x)\sim * A$

Example: if

$\hspace{24} A = colvector(a_0,a_1,a_2,a_3,...) = (1,1,\frac1{2!},\frac1{3!},\frac1{4!},...)$

then V(x)~ * A represents simply the exponential-series in x, so

$\hspace{24} e^x = V(x)\sim * A$
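As a tiny illustration of this notation in Python (V and A as in the text; the truncation at 32 terms is an arbitrary choice):

```python
import math

def V(x, n=32):
    """Vandermonde column vector (1, x, x^2, ..., x^(n-1))."""
    return [x ** k for k in range(n)]

# A = colvector(1/0!, 1/1!, 1/2!, ...) -- the exponential series
A = [1 / math.factorial(k) for k in range(32)]

x = 0.7
s_x = sum(v * a for v, a in zip(V(x), A))   # V(x)~ * A
# s_x agrees with e^0.7 to double precision
```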

2) -------------------------------------------------------

Now, if we want iteration, to get e^(e^x), that means that we
simply need to set e^x = y and use y as the parameter for the
"powerseries" vector (or better: "Vandermonde" vector, hence
the letter "V" in V(x))

$\hspace{24} e^y = V(y)\sim * A = e^{e^x}$

But what is V(y) now? It contains the powers of y, or the powers
of e^x. But by the previous formula

$\hspace{24} y = e^x = V(x)\sim * A$

we only have the "first power" of e^x. How to obtain the
other powers too?

To put it in one formula, we should have, instead of a single
vector product, a full matrix product which provides all the
required powers of e^x as well, in one shot.
Something like

Code:
V(y)~ = V(x)~ * [A0, A1, A2, A3, ... ]
      = [1, e^x, (e^x)^2, (e^x)^3, ... ]

where A1 is our already known vector A.

A2, for instance, is then simply

$\hspace{24} A2 = colvector(1, 2, \frac{2^2}{2!}, \frac{2^3}{3!}, \frac{2^4}{4!},...)$

since
Code:
(e^x)^2 = e^(2x) = sum{k=0..inf} (2x)^k / k!
                 = sum{k=0..inf} 2^k * x^k / k!
                 = sum{k=0..inf} 2^k/k! * x^k
and written as a vector-product it is

$\hspace{24} \left(e^x\right)^2 = V(x)\sim * colvector(\frac{2^0}{0!},\frac{2^1}{1!},\frac{2^2}{2!}, ....)$

This is completely analogous for all powers of (e^x).

So
Code:
A0  = colvector(0^0/0!, 0^1/1!, 0^2/2!, ...)
A1  = colvector(1^0/0!, 1^1/1!, 1^2/2!, ...)
A2  = colvector(2^0/0!, 2^1/1!, 2^2/2!, ...)
A3  = colvector(3^0/0!, 3^1/1!, 3^2/2!, ...)
...
A_k = colvector(k^0/0!, k^1/1!, k^2/2!, ...)

Call this collection of vectors B

$\hspace{24} B:=[A0,A1,A2,...]$

3) ----------------------------------------------------------

If we collect all the A_k into the matrix B, then we may also
extract the common factorial denominators into a diagonal
"factorial" matrix F

Code:
F    = diagonal( 0!,   1!,   2!,   3!,   ... )
F^-1 = diagonal(1/0!, 1/1!, 1/2!, 1/3!, ... )

then
Code:
A0  = F^-1 * V(0)
A1  = F^-1 * V(1)
A2  = F^-1 * V(2)
A3  = F^-1 * V(3)
...
A_k = F^-1 * V(k)
...

so we have

Code:
V(y)~ = V(x)~ * F^-1 * [V(0), V(1), V(2), V(3), ...]
      =          [1, e^x, (e^x)^2, (e^x)^3, ... ]

and for obvious reasons I denote the above collection of V-vectors
of subsequent parameters by VZ, so we have:
Code:
V(y)~ = V(x)~ * F^-1 * VZ
      = V(e^x)~

and for more brevity I call the product F^-1 * VZ

$\hspace{24} B := F^{-1} * VZ$

and we have the iterable matrix-operator:

$\hspace{24} V(x)\sim * B = V(e^x)\sim$
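A numerical sanity check of this operator identity (Python sketch; B[r][k] = k^r/r! as derived above, with a finite truncation, so only the leading entries of V(e^x)~ are reproduced):

```python
import math

n = 40                                     # truncation size (arbitrary)
# B[r][k] = k^r / r!; note 0**0 == 1 in Python, giving A0 = (1, 0, 0, ...)
B = [[k ** r / math.factorial(r) for k in range(6)] for r in range(n)]

x = 0.5
Vx = [x ** r for r in range(n)]            # V(x)~
Vy = [sum(Vx[r] * B[r][k] for r in range(n)) for k in range(6)]
# Vy approximates V(e^x)~ = (1, e^x, (e^x)^2, ..., (e^x)^5)
```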

4) -------------------------------------------------------------

Introducing another diagonal Vandermonde matrix, with powers of
the log of a parameter s, parametrizes this for s, and I call the
resulting matrix Bs:

$\hspace{24} diag(V(\log{(s)})) * B = B_s$
and

$\hspace{24} V(x)\sim * B_s = V(s^x)\sim$

the iterable matrix-expression for obtaining

$\hspace{24} x,\;s^x,\;s^{s^x},\;s^{s^{s^x}},\dots$

I call the matrix B a "constant", since it is independent
of the parameter s.
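Iterating the parametrized operator indeed climbs the tower x, s^x, s^(s^x), ...; a Python sketch with s = sqrt(2) (finite truncation, ad-hoc names):

```python
import math

def Bs_matrix(s, n):
    # Bs[r][k] = log(s)^r * k^r / r!  == diag(V(log s)) * B
    L = math.log(s)
    return [[L ** r * k ** r / math.factorial(r) for k in range(n)]
            for r in range(n)]

s, n = math.sqrt(2), 40
Bs = Bs_matrix(s, n)

V1 = [1.0 ** r for r in range(n)]                                    # V(1)~
V2 = [sum(V1[r] * Bs[r][k] for r in range(n)) for k in range(n)]     # ~ V(s^1)~
V3 = [sum(V2[r] * Bs[r][k] for r in range(n)) for k in range(n)]     # ~ V(s^s)~
# V2[1] approximates s, and V3[1] approximates s^s
```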

====================================================================

The rationale in short:

. We need a powerseries vector in x and an exponential-series vector
. to get one scalar result for s^x.
.
. But to make it iterable, the result should not be a scalar alone,
. but again a full vector of powers of s^x; so we need not only one
. exponential-series vector, but one for each power, thus a full
. matrix of such vectors.
.
. That matrix is B (resp. Bs) in the tetration case.

-------------------

Once we have introduced a matrix as an operator for tetration
(the analogue holds for other iterated operations and functions
as well; just take the appropriate matrices), we are in a position
to discuss iterations in terms of powers of Bs, and if Bs has an
accessible eigensystem, we can also define fractional iteration
by fractional and even complex powers of Bs, thus the continuous
version of tetration.

In my first heuristic approach I used the matrix-logarithm
of Bs instead, and defined arbitrary powers by

$\hspace{24} B_s ^y = \exp \left( y * \log(B_s) \right)$

I found that this also provides numerically stable approximations
(with the same results, of course), but didn't investigate this
path deeper, since I thought that the eigensystem decomposition
is a more general or fundamental approach.

-------------------

Gottfried
Is it correct that your matrices correspond to the Carleman matrices, presented by Andrew in parabolic iteration?
bo198214 Wrote:Is it correct that your matrices correspond to the Carleman matrices, presented by Andrew in parabolic iteration?

It is possibly the case; I found my method by simple heuristics and have extreme difficulties understanding that Carleman-matrix concept (due to a lack of formal math education). Andrew Robbins kindly pointed me to those references; unfortunately I've never dealt with matrix differentials et al. and already have difficulties just reading the formulae.
But I personally assume identity, or at least a relationship, mostly due to Andrew's reference and some glances at the related article (though I cannot guarantee it). Anyway, I think I'll find out the exact relation later as my experience grows...

Gottfried
Gottfried Wrote:
bo198214 Wrote:Is it correct that your matrices correspond to the Carleman matrices, presented by Andrew in parabolic iteration?

I've never dealt with matrix-differentials et.al. and have already difficulties just to read that formulae.

My question was rather whether your matrices correspond to the power derivation matrix. If I understood you correctly, the k-th column of the matrix $B_b$ consists of the coefficients of the k-th power of the series of $b^x$.

This is the power derivation matrix transposed. If $P_f$ is the power derivation matrix of $f$, then
$P_{f\circ g}=P_f P_g$ and hence $P_{f^{\circ n}}=P_f^n$. As Andrew mentioned, the Carleman matrix is the transpose of the Bell matrix (which is basically the power derivation matrix, but without the 0th power and without the 0th coefficient); that's why I asked about the Carleman matrix.
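This composition law can be checked with truncated matrices. In the convention used earlier in this thread (column k holds the coefficients of f^k, i.e. the transpose of the PDM convention) the order reverses, M_{f∘g} = M_g * M_f, and for series with f(0) = g(0) = 0 the truncated product is even exact. A hypothetical Python sketch, not taken from any post here:

```python
from fractions import Fraction

def poly_pow(p, k, n):
    # coefficients of p(x)^k, truncated to degree n-1 (p as a coefficient list)
    out = [Fraction(0)] * n
    out[0] = Fraction(1)
    for _ in range(k):
        new = [Fraction(0)] * n
        for i, a in enumerate(out):
            if a:
                for j, b in enumerate(p):
                    if b and i + j < n:
                        new[i + j] += a * b
        out = new
    return out

def carleman(p, n):
    # M[r][k] = coefficient of x^r in p(x)^k (the thread's convention)
    cols = [poly_pow(p, k, n) for k in range(n)]
    return [[cols[k][r] for k in range(n)] for r in range(n)]

def compose(f, g, n):
    # coefficients of f(g(x)) = sum_k f_k * g(x)^k, truncated (g(0) = 0)
    out = [Fraction(0)] * n
    for k, fk in enumerate(f):
        if fk:
            gk = poly_pow(g, k, n)
            for r in range(n):
                out[r] += fk * gk[r]
    return out

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

n = 6
f = [Fraction(0), Fraction(1), Fraction(1)]               # f(x) = x + x^2
g = [Fraction(0), Fraction(2), Fraction(0), Fraction(1)]  # g(x) = 2x + x^3
lhs = carleman(compose(f, g, n), n)   # matrix of f(g(x)) directly
rhs = matmul(carleman(g, n), carleman(f, n))   # reversed order in this convention
# lhs == rhs exactly, entry by entry
```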

Indeed for a parabolic fixed point at 0 your formula (in PDM form)
$P_{f^{\circ t}}=\exp(t\cdot\log(P_f))$
gives the regular iteration. So this is a wonderful idea, to simply extend it also to series with no fixed point.

And indeed I guess that this results in the same tetration as Andrew's approach. Can you supply a graph for your solution for different bases (e,$e^{1/e}$,$\sqrt{2}$) in the Comparing the known tetration solutions thread?!

But what is still not clear to me (I certainly am not that acquainted with matrices) is how the eigenanalysis fits in here. Perhaps you can also answer a question that recently occurred to me:

In Maple there is a function "MatrixFunction" where you can apply an arbitrary real function to a matrix by somehow manipulating the eigenvalues.

Quote:The MatrixFunction(A) command returns the Matrix obtained by interpolating [lambda, F( lambda )] for each of the eigenvalues lambda of A, including multiplicities. Here the Matrix polynomial is r(lambda) = F(lambda) - p(lambda)*q(lambda) where p(x) is the characteristic polynomial, q(lambda) is the quotient, and r(lambda) is the remainder.

I think this also has something to do with the Lagrange interpolation that Andrew mentioned. Perhaps you can bring some light into this closely related topic.
Here is a short list of data and a small graph for comparison of the methods.
Sampledata
These data were generated using the eigensystem for the continuous tetration s, s^s, s^s^s, ... . I've documented three bases s and (continuous) "heights" from 1..4 in steps of 1/10.
The applied matrix dimension was 32.
Just to get an impression.

Gottfried
Gottfried Wrote:Here is a short list of data and a small graph for comparison of the methods.
Sampledata
These data were generated using the eigensystem for the continuous tetration s, s^s, s^s^s, ... . I've documented three bases s and (continuous) "heights" from 1..4 in steps of 1/10.
The applied matrix dimension was 32.
Just to get an impression.

For comparison with the existing graphs in Comparing the Known Tetration Solutions it would be optimal to provide the graph for base e in the range from $-248/128=-1.9375$ to $2$ there.
I just uploaded an update of the data based on the eigensystem computation, including a graph which compares Hendryk's solution and the eigensystem solution: the result is practically identical.

Update and Comparison

Gottfried