Eigensystem of tetration-matrices
#1
Hi,

I just uploaded a note for the discussion of the eigen-analysis. What I'm trying to do is to find an explicit analytical description of the eigensystem, so that we can quantify approximation errors as well as possibly systematize exotic ranges of the base-parameter s on an analytical basis.

I'm stuck a bit, but the current status of my knowledge is here and it looks pretty promising. Perhaps only one simple idea is missing...

[Update] I made a mistake concerning the "kernel"-matrix Q.
Q in that current definition is not independent of the s-parameter.
This assumption was made by another heuristic, comparing eigenmatrices for different base-parameters s. But accidentally I messed up my versions of s1 and s2, so that they were equal...
Fortunately this affects only the further derivations which depend on that false assumption. I'll correct this today. So, in the shown version of Q the parameter t (=1.7, which was used) should also be reflected from the third row on.
Sorry for repeatedly messing things up...
[/update]

Gottfried
Gottfried Helms, Kassel
#2
Yesterday I succeeded in analytically solving the eigensystem-decomposition of the tetration-matrix Bs.

Recall my usual notation:
----------------------------------------------------------------
\( \hspace{24} V(x) = column(x^0,x^1,x^2,...) \)
is a type of column-vector. This notation is only used if the vector in an equation has this form of consecutive powers.

\( \hspace{24} \phantom{a}_dV(x) =diag(V(x)) \) is the corresponding diagonal-matrix

\( \hspace{24} V(x)\sim \) is the transpose of V(x), thus a row-vector

\( \hspace{24} F = column(0!,1!,2!,...) \hspace{24}\phantom{a}_dF=diag(F) \) the factorial column-vector resp. its diagonal version

\( \hspace{24} B:=b_{row,column}=b_{r,c}= \frac{c^r}{r!} \) with indices r,c beginning at zero,
is then the matrix which performs e -> e^e

\( \hspace{24} B_s:=bs_{row,column}=bs_{r,c}= \frac{(log(s)*c)^r}{r!} \) the matrix which performs s -> s^s,
so that
\( \hspace{24} V(s)\sim * B_s = V(s^s)\sim \)

The most simple case:
\( \hspace{24} V(1)\sim * B_s = V(s^1)\sim \)

and the continuous tetration, where s is within the bounds described below:
\( \hspace{24} V(1)\sim * B_s^y = V(s\^\^y)\sim \)
----------------------------------------------------------------

Assume for the following
Code:
bounds for scalars
       t in the range 1/e < t < e , lt = log(t)
       s = t^(1/t)                , ls = log(s) = lt/t

Bs is then also defined as
\( \hspace{24}
B_s = \phantom{a}_dV(\log(s)) * B = \phantom{a}_dV(\frac{\log(t)}t) * B
\)
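
A minimal numerical sketch of this notation and of the factorization \( B_s = \phantom{a}_dV(\log(s)) * B \), with truncated matrices (Python; the dimension d=32 and all names here are chosen only for this illustration, they are not from the original Pari/GP routines):
Code:
import numpy as np
from math import factorial, log

d = 32                                    # truncation dimension (illustrative)

def V(x):                                 # V(x) = (x^0, x^1, ..., x^(d-1))
    return np.array([x**r for r in range(d)], dtype=float)

def dV(x):                                # diagonal matrix diag(V(x))
    return np.diag(V(x))

t  = 1.7                                  # 1/e < t < e
s  = t**(1.0 / t)                         # s = t^(1/t)
ls = log(s)                               # log(s) = log(t)/t

B  = np.array([[c**r / factorial(r) for c in range(d)] for r in range(d)])
Bs = np.array([[(ls * c)**r / factorial(r) for c in range(d)] for r in range(d)])

print(np.allclose(Bs, dV(ls) @ B))          # Bs = dV(log(s)) * B
x = 0.5
print(np.max(np.abs(V(x) @ Bs - V(s**x))))  # V(x)~ * Bs = V(s^x)~, up to truncation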

The eigensystem of Bs is then

\( \hspace{24}
B_s = W_s * \phantom{a}_d\Lambda * W_s^{-1}
\)

where dLambda is the diagonal matrix containing the eigenvalues l_k

The first hypothesis was:
\( \hspace{24}
\phantom{a}_d\Lambda = \phantom{a}_dV(lt) = \phantom{a}_dV(log(t))
\)

This hypothesis fits the result, and allows a consistent proof of the structural description of Ws.
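
A quick numerical check of this hypothesis on a finite truncation (a minimal illustrative sketch; the eigenvalues with larger index are distorted by the truncation, so only the first few are compared):
Code:
import numpy as np
from math import factorial, log

d  = 24
t  = 1.7
s  = t**(1.0 / t)
ls = log(s)
lt = log(t)

Bs = np.array([[(ls * c)**r / factorial(r) for c in range(d)] for r in range(d)])

ev = np.linalg.eigvals(Bs)
ev = ev[np.argsort(-np.abs(ev))]               # sort by decreasing modulus
print(np.round(np.abs(ev[:6]), 6))             # compare with ...
print(np.round([lt**k for k in range(6)], 6))  # ... the powers of log(t)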

Because it was simpler to analyze Ws^-1, I'll start with its description.

From Ws^-1 the diagonal factor dV(t) can be factored out; the remaining matrix is called Qs, which still depends on s:
\( \hspace{24}
W_s^{-1} = Q_s * \phantom{a}_dV(t)
\)
or
\( \hspace{24}
Q_s = W_s^{-1} * \phantom{a}_dV(t)^{-1}
\)

Qs can further be factored
\( \hspace{24}
Q_s = X_s * P\sim
\)

where P is the lower triangular Pascal-matrix (of binomial-coefficients).
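
For reference, P~ encodes the binomial shift \( V(x)\sim * P\sim = V(x+1)\sim \) (and its inverse the shift by -1), which is used again in the next post. A tiny sketch of this property (illustrative code; the identity is exact on truncations because everything is triangular):
Code:
import numpy as np
from math import comb

d = 8
P = np.array([[comb(r, c) for c in range(d)] for r in range(d)], dtype=float)
V = lambda z: np.array([z**k for k in range(d)])

x = 0.3
print(np.allclose(V(x) @ P.T, V(x + 1)))                  # V(x)~ * P~      = V(x+1)~
print(np.allclose(V(x) @ np.linalg.inv(P.T), V(x - 1)))   # V(x)~ * (P~)^-1 = V(x-1)~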

Having decomposed the structure of the entries of Xs analytically, we now also have the analytical solution for Ws^-1.
In other words: we can easily compute the correct finite-dimensional truncation of the theoretically infinite square matrix Ws^-1. That was the aim of my consideration of this matrix.

It also turns out that Xs, the only part depending on s (or t), is a triangular matrix; this can also be proven.

Its rows, up to a finite dimension d, can be computed by linear combinations of powers of log(t), at most up to the d'th power of log(t). So, if we assume logarithms to be exact, the d'th row of Xs can be determined exactly in d terms of powers of such logarithms, of maximal exponent d, and one reciprocal.

This is a very satisfactory result, since we no longer rely on numerical approximations from implementations of numerical eigensystem-solvers, and we can develop checks for bounds, for the quality of the numerical approximation at a given finite dimension d, and possibly extend to analytic continuation based on the formal description of W^-1.

----------------------------------------

However, determining the non-inverse, W itself, is not trivial.

On one hand, we have with the decomposition of W^-1 into two triangular matrices (and the final column-scaling)
\( \hspace{24}
W^{-1} = X_s * P\sim * \phantom{a}_dV(t)
\)
where each component can be exactly inverted for any finite dimension d, so that formally

\( \hspace{24}
W = \phantom{a}_d V(t)^{-1} * (P\sim)^{-1} * X_s^{-1}
\)

Let's denote the inverses PI = (P~)^-1, XIs = Xs^-1.

\( \hspace{24}
W = \phantom{a}_dV(\frac1t) * PI * XI_s
\)

Then, on the other hand, the matrix-multiplication PI * XIs - theoretically and practically - involves the summation of infinite series, and for the example t=2 these series are not absolutely convergent. So, unless I can make further progress in describing the entries of XIs too, this still means approximating the results for W itself, and thus also for the general powers of Bs.

However, the series which have to be summed have alternating signs, and the growth rate of their terms is lower than exponential, so they can be regularly summed by Euler-summation. This approach has already given satisfactory results for numerically determining W from W^-1 along this analytical way.
(But let this part, the improvement of the computation of W, be the next step for further study.)
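
For reference, the Euler-summation meant here replaces the alternating series by a much faster converging series of weighted finite differences. A minimal sketch on a textbook example (the actual entries of PI * XIs are not reproduced here; this only shows the summation technique):
Code:
from math import comb, log

def euler_sum(a, n_terms):
    """Euler transform of sum_{k>=0} (-1)^k a[k]:
    n-th transformed term = sum_{k<=n} (-1)^k C(n,k) a[k] / 2^(n+1)."""
    total = 0.0
    for n in range(n_terms):
        diff = sum((-1)**k * comb(n, k) * a[k] for k in range(n + 1))
        total += diff / 2**(n + 1)
    return total

a = [1.0 / (k + 1) for k in range(40)]            # 1 - 1/2 + 1/3 - ... = log(2)
print(euler_sum(a, 20), log(2))                   # transformed sum: ~7 correct digits
print(sum((-1)**k * a[k] for k in range(20)))     # plain partial sum: error ~0.025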

-----------------------------------------

The derivation of the structure of the entries of W^-1 is relatively simple in terms of matrix-operations. However, since I don't see much discussion of the matrix-concept here, I'll describe it only if requested (I'm also a bit exhausted from the concentration of the last days).

One note, anyway: the way of proving the analytical solution involves solving linear equations, which I now found to have been studied earlier, for instance on Keith Briggs' Schröder-equation page (page 12, about the Schröder-equations), and I think I'm beginning to understand these concepts for the first time.

Btw: I had a very nice, easy afternoon yesterday... :-)

Gottfried
Gottfried Helms, Kassel
#3
I have one improvement, which supports the estimation of bounds and the control over the error in the numerical approximation.

I stated:
Quote:\( \hspace{24}
W = \phantom{a}_dV(\frac1t) * PI * XI_s
\)

then, on the other hand, the matrix-multiplication PI * XIs involves summing infinite series of terms, and for the example t=2 these series are not absolutely convergent. So, unless I can make further progress in describing the entries of XIs too, this means approximating results for W itself, and still for general powers of Bs.

If we write the whole equation

\( \hspace{24}
V(y)\sim =V(x)\sim * Bs
\)

we have

\( \hspace{24}
V(y)\sim =V(x)\sim * \left(W * \phantom{a}_d\Lambda * W^{-1} \right) \\
\hspace{24}
V(y)\sim =V(x)\sim * \left(\phantom{a}_dV(\frac1t) * P^{-1}\sim * XI_s * \phantom{a}_d\Lambda * W^{-1} \right)
\)

and may use associativity to change the order of computation:

\( \hspace{24} \begin{eqnarray}
V(y)\sim &&=&& \left(V(x)\sim * \phantom{a}_dV(\frac1t) * P^{-1}\sim \right) * \left( XI_s * \phantom{a}_d\Lambda * W^{-1} \right)
\end{eqnarray}
\)

to compute

\( \hspace{24} \begin{eqnarray}
V(x')\sim &&=&& \left(V(x)\sim * \phantom{a}_dV(\frac1t) * P^{-1}\sim \right) \\
&&=&& V(\frac{x}t-1)\sim
\end{eqnarray}
\)
first, which gives exact terms up to dimension d.
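
That this first step is exact on a truncation of any dimension d can be checked directly (illustrative sketch; only a diagonal and a triangular matrix enter, so no infinite summation is involved):
Code:
import numpy as np
from math import comb

d = 10
t, x = 1.7, 0.5

V  = lambda z: np.array([z**k for k in range(d)])
dV = lambda z: np.diag(V(z))
P  = np.array([[comb(r, c) for c in range(d)] for r in range(d)], dtype=float)

x_prime = V(x) @ dV(1.0 / t) @ np.linalg.inv(P.T)     # V(x)~ * dV(1/t) * (P~)^-1
print(np.allclose(x_prime, V(x / t - 1)))             # equals V(x/t - 1)~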

Then the remaining rhs of the formula also gives correct terms up to the finite dimension ...

\( \hspace{24} \begin{eqnarray}
V(z) &&=&& XI_s * \phantom{a}_d\Lambda * W^{-1} [,1]
\end{eqnarray}
\)

... and we have only to consider convergence of the terms of a simple vector-product

\( \hspace{24}
s^x = y= V(y)\sim[,1] = V(x')\sim * V(z)

\)

where the error due to the finite truncation of the matrices occurs only in this last step and may be minimized, if convergence-acceleration can be applied.
Actually, I computed the terms of the series for the half-iterates in my other post this way.

Gottfried
Gottfried Helms, Kassel
#4
I think what you have done is similar to what I describe now:

Given the exponentiation to base s: \( \exp_s \).
We know that it has the (lower) fixed point t with \( t^{1/t}=s \).
Let \( \tau_t(x)=x+t \) then
\( f:=\tau_t^{-1}\circ \exp_s \circ \tau_t \)
is a function with fixed point 0 and with \( f_1=f'(0)=s^{x+t}\ln(s)|_{x=0}=s^t\ln(s)=t\ln(s)=\ln(t) \).

Further it is known that a power series f with \( f_0=0 \), \( f_1\neq 0 \) and \( |f_1|\neq 1 \) is conjugate to the linear function \( \mu_{f_1}(x):=f_1x \). As in our case \( 1<s<e^{1/e} \) and hence \( 1<t<e \), we apply it to our \( f \):
\( f=\alpha\circ \mu_{\ln(t)}\circ \alpha^{-1} \)
\( \exp_s = \tau_t\circ \alpha \circ \mu_{\ln(t)}\circ \alpha^{-1}\circ \tau_t^{-1} \)
\( \exp_s^{\circ x} = \tau_t\circ \alpha \circ \mu_{\ln(t)^x}\circ \alpha^{-1}\circ \tau_t^{-1} \).
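
A quick numerical sanity check of this conjugation (illustrative only; the value t=1.7 and the finite-difference step are arbitrary choices):
Code:
from math import log

t = 1.7
s = t**(1.0 / t)                     # so that s^t = t

exp_s = lambda x: s**x
tau   = lambda x: x + t              # tau_t
tau_i = lambda x: x - t              # tau_t^{-1}
f     = lambda x: tau_i(exp_s(tau(x)))

print(f(0.0))                                    # ~0: fixed point at 0
h = 1e-6
print((f(h) - f(-h)) / (2 * h), log(t))          # f'(0) ~ ln(t)

x = 0.9                                          # conjugating back recovers exp_s:
print(exp_s(exp_s(x)), tau(f(f(tau_i(x)))))      # two iterations agree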

Now let's express this with power derivation matrices; let \( P(f) \) be the power derivation matrix of \( f \). Then we have:
\( P(\exp_s)=P(\tau_t)P(\alpha)P(\mu_{\ln(t)})P(\alpha)^{-1}P(\tau_t)^{-1} \).

We have the following correspondences
\( P(\mu_a)={}_dV(a) \)
\( P(\exp)=B^{\sim} \)
\( P(\exp_s)=P(\exp\circ\mu_{\ln(s)})=P(\exp)P(\mu_{\ln(s)})=B^{\sim}{}_dV(\ln(s))=({}_dV(\ln(s))B)^{\sim}=B_s^{\sim} \)
\( P(\mu_{\ln(t)})={}_d\Lambda={}_dV(\ln(t)) \)
\( P(\tau_t\circ\alpha)=(W_s^{-1})^{\sim}=(X_s P^{\sim} {}_dV(t))^{\sim}={}_dV(t) P X_s^{\sim} \) (the diagonal matrix \( {}_dV(t) \) being symmetric).

Now let's have a look at the structure of \( P(\tau_t) \). We can decompose \( \tau_t = \mu_t\circ \tau_1\circ \mu_{\frac{1}{t}} \), where \( P(\tau_1) \) is the lower triangular Pascal matrix given by \( P(\tau_1)_{m,n}=\binom{m}{n} \). This is because the \( m \)th row of the power derivation matrix of \( x+1 \) consists of the coefficients of \( (x+1)^m=\sum_{n=0}^m\binom{m}{n}x^n \). As \( P(\mu_t)={}_dV(t) \) we conclude
\( P(\mu_{\frac{1}{t}}\circ \alpha)=X_s^{\sim} \).

I think this completes the correspondences.
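
A small sketch of the power-derivation-matrix convention used here, \( P(f)_{m,n} = \) coefficient of \( x^n \) in \( f(x)^m \), together with a numerical check of two of the correspondences above (illustrative code with truncated matrices; the names are not from the thread):
Code:
import numpy as np
from math import factorial, log

d = 8

def pdm(coeffs):
    """Power derivation matrix of f(x) = sum_k coeffs[k] x^k, truncated to d x d:
    row m holds the coefficients of f(x)^m up to degree d-1."""
    M = np.zeros((d, d))
    row = np.zeros(d); row[0] = 1.0            # f^0 = 1
    for m in range(d):
        M[m, :] = row
        row = np.convolve(row, coeffs)[:d]     # multiply by f, truncate
    return M

t  = 1.7
s  = t**(1.0 / t)
ls = log(s)

exp_s = np.array([ls**k / factorial(k) for k in range(d)])   # series of s^x = e^(x ln s)
Bs = np.array([[(ls * c)**r / factorial(r) for c in range(d)] for r in range(d)])
print(np.allclose(pdm(exp_s), Bs.T))                         # P(exp_s) = Bs~

a = 0.4
mu_a = np.zeros(d); mu_a[1] = a                              # mu_a(x) = a*x
print(np.allclose(pdm(mu_a), np.diag([a**k for k in range(d)])))   # P(mu_a) = dV(a)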
Unfortunately this decomposition works only if the function under consideration has a fixed point. In so far it is very interesting that it also converges for the non-fixed-point case with base \( b>e^{1/e} \).
#5
bo198214 Wrote:I think what you have done is similar to what I describe now:

[snip]
I think this completes the correspondences.


Henryk -
this again is a super entry for my personal dictionary :-) . It helps to let thoughts converge. Meanwhile I have also read the article of Aldrovandi & Freitas about iteration (which Daniel mentioned earlier), and I am beginning to see the relations between the concepts from there. I'll post the last step, how I derived the X-matrix, later today; I think it'll come out as another simple translation of formalisms.

I'll try to find an answer for the last remark of your post:
bo198214 Wrote:Unfortunately this decomposition works only if the function under consideration has a fixed point. In so far it is very interesting that it also converges for the non-fixed-point case with base \( b>e^{1/e} \).

Thanks again -

Gottfried
Gottfried Helms, Kassel
#6
Gottfried Wrote:I have one improvement, which supports the estimation of bounds and the control over the error in the numerical approximation.

This improvement needs a comment.
While it is indeed a theoretical improvement, it has numerical disadvantages which I wasn't aware of in my previous posting.

The numerical results of the two ways of summing - the old way and the newly proposed one - turn out to be different; the approximations are not equivalent when truncated matrices are used. In fact, the new proposal resulted in smaller values than the old way, and the old way's results were apparently nearer to the true value - which can be shown when the results are used as new input for the next iteration.
So here I need some more consideration; the terms of the new analytical solution are still the correct ones, but a better method for making things compatible when truncated matrices are used is required.

Gottfried
Gottfried Helms, Kassel
#7
Here I show the composition of the terms which are needed to compute the values for continuous tetration. In effect, the matrix-method provides one relevant column, whose entries are the coefficients for the power series in x, where

(1) V(x)~ * Bs = V(s^x)~ // basic definition
(2) V(x)~ * Bs[,1] = s^x // finally, we need only the second column
(3) V(x)~ * Bs^h[,1] = {s,x}^^h // h'th (continuous) power/iterate

and then we need only the second column of Bs to determine the result.
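
For integer heights h these relations can be checked directly with truncated matrices (illustrative sketch; dimension and parameters are arbitrary choices, and the truncation error stays tiny because 1 < s < e^(1/e) here):
Code:
import numpy as np
from math import factorial, log

d  = 32
t  = 1.7
s  = t**(1.0 / t)
ls = log(s)

V  = lambda z: np.array([z**r for r in range(d)])
Bs = np.array([[(ls * c)**r / factorial(r) for c in range(d)] for r in range(d)])

x = 0.5
print(V(x) @ Bs[:, 1], s**x)                          # (2): second column gives s^x
print(V(x) @ np.linalg.matrix_power(Bs, 3)[:, 1],     # (3) with h = 3 ...
      s**(s**(s**x)))                                 # ... equals s^s^s^x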

Let's denote the base-parameter as "s". Also

\( \hspace{24} t = h(s) \) where \( \hspace{24} s=t^{1/t} \)
\( \hspace{24} u = log(t) \)
\( \hspace{24} uh = u^h \)

The last notation was made for the analysis of the symbolically computed terms, to differentiate between the "naturally" occurring powers of u, which already occur in the terms with h=1, and the powers which are introduced by the parameter h.

Let's call the entries of Bs^h[,1] (the relevant second column of Bs^h) b_r, where r is the row-index, beginning at zero.
Then
\( \hspace{24}
\left\{s,x\right\}\^\^h = \sum_{r=0}^{\infty} b_r * x^r
\)

Here I look at the structure of the b's.

From the analysis of the eigensystem-structure it is best to separate the b's into a product of f- and g-terms, where the f and g occur as the entries of two vectors F and G; this is the most natural factorization coming out of the eigensystem-representation. Here F contains the parameters x and t, while G contains the parameters t, u and uh.
So that
\( \hspace{24}
b_r = f_r * g_r \)

The entries f_r are simple, shown in short at the end of the code-section.

The entries g_r are the more complicated ones.

It turns out that each g_r can be expressed as a "nested polynomial" in t, u and uh. In effect, for each g_r there exists a matrix of coefficients of dimension r (the row-index), which is independent of s, t and uh. It can be rescaled to have only integer coefficients. Call it the "core-matrix for term g_r", C_r.
For a single term g_r one would start with this matrix, build polynomials in uh using C_r's rows, and add these polynomials, weighted by powers of u, to get an intermediate value, say g'_r.
This value g'_r is then the numerator of a fraction, whose denominator has the form of the product (u-1)*(u^2-1)*(u^3-1)*... with r product terms.

The evaluation of this fraction is then, say, g''_r.

These g''_r are now coefficients for powers of u, also scaled by factorials, to finally make the g_r terms.

There is no need to go into more detail here; the main focus of this explanation is to introduce the role of the "core-matrices" C_r.
If these are "known", for instance stored as constants in a program, then building the terms of F and G, and finally of b_r, is just a matter of applying the powers of t, u and uh to their coefficients.



One important piece of information here is that all g_r can be computed in finitely many steps.

A subsequent interesting question arises when looking at the rate of growth of the complexity of each term. The involved powers of the parameter u grow binomially with the row-index r. I'm wondering whether an argument concerning the irrationality measure (transcendence) of the final values can be derived from this, based on the observation that in the approximation by the series of b truncated at index r, powers of u with exponents of the order r^2 are involved...
But I'll leave this as speculation currently.

The appendix shows an edited output of the symbolic composition of the first six terms.

Gottfried

-----------------------------------------
Code:
apt = APT_Init1(u,t,s,6)
aptLhs=APT_QsLhs(apt)          // gives symbolic representation for F
aptRhs=APT_QsRhsSym(apt,1)     // gives symbolic representation for G


%print Mat(aptRhs)  // RHS-vector G ====================================
// I fiddled with the output and rearranged to extract the Core-matrices for each term
t*dV(u)*dV(up)*dFac(-1)(             // needed rescalings
[ [1  ]

  [1 ]

1*V(1/u)                            // weighted sums of the rows of Core-matrices
  [1  - 1]
                 /(u - 1)           // denominator

u*V(1/u)
( [1 - 3 + 2]
+ [2 - 3 + 1]  )
                 /(u-1)/(u^2-1)

u^3*V(1/u)
(( [1 -  6 + 11 - 6]                     // <--- from Stirling kind 1 - mirrored
  + [5 - 18 + 18 - 5]
  + [6 - 18 + 18 - 6]
  + [6 - 12 +  7 - 1])                    // <--- from Stirling kind 2 - mirrored
                      /(u-1)/(u^2-1)/(u^3-1)

u^6*V(1/u)
  (  [1 -  10 +  35 -  50 + 24]
   + [9 -  60 + 130 - 105 + 26]
  + [24 - 120 + 215 - 165 + 46]
  + [40 - 180 + 275 - 180 + 45]
  + [46 - 170 + 230 - 130 + 24]
  + [36 - 120 + 145 -  75 + 14]
  + [24 -  60 +  50 -  15 +  1]    )
                             /(u-1)/(u^2-1)/(u^3-1)/(u^4-1)
   ]* dV(1/uh)

// ============== LHS- vector F  ===================================


aptLhs=APT_QsLhs(apt)
%1887 = [
1,
(-t + 1)  /t^1,
(-t + 1)^2/t^2,
(-t + 1)^3/t^3,
(-t + 1)^4/t^4,
(-t + 1)^5/t^5,
// ==================================================================
Gottfried Helms, Kassel
#8
It seems that the eigensystem-based method now works fine.
Using the fixpoint-tracer and the values it supplies, I could construct the Bs-matrices for a few basic examples with s>e^(1/e) (or b>eta in the other notation here), based on the complex values for t and log(t).
The constructed matrices matched the naive versions for integer tetration, Bs and Bs^h, perfectly.

This then settles continuous tetration for all s>e^(-e) (except for the known singularities of the method).

The complex fixpoints t=h(s) for s>e^(1/e) are taken from the branch which reaches into the halfplane with real(t)>1 (the other branches there have real(t)<1), and which is shown in the graphs at "Fixpoint principal branch".
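
For reference, these fixpoints can be written via the branches of the Lambert-W function, t_k = -W_k(-log(s))/log(s); the following sketch only evaluates this formula and does not assert which branch index k corresponds to the principal branch of the fixpoint-tracer:
Code:
from mpmath import mp, lambertw, log, mpf

mp.dps = 25
s = mpf(2)                         # a base s > e^(1/e): no real fixed point
for k in (0, 1, -1):
    t = -lambertw(-log(s), k) / log(s)
    print(k, t, s**t - t)          # s^t - t ~ 0 confirms the fixed point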

Phew. ;-)

It seems that what remains now is the consideration of numerical aspects and the optimization of the summing, where intermediate non-converging series occur.

Gottfried
Gottfried Helms, Kassel

