bo198214 Wrote: What also burningly interests me is how this looks with other functions that have an attractive fixed point. I think for our example f(x) = x + x^2 - 1/16 with fixed points t0 = -1/4 and t1 = +1/4 it no longer works out that the eigenvalues of the Carleman matrices converge to the powers of the derivative at one fixed point.
Well, I see that the sequence of sets of eigenvalues of truncations of increasing size of the Carleman matrix C does not stabilize. But we have this with the Bb-matrices as well, if we use b outside 1..e^(1/e) - so I don't think we really have a problem here. The canned eigensystem solvers do not work analytically, but with polynomials determined by the truncation size, giving best-fitting results for those sizes.
The trace of C is a divergent sum; however, with Cesaro summation of increasing orders I get stabilization at ~ 2.0 from order 4.5 on, where order 1.0 means direct summation. (Surprisingly, Euler summation seems not to be applicable here.) This result would back the assumption that the eigenvalues consist of the infinite set (1, 1/2, 1/4, 1/8, ...), whose sum is exactly 2.
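For reference, a minimal sketch of such higher-order Cesaro means (my own illustration, using the standard (C,alpha) definition with generalized binomial weights; note that "order 1.0 = direct summation" above corresponds to alpha = 0 here). It is demonstrated on the toy series 1 - 2 + 3 - 4 + ..., which direct summation cannot handle but order 2 sums to 1/4:
Code:
# (C, alpha) Cesaro mean of a series; alpha = 0 is direct summation
def gen_binom(alpha, m):
    """Generalized binomial coefficient binom(m + alpha, m)."""
    r = 1.0
    for i in range(1, m + 1):
        r *= (alpha + i) / i
    return r

def cesaro_mean(terms, alpha):
    """(C, alpha) mean of sum(terms), using all given terms."""
    n = len(terms) - 1
    num = sum(gen_binom(alpha, n - k) * terms[k] for k in range(n + 1))
    return num / gen_binom(alpha, n)

terms = [(-1.0) ** k * (k + 1) for k in range(200)]
print(cesaro_mean(terms, 0.0))   # -100.0 : just the raw partial sum, divergent
print(cesaro_mean(terms, 2.0))   # ~0.25  : stabilized by the higher order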
-----------------------
I've done some eigenanalysis using the fixpoint-shifts, where fixpoint 1 is t0 = -1/4 and fixpoint 2 is t1=+1/4.
Call C the transposed Carleman-matrix for your function f(x), such that
V(x)~ * C = V(f(x))~
as usual.
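To make this reproducible, here is a small sketch that builds such a truncated matrix with exact rational arithmetic. The concrete f(x) = x + x^2 - 1/16 is the reconstruction from the quote above (fixpoints -1/4 and +1/4 with f'(-1/4) = 1/2, f'(1/4) = 3/2); all helper names are of course my own:
Code:
from fractions import Fraction

def poly_mul(a, b, n):
    """Coefficient list of a(x)*b(x), truncated to degree n-1."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] += ai * bj
    return c

def carleman(fc, n):
    """n x n truncation of C with V(x)~ * C = V(f(x))~ :
    column j holds the coefficients of f(x)^j."""
    C = [[Fraction(0)] * n for _ in range(n)]
    power = [Fraction(1)]                # f(x)^0
    for j in range(n):
        for i in range(min(len(power), n)):
            C[i][j] = power[i]
        power = poly_mul(power, fc, n)   # f(x)^(j+1)
    return C

# f(x) = x + x^2 - 1/16 (reconstructed example; fixpoints t0 = -1/4, t1 = 1/4)
f = [Fraction(-1, 16), Fraction(1), Fraction(1)]
C = carleman(f, 8)

# sanity check: V(t0)~ * C = V(f(t0))~ = V(t0)~ on the columns the truncation
# represents exactly (f(x)^j has degree 2j, so columns j <= 3 here)
t0 = Fraction(-1, 4)
V = [t0 ** i for i in range(8)]
assert all(sum(V[i] * C[i][j] for i in range(8)) == t0 ** j for j in range(4))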
The triangular matrices U0 and U1 are generated by the functions g0(x) and g1(x), which represent the fixpoint-shifted versions, such that
f(x) = g0(x')''
or
V(x')~ * U0 = V(f(x)')~
where
x' = x - t0
and
x'' = x + t0
and likewise
f(x) = g1(x')''
or
V(x')~ * U1 = V(f(x)')~
where
x' = x - t1
and
x'' = x + t1
(' and '' indicate the appropriate shifting, i.e. g0(x) = f(x + t0) - t0 and g1(x) = f(x + t1) - t1).
Then:
U0 has the eigenvalues
(f'(t0))^k = (1/2)^k, k = 0, 1, 2, ...
U1 has the eigenvalues
(f'(t1))^k = (3/2)^k, k = 0, 1, 2, ...
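(The reason is elementary: every truncation of a triangular matrix has its eigenvalues on the diagonal, and for g(x) = lambda*x + O(x^2) with g(0) = 0 the diagonal of its Carleman matrix is

U[k,k] = [x^k] g(x)^k = [x^k] (lambda*x + O(x^2))^k = lambda^k

with lambda = f'(t0) = 1/2 resp. lambda = f'(t1) = 3/2.)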
With the assumption that these eigenvalues also hold as possible solutions for C, we need to show that we can find infinitely many invariant vectors X_k such that
X_k * C = d_k * X_k
If I generate such eigenvectors via U0 resp. U1, I get the following possible results (consistent with the assumptions of eigensystem decomposition)
a1) U0 = W0^-1 * D0 * W0
or
a2) W0 * U0 = D0 * W0
b1) U1 = W1^-1 * D1 * W1
or
b2) W1 * U1 = D1 * W1
and rearranging the fixpoint-shift
c1) C = X0^-1 * D0 * X0 = X1^-1 * D1 * X1
or in vector-invariance-notation:
c2a) X0*C = D0 * X0
c2b) X1*C = D1 * X1
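Here is how I read steps a2) and c1) computationally, as a sketch continuing the code above: since U0 is triangular, the rows of W0 can be solved exactly one at a time, and the shift is undone by multiplying with the Carleman matrix of x -> x + t0 (called S0 here; the names S0/eigenrows are mine), giving X0 = W0 * S0:
Code:
from math import comb

def eigenrows(U, n):
    """Rows of W with W * U = D * W, solved exactly from the triangular U;
    row k is normalized to w[k] = 1 and vanishes beyond index k."""
    W = []
    for k in range(n):
        w = [Fraction(0)] * n
        w[k] = Fraction(1)
        lam = U[k][k]
        for j in range(k - 1, -1, -1):            # back-substitution
            s = sum(w[i] * U[i][j] for i in range(j + 1, k + 1))
            w[j] = s / (lam - U[j][j])
        W.append(w)
    return W

n = 8
W0 = eigenrows(U0, n)
# S0: Carleman matrix of the shift x -> x + t0 (column j = coeffs of (x + t0)^j)
S0 = [[Fraction(comb(j, i)) * t0 ** (j - i) if i <= j else Fraction(0)
       for j in range(n)] for i in range(n)]
X0 = [[sum(W0[k][i] * S0[i][j] for i in range(n)) for j in range(n)]
      for k in range(n)]
print(X0[2][:6])   # 0, -4, 3, -3/2, 5/8, -15/64 -- the third row of the table below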
With fixpoint t0(=-1/4) I show the first 3 invariant vectors X0[0..2,0..inf]
Code:
1    -1/4    1/16    -1/64    1/256    -1/1024   ...
0     1     -1/2      3/16    -1/16      5/256   ...
0    -4      3       -3/2      5/8     -15/64    ...
such that
X0[0,] * C = 1 * X0[0,]
X0[1,] * C = 1/2 * X0[1,]
X0[2,] * C = 1/4 * X0[2,]
and with fixpoint t1(=1/4) I show the first 3 invariant vectors X1[0..2,0..inf]
Code:
1     1/4     1/16     1/64     1/256    1/1024   ...
0     1       1/2      3/16     1/16     5/256    ...
0    -4/3     1/3      1/2      7/24    25/192    ...
such that
X1[0,] * C = 1 * X1[0,]
X1[1,] * C = 3/2 * X1[1,]
X1[2,] * C = 9/4 * X1[2,]
which, when continued, shows that both assumptions for the eigenvalues of C lead to possible solutions for the eigenvectors.
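The truncation effect can be made precise with the sketches above: column j of a truncated C carries f(x)^j only up to the truncation degree, so the eigen-relation holds exactly on the low columns and degrades only in the tail. For example, with X1[1,] taken from the table above:
Code:
n = 12
Cn = carleman(f, n)
# X1[1,] has entries j * t1^(j-1)
x11 = [Fraction(0)] + [j * t1 ** (j - 1) for j in range(1, n)]
lhs = [sum(x11[i] * Cn[i][j] for i in range(n)) for j in range(n)]
rhs = [Fraction(3, 2) * v for v in x11]
print([lhs[j] == rhs[j] for j in range(n)])
# -> True for j <= 5 (where 2j < n), False beyond: only the tail is off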
The Schroeder functions sigma0(x) and sigma1(x) and their inverses are nasty; as long as we don't have analytical expressions, we need divergent summation of high order to determine their power series coefficients for all of them except sigma0. So not only do the results of the Schroeder functions depend on the truncation, but even their coefficients themselves are only more or less good approximations. Only for sigma0, at the attracting fixpoint t0, do we have coefficients based on the evaluation of convergent series...