10/20/2017, 06:00 PM

(10/19/2017, 09:33 PM)Gottfried Wrote:
(10/19/2017, 04:50 PM)sheldonison Wrote:
(10/19/2017, 10:38 AM)Gottfried Wrote: ...

1) The Carleman matrix is always based on the power series of a function f(x), or more specifically of the conjugate g(x) = f(x+t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.

2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](x), we need the Carleman matrix developed at the fixpoint t_1 which is attracting for f^[-1](x) ...

So with the correct adaptation of the two required Carleman matrices and their Neumann series we correctly reproduce the iteration series in question in both directions.

Gottfried

Is there a connection between the Carleman matrix and Schröder's equation, $\Psi(f(x)) = \lambda\,\Psi(x)$? Here $\lambda$ is the derivative at the fixed point, $\lambda = f'(x_0)$, and then the iterated function $g$ with $g(x+1) = f(g(x))$ can be generated from the inverse Schröder function: $g(z) = \Psi^{-1}(\lambda^z)$.

Does the solution to the Carleman matrix give you the power series for $\Psi(x)$?

I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series of both $\Psi(x)$ and $\Psi^{-1}(x)$, iterating using Pari/GP's polynomials, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for.

Hi Sheldon - yes, that connection is exceptionally simple. The Schröder function is simply expressed by the eigenvector matrices which occur in the diagonalization of the Carleman matrix for the function f(x).

In my notation, with a Carleman matrix F for your function f(x) and the vector V(x) = [1, x, x^2, x^3, ...], we have $V(x) \cdot F = V(f(x))$.

Then by diagonalization we find a solution in M and D such that $F = M \cdot D \cdot M^{-1}$, where $D = \operatorname{diag}(1, \lambda, \lambda^2, \ldots)$.

The software must take care that the eigenvectors in M are correctly scaled: for instance in the triangular case (where f(x) has no constant term) the diagonal of M must be the unit diagonal, such that M is indeed in Carleman form. (Using M=mateigen(F) in Pari/GP does not suffice; you must scale the columns of M appropriately - I've built my own eigen-solver for triangular matrices, which I can provide to you.)

Then we have $V(x) \cdot M = V(\psi(x))$, so M is the Carleman matrix of the Schröder function $\psi(x)$: its second column contains the power series coefficients of $\psi$, and likewise $M^{-1}$ gives $\psi^{-1}$.
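To make the relation concrete, here is a small worked example (my own choice of test function, not taken from the toolbox): for the Möbius function $f(x) = qx/(1+qx)$, which has the attracting fixpoint 0 with multiplier $\lambda = q$, the Schröder equation $\psi(f(x)) = q\,\psi(x)$ has the closed-form solution

$$ \psi(x) = \frac{x}{1+cx}, \qquad c = \frac{q}{1-q} $$

since

$$ \psi(f(x)) = \frac{qx/(1+qx)}{1 + c\,qx/(1+qx)} = \frac{qx}{1+(1+c)qx} = \frac{qx}{1+cx} = q\,\psi(x) $$

where the last step uses $(1+c)q = c$, which holds exactly for $c = q/(1-q)$. The columns of M must then reproduce the truncated power series of $\psi(x)^j$.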

Here we only need to pay attention to the problem that non-triangular Carleman matrices of finite size - and only finite sizes are available to our software packages - do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman matrices, for instance $f(x)=ax+b$, $f(x) = qx/(1+qx)$ or $f(x) = t^x-1$ or the like, where the coefficient at the linear term is neither zero nor 1.
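As an illustration of the triangular case, here is a minimal sketch (my own, in plain Python with exact rational arithmetic rather than Pari/GP) that builds the truncated Carleman matrix for $f(x)=qx/(1+qx)$, solves $F \cdot M = M \cdot D$ column by column with unit diagonal in M, and reads off the Schröder series from the second column. For $q=1/2$ the closed form is $\psi(x) = x/(1+cx)$ with $c = q/(1-q) = 1$, i.e. coefficients $0, 1, -1, 1, -1, \ldots$, which the sketch reproduces:

```python
from fractions import Fraction as Fr

N = 8  # truncation order

def series_mul(a, b):
    """Multiply two power series (coefficient lists), truncated to N terms."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def carleman(f):
    """F[i][j] = coefficient of x^i in f(x)^j, so that V(x)*F = V(f(x))."""
    F = [[Fr(0)] * N for _ in range(N)]
    p = [Fr(1)] + [Fr(0)] * (N - 1)  # f(x)^0 = 1
    for j in range(N):
        for i in range(N):
            F[i][j] = p[i]
        p = series_mul(p, f)
    return F

# f(x) = q*x/(1+q*x) = q*x - q^2*x^2 + q^3*x^3 - ...  (no constant term,
# so F is triangular with diagonal 1, lam, lam^2, ...)
q = Fr(1, 2)
f = [Fr(0)] + [q * (-q) ** (n - 1) for n in range(1, N)]

F = carleman(f)
lam = f[1]  # multiplier lambda = f'(0) at the fixpoint 0

# Solve F*M = M*D column by column (D = diag(lam^k)), normalizing M[k][k] = 1
# so that M is itself in Carleman form.
M = [[Fr(0)] * N for _ in range(N)]
for k in range(N):
    M[k][k] = Fr(1)
    for i in range(k + 1, N):
        s = sum(F[i][j] * M[j][k] for j in range(k, i))
        M[i][k] = s / (lam ** k - lam ** i)

# The second column of M holds the power series of the Schroeder function psi.
psi = [M[i][1] for i in range(N)]
print(psi)  # [0, 1, -1, 1, -1, 1, -1, 1] for q = 1/2
```

Because everything is exact rational arithmetic, the triangular case shows no truncation error at all; the difficulties Gottfried describes only appear for non-triangular f.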

For the non-triangular matrices, for instance for $f(x)=b^x$, the diagonalization gives only rough approximations to an - in some sense - "best possible" solution for fractional iterations, and its eigenvector matrices are in general not Carleman or truncated-Carleman. But they nonetheless give real-to-real solutions also for $b > \eta$ and seem to approximate the Kneser solution as the size of the matrices increases.

You can have my Pari/GP toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization for $t^x-1$ such that the eigenvector matrices are of Carleman type and are true truncations of the $\psi$ power series for the Schröder function (which the built-in eigensolver in Pari/GP does not take care of). If you are interested it is perhaps better to contact me via email, because the set of routines should come with some explanations and I expect some need for didactical hints.

<hr>

For a "preview" of that toolbox see perhaps page 21 ff. in http://go.helms-net.de/math/tetdocs/Cont...ration.pdf which discusses the diagonalization for $t^x-1$ with its Schröder function (and the "matrix-logarithm" method for the $e^x-1$ and $\sin(x)$ functions, which have no diagonalization in the case of finite size).

For example, here is the Pari/GP program for the formal inverse Schröder function. I don't know how to turn this into a matrix function, but not many programming languages support the powerful polynomial functions that Pari/GP has.

Code:

formalischroder(fx, n) = {
  \\ formal power series of the inverse Schroeder function g(x), obtained by
  \\ solving g(lambda*x) = fx(g(x)) coefficient by coefficient up to order n
  local(lambda, i, z, f1t, f2t, ns, f1s);
  lambda = polcoeff(fx, 1);               \\ multiplier at the fixed point 0
  f1t = x;
  i = 2;
  while (i <= n,
    f1s = f1t;
    f1t = f1t + acoeff*x^i + O(x^(i+1));  \\ acoeff is a formal unknown
    f2t = subst(f1t, x, lambda*x) - subst(fx + O(x^(i+1)), x, f1t);
    z = polcoeff(f2t, i);                 \\ residual at order i, linear in acoeff
    z = subst(z, acoeff, x);
    ns = -polcoeff(z, 0)/polcoeff(z, 1);  \\ solve for the coefficient
    f1t = f1s + ns*x^i;
    i++;
  );
  return (Pol(f1t));
}

fz1 = x^2 + (1-sqrt(3))*x;

lambda1 = polcoeff(fz1, 1);

fs1 = formalischroder(fz1, 20);

superfunction1(z) = subst(fs1, x, lambda1^z);
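For what it's worth, here is a rough port of the same coefficient-by-coefficient scheme to plain Python (my own sketch, using exact rational arithmetic and hand-rolled series operations in place of Pari/GP's polynomials; the test function $f(x)=qx/(1+qx)$ is my choice, since for it the inverse Schröder function is known in closed form, $\Psi^{-1}(x) = x/(1-cx)$ with $c = q/(1-q)$, i.e. all coefficients equal to 1 when $q = 1/2$):

```python
from fractions import Fraction as Fr

N = 8  # number of series coefficients kept (orders 0 .. N-1)

def series_mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def compose(f, g):
    """f(g(x)) for truncated series; g must have zero constant term (Horner)."""
    r = [Fr(0)] * N
    for c in reversed(f):
        r = series_mul(r, g)
        r[0] += c
    return r

def formal_inv_schroder(f):
    """Solve g(lam*x) = f(g(x)) order by order, as in formalischroder()."""
    lam = f[1]
    g = [Fr(0)] * N
    g[1] = Fr(1)
    for i in range(2, N):
        # the residual at order i is affine in the unknown coefficient g[i];
        # evaluate it at g[i] = 0 and g[i] = 1 and solve the linear equation
        # (this replaces the formal variable acoeff in the Pari/GP version)
        def residual(a):
            g[i] = a
            g_lam = [lam ** n * g[n] for n in range(N)]  # g(lam*x)
            return g_lam[i] - compose(f, g)[i]
        r0 = residual(Fr(0))
        r1 = residual(Fr(1))
        g[i] = -r0 / (r1 - r0)
    return g

q = Fr(1, 2)
f = [Fr(0)] + [q * (-q) ** (n - 1) for n in range(1, N)]  # q*x/(1+q*x)
g = formal_inv_schroder(f)
print(g)  # [0, 1, 1, 1, 1, 1, 1, 1] for q = 1/2
```

Analogously to superfunction1 above, the superfunction can then be evaluated from this series as $\sum_n g_n\,(\lambda^z)^n$; with floating-point coefficients the same scheme should also handle cases like $f(x) = x^2 + (1-\sqrt{3})x$.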

- Sheldon