I've found the key properties connecting the (matrix) Jordan form of the Carleman matrices with "regular iteration".

Consider the (transposed, triangular) Carleman matrix U for the function f(x) = b^x - 1 with some base b for which the infinite iteration converges for x in some convenient interval, for instance b = 2.

The Carleman-matrix concept now allows us to write

V(x) * U = V(f(x))

V(x) * U^2 = V(f(f(x)))

...

V(x) * U^h = V(f°^{h}(x))

...

V(x) * U^{-1} = V(f°^{-1}(x))

...

and so on; function iteration thus translates into matrix powers.
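
The basic relation V(x) * U = V(f(x)) can be checked numerically with a small finite truncation. A minimal sketch (the truncation size N, the helper polymul and all variable names are my own illustrative choices, not from the post):

```python
import math

# Build a truncated N x N Carleman matrix U for f(x) = b^x - 1
# and verify V(x) * U = V(f(x)) numerically.
N = 10
b = 2.0
lnb = math.log(b)

# Taylor coefficients of f(x) = b^x - 1 = sum_{k>=1} (ln b)^k/k! * x^k
f = [0.0] + [lnb**k / math.factorial(k) for k in range(1, N)]

def polymul(p, q, n):
    """Multiply two power series, truncated to n coefficients."""
    r = [0.0] * n
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            if i + j < n:
                r[i + j] += a * c
    return r

# Column j of U holds the coefficients of f(x)^j; since f has no
# constant term, U is triangular, and V(x) * U reproduces V(f(x)).
U = [[0.0] * N for _ in range(N)]
col = [1.0] + [0.0] * (N - 1)            # f(x)^0 = 1
for j in range(N):
    for k in range(N):
        U[k][j] = col[k]
    col = polymul(col, f, N)

x = 0.1
V = [x**k for k in range(N)]             # row vector V(x) = (1, x, x^2, ...)
lhs = [sum(V[k] * U[k][j] for k in range(N)) for j in range(N)]
rhs = [(b**x - 1)**j for j in range(N)]  # V(f(x))
```

Repeated multiplication by U then advances the iteration one step at a time, exactly as in the relations above.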

If b ≠ exp(1), then diagonalization gives the connection with the concept of the Schröder function; by diagonalizing we get three matrices:

U = M * D * M^{-1}

where, if the (triangular) M and M^{-1} (let us write W for the latter) are normed to have a unit diagonal, they are the Carleman matrices of the Schröder function \sigma(x) and its inverse respectively, such that we have

V(x) * M = V(\sigma(x))

V(\sigma(x)) * W = V(x)

and the diagonal matrix D makes it possible to express powers of U by powers of D, such that

U^h = M * D^h * W

The coolest feature here is that powers of D are formed by powers of its scalar diagonal elements, which gives us a tool for arbitrary fractional and even complex powers of D, and thus of U, and thus for the iteration height of f(x).
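
As a sketch of that fractional-power tool, one can diagonalize a truncated U for b = 2 and form the half-iterate matrix U^{1/2} = M * D^{1/2} * W; squaring it should recover U. (The truncation size N and all variable names are illustrative; in floating point this relies on the truncated eigenproblem being well enough conditioned.)

```python
import math
import numpy as np

# Truncated Carleman matrix of f(x) = 2^x - 1
N = 10
lnb = math.log(2.0)
fc = [0.0] + [lnb**k / math.factorial(k) for k in range(1, N)]

U = np.zeros((N, N))
col = np.zeros(N); col[0] = 1.0
for j in range(N):
    U[:, j] = col                     # column j = coefficients of f(x)^j
    new = np.zeros(N)
    for i in range(N):
        for k in range(N - i):
            new[i + k] += col[i] * fc[k]
    col = new

# The eigenvalues of the triangular U are (ln b)^k -- all distinct for
# b != e, so U is diagonalizable: U = M * D * W with W = M^{-1}.
d, M = np.linalg.eig(U)
W = np.linalg.inv(M)

Uh = M @ np.diag(d**0.5) @ W          # U^{1/2} via D^{1/2}
# two applications of the 'half-power' matrix give one full power of U
```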

This is impossible for base b = exp(1), because the diagonal of U is then the unit vector and diagonalization cannot be done; until now I have simply taken the matrix logarithm in this case to express fractional powers of U by the Log/Exp expression

U^h = Exp ( h * Log( U))
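
In any finite truncation this Log/Exp route is particularly pleasant, because U - I is then strictly triangular and hence nilpotent, so both the logarithm series and the exponential series terminate after finitely many terms. A sketch (truncation size and names are my illustrative choices):

```python
import math
import numpy as np

# Base b = e case: truncated Carleman matrix of f(x) = e^x - 1,
# which has a unit diagonal.
N = 10
fc = [0.0] + [1.0 / math.factorial(k) for k in range(1, N)]

U = np.zeros((N, N))
col = np.zeros(N); col[0] = 1.0
for j in range(N):
    U[:, j] = col
    new = np.zeros(N)
    for i in range(N):
        for k in range(N - i):
            new[i + k] += col[i] * fc[k]
    col = new

# A = U - I is strictly triangular, so A^N = 0 and the Mercator series
# Log(U) = sum_{k>=1} (-1)^(k+1) A^k / k is a *finite* sum.
A = U - np.eye(N)
L = np.zeros((N, N)); P = A.copy()
for k in range(1, N):
    L += (-1)**(k + 1) / k * P
    P = P @ A

def U_pow(h):
    """Exp(h * Log(U)) as a finite sum (h*L is nilpotent too)."""
    X = np.eye(N); T = np.eye(N)
    for k in range(1, N):
        T = T @ (h * L) / k
        X = X + T
    return X

Uh = U_pow(0.5)                       # 'half-iteration' matrix
```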

While such matrices (having a unit diagonal) cannot be diagonalized, they can be Jordan-decomposed, which is the best possible approximation to the diagonalization concept. We again get

U = M * J * W

only that J is now not diagonal but has one additional unit-subdiagonal.

Powers of J cannot be computed as simply as powers of D; again one has to introduce either the matrix logarithm/exponential for this matrix (which now has a maximally simplified standard form), or the decomposition by the binomial theorem for integer and/or fractional powers.
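
The binomial route is especially transparent for a single Jordan block with unit eigenvalue: writing J = I + S, where S is the unit subdiagonal, S is nilpotent and J^h = sum_k C(h,k) S^k is a finite sum. A sketch (the block size n and the helper names are illustrative):

```python
import numpy as np

# Fractional powers of a Jordan block J = I + S (unit eigenvalue,
# one unit subdiagonal) via the binomial theorem.
n = 6
S = np.diag(np.ones(n - 1), -1)       # the unit subdiagonal
J = np.eye(n) + S

def binom(h, k):
    """Generalized binomial coefficient C(h, k) for real h."""
    r = 1.0
    for i in range(k):
        r *= (h - i) / (i + 1)
    return r

def J_pow(h):
    X = np.zeros((n, n))
    Sk = np.eye(n)
    for k in range(n):                # S^n = 0, so the sum stops here
        X += binom(h, k) * Sk
        Sk = Sk @ S
    return X

Jh = J_pow(0.5)                       # square root of the Jordan block
```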

The really new information is that here we do not get the Schröder function, but an implementation of the Newton series (or what I called the binomial method for iteration): the product

V(x) * M = Y

now gives in the result vector Y the set of consecutive iterates of f(x) (in a slightly modified form). These iterates are then simply composed by the binomial coefficients, allowing integer and fractional iterations from the integer iterations alone; it is the perfect implementation of that Newton-series method, as described for instance in L. Comtet's "Advanced Combinatorics" from the middle of the previous century (though Comtet seemingly did not notice the relation to the Jordan form, although he too was working with matrices).
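
The Newton-series (binomial) method itself can be sketched directly from the integer iterates: the fractional iterate is a binomially weighted sum of forward differences of f^{[0]}(x), f^{[1]}(x), f^{[2]}(x), ... For integer h the series terminates and is exact; for fractional h it converges here because the fixed point 0 of f is attracting. (The base, the sample point, the cutoff NMAX and all names are my illustrative choices.)

```python
import math

# Newton series / 'binomial method':
#   f^[h](x) = sum_n C(h,n) * Delta^n,
#   Delta^n  = sum_k (-1)^(n-k) C(n,k) f^[k](x)
b = 2.0
def f(x):
    return b**x - 1.0

def binom(h, n):
    """Generalized binomial coefficient C(h, n) for real h."""
    r = 1.0
    for i in range(n):
        r *= (h - i) / (i + 1)
    return r

def newton_iterate(x, h, NMAX=25):
    its = [x]
    for _ in range(NMAX):
        its.append(f(its[-1]))        # integer iterates f^[0..NMAX](x)
    total = 0.0
    for n in range(NMAX + 1):
        delta = sum((-1)**(n - k) * math.comb(n, k) * its[k]
                    for k in range(n + 1))
        total += binom(h, n) * delta
    return total

# h = 1 reproduces f(x) and h = 2 reproduces f(f(x)) exactly;
# h = 0.5 gives an approximate half-iterate.
```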

Well, we do not get a new or better method for computing the fractional iterates; we are still left with the formal power series and their zero radius of convergence. But I think it is worth recording these additional connections between the Carleman-matrix/Schröder-function concept, the Jordan decomposition, and the iteration series - they always give valid (and the same) formal power series for the fractional iterates.

(The forum still does not have MathJax activated; I'll convert this post to forum TeX later.)

Gottfried

Using Google, I found these earlier references:

http://www.sciencedirect.com/science/art...7X89902333

http://math.univ-lille1.fr/~gchen/articl...c-99.ps.gz

https://www.mittag-leffler.se/preprints/...03s-09.pdf


Gottfried Helms, Kassel