Gottfried Wrote:
> bo198214 Wrote:
> > Is it correct that your matrices correspond to the Carleman matrices, presented by Andrew in parabolic iteration?
>
> I've never dealt with matrix differentials et al. and already have difficulties just reading those formulae.

My question was rather whether your matrices correspond to the power derivation matrix. If I understood you correctly, the $k$-th column of the matrix consists of the coefficients of the $k$-th power of the series of $f$.

This is the power derivation matrix transposed. If $P_f$ denotes the power derivation matrix of $f$, then the power derivation matrix of $f\circ g$ is

$P_{f\circ g} = P_f \cdot P_g$

and hence $P_{f^{\circ n}} = (P_f)^n$. As Andrew mentioned, the Carleman matrix is the transpose of the Bell matrix (which is basically the power derivation matrix, but without the 0th power and without the 0th coefficient); that's why I asked about the Carleman matrix.
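This composition property is easy to check numerically. The following is not from the original post, just a minimal sketch with truncated power series; the helper names `trunc_mul` and `pdm` and the sample coefficients are mine:

```python
import numpy as np

N = 8  # work with truncated series: coefficients of x^0 .. x^{N-1}

def trunc_mul(a, b):
    # product of two truncated power series (coefficient vectors)
    return np.convolve(a, b)[:N]

def pdm(f):
    # power derivation matrix of the series f: row k holds the
    # coefficients of f(x)^k, truncated to order N
    P = np.zeros((N, N))
    power = np.zeros(N)
    power[0] = 1.0          # f^0 = 1
    for k in range(N):
        P[k] = power
        power = trunc_mul(power, f)
    return P

# two sample series; g(0) = 0 so that the truncations below are exact
f = np.array([0.5, 1.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
g = np.array([0.0, 1.0, -0.2, 0.1, 0.0, 0.0, 0.0, 0.0])

fog = f @ pdm(g)            # coefficients of f(g(x)): row vector times PDM
assert np.allclose(pdm(fog), pdm(f) @ pdm(g))   # P_{f o g} = P_f . P_g
```

Note the side benefit of this convention: composing $f$ with $g$ is just the coefficient row vector of $f$ multiplied by $P_g$.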

Indeed, for a parabolic fixed point at 0 your formula (in PDM form)

$P_{f^{\circ t}} = (P_f)^t$

gives the regular iteration. So it is a wonderful idea to simply extend this also to series with no fixed point.
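For the parabolic case this can be sketched concretely (again not from the original post, just an illustration under my own truncation). Take $f(x) = x + x^2$: its truncated PDM is unipotent triangular, so the matrix logarithm and exponential are finite sums and $(P_f)^t$ can be computed exactly. Row 1 of $(P_f)^{1/2}$ should then contain the coefficients of the half iterate, which one can verify by hand from $g(g(x)) = x + x^2$: $a_2 = 1/2$, $a_3 = -1/4$.

```python
import numpy as np
from math import factorial

N = 6  # truncation order

def trunc_mul(a, b):
    return np.convolve(a, b)[:N]

def pdm(f):
    # row k = coefficients of f(x)^k, truncated to order N
    P = np.zeros((N, N))
    power = np.zeros(N)
    power[0] = 1.0
    for k in range(N):
        P[k] = power
        power = trunc_mul(power, f)
    return P

f = np.zeros(N)
f[1] = 1.0
f[2] = 1.0                   # f(x) = x + x^2, parabolic fixed point at 0
P = pdm(f)                   # unipotent triangular matrix

Nil = P - np.eye(N)          # nilpotent part: Nil^N = 0
L = sum((-1) ** (m + 1) * np.linalg.matrix_power(Nil, m) / m
        for m in range(1, N))          # matrix logarithm, finite sum

def P_iter(t):
    # P^t = exp(t*L), again a finite sum because L is nilpotent
    return sum(t ** m * np.linalg.matrix_power(L, m) / factorial(m)
               for m in range(N))

half = P_iter(0.5)[1]        # row 1: coefficients of f^(1/2)
```

Here `half` starts with $0, 1, 1/2, -1/4, \dots$, matching the hand computation, and `P_iter(0.5)` squared reproduces `P`.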

And indeed I guess that this results in the same tetration as Andrew's approach. Can you supply a graph of your solution for different bases ($e$, …) in the "Comparing the known tetration solutions" thread?!

But what is still not clear to me (I am certainly not that acquainted with matrices) is how the eigenanalysis fits in here. Perhaps you can also answer a question that recently occurred to me:

In Maple there is a function "MatrixFunction" with which you can apply an arbitrary real function to a matrix, by somehow manipulating the eigenvalues.

Quote:
> The MatrixFunction(A) command returns the Matrix obtained by interpolating [lambda, F(lambda)] for each of the eigenvalues lambda of A, including multiplicities. Here the Matrix polynomial is r(lambda) = F(lambda) - p(lambda)*q(lambda), where p(x) is the characteristic polynomial, q(lambda) is the quotient, and r(lambda) is the remainder.
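Outside of Maple, the diagonalizable case of this idea can be sketched with a plain eigendecomposition (the interpolation in the quoted description is what additionally handles defective matrices; this sketch, my own illustration, ignores that case):

```python
import numpy as np

def matrix_function(A, f):
    # f(A) = V . diag(f(w)) . V^{-1}, assuming A is diagonalizable
    # with eigenvalues w and eigenvector matrix V
    w, V = np.linalg.eig(A)
    return (V * f(w)) @ np.linalg.inv(V)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3
S = matrix_function(A, np.sqrt)     # one matrix square root of A
assert np.allclose(S @ S, A)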

I think this also has something to do with the Lagrange interpolation that Andrew mentioned. Perhaps you can bring some light into this closely related topic.