(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e

(...)

So you get this system of equations (blue to the left, and red to the right):

Code:
[1 1 1 1 1  1  1  1  1]   [ 1]   [e]
[0 1 2 3 4  5  6  7  8]   [a₁]   [e·a₁]
[0 0 1 3 6 10 15 21 28]   [a₂]   [e·a₂ + e/2·a₁²]
[0 0 0 1 4 10 20 35 56]   [a₃]   [e·a₃ + e·a₁·a₂ + e/6·a₁³]
[0 0 0 0 1  5 15 35 70] * [a₄] = [e·a₄ + e/2·a₂² + e·a₃·a₁ + e/2·a₂·a₁² + e/24·a₁⁴]
[0 0 0 0 0  1  6 21 56]   [a₅]   [...]
[0 0 0 0 0  0  1  7 28]   [a₆]   [...]
[0 0 0 0 0  0  0  1  8]   [a₇]   [...]
[0 0 0 0 0  0  0  0  1]   [a₈]   [...]

Quote: It is a nonlinear system of equations, and the solution for this particular case is:

Code:
a₀ =  1,00000000000000000
a₁ =  1,09975111049169000
a₂ =  0,24752638354178700
a₃ =  0,15046151104294100
a₄ =  0,12170896032120000
a₅ =  0,16084324512292400
a₆ = -0,02254254634348470
a₇ = -0,10318144159688800
a₈ =  0,06371479195361670

(...)

This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you have just applied things analogously to how I do it; you only didn't express it in matrix formulae.

To explain the basic idea of a Carleman matrix:

consider a power series

f(x) = a₀ + a₁·x + a₂·x² + a₃·x³ + ...

We express this in terms of the dot-product of two infinite-sized vectors,

f(x) = V(x)~ · A_1

where the column-vector A_1 contains the coefficients,

A_1 = [a₀, a₁, a₂, a₃, ...]~   (the "~" denotes transposition)

and the row-vector V(x)~ contains the consecutive powers of x:

V(x)~ = [1, x, x², x³, ...]

Now to make that idea valuable for function-composition/-iteration it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input:

V(x) → V(f(x))

This leads to the idea of Carleman matrices: we just generate the vectors

A_0, A_1, A_2, A_3, ...

where the vector A_k contains the coefficients for the powers of f(x), such that

V(x)~ · A_k = f(x)^k

... in a matrix

A = [A_0 | A_1 | A_2 | A_3 | ...]

getting the operation:

V(x)~ · A = V(f(x))~

or, reading off the second column alone,

V(x)~ · A_1 = f(x)
Having this general idea we can fill our toolbox with Carlemanmatrices for the composition of functions for a fairly wide range of algebra.

For instance the operation INC: x → x+1 is effected by the Pascal matrix P, with entries P[r,c] = binomial(c,r):

V(x)~ · P = V(x+1)~

and its h'th iteration ADD: x → x+h

is then only a problem of powers of P:

V(x)~ · P^h = V(x+h)~
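A quick numerical check of the INC/ADD claims (my own sketch; the truncation size 9 and the value of x are arbitrary):

```python
import math

# Sketch: the truncated Pascal matrix P[r][c] = binomial(c, r) realises INC,
# and its powers P^h realise ADD: x -> x + h.
N = 9
P = [[math.comb(c, r) for c in range(N)] for r in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

x = 0.5
Vx = [x**r for r in range(N)]

# V(x)~ * P = V(x+1)~   (exact at finite size, since P is upper triangular)
inc = [sum(Vx[r] * P[r][c] for r in range(N)) for c in range(N)]
err_inc = max(abs(inc[c] - (x + 1)**c) for c in range(N))

# V(x)~ * P^3 = V(x+3)~
P3 = matmul(matmul(P, P), P)
add3 = [sum(Vx[r] * P3[r][c] for r in range(N)) for c in range(N)]
err_add = max(abs(add3[c] - (x + 3)**c) for c in range(N))
print(err_inc, err_add)
```

Because P is upper triangular, the sums here terminate and the finite truncation is exact up to floating-point rounding.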

The operation MUL: x → c·x needs a diagonal Vandermonde vector:

V(x)~ · dV(c) = V(c·x)~   where dV(c) = diag(1, c, c², c³, ...)
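Again a tiny check (my sketch; the values of x and c are arbitrary):

```python
# Sketch: MUL by c uses the diagonal "Vandermonde" matrix dV(c) = diag(1, c, c^2, ...)
x, c, N = 0.7, 3.0, 9
Vx = [x**r for r in range(N)]
dV = [c**r for r in range(N)]                  # diagonal entries of dV(c)
Vcx = [v * d for v, d in zip(Vx, dV)]          # V(x)~ * dV(c), entrywise
err = max(abs(Vcx[r] - (c * x)**r) for r in range(N))
print(err)                                      # only floating-point rounding
```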

The operation DEXP (= exp(x)-1) needs the matrix of Stirling numbers 2nd kind, similarity-scaled by factorials; call it fS2F, with entries fS2F[r,c] = S2(r,c)·c!/r!:

V(x)~ · fS2F = V(exp(x)-1)~

and as an exercise we see that, if we right-compose this with the INC-operation, we get the ordinary EXP-operator, for which I give the matrix-name B:

B = fS2F · P   such that   V(x)~ · B = V(exp(x))~

Of course, iterations of the EXP require then only powers of the matrix B:

V(x)~ · B^h = V(exp°h(x))~   (the h'th iterate of exp)
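The whole chain DEXP → EXP → iterated EXP can be verified numerically. The following Python sketch is mine (not from the post); "fS2F" is the name I use here for the factorially scaled Stirling matrix, and the truncation size 24 is an arbitrary choice:

```python
import math

# Sketch: fS2F[r][c] = S2(r, c) * c!/r! (Stirling 2nd kind, factorially scaled)
# realises DEXP; B = fS2F * P realises EXP; B^2 realises exp(exp(x)).
N = 24
S2 = [[0] * N for _ in range(N)]
S2[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        S2[n][k] = k * S2[n - 1][k] + S2[n - 1][k - 1]

fS2F = [[S2[r][c] * math.factorial(c) / math.factorial(r) for c in range(N)]
        for r in range(N)]
P = [[math.comb(c, r) for c in range(N)] for r in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

B = matmul(fS2F, P)
B2 = matmul(B, B)

x = 0.2
Vx = [x**r for r in range(N)]
def leftmul(M):                 # V(x)~ * M, first few columns
    return [sum(Vx[r] * M[r][c] for r in range(N)) for c in range(6)]

dexp = leftmul(fS2F)            # should match V(exp(x) - 1)~
e1 = math.exp(x) - 1
err_dexp = max(abs(dexp[c] - e1**c) for c in range(6))

expo = leftmul(B)               # should match V(exp(x))~
err_exp = max(abs(expo[c] - math.exp(x)**c) for c in range(6))

ee = leftmul(B2)                # column 1 of B^2: the 2nd iterate of exp
err_iter = abs(ee[1] - math.exp(math.exp(x)))
print(err_dexp, err_exp, err_iter)
```

One also sees numerically that B has the closed-form entries B[r][c] = c^r/r!, i.e. its columns are the coefficient vectors of exp(x)^c = exp(c·x).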

To see that this is really useful, we need a lemma on the uniqueness of power series. That is, in the new matrix-notation:

If a function

f(x) = V(x)~ · A_1

is continuous for a (even small) continuous range of the argument x, then the coefficients in A_1 are uniquely determined.

That uniqueness of the coefficients in A_1 is the key that lets us look at the compositions of the Carleman matrices alone, without reference to the dot-product with V(x); for instance, we can make use of Carleman-matrix decompositions like

B = fS2F · P

and can analyze

fS2F = B · P^-1

directly, for instance to arrive at the operation LOGP: x → log(1+x), effected by the inverse matrix fS2F^-1, which contains the factorially scaled Stirling numbers 1st kind.
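The LOGP claim can be checked the same way (my sketch; the signed Stirling numbers 1st kind, which make up the inverse of fS2F, are generated by their standard recurrence):

```python
import math

# Sketch: LOGP uses the signed Stirling numbers 1st kind, factorially scaled the
# same way; this matrix fS1F is the inverse of fS2F and realises x -> log(1+x).
N = 24
S1 = [[0] * N for _ in range(N)]        # signed Stirling numbers 1st kind
S1[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        S1[n][k] = S1[n - 1][k - 1] - (n - 1) * S1[n - 1][k]

fS1F = [[S1[r][c] * math.factorial(c) / math.factorial(r) for c in range(N)]
        for r in range(N)]

x = 0.3
Vx = [x**r for r in range(N)]
logp = [sum(Vx[r] * fS1F[r][c] for r in range(N)) for c in range(6)]
l1 = math.log(1 + x)
err = max(abs(logp[c] - l1**c) for c in range(6))
print(err)                               # truncation error only, very small
```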

<hr>

Now I relate this to that derivation which I've quoted from marraco's post.

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e

(...)

So you get this system of equations (blue to the left, and red to the right):

Code:
[1 1 1 1 1  1  1  1  1]   [ 1]   [e]
[0 1 2 3 4  5  6  7  8]   [a₁]   [e·a₁]
[0 0 1 3 6 10 15 21 28]   [a₂]   [e·a₂ + e/2·a₁²]
[0 0 0 1 4 10 20 35 56]   [a₃]   [e·a₃ + e·a₁·a₂ + e/6·a₁³]
[0 0 0 0 1  5 15 35 70] * [a₄] = [e·a₄ + e/2·a₂² + e·a₃·a₁ + e/2·a₂·a₁² + e/24·a₁⁴]
[0 0 0 0 0  1  6 21 56]   [a₅]   [...]
[0 0 0 0 0  0  1  7 28]   [a₆]   [...]
[0 0 0 0 0  0  0  1  8]   [a₇]   [...]
[0 0 0 0 0  0  0  0  1]   [a₈]   [...]

First we see the Pascal matrix P on the lhs in action (the entries of the quoted coefficient matrix are exactly the binomials binomial(j,i)), then the coefficients

a₀, a₁, ..., a₈

of the Abel-function in the vector, say, A_1. So the left hand side is

P · A_1

To make things smoother first, we assume A as the complete Carleman matrix, expanded from A_1. If we "complete" that left hand side to discuss this in power series, we have

V(x)~ · P · A = V(x+1)~ · A = V(f(x+1))~

It is very likely that the author wanted to derive the solution for the equation

f(x+1) = exp(f(x))

so we would have for the right hand side

V(x)~ · A · B = V(f(x))~ · B = V(exp(f(x)))~

and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to the descriptions in the rhs of the quoted matrix formula.

What we can now do depends on the above uniqueness-lemma: we can discard the V(x)-reference, just writing

P · A = A · B

and looking at the second columns only (B_1, the second column of B, contains the coefficients of exp(x)), we get

P · A_1 = A · B_1

as shown in the quoted post.

So indeed, the system of equations of the initial post is expressible by

P · A = A · B

and the OP searches a solution for A.

<hr>

While I have, at the moment, not yet a solution for A along this way, we can, for instance, note that if A is invertible, then the equation can be put into a Jordan-like form:

B = A^-1 · P · A

which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix; and, having a Jordan-solver for finite matrix sizes, one could try whether, on increasing the matrix sizes, the Jordan solutions converge to some limit matrix A.

For the alternative: looking at the "regular tetration" and the Schröder-function (including recentering the power series around the fixpoint), one gets a simple solution just by the diagonalization formulae for triangular Carleman matrices. These follow the same formal analysis using the "matrix-toolbox" and can, for finite size and numerical approximation, nicely be constructed using the matrix features of Pari/GP.