Hi Henryk -

bo198214 Wrote:Gottfried, as long as you deal with functions with a fixed point at 0, there is no need to apply the diagonalization method in its full generality.

Note that I would like to change the name from "matrix operator method" to "diagonalization method", as the latter is more specific (for example, Walker uses the name "matrix method" for his natural Abel method).

- Yes, I've no problem with it. My motive for retaining the name was not to present something as new, but to avoid mixing methods which are possibly different. I hate cases where names are borrowed for something other than their original meaning. You have said several times that my approach is just regular tetration; but since I've dealt with something more (most prominently infinite series of tetration and their associated matrices), for which I've not seen a reference before, although such generalizations are somehow obvious, I could not be sure that I was not introducing something that goes beyond it.

If you identify it fully with the name "diagonalization" method, I've no problem adopting this in the future, because it contains the name of the general idea behind it (which is also widely acknowledged), and that is always a good way of naming things... So I'll try to get used to it, hoping not to provoke protests and demonstrations...

What I'll keep anyway is the term "operator" to denote the class of matrices whose second column is defined as the coefficients of a function (represented by a polynomial or power series) and whose other columns are configured so that they provide the consecutive powers of that function. The matrix multiplication of a Vandermonde vector (the consecutive powers of one parameter) then again gives a Vandermonde vector, so the matrix is (at least finitely often) iterable. (The Bell and Carleman matrices are, mutatis mutandis, instances of such matrix "operators".)
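To make this concrete, here is a small sketch in Python (not Pari/GP; all names are my own choosing, not from the original). It builds such a matrix for the sample function f(x) = exp(x) - 1 truncated to order N, with column j holding the coefficients of f(x)^j, and checks the two properties described above: the second column carries the coefficients of f itself, and the matrix maps a (truncated) Vandermonde vector of x to a Vandermonde vector of f(x).

```python
from fractions import Fraction
from math import factorial

# truncation order and sample function f(x) = exp(x) - 1 (fixed point at 0)
N = 8
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]

def series_mul(a, b):
    """Cauchy product of two power series truncated to order N."""
    c = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

# column j = coefficients of f(x)^j
cols = []
p = [Fraction(1)] + [Fraction(0)] * (N - 1)   # f(x)^0 = 1
for j in range(N):
    cols.append(p[:])
    p = series_mul(p, f)
M = [[cols[j][i] for j in range(N)] for i in range(N)]  # M[i][j] = [x^i] f(x)^j

# second column = coefficients of f; fixed point 0 makes M lower triangular
assert [row[1] for row in M] == f
assert all(M[i][j] == 0 for j in range(N) for i in range(j))

# V(x) = (1, x, x^2, ...) times M gives, up to truncation error,
# the Vandermonde vector (1, f(x), f(x)^2, ...) of f(x)
x = 0.05
V = [x**i for i in range(N)]
fx = sum(float(f[i]) * x**i for i in range(N))
for j in range(4):
    assert abs(sum(V[i] * float(M[i][j]) for i in range(N)) - fx**j) < 1e-9
```

With the convention used here (column j = coefficients of f^j) this is a truncated Carleman-type matrix; the Bell matrix is, as noted above, the same construction mutatis mutandis (transposed convention).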

-------------------

(As for the following formulae: I still have to go through them in detail to see whether they are of help in computing the coefficients. I think that with the help of Aldrovandi's Bell/Carleman examples I'll be able to relate the two points of view soon.)

-------------------

Quote:You see, we do not even have to solve a linear equation system to use the diagonalization method at a fixed point at 0.

Yepp, I seem to have already arrived at such a view. My latest eigensystem solver for this class of triangular matrices is already very simple; it's just a ~ten-liner in Pari/GP:

Code:

\\ Ut is the lower triangular matrix-operator for the function
\\ Ut := x -> (t^x - 1)
\\ for t=exp(1), Ut is the factorially scaled matrix of Stirling numbers of the 2nd kind
\\ Ut may even contain the log(t)-parameter u in *symbolic* form;
\\ use dim<=32 in that case to prevent excessive memory and time consumption,
\\ and use exact arithmetic then: provide numfmt=1 instead of numfmt=1.0
{ APT_Init2EW(Ut, dim=9999, numfmt=1.0) =
  local(tu=Ut[2,2], UEW, UEWi, tt=exp(tu), tuv);
  dim = min(rows(Ut), dim);
  tuv = vectorv(dim, r, tu^(r-1));     \\ the eigenvalues
  UEW = numfmt*matid(dim);             \\ the first eigenmatrix UEW
  for(c=2, dim-1,
    for(r=c+1, dim,
      UEW[r,c] = sum(k=c, r-1, Ut[r,k]*UEW[k,c]) / (tuv[c]-tuv[r])
  ));
  UEWi = numfmt*matid(dim);            \\ the second eigenmatrix UEWi = UEW^-1
  for(r=3, dim,
    forstep(c=r-1, 2, -1,
      UEWi[r,c] = sum(k=0, r-1-c, Ut[r-k,c]*UEWi[r,r-k]) / (tuv[r]-tuv[c])
  ));
  \\ decomposition: Ut = UEW * matdiagonal(tuv) * UEWi
  return([[tu, tt, exp(tu/tt)], UEW, tuv, UEWi]);
}
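For readers without Pari/GP, the same two recursions can be sketched in Python. This is a rough, self-contained translation (my own names, not Gottfried's code), for any lower triangular matrix with pairwise distinct diagonal entries; the 4x4 test matrix is the Carleman-style matrix of the hypothetical sample function f(x) = 2x + x^2, chosen only because its diagonal entries 1, 2, 4, 8 are distinct.

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def eigen_triangular(Ut):
    """Eigensystem of a lower triangular Ut with distinct diagonal entries,
    by the same recursions as the Pari/GP routine above:
    Ut = UEW * diag(d) * UEWi, both eigenmatrices unit lower triangular."""
    n = len(Ut)
    d = [Ut[i][i] for i in range(n)]                  # the eigenvalues
    UEW  = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    UEWi = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for c in range(n):                                # columns: Ut*UEW = UEW*diag(d)
        for r in range(c + 1, n):
            UEW[r][c] = sum(Ut[r][k] * UEW[k][c] for k in range(c, r)) / (d[c] - d[r])
    for r in range(n):                                # rows: UEWi*Ut = diag(d)*UEWi
        for c in range(r - 1, -1, -1):
            UEWi[r][c] = sum(UEWi[r][k] * Ut[k][c] for k in range(c + 1, r + 1)) / (d[r] - d[c])
    return UEW, d, UEWi

# Carleman-style matrix of f(x) = 2x + x^2 (column j = coefficients of f(x)^j)
Ut = [[Fraction(v) for v in row] for row in
      [[1, 0, 0, 0],
       [0, 2, 0, 0],
       [0, 1, 4, 0],
       [0, 0, 4, 8]]]
UEW, d, UEWi = eigen_triangular(Ut)
D = [[d[i] if i == j else Fraction(0) for j in range(4)] for i in range(4)]
assert matmul(matmul(UEW, D), UEWi) == Ut   # exact reconstruction

# fractional iterates come from fractional powers of the eigenvalues
# (floats now); h = 1/2 applied twice should give back Ut
h = 0.5
Dh = [[float(d[i]) ** h if i == j else 0.0 for j in range(4)] for i in range(4)]
W  = [[float(v) for v in row] for row in UEW]
Wi = [[float(v) for v in row] for row in UEWi]
Mh = matmul(matmul(W, Dh), Wi)
M2 = matmul(Mh, Mh)
assert all(abs(M2[i][j] - float(Ut[i][j])) < 1e-12 for i in range(4) for j in range(4))
```

No linear equation system is solved anywhere: because the matrix is triangular, each eigenmatrix entry comes directly from already-computed entries, divided by a difference of eigenvalues, exactly as in the Pari/GP routine.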

This is a recursive approach, very simple, and I could imagine it implements just the g- and f-recursions in your post.

Gottfried Helms, Kassel