Hi Henryk, I needed some days for an answer; please excuse that. I had some difficulty concentrating on the subject, but now here it goes ...


bo198214 Wrote: I don't understand this attitude (whatever you mean by "truncated by principle").

Hmm, let me recall the question:

bo198214 Wrote: We have something that is called the matrix operator method, this truncates (the Carleman matrix of ) to n

Since you put the focus on this, I assumed that this is some principal aspect of the approach ...

Quote:then decomposes uniquely via Eigenvalues

... and I assume now that this is the difference.

I would call this the "practical approach": it is useful for first approximations and gives good results for a certain range of the parameters b, x and h (base, top exponent and height).

One can approximate integer and fractional powers, and we could see that even with low dimensions fractional powers could be iterated, even providing very good results when multiples of integer powers were approached. I computed many well-approximated examples to support the general idea of an eigensystem decomposition via these practical truncations.

But in my view this was always only a rough approximation, whose main defect is that we "don't know the quality of the approximation".

Indeed, the base matrix Bb is a truncation of an exact, ideally infinite matrix; its actual entries are not affected by the size of the truncation, so it is the best starting point.
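To make this concrete, here is a numeric sketch in Python/numpy. It assumes (reconstructing from the "Carleman matrix" remark above, not necessarily Gottfried's exact setup) that the base matrix has entries Bb[r, c] = (c·ln b)^r / r!, so that the row vector of powers V(x) = (1, x, x^2, ...) is mapped onto V(b^x); all names are illustrative:

```python
import numpy as np
from math import log, factorial

def carleman_B(b, n):
    """n x n truncation of the assumed base matrix Bb with entries
    Bb[r, c] = (c * ln b)^r / r!  (a Carleman-type matrix of x -> b^x),
    so that V(x) @ Bb ~ V(b^x) for the power row V(x) = (1, x, x^2, ...)."""
    lb = log(b)
    return np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
                     for r in range(n)])

b, n, x = 2 ** 0.5, 16, 0.5
V = lambda s: np.array([s ** k for k in range(n)])

approx = V(x) @ carleman_B(b, n)   # image of V(x) under the truncation
exact = V(b ** x)                  # what the ideal infinite matrix would give
# for small x every truncated column sum is a well-converged exp series:
assert np.allclose(approx, exact, atol=1e-5)
```

Note that the entries of the truncation are indeed exactly those of the infinite matrix; only the discarded tail causes the (here tiny) error.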

Numerical eigensystem solutions for these truncated matrices satisfy, for instance, W(truncation) * W(truncation)^-1 = I as a perfect identity matrix, and give good approximations for some powers.

These good properties suggest using the eigensystem based on the truncated Bb and giving it the status of a "method" in its own right.
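A sketch of this truncated-eigensystem method, under the assumption that Bb[r, c] = (c·ln b)^r / r! (a reconstruction for illustration): diagonalize the truncation, check the internal consistency W·W^-1 = I, and use B^h = W·d^h·W^-1 to get a fractional iterate of x -> b^x:

```python
import numpy as np
from math import log, factorial

def carleman_B(b, n):
    """n x n truncation of the assumed base matrix, Bb[r, c] = (c*ln b)^r / r!."""
    lb = log(b)
    return np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
                     for r in range(n)])

b, n = 2 ** 0.5, 16
B = carleman_B(b, n)
d, W = np.linalg.eig(B)                # B @ W[:, k] == d[k] * W[:, k]
Winv = np.linalg.inv(W)

# the numerical eigensystem of the truncation is internally consistent:
assert np.allclose(W @ Winv, np.eye(n), atol=1e-6)

def iterate(x, h):
    """h-fold (possibly fractional) iterate of x -> b^x via B^h = W d^h W^-1."""
    Bh = (W * d.astype(complex) ** h) @ Winv   # == W @ diag(d**h) @ Winv
    Vx = np.array([x ** k for k in range(n)])
    return (Vx @ Bh)[1].real                   # entry 1 of V(f^h(x)) is f^h(x)

half = iterate(0.5, 0.5)               # a "half step" of x -> b^x at x = 0.5
assert abs(iterate(half, 0.5) - b ** 0.5) < 1e-2
```

For b inside the safe range (here b = sqrt(2)) the two half-steps compose to one full step quite accurately; how accurately, though, is exactly the open question about the quality of the approximation.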

However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e), the set of eigenvalues behaves partly erratically, which makes it risky to base assumptions about the final values of the tetration operation T on them.

For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we find some rules which could be exploited for extrapolation. See, for instance, Eigenvalues b=0.5, Eigenvalues t=1.7, or the page Graph..Eigenvalue e^(1/e).
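The referenced pages are not reproduced here, but the experiment itself is easy to sketch, again assuming the reconstructed form Bb[r, c] = (c·ln b)^r / r!. For a base inside the safe range the leading eigenvalue magnitudes stabilize as the truncation grows (toward powers of ln 2 for b = sqrt(2), since t = 2 is a fixpoint with multiplier f'(t) = ln 2), whereas outside that range they do not:

```python
import numpy as np
from math import log, factorial

def carleman_B(b, n):
    lb = log(b)
    return np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
                     for r in range(n)])

b = 2 ** 0.5          # inside the safe range e^(-e) < b < e^(1/e)
for n in (4, 8, 12, 16):
    mags = np.sort(np.abs(np.linalg.eigvals(carleman_B(b, n))))[::-1]
    print(n, np.round(mags[:4], 4))   # leading eigenvalue magnitudes per size

eigs = np.linalg.eigvals(carleman_B(b, 16))
assert np.min(np.abs(eigs - 1)) < 1e-6       # 1 is an eigenvalue (column 0 is e0)
assert np.min(np.abs(eigs - log(2))) < 0.05  # next one approaches ln 2 ~ 0.6931
```

Rerunning the same loop with, say, b = 3 (outside the range) shows the erratic behaviour described above.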

Thus the need for a solution of the exact eigensystem (of infinite size) arises. If we find one, then the truncations again lead to approximations - but we are then in a position to make statements about the error bounds etc.

This means that, using W(B) as the eigenmatrix (set of eigenvectors) of B, we deal not with W(truncated(B)) but with truncated(W(B)).

My goal with my matrix method is to have an infinite matrix W(B) whose columns W(B)[col] satisfy, together with the eigenvalues d[col],

B * W(B)[col] = d[col] * W(B)[col]

or

W(B)^-1[row] * B = d[row] * W(B)^-1[row]

seen as an identity in the infinite case and as imprecise for the finite truncation. If a row of W(B)^-1 also has the form of a power series, then by the defining properties of an eigenvector this furthermore coincides with the concept of fixpoints; it "marries" these two concepts and describes a common framework for them.
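The fixpoint connection can be seen concretely, again under the reconstructed assumption Bb[r, c] = (c·ln b)^r / r!. If t is a fixpoint of x -> b^x (for b = sqrt(2), t = 2 since sqrt(2)^2 = 2), then the power-series row V(t) = (1, t, t^2, ...) satisfies V(t)·B = V(b^t) = V(t), i.e. it is a left eigenvector for eigenvalue 1 - exactly a row of W(B)^-1 that has the form of a power series:

```python
import numpy as np
from math import log, factorial

def carleman_B(b, n):
    lb = log(b)
    return np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
                     for r in range(n)])

b, t, n = 2 ** 0.5, 2.0, 16      # t = 2 is a fixpoint: sqrt(2)^2 = 2
B = carleman_B(b, n)
Vt = np.array([t ** k for k in range(n)])

lhs = Vt @ B                     # should reproduce V(b^t) = V(t), eigenvalue 1
# truncation error grows with the column index, so only the low-order
# entries of the identity V(t) @ B = 1 * V(t) are sharp at n = 16:
assert np.allclose(lhs[:4], Vt[:4], atol=1e-6)
```

This is precisely the imprecision of the finite truncation mentioned above: the identity is exact for the infinite matrix and degrades column by column once the tail is cut off.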

Quote:. And then defines . And we get the coefficients of from the first column of .

I want your acknowledgement on this.

If my previous comments indeed match your point, then I can take a position on a possible dissent/consensus.

Gottfried Helms, Kassel