Matrix-method: compare use of different fixpoints
#19
bo198214 Wrote:Your other directly infinite approach looks quite as if it is the fixed-point approach merely written with matrices. So I wouldn't call it a different approach (compared to the fixed point method).
Yes, since I understood this relation I have started using the term "fixpoint", although I had my own approach to its understanding. It was somehow a surprise... ;-)

Quote:One more interesting fact about the (truncated) matrix operator method is that for 1 < b < e^(1/e) it even chooses a fixed point! The eigenvalues of the truncated B converge to the powers of the derivative at the *lower* fixed point! While for b > e^(1/e) they do not converge to the powers of the derivative at any (complex) fixed point of b^x. This is striking! Some kind of own intelligence ;-)
True. I love procedures which have their own intelligence (maybe more than oneself >;-) )
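To make the quoted observation concrete, here is a minimal sketch (my own illustration in Python/numpy, not the Pari/GP routines used in the thread, and assuming the plain Carleman convention B[r,c] = (c*ln b)^r / r!, which may differ in normalization from the B used above): it builds the truncated B for b = sqrt(2) and compares its leading eigenvalues with the powers of u = ln(t), the derivative of b^x at the lower fixed point t = 2.

import numpy as np
from math import log, factorial

b = 2 ** 0.5        # an "easy" base, 1 < b < e^(1/e)
t = 2.0             # lower real fixed point: b^t = t, since sqrt(2)^2 = 2
u = log(t)          # derivative of b^x at x = t: ln(b)*b^t = t*ln(b) = ln(t)
n = 24              # truncation size
lb = log(b)

# Truncated Carleman matrix of b^x: column c holds the Taylor coefficients
# of (b^x)^c = exp(c*x*ln b), i.e. B[r,c] = (c*ln b)^r / r!.
B = np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
              for r in range(n)])

eigs = np.sort(np.abs(np.linalg.eigvals(B)))[::-1]   # sorted descending
for k in range(8):
    # the leading eigenvalues of trunc(B) should approximate u^k
    print(f"lambda_{k} = {eigs[k]:.8f}    u^{k} = {u ** k:.8f}")

Repeating this with a base outside the easy range illustrates the erratic eigenvalue behaviour discussed further below.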

Quote:
Quote:it is useful for first approximations and gives good results for a certain range of parameters b, x and h (base, top-exponent and height)
What is a bad result in this context?
As I said below: the degree of approximation is unknown (owing to the intractable numerical structure of the actual computation of the eigensystem), and for the not-so-easy bases b the solution finds nearly arbitrarily erratic eigenvalues - just due to the need to have a matrix W(trunc(B)) which fits the requirement of a perfect match of W * W^-1 = I and trunc(B) * W[col] = W[col] * d[col] (where d[col] is a scalar here).

What I did not mention, but what was always my greatest problem, is the difficulty of finding an eigensystem at all if the size of B is >32 (using Pari/GP), and somewhat >128 using Maple (or Matlab - whatever some correspondents in de.sci.mathematik used when they computed one example for me). And for the purpose of a practical implementation of a tetration module I doubt that it is the best method to actually solve numerically for an eigensystem for each possibly supplied parameter.

If I have, on the other hand, the analytical description of the terms of the required eigenmatrices ready, depending on the parameters t and u, it would be a much more straightforward method to provide the terms of the series from their finite polynomial description, determine the convergence/divergence characteristics, and apply an appropriate method for accelerating convergence, for instance an Euler summation of appropriate order.
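As a small, self-contained sketch of what such an Euler summation looks like (my own illustration in its basic form, not the routine I actually use; the test series are chosen only because their values are known): the Euler transform rewrites an alternating series sum_k (-1)^k a_k as sum_n (-1)^n (Delta^n a_0) / 2^(n+1), with Delta the forward difference, and this sums even the divergent example 1 - 2 + 4 - 8 + ... to 1/3.

from fractions import Fraction

def euler_sum(a):
    """Euler transform of the alternating series sum_k (-1)^k a[k],
    using the finite list of coefficients a[0..N]."""
    diffs = list(a)
    total = Fraction(0)
    for m in range(len(a)):
        # m-th forward difference at 0, weighted by (-1)^m / 2^(m+1)
        total += Fraction((-1) ** m, 2 ** (m + 1)) * diffs[0]
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

# divergent alternating geometric series 1 - 2 + 4 - 8 + ...  -> 1/3
print(float(euler_sum([Fraction(2) ** k for k in range(20)])))
# slowly convergent series 1 - 1/2 + 1/3 - 1/4 + ...  -> ln 2 = 0.6931...
print(float(euler_sum([Fraction(1, k + 1) for k in range(20)])))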

Quote:
Quote:But in my view, this was always only a rough approximation, whose main defect is that we "don't know about the quality of approximation".

If the approximation converges, it is neither rough nor fine (of course this is still not verified, if I see it correctly; however, the convergence of Andrew's method is also not yet verified, but we can work with it by supposing it, and so we can do here).

Hmm, here my impressions from the numerical experiments are disappointing. With b outside the easy range we get the finite image of often slowly convergent series, not to mention the alternating divergent ones, which may however be summed if enough terms are available. But the numerical eigensystem solvers are sharply restricted with regard to the number of terms - that is the worst problem.

Quote:
Quote:However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e) the set of eigenvalues has partially erratic behaviour, which makes it risky to base assumptions about the final values for the tetration-operation T on them.

For instance, I traced the eigenvalues for the same base parameter, but increasing size of truncation, to see whether we find some rules which could be exploited for extrapolation.

But that's exactly the beauty and the potential (for investigation) of the method: you cannot (yet) see a certain structure in the eigenvalues, but somehow it provides a solution (even a real one) despite that.

Since you refer to "beauty", I must second that. I find it really nice :-) and a convincing and very attractive aspect for the novice who starts exploring matrix-based tetration. A small sketch of the kind of eigenvalue trace mentioned above follows below.
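The trace described above is easy to reproduce; a minimal sketch (again my own illustration, using the same plain Carleman convention as in the first sketch, with base and truncation sizes chosen arbitrarily): print the leading eigenvalues of trunc(B) for growing truncation size, once for an easy base and once for a base outside the range, to watch how regularly or irregularly they settle.

import numpy as np
from math import log, factorial

def carleman(b, n):
    """Truncated n x n Carleman matrix of b^x: B[r,c] = (c*ln b)^r / r!."""
    lb = log(b)
    return np.array([[(c * lb) ** r / factorial(r) for c in range(n)]
                     for r in range(n)])

for b in (2 ** 0.5, 2.0):      # inside resp. outside the easy range
    print("base", b)
    for n in (8, 12, 16, 20, 24):
        ev = np.sort(np.abs(np.linalg.eigvals(carleman(b, n))))[::-1]
        print(n, [round(float(x), 4) for x in ev[:4]])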


Quote:
Quote:Thus the need for a solution of the exact eigensystem (infinite size) occurs. If we find one, then again the truncations lead to approximations - but we are in a position to make statements about the bounds of error etc.
If I have a solution to the infinite system, why would I bother myself with truncated matrices?

Have you ever computed an infinite series which does not have a known closed form? To have "a solution" means only that we have the precept for how to construct the terms of the series. We then have to get a sense of the convergence/divergence characteristics and, based on this, may apply a procedure for accelerating convergence which is appropriate to the finite number of terms that we have.
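To illustrate what I mean by such a procedure, here is another minimal sketch (again my own illustration, this time using the iterated Shanks/Aitken transformation rather than Euler summation, and the Leibniz series for pi only because the target value is checkable): given nothing but the precept for the terms, one forms the partial sums and then accelerates them.

from math import pi

def shanks(s):
    """One Shanks transformation of a list of partial sums."""
    return [(s[i + 1] * s[i - 1] - s[i] ** 2)
            / (s[i + 1] + s[i - 1] - 2 * s[i])
            for i in range(1, len(s) - 1)]

terms = [4 * (-1) ** k / (2 * k + 1) for k in range(25)]   # Leibniz series for pi
partial = [sum(terms[:i + 1]) for i in range(len(terms))]

acc = partial
for _ in range(3):            # iterate the transformation three times
    acc = shanks(acc)

print(abs(partial[-1] - pi))  # raw partial sum: error of order 1e-2
print(abs(acc[-1] - pi))      # accelerated value: typically many digits better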

Quote:
Quote: B * W(B)[col] = d[col] * W(B)[col]
You probably mean
B * W(B)[col] = W(B)[col] * d[col]
well d[col] is a scalar here (the col'th eigenvalue)

Quote:
but that's just the matrix version of the Schroeder equation sigma(b^x) = lambda * sigma(x).
If you find a solution to the Schroeder equation, you make the Carleman/column matrix out of it and you have a solution of this infinite matrix equation; and vice versa, if you have a solution W(B) to the above matrix equation which is the Carleman/column matrix of a power series, then this power series is a solution to the Schroeder equation. Nothing new is gained by this consideration.
Hmm, so it is the implementation of the Schroeder equation. And the "practical" approach via the eigensystem appears to be a good didactical tool for introducing the novice to the Schroeder concept, with the final step of constructing it for the infinite case. This would be worth a chapter in an introductory book, I think.

For me, what is new with it is that I now have a description of the terms of the required series (I never could derive them from the articles I found) and am able to actually deal with the needed series. (But I still have the problem that the fractional power for non-easy bases does not work properly this way - perhaps the bug in my definition of the entries of the eigenmatrices is small.)
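For the record, the correspondence described above can be written out in a few lines; a sketch in notation I choose here (not necessarily the exact normalization used above), with V(x)^T the row vector (1, x, x^2, ...), B and W the Carleman matrices of b^x and of a Schroeder function sigma, and D = diag(1, lambda, lambda^2, ...):

\[
  V(x)^T B = V(b^x)^T, \qquad V(x)^T W = V(\sigma(x))^T .
\]
If $\sigma$ satisfies the Schroeder equation $\sigma(b^x) = \lambda\,\sigma(x)$, then
\[
  V(x)^T B W = V(b^x)^T W = V(\sigma(b^x))^T
             = V(\lambda\,\sigma(x))^T = V(\sigma(x))^T D = V(x)^T W D ,
\]
and comparing coefficients gives $B\,W = W\,D$: the columns of $W$ are eigenvectors of $B$ with eigenvalues $\lambda^k$, and conversely.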


Also, if the concept of the infinite eigenmatrix holds in such a way that I can apply all matrix relations analytically, in algebraic equations, then this would also back my conjectures concerning tetration series - the various types of infinite series of tetration/powertower terms which I presented here. I have never seen a discussion of such series so far. Well, my conjectures were formulated for integer bases and/or integer heights, and for this the B-matrix suffices in principle. But to settle this for general heights (my second conjecture, about the geometric-series analogue), I think an exact infinite eigensystem description would be required.

Gottfried
Gottfried Helms, Kassel