Matrix-method: compare use of different fixpoints
#11
bo198214 Wrote:But this is then another method. The matrix operator method is to use truncated matrices with unique eigensystem decomposition, and hence with a unique limit. That was the absolutely good thing about this method, as we don't have to take fixed points (with the attached question of which one to choose) into consideration.

For the infinite matrices there is not even only one solution for each fixed point; all the other possible solutions come into play as well, i.e. the non-regular Schroeder functions. So how should this help? If we know there are some solutions out there which we also want to have, should we then modify our method (that was designed to be unique) to include the other solutions as well???
Hmm, I still don't see the problem here - maybe simply due to the limitations of my horizon of knowledge.
First - I have no problem if it turns out that each entry of Bs can be computed in two or more different ways: this wouldn't affect the validity of the use of Bs for the computation of tetration. We would still have the same coefficients for our power series.

Second - If two vectors satisfy the equations

V(y0)~ * Bs = 1 * V(y0)~
V(y1)~ * Bs = 1 * V(y1)~

then both vectors satisfy the defining property of an eigenvector (see note 1). For the tetration operator we know (simply by the occurrence of multiple fixpoints) that even multiple solutions for this must exist.
Now they have the same eigenvalue - but that eigenvalue occurs only once in the set of eigenvalues: so the special form of Bs suggests that uniqueness is *not* given (in this sense - without affecting Bs itself) by the very problem of tetration and its operator (or: by its power series).
The matrix-method reflects this ambiguity perfectly - so this is even an argument *pro* the appropriateness of the method.

Third - this ambiguity, without consequences in the simple case (the power of Bs is 1, so the above seems valid to define the same matrix Bs^1, expressed by identity of their entries), may introduce consequences for the question of powers of Bs. But for integer powers we may insert our truncated eigensystems, and thus use approximated and truncated Bs, and still get identical matrices for powers of Bs (leaving the unresolved "Bummer"-caveats aside for the moment; I've done a few numerical examples).

Fourth - even if the matrices for integer powers should indeed be identical, this does not mean they are identical for fractional powers. This is one of my current investigations; in fact, for fractional powers my method of constructing the eigenmatrices based on different values for u (using the different branches) gives strange results if I do not use the principal branches. So, if there should be no error in my composer-procedures, then we would indeed have different versions of Bs^h (where h is non-integer), and we would have a problem.
But since the construction of the eigenmatrices is complicated, I assume an error of the type:
t^(1/t) =/= exp(log(t)/t) if for log(t) a non-principal branch is required, while the model for the composition was developed based on the assumption of the principal branch of log(t) only. I did not consider this problem at all when I developed the eigensystem-constructor, so by a careful reconsideration I may resolve this problem. Or not. If it persists - only then does the/my matrix-method have a severe fundamental shortcoming. It may still be valid if only the first fixpoint is used, but its aesthetics would be spoiled (at least in my view).

----

(1) more precisely of Bs~, but let us leave this aside here
Gottfried Helms, Kassel
#12
Seems we have a basic misunderstanding here, though I don't know where yet. Ok, let me nail down the facts:

We have something that is called the matrix operator method: it truncates Bs (the Carleman matrix of x -> b^x) to size n,
then decomposes it uniquely via eigenvalues, Bs_n = W_n * D_n * W_n^-1. And then defines Bs_n^h = W_n * D_n^h * W_n^-1. And we get the coefficients of the h'th iterate of b^x from the first column of Bs_n^h.
I want your acknowledgement on this.
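For concreteness, this recipe can be sketched numerically (Python/numpy; the base b = sqrt(2), size n = 16 and height h = 0.5 are illustrative choices, not values fixed in this thread; the column index 1 is the column that carries b^x itself under the convention V(x)~ * Bs = V(b^x)~):

import numpy as np
from math import log, factorial

b, n, h = 2**0.5, 16, 0.5                 # base, truncation size, fractional height
# truncated operator Bs with V(x)~ * Bs = V(b^x)~, i.e. Bs[r,c] = (c*log(b))^r / r!
Bs = np.array([[(c*log(b))**r / factorial(r) for c in range(n)] for r in range(n)])
d, W = np.linalg.eig(Bs)                  # Bs = W * diag(d) * W^-1, unique for this truncation
Bs_h = W @ np.diag(d.astype(complex)**h) @ np.linalg.inv(W)
coeffs = np.real(Bs_h[:, 1])              # power series of the h'th iterate of b^x
x = 1.0
print(sum(cr * x**r for r, cr in enumerate(coeffs)))   # approx. half-iterate of b^x at x = 1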

If we leave aside the finite truncation, then the matrices are no longer of much help, as they are simply another way of expressing the Schroeder method:

f^(t)(x) = sigma^-1(c^t * sigma(x))

where sigma is the Schroeder function (corresponding to the matrix W), which merely must satisfy sigma(f(x)) = c * sigma(x) (which is our disguised Schroeder equation) and which is usually applied to a function f with fixed point at 0, with c = f'(0).

The problem with this, or with the directly related Abel equation, is the non-uniqueness. One can force uniqueness by considering the (analytic) development at a fixed point; this coincides with the regular iteration.
Outside a fixed point we can arbitrarily (by those sine modifications) deform solutions to get new solutions.
And the additional problem exists that regular solutions at different fixed points usually don't coincide.

These problems come with the infinite number of coefficients and are carried over 1-1 if we express the problem with matrices instead of analytic functions. So I cannot see what we can get from matrices that we do not already know about the analytic functions. But perhaps you have to explain in more detail what you mean by
Quote:based on an analytical description of each entry, then I actually work with finite truncations of an assumed infinite matrix, which may provide multiple solutions for the same composed theoretical result matrix.

And we have to think about a name for this additional method.
#13
bo198214 Wrote:Seems we have a basic misunderstanding here, though I don't know where yet. Ok, let me nail down the facts:

We have something that is called the matrix operator method: it truncates Bs (the Carleman matrix of x -> b^x) to size n,
then decomposes it uniquely via eigenvalues, Bs_n = W_n * D_n * W_n^-1. And then defines Bs_n^h = W_n * D_n^h * W_n^-1. And we get the coefficients of the h'th iterate of b^x from the first column of Bs_n^h.
I want your acknowledgement on this.


Well, you say it in the next sentence: leave truncation aside. In all my considerations truncation only ever meant the needs of practical implementation, and thus assumes approximative results (however good and extrapolatable toward the infinitely-thought series). If I want to employ the exponential series to compute e^x, I need the notion of infinitely many terms - and in turn, if expressed as a vector/vector- or vector/matrix-multiplication, one of inherently infinite size. I never thought that a/the matrix-method could reduce the problem to one which requires only finitely truncated series.

Quote:
But perhaps you have to explain in more detail what you mean by
Quote:based on an analytical description of each entry, then I actually work with finite truncations of an assumed infinite matrix, which may provide multiple solutions for the same composed theoretical result matrix.

I observed a certain structure which the eigenvalues of the most stable matrices Bs showed. From there I took my first hypothesis about the structure of the set of eigenvalues: they form the consecutive powers of u = log(t) of the first fixpoint t (base b = t^(1/t)) (the notion of "fixpoint" was unfamiliar to me then, and I never used it in the beginning).

Second, I observed other structures in the empirical eigenmatrices, which stabilized with higher dimensions and suggested some basic assumptions about the composition of some entries, for instance in the first two rows. So I had a hypothesis about an analytical description of the form of each entry wi(r,c) in WI = W^-1 for the first two rows, say

wi(0,c) = t^c
wi(1,c) = binomial(c,1)*t^c

I could then verify this as a possibility by checking

WI[0,] * Bs = u^0 * WI[0,]
and
WI[1,] * Bs = u^1 * WI[1,]

by simple elementary consideration of the implicit exponential series.
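A quick numeric check of these two rows (Python/numpy; the parameters t = 2, u = log 2, i.e. b = sqrt(2), the truncation at 64 terms and the comparison of only the first 20 columns are illustrative choices, not values from the thread):

import numpy as np
from math import log, factorial

t = 2.0; u = log(t); b = t**(1/t)     # base b = t^(1/t) = sqrt(2), u = log(t)
n = 64                                # truncation; rows only approximate the infinite sums
Bs = np.array([[(c*log(b))**r / factorial(r) for c in range(n)] for r in range(n)])
wi0 = np.array([t**c for c in range(n)])        # hypothesized row 0: wi(0,c) = t^c
wi1 = np.array([c * t**c for c in range(n)])    # hypothesized row 1: wi(1,c) = c*t^c
print(np.allclose((wi0 @ Bs)[:20], (u**0 * wi0)[:20]))   # True up to truncation error
print(np.allclose((wi1 @ Bs)[:20], (u**1 * wi1)[:20]))   # True up to truncation error

The elementary consideration behind it: the r'th term of the column sum for row 0 is t^r * (c*log(b))^r / r! = (c*u)^r / r!, since log(b) = u/t, so each column sums to exp(c*u) = t^c.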

Then the next empirical rows in WI seemed to approximate (and to stabilize with higher dimension toward) some composition of binomially weighted power series coefficients. By considering some possibilities for the entries wi(2,c), it seemed they were binomially weighted compositions like wi(0,c) and wi(1,c), though a bit more complicated.

In short, with some more hypotheses and analytical considerations I finally got an idea of how I could find such compositions for all subsequent rows. This then needed an eigensystem-related computation, but only of a triangular system, which can be solved using finitely many terms for each wi(row,col). It occurred that WI was composed, by a matrix multiplication, of some XI with P~, where P is constant (the Pascal matrix) and XI is triangular.

Finally I had a process which can compute the composition of each entry xi(r,c) in XI as a finite two-variable polynomial in the formal parameters u and t, where the order of t is the same as the row index and the order of u is binomial(rowindex,2) (or the opposite; I don't have it at hand). Thus I have a description of XI, independent of the size of the matrix/its truncation parameter, which together with P~ gives a matrix W^-1 which satisfies the properties of an eigenmatrix for Bs, together with a diagonal D containing the consecutive powers of u - and this for the assumption of infinite size.

Since XI = X^-1 is triangular, each entry in X can also be computed exactly from this, and I have implicitly the formal description of the polynomials in their parameters (t,u) of the triangular core X*D*X^-1, and could store these polynomial descriptions for each term in a database... I'm working on this a bit from time to time, but the polynomials (and thus the memory for their representation) of the individual terms grow quadratically with their row index, so actually I still do the whole computation of terms on the fly with the current numerical parameters.

That means: if I speak about the eigensystem W, D, W^-1, I do not speak about the eigensystem of a truncated matrix (as retrieved by a numerical eigensystem-solver) but about the entries of an infinite eigenmatrix, whose infinite extension expresses the required power series for the tetration operation (and whose truncation is then comparable with the truncation we use if we compute, say, exp(x)), and about their formal description in terms of the parameters t and u.

Quote:And we have to think about a name for this additional method.

Ok, if it introduces non-uniqueness now, and it seems to be a rather private one, say "my matrix method", what do you think of the "my-oh-my" (or mei-o-mei) method? ;-)

Gottfried
Gottfried Helms, Kassel
#14
Gottfried Wrote:
bo198214 Wrote:I want your acknowledgement on this.

Well, you say it in the next sentence: leave truncation aside. In all my considerations ...
Is this a confirmation?

Gottfried, though I am not 100% clear about your my-oh-my method, it seems as if it does nothing more than find the regular iteration at a fixed point; we already discussed something similar here.

The regular iteration of a function f with fixed point at 0 is given by
f^(t)(x) = sigma^-1(c^t * sigma(x))
with c = f'(0), and sigma being the principal Schroeder function.
If the fixed point is not at 0 but at a, then g(x) = f(x+a) - a is a function with fixed point at 0 and f^(t)(x) = g^(t)(x-a) + a, where c = g'(0) = f'(a). Putting this into the first equation we get:
f^(t)(x) = sigma^-1(c^t * sigma(x-a)) + a.
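As a minimal numeric illustration of this formula (Python; the base b = sqrt(2) with its lower fixed point a = 2, multiplier c = ln 2, and the depth of 60 iterations are illustrative assumptions, not values from the thread), computing sigma and its inverse by the usual limits at an attracting fixed point:

import math

b = 2**0.5
a = 2.0                          # lower fixed point of f(x) = b^x: b^2 = 2
c = math.log(b) * b**a           # multiplier c = f'(a) = ln 2, |c| < 1 (attracting)

def g(z):    return b**(z + a) - a             # conjugate of f with fixed point at 0
def ginv(z): return math.log(z + a, b) - a     # inverse of g

def sigma(z, m=60):              # principal Schroeder function: lim c^-m * g^m(z)
    for _ in range(m): z = g(z)
    return z / c**m

def sigma_inv(z, m=60):          # inverse Schroeder function: lim g^-m(c^m * z)
    z *= c**m
    for _ in range(m): z = ginv(z)
    return z

def f_iter(x, t):                # regular iteration f^(t)(x) at the fixed point a
    return sigma_inv(c**t * sigma(x - a)) + a

half = f_iter(1.0, 0.5)
print(half, f_iter(half, 0.5))   # second value should be close to b^1 = sqrt(2)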

But if we translate this into matrix notation, by simply replacing function composition by matrix multiplication and replacing each function by the corresponding Bell matrix (which is the transposed Carleman matrix), then we see a diagonalization of B, because the Bell matrix (and also the Carleman matrix) of the linear map x -> c*x is just your diagonal matrix D!
So it is nothing new that we can diagonalize the untruncated B for each fixed point with the diagonal matrix D. As a result we only get the plain old regular iteration at a fixed point.
#15
bo198214 Wrote:
Gottfried Wrote:
bo198214 Wrote:I want your acknowledgement on this.

Well, you say it in the next sentence: leave truncation aside. In all my considerations ...
Is this a confirmation?
I can only restate: in no situation did I assume the matrices as truncated by principle - all my considerations take the unavoidable truncations in praxi as giving approximations, in numerical evaluation, as placeholders for the basically infinite matrices. I never discussed a finite (truncated) matrix as being more than such an approximation for the determination of any intermediate result.
For instance: if the Carleman matrix is thought of as being of finite size, then I don't see any relation between Carleman matrices and any of my matrices.
I don't know whether this satisfies your request (if not, then please explain further what the core of your question is; I must then be unable to understand the relevant implications correctly).

Quote:Gottfried, though I am not 100% clear about your my-oh-my method, it seems as if it does nothing more than find the regular iteration at a fixed point; we already discussed something similar here.

That may all be - and whether it is "nothing more" or not: if it is the regular iteration, then fine; if not, then fine again.
Quote:The regular iteration of a function f with fixed point at 0 is given by
f^(t)(x) = sigma^-1(c^t * sigma(x))
with c = f'(0), and sigma being the principal Schroeder function.
If the fixed point is not at 0 but at a, then g(x) = f(x+a) - a is a function with fixed point at 0 and f^(t)(x) = g^(t)(x-a) + a, where c = g'(0) = f'(a). Putting this into the first equation we get:
f^(t)(x) = sigma^-1(c^t * sigma(x-a)) + a.

But if we translate this into matrix notation, by simply replacing function composition by matrix multiplication and replacing each function by the corresponding Bell matrix (which is the transposed Carleman matrix), then we see a diagonalization of B, because the Bell matrix (and also the Carleman matrix) of the linear map x -> c*x is just your diagonal matrix D!
Yes, this seems so - but did we not already state the identity of the matrix B (or Bs) with the Bell/Carleman transposes? I thought that this had settled the question already? I was very happy when you pointed out the relation in one of your previous posts - I couldn't have done it myself, due to my lack of understanding of those concepts (described in elaborate articles, in more detail than I could follow).

Quote:So it is nothing new that we can diagonalize the untruncated B for each fixed point with the diagonal matrix D. As a result we only get the plain old regular iteration at a fixed point.

Hmm, for me this is an *achievement*: my bottom-up approach, just out of the sandbox, is thereby decoded into the terminology of the "Schroeder equation" and "regular iteration" - so: good!

If from there some shortcomings of the method are already *known* then I would like to know them, too.

Hmm, I've no more idea at this moment. I'll reread your post later this evening, perhaps I'm missing some point.

Gottfried
Gottfried Helms, Kassel
#16
Gottfried Wrote:I can only restate: in no situation did I assume the matrices as truncated by principle - all my considerations take the unavoidable truncations in praxi as giving approximations, in numerical evaluation, as placeholders for the basically infinite matrices. I never discussed a finite (truncated) matrix as being more than such an approximation for the determination of any intermediate result.
I don't understand this attitude (whatever you mean by "truncated by principle"). The matrix operator method is based on the unique diagonalization of *finite* matrices (in the same way as Andrew's method is based on the solution of *finite* equation systems). Of course they are used to (hopefully) approximate the diagonalization of the infinite matrix, which then facilitates the iteration of the function. But the essential idea is that we cannot handle the infinite case (continuum-many solutions), yet we can uniquely handle the finite approximating cases and in this way get to a unique solution (again as with Andrew's solution).

The interesting thing (for me) about the matrix operator method was that it chooses one solution for developments at non-fixed points, out of the infinitely many possible solutions.
I mean, hey, that is about all our discussion of tetration is: choosing one "best" solution. Who needs a method that provides all solutions?

Quote:For instance: if the Carleman matrix is thought of as being of finite size, then I don't see any relation between Carleman matrices and any of my matrices.
Don't know what you mean by this. The Carleman matrix is of infinite size but approximated by finite truncations. And sometimes "Carleman matrix" perhaps also refers to the truncated matrices.


Gottfried Wrote:
bo198214 Wrote:But if we translate this into matrix notation, by simply replacing function composition by matrix multiplication and replacing each function by the corresponding Bell matrix (which is the transposed Carleman matrix), then we see a diagonalization of B, because the Bell matrix (and also the Carleman matrix) of the linear map x -> c*x is just your diagonal matrix D!
Yes, this seems so - but did we not already state the identity of the matrix B (or Bs) with the Bell/Carleman transposes? I thought that this had settled the question already? I was very happy when you pointed out the relation in one of your previous posts - I couldn't have done it myself, due to my lack of understanding of those concepts (described in elaborate articles, in more detail than I could follow).
Yes, b^x translates into B, and the shift x -> x+a translates into something similar to the lower Pascal matrix. You can however put everything into the transposed view (as you use it) if you swap the order of the operands of the matrix multiplication. But for the diagonalization question this does not matter.

Quote:If from there some shortcomings of the method are already *known* then I would like to know them, too.
The shortcoming is that it only works at fixed points, and it is different for different fixed points, so nobody knows which is the "best" fixed point to choose. Also, for b > e^(1/e) there are only complex fixed points, and the regular iteration at complex fixed points yields non-real values for real arguments, which is not desirable. (The matrix operator method does not have this deficit.)
#17
Hi Henryk - I needed some days for an answer, please excuse that. I had some difficulties concentrating on the subject, but now, here it goes...


bo198214 Wrote:I don't understand this attitude (whatever you mean by "truncated by principle").

Hmm, let me recall the question:
bo198214 Wrote:We have something that is called the matrix operator method: it truncates Bs (the Carleman matrix of x -> b^x) to size n
Since you put the focus on this, I assumed that this is some principal aspect of the approach...
Quote:then decomposes it uniquely via eigenvalues, Bs_n = W_n * D_n * W_n^-1

... and I assume now, this is the difference.

I would call this the "practical approach"; it is useful for first approximations and gives good results for a certain range of the parameters b, x and h (base, top-exponent and height).
One can approximate powers and fractional powers, and we could see that even with low dimensions fractional powers could be iterated; they even provide very good results when multiples of integer powers were approached. I computed a lot of well-approximated examples to support the general idea of an eigensystem decomposition via these practical truncations.

But in my view this was always only a rough approximation, whose main defect is that we "don't know about the quality of the approximation".
Indeed, the base matrix Bb is a truncation of an exact, ideally infinite matrix, so its actual entries are not affected by the size of the truncation, and it is thus the best starting point.

Numerical eigensystem solutions for these truncated matrices satisfy, for instance, W(truncation)*W(truncation)^-1 = I, a perfect identity matrix, and give good approximations for some powers.

So these good properties suggest using the eigensystem based on the truncated Bb, and giving it the status of a method of its own.

However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e) the set of eigenvalues has partially erratic behaviour, which makes it risky to base assumptions about the final values for the tetration-operation T on them.

For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we can find some rules which could be exploited for extrapolation. See "Eigenvalues b=0.5" or "Eigenvalues t=1.7", or the page "Graph..Eigenvalue e^(1/e)", for instance.

Thus the need for a solution of the exact eigensystem (of infinite size) arises. If we find one, then the truncations again lead to approximations - but we are then in a position to make statements about the bounds of the error, etc.
This means that, using W(B) as the eigenmatrix/set of eigenvectors of B, we do not deal with W(truncated(B)) but with truncated(W(B)).
My goal with my matrix-method is to have an infinite matrix W(B) whose columns W(B)[col] satisfy, together with the eigenvalue d[col],

B * W(B)[col] = d[col] * W(B)[col]

or

W(B)^-1[row] * B = d[row] * W(B)^-1[row]

seen as an identity in the infinite case, and as imprecise for the finite truncation. If a row in W(B)^-1 is also of the form of a power series, then this furthermore coincides with the concept of fixpoints (by the defining properties of an eigenvector); it "marries" these two concepts and describes a common framework for them.


Quote:And then defines Bs_n^h = W_n * D_n^h * W_n^-1. And we get the coefficients of the h'th iterate of b^x from the first column of Bs_n^h.
I want your acknowledgement on this.


If my previous comments indeed match your point, then I can take a position in a possible dissent/consensus.
Gottfried Helms, Kassel
#18
Gottfried Wrote:Since you put the focus on this, I assumed that this is some principal aspect of the approach...
Quote:then decomposes it uniquely via eigenvalues, Bs_n = W_n * D_n * W_n^-1

... and I assume now, this is the difference.

I would call this the "practical approach";
Well, I don't want to dive into the philosophical difference between a practical and a theoretical approach. What counts for me is that it is a clear definition, and that the result of this approach is different from those of the other approaches (as far as I can see).

Your other, directly infinite approach looks quite as if it were the fixed point approach merely written with matrices. So I wouldn't call it a different approach (compared to the fixed point method).

One more interesting fact about the (truncated) matrix operator method is that for b < e^(1/e) it even chooses a fixed point! The eigenvalues of the truncated B converge to the powers of the derivative at the *lower* fixed point! While for b > e^(1/e) they do not converge to the powers of the derivative at any (complex) fixed point of b^x. This is striking! Some kind of own intelligence ;-)
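A small numeric illustration of this convergence (Python/numpy; the base b = sqrt(2), whose lower fixed point is t = 2 with derivative u = ln 2 there, and the truncation sizes are illustrative choices, not values from the thread):

import numpy as np
from math import log, factorial

b, t = 2**0.5, 2.0
u = log(b) * b**t                    # derivative of b^x at the lower fixed point: ln 2
for n in (8, 16, 32):
    B = np.array([[(c*log(b))**r / factorial(r) for c in range(n)] for r in range(n)])
    ev = np.sort(np.abs(np.linalg.eigvals(B)))[::-1]
    print(n, ev[:4])                 # compare with 1, u, u^2, u^3 = 1, 0.693, 0.480, 0.333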

Quote:it is useful for first approximations and gives good results for a certain range of the parameters b, x and h (base, top-exponent and height)
What is a bad result in this context?


Quote:But in my view this was always only a rough approximation, whose main defect is that we "don't know about the quality of the approximation".

If the approximation converges, it is neither rough nor fine (of course this is still not verified, if I see it correctly; however, the convergence of Andrew's method is also not yet verified, and still we can work with it by supposing it - so we can do the same here).

Quote:However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e) the set of eigenvalues has partially erratic behaviour, which makes it risky to base assumptions about the final values for the tetration-operation T on them.

For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we can find some rules which could be exploited for extrapolation.

But that's exactly the beauty and the potential (for investigation) of the method: one cannot (yet) see a certain structure in the eigenvalues, but somehow it provides an (even real) solution nonetheless.

Quote:Thus the need for a solution of the exact eigensystem (of infinite size) arises. If we find one, then the truncations again lead to approximations - but we are then in a position to make statements about the bounds of the error, etc.
If I have a solution to the infinite system, why would I bother myself with truncated matrices?

Quote: B * W(B)[col] = d[col] * W(B)[col]
You probably mean
B * W(B)[col] = W(B)[col] * d[col]
but that's just the matrix version of the Schroeder equation sigma(b^x) = u * sigma(x).
If you find a solution to the Schroeder equation, you make the Carleman/column matrix out of it and you have a solution of this infinite matrix equation; and vice versa, if you have a solution W(B) of the above matrix equation that is the Carleman/column matrix of a power series, then this power series is a solution of the Schroeder equation. Nothing new is gained by this consideration.
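This correspondence is easy to check numerically. A sketch (Python/numpy; my own illustrative assumptions: the function g(x) = b^(x+a) - a developed at its fixed point 0, with b = sqrt(2), a = 2, and 12 series terms) that solves the Schroeder equation term by term and then verifies that the matrix S built from the powers of sigma diagonalizes the (here exactly triangular) truncated matrix of g:

import numpy as np
from math import log, factorial

b, a, N = 2**0.5, 2.0, 12
c = log(b) * b**a                          # multiplier c = g'(0) = ln 2
g = np.array([b**a * log(b)**k / factorial(k) for k in range(N)])
g[0] = 0.0                                 # g(x) = b^(x+a) - a has g(0) = 0

def powers(f):                             # rows: coefficients of f^0, f^1, ..., f^(N-1)
    p = np.zeros((N, N)); p[0, 0] = 1.0
    for j in range(1, N):
        p[j] = np.convolve(p[j-1], f)[:N]
    return p

gp = powers(g)
s = np.zeros(N); s[1] = 1.0                # principal Schroeder series: sigma(x) = x + ...
for k in range(2, N):                      # sigma(g(x)) = c*sigma(x), solved term by term
    s[k] = sum(s[j] * gp[j][k] for j in range(1, k)) / (c - c**k)

S  = powers(s).T                           # S[r,j]  = coefficient of x^r in sigma(x)^j
Bg = gp.T                                  # Bg[r,j] = coefficient of x^r in g(x)^j
D  = np.diag(c ** np.arange(N))
print(np.allclose(Bg @ S, S @ D))          # True: B*W = W*D, the matrix Schroeder equation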
#19
bo198214 Wrote:Your other, directly infinite approach looks quite as if it were the fixed point approach merely written with matrices. So I wouldn't call it a different approach (compared to the fixed point method).
Yes, since I understood this relation, I started using the term "fixpoint" and had my own approach to its understanding. It was somehow a surprise... ;-)

Quote:One more interesting fact about the (truncated) matrix operator method is that for b < e^(1/e) it even chooses a fixed point! The eigenvalues of the truncated B converge to the powers of the derivative at the *lower* fixed point! While for b > e^(1/e) they do not converge to the powers of the derivative at any (complex) fixed point of b^x. This is striking! Some kind of own intelligence ;-)
True. I love procedures which have their own intelligence (maybe more than oneself >;-) )

Quote:
Quote:it is useful for first approximations and gives good results for a certain range of the parameters b, x and h (base, top-exponent and height)
What is a bad result in this context?
As I said above: that the degree of approximation is unknown (due to the intractable numerical structure of the actual computation of the eigensystem), and that a solution for a not-so-easy b produces nearly arbitrarily erratic eigenvalues - just from the need to have a matrix W(truncated(B)) which fits the requirements of a perfect match W*W^-1 = I and trunc(B)*W[col] = W[col]*d[col] (where d[col] is a scalar here).

What I did not mention, but was always my greatest problem, is the difficulty of even finding an eigensystem if the size of B is >32 (using Pari/GP), or somewhat >128 using Maple (or Matlab - whatever some correspondents in de.sci.mathematik used when they computed one example for me). And for the purpose of a practical implementation of a tetration module, I doubt that it is the best method to actually solve numerically for an eigensystem for each possibly supplied parameter.

If, on the other hand, I have the analytical description of the terms of the required eigenmatrices ready, depending on the parameters t and u, it would be a much more straightforward method to provide the terms of the series from their finite polynomial description, determine their convergence/divergence characteristics, and apply an appropriate method for accelerating convergence, for instance an Euler summation of appropriate order.
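For illustration, a tiny sketch of Euler summation (Python/numpy; the series 1 - 2 + 3 - 4 + ..., with Euler sum 1/4, is a textbook example chosen here for brevity, not one of the tetration series in question):

import numpy as np

a = np.arange(1.0, 21.0)            # terms a_n = n+1 of the alternating sum (-1)^n * a_n
diffs, total = a.copy(), 0.0
for k in range(len(a)):
    total += diffs[0] / 2**(k + 1)  # Euler transform: sum_k (difference^k a)_0 / 2^(k+1)
    diffs = diffs[:-1] - diffs[1:]  # next order of (backward-signed) differences
print(total)                        # 0.25, the Euler sum of the divergent series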

Quote:
Quote:But in my view this was always only a rough approximation, whose main defect is that we "don't know about the quality of the approximation".

If the approximation converges, it is neither rough nor fine (of course this is still not verified, if I see it correctly; however, the convergence of Andrew's method is also not yet verified, and still we can work with it by supposing it - so we can do the same here).

Hmm, here my impressions from the numerical experiments are disappointing. With b outside the easy range we get the finite image of often slowly convergent series, not to mention the alternating divergent ones - which, however, may be summed if enough terms are available. But the numerical eigensystem-solvers are sharply restricted in the number of terms - that's the worst problem.

Quote:
Quote:However, the structure of the set of eigenvalues is not as straightforward as one would hope. Especially for bases outside the range e^(-e) < b < e^(1/e) the set of eigenvalues has partially erratic behaviour, which makes it risky to base assumptions about the final values for the tetration-operation T on them.

For instance, I traced the eigenvalues for the same base parameter but increasing size of truncation, to see whether we can find some rules which could be exploited for extrapolation.

But that's exactly the beauty and the potential (for investigation) of the method: one cannot (yet) see a certain structure in the eigenvalues, but somehow it provides an (even real) solution nonetheless.

As you refer to "beauty", I must second that. I find that really nice :-) - a convincing and very attractive aspect for the novice who starts exploring matrix-based tetration.


Quote:
Quote:Thus the need for a solution of the exact eigensystem (of infinite size) arises. If we find one, then the truncations again lead to approximations - but we are then in a position to make statements about the bounds of the error, etc.
If I have a solution to the infinite system, why would I bother myself with truncated matrices?

Have you ever computed an infinite series which does not have a known closed form? To have "a solution" means only: we have the recipe for constructing the terms of the series. We then have to get a sense of the convergence/divergence characteristics, and based on this we may apply a procedure for accelerating convergence appropriate to the finite number of terms which we have.

Quote:
Quote: B * W(B)[col] = d[col] * W(B)[col]
You probably mean
B * W(B)[col] = W(B)[col] * d[col]
Well, d[col] is a scalar here (the col'th eigenvalue).

Quote:
but that's just the matrix version of the Schroeder equation sigma(b^x) = u * sigma(x).
If you find a solution to the Schroeder equation, you make the Carleman/column matrix out of it and you have a solution of this infinite matrix equation; and vice versa, if you have a solution W(B) of the above matrix equation that is the Carleman/column matrix of a power series, then this power series is a solution of the Schroeder equation. Nothing new is gained by this consideration.
Hmm, so it is the implementation of the Schroeder equation. And the "practical" approach via the eigensystem then appears as a good didactical tool to introduce the novice to the Schroeder concept, with the final step of constructing it for the infinite case. This would then be worth a chapter in an introductory book, I think.

For me, what is new with it is that I have the description of the terms of the required series (I could never derive them from the articles I found) and am able to actually deal with the needed series. (But I still have the problem that the fractional power for non-easy bases does not work properly this way - perhaps the bug in my definition of the entries of the eigenmatrices is small.)


Also, if the concept of the infinite eigenmatrix holds in such a way that I can apply all matrix relations analytically, in algebraic equations, then this would also back my conjectures concerning tetration-series - the various types of infinite series of tetration/powertower terms which I presented here. I have not seen a discussion of such series so far. Well, my conjectures were formulated for integer bases and/or integer heights, and for this the B-matrix suffices in principle. But to settle this for general heights (my second conjecture, about the geometric-series analogue), I think an exact infinite eigensystem description would be required.

Gottfried
Gottfried Helms, Kassel
#20
bo198214 Wrote:But if we translate this into matrix notation, by simply replacing function composition by matrix multiplication and replacing each function by the corresponding Bell matrix (which is the transposed Carleman matrix), then we see a diagonalization of B, because the Bell matrix (and also the Carleman matrix) of the linear map x -> c*x is just your diagonal matrix D!

Hmm, this point escaped me when I read your post the first time.
So I could say that what I always called "my hypothesis about the set of eigenvalues" is hereby already settled? I didn't see something like this in the articles I have access to.

Further, it may then be interesting to develop arguments for the degenerate case... In my "increasing size" (of the truncated Bell matrix) analyses, the curious aspect appeared for base b = e^(1/e) that the eigenvalues show decreasing distances between them, but also that new eigenvalues pop up, each new one smaller than the previous. See the page "Graph". So there are competing tendencies - and it may be fruitful to analyze why/how this could be compatible with the/a limit case, where they are assumed to approach 1 asymptotically.

Gottfried
Gottfried Helms, Kassel