diagonal vs regular

Gottfried (Ultimate Fellow) - Posts: 767, Threads: 119, Joined: Aug 2007
04/29/2008, 01:31 PM (This post was last modified: 05/01/2008, 09:18 PM by Gottfried.)

bo198214 Wrote:
Quote: So, in the finite case, for P~ * Bb * P^-1~ = X we don't get a triangular X, although it will be exactly "similar" to Bb (truncated), in the sense that the eigenvalues, eigenvectors etc. obey all known rules for the similarity transform.

Sorry, I don't get the point of this. What are "similarity conditions", what are "all known rules for similarity-transform"?

"Similarity transform(ation)" in the sense of linear algebra. If X = A * B * A^-1, then X is said to be "similar" to B; this means, for instance, that it has the same eigenvalues. Also, this similarity transform is transparent for some matrix functions like powers, exp(), log() etc.:

log(X) = A * log(B) * A^-1

The special case is when X is diagonal (by appropriate selection of A): then X contains the eigenvalues of B in its diagonal. And so on.

In the case of infinite dimension the inverse may not be unique, so we call it the "reciprocal" instead. Say Z is defined to be a reciprocal of A, so that A*Z = I in the infinite-size case; then we may have different Z with the same reciprocity relation:

A*Z1 = A*Z2 = A*Z3 = ... = I

Also - what I have learned here - we may have different A for a given B, such that not only

X1 = A1 * B * A1^-1

but also

X2 = A2 * B * A2^-1
...

with X1 <> X2 <> ... and all Xk being diagonal. Then apparently it also follows that we have multiple diagonalizations resulting in different X1, X2, X3, ...:

X1 = A1 * B * Z1_1 = A1 * B * Z1_2 = ...
X2 = A2 * B * Z2_1 = A2 * B * Z2_2 = ...
...

(However, I'm asserting the latter here in this notepad entry for the first time, so maybe this is a bit of an overgeneralization.)

[update 1.5.08] See also the example where I already computed one instance of this. [/update]

Quote: Yes, for the cases of b in the range of convergence.

What is the "range of convergence"?
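[Editorial aside] The similarity-transform rules stated above - same eigenvalues, transparency for matrix functions such as powers - are easy to check numerically for finite matrices. A minimal sketch with arbitrary random example matrices (not taken from the thread):

```python
import numpy as np

# X = A * B * A^-1 is similar to B: same eigenvalues, and matrix
# functions pass through the transform, e.g. X^2 = A * B^2 * A^-1.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))          # generically invertible
Ainv = np.linalg.inv(A)
X = A @ B @ Ainv

ev_B = np.sort_complex(np.linalg.eigvals(B))
ev_X = np.sort_complex(np.linalg.eigvals(X))
print(np.allclose(ev_B, ev_X))           # True: same spectrum

# "transparent for powers": X^2 equals A * B^2 * A^-1
print(np.allclose(X @ X, A @ (B @ B) @ Ainv))   # True
```

The same pattern holds for exp() and log() of the matrix (e.g. via scipy.linalg.expm), which is what makes diagonalization useful for defining matrix powers U^h with non-integer h.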
Ehmm... 1/e^e < b < e^(1/e), sorry. The eigenvalues v_k (k = 0..inf) then satisfy |v_k| <= 1 and also v_k = (v_1)^k.

Hmm. Because of the possible multiplicity of sets of eigenvalues we should perhaps introduce the convention of denoting one set as the "principal" set - for instance the one we get when we use the similarity transformation to implement the shift to the attracting fixpoint.

Gottfried Helms, Kassel

bo198214 (Administrator) - Posts: 1,389, Threads: 90, Joined: Aug 2007
04/29/2008, 02:16 PM

Gottfried Wrote:
Quote: "Similarity transform(ation)" in the sense of linear algebra. If X = A * B * A^-1, then X is said to be "similar" to B; this means, for instance, that it has the same eigenvalues.

Gosh, that really needed explanation - I thought you were referring to similarity transforms in the geometric sense, i.e. scaling.

Quote: In the case of infinite dimension the inverse may not be unique [... multiple reciprocals Z and multiple diagonalizing A, as above ...]

*nods*, possibly.

Quote:
bo198214 Wrote: So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of b^x converge to the set of powers of log(t), where t is the lower (the attracting) fixed point of b^x?

Yes, for the cases of b in the range of convergence (maybe some exceptions: b=1 or b=exp(1) or the like).

Quote: What is the "range of convergence"?

ehmm... 1/e^e < b < e^(1/e)

I thought b = exp(1) was anyway outside the range of convergence; that's why I wanted to be sure what you mean by it.
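[Editorial aside] The conjecture discussed here - that the eigenvalues of the truncated Carleman/Bell matrix of b^x approach the powers u^k of u = ln(t), where t is the attracting fixpoint - can be probed numerically. A sketch, assuming the convention M[j,k] = [x^k] (b^x)^j = (j*ln b)^k / k! (this indexing convention is my assumption, not necessarily the one used in the thread), with b = sqrt(2), so t = 2 and u = ln 2:

```python
import numpy as np
from math import log, factorial

# Truncated Carleman matrix of f(x) = b^x under the (assumed) convention
# M[j,k] = [x^k] (b^x)^j = (j*ln b)^k / k!.
b = 2 ** 0.5          # inside the range 1/e^e < b < e^(1/e)
n = 24                # truncation size (example choice)
lnb = log(b)
M = np.array([[(j * lnb) ** k / factorial(k) for k in range(n)]
              for j in range(n)])

# Row 0 of M is (1,0,...,0), so v = 1 is an exact eigenvalue of every
# truncation; the remaining leading eigenvalues should approach u, u^2, ...
ev = sorted(np.abs(np.linalg.eigvals(M)), reverse=True)
u = 2 * lnb           # = t*ln(b) = ln(t) = ln 2, the attracting multiplier
print(ev[:3])         # leading eigenvalue magnitudes, expected near 1, u, u^2
```

Note that t = b^t implies ln(t) = t*ln(b), so u is both the logarithm of the fixpoint and the derivative of b^x at the fixpoint; |u| < 1 exactly characterizes the attracting case.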
Also your next statement is mysterious, assuming that range of convergence.

Quote: For the case of b outside this range I found that always a part of the eigenvalues (of the truncated matrices) converges to those logarithms, but another part varies wildly.

How can a part of the eigenvalues converge to those logarithms, if there are no real fixed points for b^x and hence no logarithms of such fixed points?

Gottfried (Ultimate Fellow) - Posts: 767, Threads: 119, Joined: Aug 2007
04/29/2008, 03:58 PM (This post was last modified: 04/29/2008, 06:46 PM by Gottfried.)

bo198214 Wrote:
Quote: I thought b = exp(1) was anyway outside the range of convergence; that's why I wanted to be sure what you mean by it. Also your next statement is mysterious, assuming that range of convergence.

Well, take it with a grain of salt... Surely you're right: "eigenvalues" of [1,1,1,...] or [1,-1,1,-1,...] should not verbally be included in a statement about convergence...

Quote:
Quote: For the case of b outside this range I found that always a part of the eigenvalues (of the truncated matrices) converges to those logarithms, but another part varies wildly.

How can a part of the eigenvalues converge to those logarithms, if there are no real fixed points for b^x and hence no logarithms of such fixed points?

Yes, there we have the two different approaches. In the analytical view (assuming infinite matrices) no "part of the set" with special behaviour should occur; I meant this statement in the context of sets of eigenvalues of truncated matrices, and of series of such sets as the matrix size increases. What I observed was just that: if I ordered the empirical eigenvalues, then parts of them could be identified which stabilized to certain values, while others varied wildly. Again: eigenvalues computed on the basis of finite matrices, with a canned eigensystem solver. Maybe those were not the logarithms - I don't have it in mind currently; I'll look at it later this evening.

[update] I just see that I was partly in error here.
The case described was for bases outside the range 1/e^e < b < e^(1/e); there the value of u satisfies |u| > 1.

Second: since in the symbolic eigensystem decomposition we have factorials in the denominators of the second column of U, and dV(u) provides only growth of geometric rate, column 2 contains entries which constitute a convergent series for all u and any finite integer height (expressed by the same power of U_t) when matrix-multiplied with Q~. But this is not so with fractional heights: then the entries in the second column diverge strongly, and by inspection of the representation based on the symbolic eigensystem decomposition we seem to have a growth rate of order exp(k^2)/k! for the k'th entry.

The sequence of entries seems to follow a pattern of initial decrease, arriving at a minimum at some index k, followed by a tail of infinite increase - something like "d d d d m i i i i ...", where "d" indicates decrease, "m" the minimum and "i" increase. (See for instance http://go.helms-net.de/math/tetdocs/html...ration.htm) For half-integer heights h the "m" position occurs very early; as the fractional height h approaches integer values, the position moves into the tail, and for integer heights we may now say: it disappears "into infinity", such that we have a convergent series for integer heights.

But the matrix multiplication Q~ * U_t^h, which expresses the summation of the "d d ... d m i i i ..." terms cofactored by the row entries of Q~, is derived on the assumption that we have a valid similarity transform

(14) U_t = X * dV(u) * X^-1

which is, using dV(u)^h = dV(u^h),

(15) U_t^h = X * dV(u^h) * X^-1

But if X^-1 * U_t involves the evaluation of divergent series, then we do not really know whether the row-by-column multiplications produce appropriate values to assure that X^-1 * U_t * X is really a matrix similarity transform - for instance, that the results of the row-by-column multiplications in X^-1 * U_t follow the same pattern as occurs when the involved series are convergent.
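[Editorial aside] The "d d ... d m i i ..." shape described above can be imitated with made-up constants (x and c below are illustrative only, not the thread's actual coefficients): terms of magnitude x^k * exp(c*k^2) / k! first fall while the k! in the denominator dominates, then rise without bound once exp(c*k^2) takes over, so the minimum sits at a finite index and the series diverges even though its initial terms shrink.

```python
from math import log, lgamma

# Log-magnitude of term k of the model series x^k * exp(c*k^2) / k!,
# computed via lgamma to avoid overflow/underflow.
x, c, K = 0.5, 0.05, 100
lt = [k * log(x) + c * k * k - lgamma(k + 1) for k in range(K)]

m = min(range(K), key=lambda k: lt[k])   # position of the minimum ("m")
print("minimum at k =", m)
print(all(lt[k] > lt[k + 1] for k in range(m)))         # True: strict decrease before m
print(all(lt[k] < lt[k + 1] for k in range(m, K - 1)))  # True: strict increase after m
```

This matches the qualitative description: for a smaller quadratic coefficient c the minimum moves further into the tail, mirroring the observation that the "m" position wanders to infinity as the height approaches an integer.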
And since the formal matrix multiplications are in one-to-one correspondence with the functional representations of the fixpoint shift, we may derive a caveat from this concerning the fixpoint shift when applied to fractional "heights", as far as divergent summation of power series is implicated.

Just a note - I'll play with this a bit more to see whether it is relevant/correct so far.

Gottfried

Gottfried Helms, Kassel
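[Editorial aside] Gottfried's earlier observation - that for b outside 1/e^e < b < e^(1/e) a part of the empirical eigenvalues stabilizes while the rest varies wildly - can at least be contrasted numerically with the in-range behaviour. A sketch under the same assumed Carleman convention M[j,k] = (j*ln b)^k / k!: row 0 is always (1,0,...,0), so v = 1 is an exact, stable eigenvalue for any base, while the spectral radius of the truncation stays near 1 inside the range but explodes for b = 2 outside it.

```python
import numpy as np
from math import log, factorial

def carleman(b, n):
    """Size-n truncation of the (assumed-convention) Carleman matrix of b^x."""
    lnb = log(b)
    return np.array([[(j * lnb) ** k / factorial(k) for k in range(n)]
                     for j in range(n)])

def spectrum(b, n):
    """Magnitudes of the eigenvalues of the size-n truncation."""
    return np.abs(np.linalg.eigvals(carleman(b, n)))

inside = spectrum(2 ** 0.5, 16)   # b = sqrt(2), inside the range
outside = spectrum(2.0, 16)       # b = 2, outside the range

# v = 1 is the "stable part" in both cases (row 0 of M is a unit row).
print(np.min(np.abs(inside - 1)), np.min(np.abs(outside - 1)))
# The rest: bounded near {u^k} inside the range, blown up outside it.
print(np.max(inside), np.max(outside))
```

This does not by itself decide whether any of the wild part relates to logarithms of complex fixpoints; it only shows that the truncated spectra behave qualitatively differently on the two sides of the convergence range.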

