diagonal vs regular
#11
bo198214 Wrote:
Quote:So, in the finite case, for P~ * Bb * P^(-1)~ = X we don't get a triangular X, although it will be exactly "similar" to Bb(truncated), in the sense, that the eigenvalues, eigenvectors etc obey all known rules for similarity-transform.

Sorry, I don't get the point of this. What are "similarity conditions", what are "all known rules for similarity-transform"?


"Similarity transform(ation)" in the sense of linear algebra.

If X = A * B * A^-1 then X is said to be "similar" to B; this means, for instance, that it has the same eigenvalues. Moreover, this "similarity transform" is transparent for some matrix functions like powers, exp(), log(), etc.:

log(X) = A * log(B) * A^-1 .

The special case is: if X is diagonal (by appropriate choice of A), then X contains the eigenvalues of B on its diagonal. And so on.
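As a minimal numeric sketch of these two properties (the matrices here are arbitrary illustrative choices):

[code]
# Similarity X = A*B*A^(-1) preserves the eigenvalues of B, and matrix
# functions are "transparent" for the transform, e.g. log(X) = A*log(B)*A^(-1).
import numpy as np
from scipy.linalg import logm

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # eigenvalues 2 and 3
A = np.array([[1.0, 2.0],
              [1.0, 1.0]])           # any invertible matrix
Ainv = np.linalg.inv(A)

X = A @ B @ Ainv
print(np.sort(np.linalg.eigvals(X)))             # [2. 3.] - same as B
print(np.allclose(logm(X), A @ logm(B) @ Ainv))  # True
[/code]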

In the case of infinite dimension the inverse may not be unique, so we call it a "reciprocal" instead. Say Z is defined to be a reciprocal of A, so that
A*Z = I
Then, for infinite size, we may have different Z satisfying the same reciprocity relation:
A*Z1 = A*Z2 = A*Z3 = ... = I

Also - as I have learned here - we may have different A for a given B, such that not only

X1 = A1 * B * A1^-1

but also
X2 = A2 * B * A2^-1
...
with X1<>X2<>... and all Xk being diagonal

Then apparently it also follows that we have multiple diagonalizations resulting in different X1, X2, X3, ...
X1 = A1 * B * Z1_1 = A1 * B * Z1_2 = ...
X2 = A2 * B * Z2_1 = A2 * B * Z2_2 = ...
...
(However, I'm asserting the latter here for the first time, so it may be a bit of an overgeneralization.)
[update 1.5.08] see also example, where I already computed one instance of this [/update]

Quote:
Quote:Yes, for the cases of b in the range of convergence.
What is the "range of convergence"?

ehmm... 1/e^e < b < e^(1/e), sorry.

The eigenvalues v_k (k=0..inf) then satisfy |v_k| <= 1 and also v_k = (v_1)^k.

Hmm. Because of the multiplicity of possible sets of eigenvalues we should perhaps introduce the convention of denoting one set as the "principal" set, for instance the one we get when the similarity transformation implements the shift to the attracting fixpoint.
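A minimal sketch of such an eigenvalue computation in Python/numpy (plain double precision, whereas the experiments discussed here used hundreds of digits, so treat the trailing eigenvalues with caution): for b = sqrt(2) the attracting fixpoint is t = 2, u = log(2), and the leading eigenvalues of the truncations should approach the powers u^k.

[code]
# Truncated (transposed) Carleman matrix of f(x) = b^x:
# column j holds the coefficients of (b^x)^j = exp(j*log(b)*x),
# i.e. entry (k, j) = (j*log(b))^k / k!
import numpy as np
from math import log, factorial

b, t = 2.0**0.5, 2.0      # 1/e^e < b < e^(1/e); attracting fixpoint t = 2
u = log(t)                # expected eigenvalue base, log 2 ~ 0.6931
n = 24                    # truncation size

Bb = np.array([[(j*log(b))**k / factorial(k) for j in range(n)]
               for k in range(n)])

ev = sorted(np.linalg.eigvals(Bb), key=abs, reverse=True)
for k in range(8):
    print(f"{abs(ev[k]):.6f}   vs   u^{k} = {u**k:.6f}")
[/code]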
Gottfried Helms, Kassel
#12
Gottfried Wrote:"Similarity transform(ation)" in the sense of linear algebra.

If X = A * B * A^-1 then X is said to be "similar" to B; this means, for instance, that it has the same eigenvalues.

Gosh, that really needed an explanation; I thought you were referring to similarity transforms in the geometric sense, i.e. scaling.

Quote:In the case of infinite dimension the inverse may not be unique, so we call it a "reciprocal" instead. Say Z is defined to be a reciprocal of A, so that
A*Z = I
Then, for infinite size, we may have different Z satisfying the same reciprocity relation:
A*Z1 = A*Z2 = A*Z3 = ... = I

Also - as I have learned here - we may have different A for a given B, such that not only

X1 = A1 * B * A1^-1

but also

X2 = A2 * B * A2^-1
...
with X1<>X2<>... and all Xk being diagonal

Then apparently it also follows that we have multiple diagonalizations resulting in different X1, X2, X3, ...
X1 = A1 * B * Z1_1 = A1 * B * Z1_2 = ...
X2 = A2 * B * Z2_1 = A2 * B * Z2_2 = ...
...
*nods*, possibly.

Quote:
bo198214 Wrote:So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of \( b^x \) converge to the set of powers of \( \ln(a) \) where \( a \) is the lower (the attracting) fixed point of \( b \)?
Yes, for the cases of b in the range of convergence. (maybe some exceptions: b=1 or b=exp(1) or the like)
Quote:
Quote:What is the "range of convergence"?
ehmm... 1/e^e < b < e^(1/e)
I thought b=exp(1) was anyway outside the range of convergence, that's why I wanted to be sure what you mean by it. Also, your next statement is mysterious assuming that range of convergence.
Quote:For the case of b outside this range I found that always a part of the eigenvalues (truncated matrices) converges to those logarithms, but another part varies wildly;
How can a part of the eigenvalues converge to those logarithms, if there are no real fixed points for \( b>e^{1/e} \) and hence no logarithms of those fixed points?
#13
bo198214 Wrote:I thought b=exp(1) was anyway outside the range of convergence, that's why I wanted to be sure what you mean by it. Also, your next statement is mysterious assuming that range of convergence.
Well, take it with a grain of salt... Surely you're right, "eigenvalues" of [1,1,1,...] or [1,-1,1,-1,...] should not verbally be included in a statement about convergence...

Quote:
Quote:
Quote:For the case of b outside this range I found that always a part of the eigenvalues (truncated matrices) converges to those logarithms, but another part varies wildly;
How can a part of the eigenvalues converge to those logarithms, if there are no real fixed points for \( b>e^{1/e} \) and hence no logarithms of those fixed points?

Yes, there we have the two different approaches.

In the analytical view (assuming infinite matrices) there should occur no "part of the set" with special behaviour; I meant this statement in the context of sets of eigenvalues of truncated matrices, and sequences of such sets as the size of the matrices increases. What I observed was just that: if I ordered the empirical eigenvalues, then parts of them could be identified which stabilized to certain values, while others varied wildly. Again: eigenvalues computed on the basis of finite matrices, with a canned eigensystem solver. Maybe those were not the logarithms - I don't have it in mind currently. I'll look at it later this evening.

[update] I just see that I was partly in error here. The case described was for bases 1/e^e < b < 1. Here we have one part of the eigenvalues of each set converging to the expected values. For bases b outside [1/e^e..e^(1/e)] I didn't find documents - I think they were truly random [/update]

Perhaps you could try this with Mathematica/Maple, since I heard that they are much faster than Pari/GP, with which it was nearly impossible (extremely time-expensive) to get beyond size 24x24.
Gottfried Helms, Kassel
#14
Gottfried Wrote:I meant this statement in the context of sets of eigenvalues of truncated matrices, and sequences of such sets as the size of the matrices increases.
Yes, I understood it that way too.

Quote: What I observed was just that: if I ordered the empirical eigenvalues, then parts of them could be identified which stabilized to certain values, while others varied wildly.
...
I'll look at it later this evening.
You have to demonstrate it. I *don't* think that there are such stable values except perhaps 1.

Quote:Maybe those were not the logarithms - I don't have it in mind currently.
They cannot be, because those are logarithms of fixed points, and outside this range there are no fixed points.


Quote:Perhaps you could try this with Mathematica/Maple, since I heard that they are much faster than Pari/GP, with which it was nearly impossible (extremely time-expensive) to get beyond size 24x24

As you can see from the graphs in this thread, I (using Maple) also only went up to size 30. And that only with the help of some university machines (it took perhaps 10 min there), as my home computer wouldn't make it above 25 in reasonable time. But it's not only that the diagonalization takes so long; I also had to adjust the precision in dependence of the matrix size, which again took rounds of computation.
#15
bo198214 Wrote:You have to demonstrate it. I *don't* think that there are such stable values except perhaps 1.
[update] I just became aware of the timestamps of our postings. I seem to have edited the previous one *after* your msg, see "update". I withdrew the "b out of range of convergence"; I was in error with that, it was for 1/e^e < b < 1. [/update]
I just uploaded a b=0.5 document anyway. I've taken sizes 4x4 to 32x32, with 200 or 400 digits float precision (computation ~ Aug 07; I don't remember the exact precision used).
see Eigenvalues for b=0.5 (truncations)
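For reference, a minimal sketch of that type of experiment in Python/numpy; it only uses plain double precision (not the 200-400 digits above), so treat the output qualitatively. The attracting fixpoint a of 0.5^x solves 0.5^a = a, and log(a) = a*log(0.5) ~ -0.4444:

[code]
# Eigenvalues of increasing truncations of the Carleman matrix for b = 0.5,
# sorted by magnitude: the leading ones should stabilize towards the powers
# of log(a), while the trailing ones vary with the truncation size.
import numpy as np
from math import log, factorial
from scipy.optimize import brentq

b = 0.5
a = brentq(lambda x: b**x - x, 0.1, 1.0)   # attracting fixpoint, ~ 0.641186
u = a * log(b)                             # = log(a) ~ -0.4444

def carleman(b, n):
    return np.array([[(j*log(b))**k / factorial(k) for j in range(n)]
                     for k in range(n)])

for n in (8, 16, 24, 32):
    ev = sorted(np.linalg.eigvals(carleman(b, n)), key=abs, reverse=True)
    print(n, np.round(ev[:5], 4))
print("expected:", np.round([u**k for k in range(5)], 4))
[/code]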

Quote:As you can see from the graphs in this thread, I (using Maple) also only went up to size 30. And that only with the help of some university machines (it took perhaps 10 min there), as my home computer wouldn't make it above 25 in reasonable time. But it's not only that the diagonalization takes so long; I also had to adjust the precision in dependence of the matrix size, which again took rounds of computation.
Hmm. In de.sci.mathematik someone told me that he did it with Maple at size 128x128 in some dozen seconds or a few minutes. Strange. Maybe his computation was not correct. Anyway, that was shortly before I found an analytical description for the infinite case, and I didn't proceed along the truncation path.
Gottfried Helms, Kassel
#16
bo198214 Wrote:I don't know whether they should, but actually they are different. I would say that this has to do with the fact that you need an infinite matrix to do a fixed point shift, while you are actually dealing with a finite matrix.

Yes, I agree with you on this. I think that the problems we have had in the past with shifting (e.g. \( M[f(x+x_0)-x_0] = M[x-x_0]M[f]M[x+x_0] \) and things like this) are due to the fact that we are using finite matrices.

Andrew Robbins
#17
andydude Wrote:
bo198214 Wrote:I don't know whether they should, but actually they are different. I would say that this has to do with the fact that you need an infinite matrix to do a fixed point shift, while you are actually dealing with a finite matrix.

Yes, I agree with you on this. I think that the problems we have had in the past with shifting (e.g. \( M[f(x+x_0)-x_0] = M[x-x_0]M[f]M[x+x_0] \) and things like this) are due to the fact that we are using finite matrices.

Andrew Robbins

Hmm, I'll add a new idea at the end of this msg, but let me first recall some formulae which I have already stated a few times, to shed another light on the reason for different values when using truncated matrices.

---------------------------------------------

In matrix notation it turns out that the fixpoint shift can be expressed by factoring the Carleman matrix into diagonal and triangular matrices (in the infinite-dimension case). If Bb is the (transposed) Carleman matrix for the function f(b,x) = b^x, so that V(x)~ * Bb = V(b^x)~ provides the value of f(b,x) in the second entry of the result vector, as in

(1) \( \hspace{24} V(x)\sim * B_b = V(b^x)\sim \)

then we can express the fixpoint shift using that factoring, introducing t, where t^(1/t) = b and u = log(t), and, for shortness, Q = P^-1:

(2) \( \hspace{24} B_b = dV(1/t)* Q \sim * U_t * P\sim * dV(t) \)

where again

(3) \( \hspace{24} U_t = dV(u)* U \)

and U is the factorially scaled matrix of Stirling numbers of the 2nd kind

(4) \( \hspace{24} U = dF^{-1}* S_2 * dF \)

such that U_t is the Bell-matrix for t^x - 1.
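A small symbolic check of (3) and (4) in Python/sympy (with t = 2 for concreteness, so u = log 2, truncation size 8): column j of the truncated U_t should reproduce the power-series coefficients of (t^x - 1)^j.

[code]
# Build U = dF^(-1) * S2 * dF from Stirling numbers of the 2nd kind and
# verify that U_t = dV(u) * U acts as the (transposed) Bell matrix of t^x - 1.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

n = 8
t = sp.Integer(2)
u = sp.log(t)
x = sp.symbols('x')

dF  = sp.diag(*[sp.factorial(k) for k in range(n)])
S2  = sp.Matrix(n, n, lambda k, j: stirling(k, j, kind=2))
U   = dF.inv() * S2 * dF
U_t = sp.diag(*[u**k for k in range(n)]) * U      # U_t = dV(u) * U

j = 3   # compare column j with the series coefficients of (t^x - 1)^j
ser = sp.series((t**x - 1)**j, x, 0, n).removeO()
coeffs = [ser.coeff(x, k) for k in range(n)]
print([sp.simplify(U_t[k, j] - coeffs[k]) for k in range(n)])   # all 0
[/code]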

In (2), for each entry of the matrix product Q~ * U_t we have to compute an infinite series, for which we could insert analytical results if the implicit series evaluations are based on convergent series.

Different results may occur, depending on the order of evaluation of the whole expression (combining (1) and (2)), because of the truncation to finite size:

(5) \( \hspace{24} V(x)\sim * ( dV(1/t)* Q\sim * U_t * P\sim * dV(t)) = V(b^x)\sim \)

Using the binomial theorem we can evaluate the first three factors first, as indicated by the parentheses,

(6) \( \hspace{24} (V(x)\sim * dV(1/t)* Q\sim) * U_t * (P\sim * dV(t)) = V(b^x)\sim \)

giving the matrix-term
(7) \( \hspace{24} V(x)\sim * dV(1/t)*Q\sim = V(x/t-1)\sim = V(x')\sim \)

and for the result, introducing
\( \hspace{24} x' = \frac{x}{t} -1 , (b^x)' = \frac{b^x}{t} - 1 \)

and, analogously using the binomial theorem,

(8) \( \hspace{24} V(x')\sim * U_t*P\sim*dV(t) = V(b^x)\sim \)
(9) \( \hspace{24} V(x')\sim * U_t = V(b^x)\sim *dV(1/t)*Q\sim \)

(10) \( \hspace{24} V(x')\sim * U_t = V((b^x)')\sim \)

which is the matrix-expression for the fixpoint shift x' = x/t - 1

Note that, using powers of P (precisely the t'th power), we may write

(11) \( \hspace{24} (V(x)\sim * Q^t\sim) * dV(1/t)*U_t *dV(t)* P^t\sim = V(b^x)\sim \)
(12) \( \hspace{24} (V(x)\sim * Q^t\sim) * dV(1/t)*U_t *dV(t) = V(b^x)\sim * Q^t\sim \)

(13) \( \hspace{24} V(x-t)\sim * (dV(1/t)*U_t *dV(t)) = V(b^x-t)\sim \)
thus using the fixpoint-shift x" = x - t


In the matrix notation one can see that the use of associativity changes the *position* of the (unavoidable) error due to truncation, and that the final quantity of the error depends on this position.
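To make this visible numerically, here is a sketch in Python (size n = 16, b = sqrt(2), t = 2, so u = log 2): the truncated product dV(1/t)*Q~*U_t*P~*dV(t) is compared entry-wise with the directly truncated Bb. The early columns agree to machine precision, so the value b^x read off via (1) survives for this base, while the later columns absorb the error of the cut-off series in Q~ * U_t; this is the position of the error just mentioned.

[code]
# Reconstruct Bb from the truncated factorization (2) and locate the
# truncation error column by column.
import numpy as np
from math import log, factorial, comb
from sympy.functions.combinatorial.numbers import stirling

n, x = 16, 0.5
b, t = 2.0**0.5, 2.0
u = log(t)

V  = lambda z: np.array([z**k for k in range(n)], float)
dV = lambda z: np.diag([z**k for k in range(n)])
Bb = np.array([[(j*log(b))**k / factorial(k) for j in range(n)] for k in range(n)])
Pt = np.array([[comb(j, k) for j in range(n)] for k in range(n)], float)  # V(x)~*P~ = V(x+1)~
Qt = np.linalg.inv(Pt)                                                    # V(x)~*Q~ = V(x-1)~
Ut = np.array([[u**k * factorial(j) * float(stirling(k, j, kind=2)) / factorial(k)
                for j in range(n)] for k in range(n)])

Bb2 = dV(1/t) @ Qt @ Ut @ Pt @ dV(t)      # truncated version of factorization (2)

# per-column max error: tiny for the early columns, growing for the later ones
print(np.max(np.abs(Bb - Bb2), axis=0))
print("exact   b^x :", b**x)
print("via (1)     :", (V(x) @ Bb )[1])
print("via (2),(5) :", (V(x) @ Bb2)[1])
[/code]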

-----------------------------------------------------------

But this is all just a reminder, since I've already discussed it elsewhere.

Meditating on Henryk's and Andrew's posts, I possibly found a key (or a first step) for how to approach the problem of differing values for fractional iterates depending on the selection of fixpoints.

First note that for b > e^(1/e) the value of u is u > 1.

Second, since in
\( \hspace{24} U_t = dV(u)* U \)
where \( U = dF^{-1}*S_2 * dF \)
we have factorials in the denominators of the second column of U, and dV(u) provides only growth at a geometric rate, column 2 contains entries which constitute a convergent series for all u and any finite integer height (expressed by the respective power of U_t) when matrix-multiplied with Q~.

But this is not so with fractional heights: then the entries in the second column diverge strongly, and by inspection of the representation based on the symbolic eigensystem decomposition, we seem to have a growth rate of order exp(k^2)/k! for the k'th entry.

The sequence of entries seems to follow a pattern of initial decrease, arriving at a minimum at some index k and then a tail of infinite increase, something like
"d d d d m i i i i ..." where "d" indicates decrease, "m" the minimum and "i" increase.
(See for instance http://go.helms-net.de/math/tetdocs/html...ration.htm)

For half-integer heights h the "m" position occurs very early; if the fractional height h approaches integer values, the position moves towards the tail, and for integer heights we may now say: it disappears "into infinity", such that we have a convergent series for integer heights.
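A rough numeric illustration (a sketch in Python, double precision, size 24, with t = 3 so that u = log 3 > 1; the exp(k^2)/k! rate itself is not checked here, and whether and where the minimum shows up depends on size and precision, so treat the output qualitatively):

[code]
# Second column of U_t vs. U_t^(1/2) for t = 3: at height 1 the entries decay
# like u^k/k!, while at height 1/2 they first decrease, pass a minimum and
# then grow (the "d d ... d m i i ..." pattern described above).
import numpy as np
from math import log, factorial
from scipy.linalg import fractional_matrix_power
from sympy.functions.combinatorial.numbers import stirling

n = 24
u = log(3.0)
Ut = np.array([[u**k * factorial(j) * float(stirling(k, j, kind=2)) / factorial(k)
                for j in range(n)] for k in range(n)])

half = fractional_matrix_power(Ut, 0.5)   # fractional height h = 1/2
for k in range(n):
    print(f"{k:2d}   {abs(Ut[k, 1]):.3e}   {abs(half[k, 1]):.3e}")
[/code]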

But the matrix multiplication Q~ * U_t^h, which expresses the summation of the "d d ... d m i i i ..." terms cofactored by the row entries in Q~, is derived on the assumption that we have a valid similarity transform
(14) \( \hspace{24} B_b = (dV(1/t)* Q\sim) * U_t * (P\sim *dV(t)) \)
which is, using \( X = P\sim * dV(t) \),
(15) \( \hspace{24} B_b = X^{-1} * U_t * X \)

But if X^-1 * U_t involves the evaluation of divergent series, then we do not really know whether the row-by-column multiplications constitute appropriate values to assure that X^-1 * U_t * X is really a matrix similarity transform, for instance that the results of the row-by-column multiplications in X^-1 * U_t follow the same pattern as occurs when the involved series are convergent.

And since the formal matrix multiplications are one-to-one with the functional representations of the fixpoint shift, we may derive a caveat from here concerning the fixpoint shift when applied to fractional "heights", as far as divergent summation of power series is implicated.

Just a note; I'll play with this a bit more to see whether it is relevant/correct so far.

Gottfried
Gottfried Helms, Kassel

