diagonal vs regular
#1
Computed with a precision of 30 digits for rslog, and with various matrix sizes (with adapted precision) for dsexp.

Here are the graphs of the differences, depending on the matrix size of dsexp:

[graph images not preserved]

This again looks very promising; it supports the conjecture that diagonal and regular tetration are equal.
Reply
#2
Though it might already appear boring, here are the curves with … instead of …:

[graph images not preserved]
Reply
#3
I'm still confused as to why these are different. It seems to me that "dsexp" and "rsexp" should be identical everywhere, but it seems you are comparing "dsexp" and "rslog", which may indicate that the error comes only from the series inversion?

Andrew Robbins
Reply
#4
andydude Wrote:I'm still confused as to why these are different. It seems to me that "dsexp" and "rsexp" should be identical everywhere,

No, absolutely not. Though you can regard rsexp as a diagonalization method, the difference is *where* you apply it:
dsexp is applied at 0, while rsexp is applied at the lower fixed point.
It is not at all clear that the result of the diagonalization method is independent of the development point, just as it is not clear that the natural Abel method is independent of the development point.

If you apply the diagonalization method at the fixed point, you get nice triangular matrices with the powers of f'(p), the derivative at the fixed point p, on the diagonal (the eigenvalues). However, if you apply it at 0, you neither get triangular matrices nor are the eigenvalues powers of anything (edit: though I should check the latter before asserting it).

You can always apply the diagonalization method to analytic functions, even if they have no real fixed point (e.g. e^x).

Also, I don't yet know of a way to compute the result of the diagonalization method (at a non-fixed point) without power series coefficients, i.e. with an iterative formula like the one that exists for regular iteration.
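For reference, the iterative formula in question, in its standard form for regular iteration at an attracting fixed point p with multiplier c = f'(p), 0 < c < 1 (f^n denotes n-fold iteration):

f^t(x) = lim_{n -> infinity} f^(-n)( p + c^t * ( f^n(x) - p ) )

No power series coefficients enter this formula.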
Reply
#5
bo198214 Wrote:If you apply the diagonalization method at the fixed point, you get nice triangular matrices with the powers of f'(p) on the diagonal (the eigenvalues). However, if you apply it at 0, you neither get triangular matrices nor are the eigenvalues powers of anything (edit: though I should check the latter before asserting it).

Hmm, the fixed-point shifts resulted in similarity scalings with the matrix method (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.

Gottfried
Gottfried Helms, Kassel
Reply
#6
Gottfried Wrote:Hmm, the fixed-point shifts resulted in similarity scalings with the matrix method (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.

I don't know whether they should be, but in fact they are different. I would say this has to do with the fact that you need an infinite matrix to do a fixed-point shift, while you are actually dealing with a finite matrix.

Ok, but perhaps examples are more convincing:
Let us first consider the Carleman matrix (wow, I see Andrew has done righteous work in writing this Wikipedia article) of sqrt(2)^x at 0, truncated to size 5:

[matrix image not preserved]
The eigenvalues of this matrix are:
1.0000000000
0.6269572883
0.2495195174
0.0482258803
0.0033138436
I think they are not powers of anything, and they change when the matrix size is increased.
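For reproduction, a minimal sketch in Python (numpy), assuming the row convention in which entry (j, k) of the Carleman matrix is the coefficient of x^k in (sqrt(2)^x)^j = exp(j*c*x), i.e. (j*c)^k/k! with c = log(sqrt(2)); transposing (Carleman vs. Bell) does not change the eigenvalues:

Code:
import numpy as np
from math import factorial, log, sqrt

# Truncated Carleman matrix of f(x) = sqrt(2)^x at 0.
# Row convention: entry (j, k) = coefficient of x^k in f(x)^j
#                             = (j*c)^k / k!,  c = log(sqrt(2)).
N = 5
c = log(sqrt(2.0))
B = np.array([[(j * c) ** k / factorial(k) for k in range(N)]
              for j in range(N)])

# Eigenvalues of the truncation; rerun with larger N to see them shift.
print(np.sort(np.abs(np.linalg.eigvals(B)))[::-1])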

Now consider the matrix at the lower fixed point 2, i.e. the matrix of sqrt(2)^(x+2) - 2:

[matrix image not preserved]
The eigenvalues are the entries on the diagonal; they are the powers of 0.6931471806 = log(2). Numerically:
1.0000000000
0.6931471806
0.4804530140
0.3330246520
0.2308350986
When the matrix size is increased, only new powers are added, so these eigenvalues are invariant under increasing the matrix size.
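The same check at the fixed point, as a sketch using sympy with a generic series-based builder (the function g below is the conjugate sqrt(2)^(x+2) - 2, which moves the fixed point 2 to 0):

Code:
import sympy as sp

x = sp.symbols('x')

def carleman(f, n):
    # Entry (j, k): coefficient of x^k in the series of f**j at 0.
    rows = []
    for j in range(n):
        ser = sp.series(f ** j, x, 0, n).removeO()
        p = sp.Poly(ser, x)
        rows.append([p.coeff_monomial(x ** k) for k in range(n)])
    return sp.Matrix(rows)

# Conjugate toward the lower fixed point 2: g(0) = 0, g'(0) = log(2).
g = sp.sqrt(2) ** (x + 2) - 2
M = carleman(g, 5)

print(M.is_upper)                   # True: the matrix is triangular
print([M[j, j] for j in range(5)])  # [1, log(2), log(2)**2, ...]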
Reply
#7
bo198214 Wrote:
Gottfried Wrote:Hmm, the fixed-point shifts resulted in similarity scalings with the matrix method (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.

I don't know whether they should be, but in fact they are different. I would say this has to do with the fact that you need an infinite matrix to do a fixed-point shift, while you are actually dealing with a finite matrix.
... they are different if truncated? If so, OK.
My conversion/factorization of the infinite square matrix into a triangular one by similarity scaling (using powers of the Pascal matrix) applies to the infinite case. I'd like to see how the eigenvalues of the infinite square matrix are determined otherwise.
When I started my tetration discussion based on diagonalization, I considered the sequences of sets of eigenvalues of truncated matrices of increasing size, where the parameter b in f := b^x was in the "range of convergence", so approximations to the limit case of the sets of eigenvalues made sense. See, for instance, the sets of eigenvalues for b=1.7^(1/1.7).
The approximation to a sequence of powers of a constant (here, of log(1.7)) was the reason for my hypothesis that this is in principle also true for parameters b outside the "range of convergence".
Gottfried Helms, Kassel
Reply
#8
Gottfried Wrote:My conversion/factorization of the infinite square matrix into a triangular one by similarity scaling (using powers of the Pascal matrix) applies to the infinite case. I'd like to see how the eigenvalues of the infinite square matrix are determined otherwise.

Gottfried, we discussed that already. There is no uniqueness of diagonalization for infinite matrices. For example, the infinite Carleman/Bell matrix of sqrt(2)^x can be diagonalized with eigenvalues being powers of log(2) (via the fixed point 2) and with eigenvalues being powers of 2*log(2) (via the fixed point 4).

What we do have, however, is a unique diagonalization (up to swapping eigenvalues, of course) of the finite approximating matrices. This method is always applicable (especially also to functions with no real fixed point, like e^x) and clearly defined.
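A quick numeric illustration that the truncations stay diagonalizable even without a real fixed point (same row convention as in the sketches above; for e^x the entries reduce to j^k/k!):

Code:
import numpy as np
from math import factorial

# Truncated Carleman matrix of e^x at 0: entry (j, k) = j^k / k!.
def carleman_exp(n):
    return np.array([[float(j) ** k / factorial(k) for k in range(n)]
                     for j in range(n)])

# e^x has no real fixed point; each truncation is still numerically
# diagonalizable, but the leading eigenvalues drift as the size grows.
for n in (4, 6, 8):
    ev = np.sort(np.abs(np.linalg.eigvals(carleman_exp(n))))[::-1]
    print(n, np.round(ev[:4], 4))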

Quote:When I started my tetration discussion based on diagonalization, I considered the sequences of sets of eigenvalues of truncated matrices of increasing size, where the parameter b in f := b^x was in the "range of convergence", so approximations to the limit case of the sets of eigenvalues made sense. See, for instance, the sets of eigenvalues for b=1.7^(1/1.7).

So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of b^x converge to the set of powers of f'(p), where p is the lower (the attracting) fixed point of b^x?

Quote:The approximation to a sequence of powers of a constant (here, of log(1.7)) was the reason for my hypothesis that this is in principle also true for parameters b outside the "range of convergence".

Yes, as I demonstrated in "diagonal vs natural", it absolutely makes sense to consider B^t, where B is the Carleman/Bell matrix of b^x. Even if the eigenvalues behave strangely (don't converge to powers of anything), you have for each truncated matrix a unique real t-th power (how powers, or generally holomorphic functions, are applied to matrices by diagonalization is explained here), and the values on the first row/column of this power converge with increasing matrix size.
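A sketch of that procedure, under the row convention of the earlier sketches and for a size where the eigenvalues come out real and positive (as they do for b = sqrt(2) at small sizes); which row/column carries the iterate coefficients depends on the chosen convention:

Code:
import numpy as np
from math import factorial, log, sqrt

# Truncated Carleman matrix of sqrt(2)^x at 0 (row convention).
N = 8
c = log(sqrt(2.0))
B = np.array([[(j * c) ** k / factorial(k) for k in range(N)]
              for j in range(N)])

# t-th power by diagonalization: B^t = V * diag(d^t) * V^(-1).
def mpow(B, t):
    d, V = np.linalg.eig(B)
    return (V @ np.diag(d.astype(complex) ** t) @ np.linalg.inv(V)).real

# Under the row convention, row 1 of B^(1/2) approximates the series
# coefficients of the half-iterate of sqrt(2)^x at 0; they stabilize
# as N grows.
print(np.round(mpow(B, 0.5)[1], 6))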
Reply
#9
Well, the problem of non-uniqueness in the infinite case: yes, we settled this already.

The emphasis in my message was on the similarity scaling, which implements the fixed-point shift on the matrix level: this is correct only at infinite size.

If we have, for the Bell matrix Bb for base b,

Bb = P^(-1)~ * X * P~

or

P~ * Bb * P^(-1)~ = X (such that X can be used by similarity conditions),

then X is triangular only in the infinite case, since the rows of P~, matrix-multiplied by the columns of Bb, must be infinite to give correct results.
In the truncated case the approximations get worse toward the lower rows of P~, and in the last row we even have only a single term, which gives nearly no approximation at all. The quality of approximation then depends on the rate of decrease of the terms in the columns of Bb (which is actually always given, but may be sufficient only at "late" indexes, too late to give enough approximation in the truncated case; still, with b=sqrt(2) and a truncation of size 32x32 we may get approximations to 6, 8 or even more digits).

This is not so for the right-hand part Bb * P^(-1)~: there we have finite expressions for each entry of the result and can reach arbitrary accuracy, independent of the chosen truncation size.

So, in the finite case, P~ * Bb * P^(-1)~ = X does not give a triangular X, although X is exactly "similar" to the truncated Bb, in the sense that its eigenvalues, eigenvectors etc. obey all the known rules for similarity transforms.
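A numeric version of this point, under the row convention of the earlier sketches (Gottfried's P~ corresponds to the transpose of the lower-triangular Pascal matrix used here; per the P.S. below, the shift uses the power t = 2, since t^(1/t) = sqrt(2)):

Code:
import numpy as np
from math import comb, factorial, log, sqrt

N = 6
c = log(sqrt(2.0))

# Truncated Carleman matrix of sqrt(2)^x at 0 (row convention).
B = np.array([[(j * c) ** k / factorial(k) for k in range(N)]
              for j in range(N)])

# The t-th power of the lower-triangular Pascal matrix implements the
# shift x -> x + t:  (P^t)[j, k] = C(j, k) * t^(j - k).
def pascal_pow(t, n):
    return np.array([[comb(j, k) * t ** (j - k) if k <= j else 0.0
                      for k in range(n)] for j in range(n)])

# Truncated fixed-point shift toward the fixed point 2.
X = pascal_pow(-2.0, N) @ B @ pascal_pow(2.0, N)

print(np.max(np.abs(np.tril(X, -1))))         # nonzero: X is not triangular
print(np.sort(np.abs(np.linalg.eigvals(B))))  # yet the eigenvalues of B ...
print(np.sort(np.abs(np.linalg.eigvals(X))))  # ... and of X agree (similarity)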

Again - I didn't want to reconsider the problem of non-uniqueness here.
bo198214 Wrote:So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of b^x converge to the set of powers of f'(p), where p is the lower (the attracting) fixed point of b^x?
Yes, for the cases of b in the range of convergence (maybe with some exceptions: b=1 or b=exp(1) or the like).
For b outside this range I found that a part of the eigenvalues (of the truncated matrices) always converges to those logarithms, but another part varies wildly; this may be explained by the fact that a truncated matrix cannot have an infinite eigenvalue, so the set of empirical eigenvalues must somehow be the one which "best" describes the general behaviour of the matrix in finite-size operations.
Here is another example (where I used s instead of b); look at the pages "eigenvalues at critical s" and "eigenvalues for s=e^e^-1". I have some more examples for other bases, showing the variety of sets of occurring eigenvalues with increasing matrix size, and could provide them here if you like.


(P.S. Where I use the simple P in the first part of this message, this is only a sketch. In fact we need the appropriate t-th power of P, where t^(1/t) = b.)
Gottfried Helms, Kassel
Reply
#10
Gottfried Wrote:Well, the problem of non-uniqueness in the infinite case: yes, we settled this already.

The emphasis in my message was on the similarity scaling, which implements the fixed-point shift on the matrix level: this is correct only at infinite size.

If we have, for the Bell matrix Bb for base b,

Bb = P^(-1)~ * X * P~

or

P~ * Bb * P^(-1)~ = X (such that X can be used by similarity conditions),

then X is triangular only in the infinite case, since the rows of P~, matrix-multiplied by the columns of Bb, must be infinite to give correct results.
In the truncated case the approximations get worse toward the lower rows of P~, and in the last row we even have only a single term, which gives nearly no approximation at all. The quality of approximation then depends on the rate of decrease of the terms in the columns of Bb (which is actually always given, but may be sufficient only at "late" indexes, too late to give enough approximation in the truncated case; still, with b=sqrt(2) and a truncation of size 32x32 we may get approximations to 6, 8 or even more digits).

This is not so for the right-hand part Bb * P^(-1)~: there we have finite expressions for each entry of the result and can reach arbitrary accuracy, independent of the chosen truncation size.

So, in the finite case, P~ * Bb * P^(-1)~ = X does not give a triangular X, although X is exactly "similar" to the truncated Bb, in the sense that its eigenvalues, eigenvectors etc. obey all the known rules for similarity transforms.

Sorry, I don't get the point of this. What are "similarity conditions"? What are "all the known rules for similarity transforms"?

Quote:Yes, for the cases of b in the range of convergence.
What is the "range of convergence"?
Reply

