Posts: 1,395
Threads: 91
Joined: Aug 2007
04/27/2008, 02:50 PM
(This post was last modified: 04/27/2008, 02:51 PM by bo198214.)
Computed with 30-digit precision for rslog, and with various matrix sizes (and correspondingly adapted precision) for dsexp.
Here are the graphs of the deviation, depending on the matrix size used for dsexp:
This again looks very promising: it suggests that diagonal and regular tetration are equal.
Posts: 1,395
Threads: 91
Joined: Aug 2007
Though it might already appear boring, here are the curves with … instead of ….
Posts: 509
Threads: 44
Joined: Aug 2007
I'm still confused as to why these are different. It seems to me that "dsexp" and "rsexp" should be identical everywhere, but it seems you are comparing "dsexp" and "rslog", which may indicate that the error comes only from the series inversion?
Andrew Robbins
Posts: 1,395
Threads: 91
Joined: Aug 2007
04/28/2008, 06:33 PM
(This post was last modified: 04/28/2008, 07:06 PM by bo198214.)
andydude Wrote: I'm still confused as to why these are different. It seems to me that "dsexp" and "rsexp" should be identical everywhere,
No, absolutely not. Though you can regard rsexp as a diagonalization method, the difference is *where* you apply it:
dsexp applies it at 0, while rsexp applies it at the lower fixed point.
It is not at all clear that the result of the diagonalization method is independent of the development point, just as it is not clear that the natural Abel method is independent of the development point.
If you apply the diagonalization method at the fixed point you get nice triangular matrices with the powers of the multiplier (the derivative at the fixed point) on the diagonal (the eigenvalues). However, if you apply it at 0, you neither have triangular matrices nor are the eigenvalues powers of something (edit: though I should check the latter before asserting).
You can always apply the diagonalization method to analytic functions, even if they have no real fixed point (e.g. e^x).
Also, I don't yet know of a way to compute the result of the diagonalization method (at a non-fixed point) without power series coefficients, i.e. with an iterative formula like the one that exists for regular iteration.
Posts: 789
Threads: 121
Joined: Aug 2007
bo198214 Wrote: If you apply the diagonalization method at the fixed point you get nice triangular matrices with the powers of the multiplier (the derivative at the fixed point) on the diagonal (the eigenvalues). However, if you apply it at 0, you neither have triangular matrices nor are the eigenvalues powers of something (edit: though I should check the latter before asserting).
Hmm, with the matrix method the fixed point shifts amounted to similarity scalings (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.
Gottfried
Gottfried Helms, Kassel
Posts: 1,395
Threads: 91
Joined: Aug 2007
04/29/2008, 07:18 AM
(This post was last modified: 04/29/2008, 07:46 AM by bo198214.)
Gottfried Wrote: Hmm, with the matrix method the fixed point shifts amounted to similarity scalings (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.
I don't know whether they should be, but in fact they are different. I would say this has to do with the fact that you need an infinite matrix to do a fixed point shift, while you are actually dealing with a finite matrix.
Ok, but perhaps examples are more convincing:
Let us first consider the Carleman matrix (wow, I see Andrew has done righteous work in writing this Wikipedia article) of sqrt(2)^x at 0,
truncated to size 5x5:
The eigenvalues of this matrix are:
1.0000000000
0.6269572883
0.2495195174
0.0482258803
0.0033138436
I don't think they are powers of anything, and they change when the matrix size is increased.
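These values can be reproduced numerically. The following is only a sketch: I assume the base is b = sqrt(2) (as in the later posts) and use the closed form M[i, j] = (i·ln b)^j / j! for the Carleman matrix of b^x at 0, which is my own reconstruction of the setup, not code from the thread.

```python
import numpy as np
from math import log, factorial

def carleman_at_zero(b, n):
    """n x n truncation of the Carleman matrix of f(x) = b^x at 0.

    Row i holds the power-series coefficients of f(x)^i = exp(i*x*ln b),
    so M[i, j] = (i*ln b)^j / j!.
    """
    lb = log(b)
    return np.array([[(i * lb) ** j / factorial(j) for j in range(n)]
                     for i in range(n)])

M = carleman_at_zero(2 ** 0.5, 5)
eigs = sorted(np.abs(np.linalg.eigvals(M)), reverse=True)
print(eigs)          # compare with the five values quoted above

# the eigenvalues move when the truncation size changes:
print(sorted(np.abs(np.linalg.eigvals(carleman_at_zero(2 ** 0.5, 6))),
             reverse=True))
```

Note that 1 is always an exact eigenvalue of the truncation, since the first row of M is (1, 0, 0, …).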
Now consider the matrix at the lower fixed point 2, i.e. the matrix of sqrt(2)^(x+2) - 2.
The eigenvalues are the entries on the diagonal; they are powers of 0.6931471806 (= ln 2). Numerically:
1.0000000000
0.6931471806
0.4804530140
0.3330246520
0.2308350986
When the matrix size is increased, only new powers are added, so these eigenvalues are invariant under enlarging the matrix.
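The triangular structure and the diagonal entries can also be checked with a short script. This is a sketch assuming base sqrt(2): the shifted function is g(x) = sqrt(2)^(x+2) - 2 = 2·(exp(x·ln sqrt(2)) - 1), and its Carleman matrix at 0 is built row by row via truncated series multiplication.

```python
import numpy as np
from math import log, factorial

N = 5
lb = log(2) / 2                       # ln(sqrt(2))

# power series of g(x) = sqrt(2)^(x+2) - 2 = 2*(exp(x*lb) - 1);
# note g(0) = 0 and g'(0) = 2*lb = ln 2
g = np.array([0.0] + [2 * lb ** j / factorial(j) for j in range(1, N)])

def trunc_mul(a, b, n=N):
    """Multiply two power series truncated to degree < n."""
    c = np.zeros(n)
    for i, ai in enumerate(a):
        c[i:] += ai * b[:n - i]
    return c

# Carleman matrix of g: row i = coefficients of g(x)^i
C = np.zeros((N, N))
row = np.zeros(N)
row[0] = 1.0                          # g^0 = 1
for i in range(N):
    C[i] = row
    row = trunc_mul(row, g)

print(np.diag(C))                     # 1, ln 2, (ln 2)^2, (ln 2)^3, (ln 2)^4
```

Since g(0) = 0, the series of g^i starts at x^i, so the matrix is exactly triangular with (ln 2)^i on the diagonal, matching the listed values.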
Posts: 789
Threads: 121
Joined: Aug 2007
04/29/2008, 10:46 AM
(This post was last modified: 04/29/2008, 10:50 AM by Gottfried.)
bo198214 Wrote: Gottfried Wrote: Hmm, with the matrix method the fixed point shifts amounted to similarity scalings (using powers of the Pascal matrix). Then the eigenvalues should be unchanged.
I don't know whether they should be, but in fact they are different. I would say this has to do with the fact that you need an infinite matrix to do a fixed point shift, while you are actually dealing with a finite matrix. ... They are different if truncated, you mean? If so, OK.
My conversion/factorization of the infinite square matrix into a triangular one by similarity scaling (using powers of the Pascal matrix) applies to the infinite case. I'd like to see how the eigenvalues of the infinite square matrix are determined otherwise.
When I started my tetration discussion based on diagonalization, I considered the sequences of sets of eigenvalues for truncated matrices of increasing size, where the parameter b for f := b^x was in the "range of convergence", so approximations to the limit case for the sets of eigenvalues made sense. See for instance the sets of eigenvalues for b = 1.7^(1/1.7).
The approximation to a sequence of powers of a constant (here of log(1.7)) was the reason for my hypothesis that this is in principle also true for parameters b outside the "range of convergence".
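This observation can be checked numerically. A sketch, under my own assumptions: I take the Carleman matrix of b^x at 0 in the closed form M[i, j] = (i·ln b)^j / j! (this is a reconstruction of the general setup, not Gottfried's actual code), with b = 1.7^(1/1.7), so the fixed point is 1.7 and the expected multiplier is log(1.7) ≈ 0.5306.

```python
import numpy as np
from math import log, factorial

def carleman_at_zero(b, n):
    """n x n truncation of the Carleman matrix of f(x) = b^x at 0:
    row i holds the series coefficients of f(x)^i = exp(i*x*ln b)."""
    lb = log(b)
    return np.array([[(i * lb) ** j / factorial(j) for j in range(n)]
                     for i in range(n)])

b = 1.7 ** (1 / 1.7)          # fixed point 1.7, multiplier log(1.7)
target = log(1.7)

for n in (4, 8, 16):
    eigs = sorted(np.abs(np.linalg.eigvals(carleman_at_zero(b, n))),
                  reverse=True)
    # the second-largest eigenvalue should drift toward log(1.7)
    print(n, eigs[1], target)
```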
Gottfried Helms, Kassel
Posts: 1,395
Threads: 91
Joined: Aug 2007
04/29/2008, 11:15 AM
(This post was last modified: 04/29/2008, 11:31 AM by bo198214.)
Gottfried Wrote: My conversion/factorization of the infinite square matrix into a triangular one by similarity scaling (using powers of the Pascal matrix) applies to the infinite case. I'd like to see how the eigenvalues of the infinite square matrix are determined otherwise.
Gottfried, we discussed that already. There is no uniqueness of diagonalization for infinite matrices. For example, the infinite Carleman/Bell matrix of b^x (for a base with two real fixed points) can be diagonalized with eigenvalues being the powers of the multiplier at one fixed point, and equally with eigenvalues being the powers of the multiplier at the other fixed point.
What we do have, however, is a unique diagonalization (up to swapping eigenvalues, of course) of the finite approximating matrices. This method is always applicable (in particular also to functions without a real fixed point, like e^x) and clearly defined.
Quote: When I started my tetration discussion based on diagonalization, I considered the sequences of sets of eigenvalues for truncated matrices of increasing size, where the parameter b for f := b^x was in the "range of convergence", so approximations to the limit case for the sets of eigenvalues made sense. See for instance the sets of eigenvalues for b = 1.7^(1/1.7).
So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of b^x converge to the set of powers of log(a), where a is the lower (the attracting) fixed point of b^x?
Quote: The approximation to a sequence of powers of a constant (here of log(1.7)) was the reason for my hypothesis that this is in principle also true for parameters b outside the "range of convergence".
Yes, as I demonstrated in "diagonal vs natural", it makes absolute sense to consider B^t, where B is the Carleman/Bell matrix of b^x. Even if the eigenvalues behave strangely (don't converge to powers of anything), for each truncated matrix you have a unique real t-th power (how powers, or generally holomorphic functions, are applied to matrices by diagonalization is explained here), and the values in the first row/column of this power converge with increasing matrix size.
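A minimal sketch of this procedure: take the real t-th power of the truncated Carleman matrix by diagonalization and read the t-th iterate off the appropriate row. The function names, the row convention (row i = series of f^i, so row 1 carries the iterate), and the size n = 8 are my own choices for illustration, not prescribed by the thread.

```python
import numpy as np
from math import log, factorial

def carleman_at_zero(b, n):
    """n x n truncation of the Carleman matrix of f(x) = b^x at 0."""
    lb = log(b)
    return np.array([[(i * lb) ** j / factorial(j) for j in range(n)]
                     for i in range(n)])

def iterate_value(b, t, x0=1.0, n=8):
    """Value of the t-th iterate of f(x) = b^x at x0, computed from the
    t-th power of the truncated Carleman matrix via diagonalization."""
    B = carleman_at_zero(b, n)
    d, V = np.linalg.eig(B)
    d = d.astype(complex)             # allow fractional powers safely
    Bt = (V @ np.diag(d ** t) @ np.linalg.inv(V)).real
    # row 1 of B^t holds the series coefficients of the t-th iterate
    return sum(c * x0 ** k for k, c in enumerate(Bt[1]))

b = 2 ** 0.5
print(iterate_value(b, 1.0))          # ≈ sqrt(2)
print(iterate_value(b, 2.0))          # ≈ sqrt(2)^sqrt(2)
print(iterate_value(b, 0.5))          # a half-iterate value, between 1 and sqrt(2)
```

For integer t this reproduces ordinary iteration up to truncation error; for fractional t it is precisely the "unique real t-th power" of the truncated matrix described above.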
Posts: 789
Threads: 121
Joined: Aug 2007
04/29/2008, 11:57 AM
(This post was last modified: 04/29/2008, 12:27 PM by Gottfried.)
Well, the problem of non-uniqueness in the infinite case: yes, we settled this already.
The emphasis of my message was on the similarity scaling, which implements the fixed point shift in matrix terms: this is only correct at infinite size.
If we have, for the Bell matrix Bb for base b,
Bb = P^(-1)~ * X * P~
or
P~ * Bb * P^(-1)~ = X (so that X satisfies the similarity conditions),
then X is triangular only in the infinite case, since the rows of P~ must be matrix-multiplied by the full infinite columns of Bb to give correct results.
In the truncated case the approximations get worse for the lower rows of P~; in the last row we even have only one single term, which gives nearly no approximation at all. The quality of the approximation then depends on the rate of decrease of the terms in the columns of Bb (which is in fact always given, but may become sufficient only at "late" indexes, too late to give a good approximation in the truncated case; with b = sqrt(2), though, a truncation of size 32x32 already yields approximations to 6, 8 or even more digits).
This is not so for the right-hand product Bb * P^(-1)~: there each entry of the result is a finite expression, and we can reach arbitrary accuracy independent of the chosen truncation size.
So in the finite case, for P~ * Bb * P^(-1)~ = X, we do not get a triangular X, although it is exactly "similar" to the truncated Bb, in the sense that the eigenvalues, eigenvectors etc. obey all known rules for a similarity transform.
Again: I didn't want to reconsider the problem of non-uniqueness here.
bo198214 Wrote: So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of b^x converge to the set of powers of log(a), where a is the lower (the attracting) fixed point of b^x? Yes, for the cases of b in the range of convergence (maybe with some exceptions: b = 1 or b = exp(1) or the like).
For b outside this range I found that part of the eigenvalues (of the truncated matrices) always converges to those logarithms, while another part varies wildly. This may be explained by the fact that a truncated matrix cannot have an infinite eigenvalue, so the set of empirical eigenvalues must somehow be the one which "best" describes the general behaviour of the matrix in finite-size operations.
Here is another example (where I used s instead of b); look at the pages "eigenvalues at critical s" and "eigenvalues for s=e^e^1". I have some more examples for other bases, showing the variety of sets of eigenvalues occurring with increasing matrix size, and could provide them here if you like.
(P.S. Where I use the plain P in the first part of this message, this is only a sketch. In fact we need the appropriate t-th power of P, where t^(1/t) = b.)
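The finite-size effect described here can be illustrated numerically. A sketch under my own conventions: instead of the transposed-Pascal notation I conjugate the truncated Carleman matrix of sqrt(2)^x (row i = series of f^i) by the truncated Carleman matrices of the shifts x+2 and x-2, which play the role of P^t with t = 2 (since 2^(1/2) = sqrt(2)); the matrix and variable names are mine.

```python
import numpy as np
from math import comb, log, factorial

N = 5
lb = log(2) / 2                       # ln(sqrt(2))

# truncated Carleman matrix (row i = series of f^i) of f(x) = sqrt(2)^x at 0
B = np.array([[(i * lb) ** j / factorial(j) for j in range(N)]
              for i in range(N)])

# truncated Carleman matrices of the shifts x+2 and x-2; these play the
# role of the (t-th power of the) Pascal matrix with t = 2
S = np.array([[float(comb(i, j)) * 2.0 ** (i - j) if j <= i else 0.0
               for j in range(N)] for i in range(N)])
Si = np.array([[float(comb(i, j)) * (-2.0) ** (i - j) if j <= i else 0.0
                for j in range(N)] for i in range(N)])

X = Si @ B @ S                        # finite-size "fixed point shift" of B

# X is exactly similar to the truncated B, so the eigenvalues agree ...
print(sorted(np.abs(np.linalg.eigvals(B)), reverse=True))
print(sorted(np.abs(np.linalg.eigvals(X)), reverse=True))
# ... but X is not triangular: only the infinite conjugation produces the
# triangular matrix with the powers of ln 2 on the diagonal
print(np.max(np.abs(np.tril(X, -1))))
```

Because the shift matrices are triangular, their truncated product Si @ S is exactly the identity, so the similarity (and hence the eigenvalue set) is exact at finite size; what is lost in truncation is only the triangularity of X.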
Gottfried Helms, Kassel
Posts: 1,395
Threads: 91
Joined: Aug 2007
Gottfried Wrote: Well, the problem of non-uniqueness in the infinite case: yes, we settled this already.
The emphasis of my message was on the similarity scaling, which implements the fixed point shift in matrix terms: this is only correct at infinite size.
If we have, for the Bell matrix Bb for base b,
Bb = P^(-1)~ * X * P~
or
P~ * Bb * P^(-1)~ = X (so that X satisfies the similarity conditions),
then X is triangular only in the infinite case, since the rows of P~ must be matrix-multiplied by the full infinite columns of Bb to give correct results.
In the truncated case the approximations get worse for the lower rows of P~; in the last row we even have only one single term, which gives nearly no approximation at all. The quality of the approximation then depends on the rate of decrease of the terms in the columns of Bb (which is in fact always given, but may become sufficient only at "late" indexes, too late to give a good approximation in the truncated case; with b = sqrt(2), though, a truncation of size 32x32 already yields approximations to 6, 8 or even more digits).
This is not so for the right-hand product Bb * P^(-1)~: there each entry of the result is a finite expression, and we can reach arbitrary accuracy independent of the chosen truncation size.
So in the finite case, for P~ * Bb * P^(-1)~ = X, we do not get a triangular X, although it is exactly "similar" to the truncated Bb, in the sense that the eigenvalues, eigenvectors etc. obey all known rules for a similarity transform.
Sorry, I don't get the point of this. What are "similarity conditions", and what are "all known rules for a similarity transform"?
Quote:Yes, for the cases of b in the range of convergence.
What is the "range of convergence"?
