Tetration Forum

sum of log of eigenvalues of Carleman matrix
Hey Gottfried,

did you notice that the sum over the logarithms of the eigenvalues of the Carleman matrix of exp converges (for increasing matrix size)? The same holds if you instead sum the n-th powers of the logarithms, for any n.
This would be a direct consequence of the matrix power method (for non-integer iteration of exp) converging to an analytic function.
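One quick way to test this in plain gp, with no custom helpers - a minimal sketch, assuming the truncated Carleman matrix of exp at 0 with entries C[j,k] = j^k/k! for j,k = 0..dim-1 (the Bell matrix is just its transpose, so the eigenvalues agree), and taking the eigenvalues as roots of the exact characteristic polynomial:

Code:
default(realprecision, 100);
{for(dim=8, 16,
   \\ exact rational entries C[j,k] = j^k/k!  (indices shifted by 1 for gp)
   C  = matrix(dim, dim, j, k, (j-1)^(k-1)/(k-1)!);
   \\ eigenvalues as roots of the (exact) characteristic polynomial
   ev = polroots(charpoly(C));
   \\ sum the n-th powers of the logarithms for a few n
   for(n=1, 3,
      print(dim, "  n=", n, "  ", sum(i=1, dim, log(ev[i])^n))
   )
)}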
(08/28/2009, 11:27 AM)bo198214 Wrote: Hey Gottfried,

did you notice that the sum over the logarithms of the eigenvalues of the Carleman matrix of exp converges (for increasing matrix size)? The same holds if you instead sum the n-th powers of the logarithms, for any n.
This would be a direct consequence of the matrix power method (for non-integer iteration of exp) converging to an analytic function.
Hmm, for dim=8..24 I always get the sum near zero at machine precision (Pari/GP, 200 or 800 digits internal precision). That means the product of the eigenvalues is near 1, no matter which dimension I select. There may be an error, but the procedure is simple. Here is the Pari/GP code:
Code:
fmt(800,12)                        \\ user helper: 800 digits internal precision, 12 displayed
{for(dim=8,24,
   B = VE(fS2F,dim)*VE(P,dim)~ ;   \\ construct the Bell (transposed Carleman) matrix for exp(x);
                                   \\ VE, fS2F, P are user helpers: dim x dim truncations of the
                                   \\ (factorially rescaled) Stirling-2nd-kind and Pascal matrices
   tmpW = mateigen(B);             \\ getting the eigenvectors in tmpW

   tmpD = HadDiv(B*tmpW,tmpW)[1,]; \\ getting the eigenvalues in tmpD by elementwise
                                   \\ (Hadamard) division; this is simpler than the "official"
                                   \\ method: tmpD = diag(tmpW^-1 * B * tmpW)

   sulog = sum(k=1,#tmpD,log(tmpD[k]));
   print(dim," ",sulog);
)}

8 -3.255463966 E-808
9 -9.22381457 E-808
10 4.88319595 E-808
11 2.821402104 E-807
12 6.51092793 E-808
13 -8.35569084 E-807
14 -1.519216517 E-807
15 3.157800047 E-806
16 -3.393278608 E-805
17 -5.444003874 E-804
18 -4.808428794 E-804
19 2.106242431 E-801
20 -1.177930451 E-800
21 -2.356690583 E-799
22 -3.10781078241 E-798
23 -3.76652961891 E-797
24 -3.06512885635 E-797

Example eigenvalues for dim=24
[4.28673736924 E-11]
[0.00000000296568145370]
[0.0000000957784063100]
[0.00000191766745327]
[0.0000266643844037]
[0.000273351215354]
[0.00214050544928]
[0.0130807983589]
[0.0630992348914]
[0.240819894743]
[0.729062578542]
[1.00000000000]
[1.89149672765]
[5.08315258442]
[15.5889319581]
[54.8943446446]
[221.893591029]
[1035.09334661]
[5636.83816890]
[36538.7788311]
[290981.552989]
[3004492.63267]
[44636646.3387]
[1247092190.35]

I wouldn't say this is exactly convergence with increasing dimension... ;-)

Gottfried
Hm, I don't know whether it has a deeper meaning, it's just an observation :)
(08/28/2009, 08:58 PM)bo198214 Wrote: Hm, I don't know whether it has a deeper meaning, it's just an observation :)

Hmm, this means the determinant of this matrix is 1 for finite dimension - which then extends also to its iterates/powers.
That the determinant is 1 for finite dimension also follows from the determinants of its factors, the Stirling- and the Pascal-matrix. Both are triangular and have units on the diagonal, so
det(fS2F) = det(P) = 1 and det(B) = det(fS2F * P~) = det(fS2F)*det(P~) = 1*1 = 1.
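The same can be cross-checked directly in plain gp, without the fS2F/P factors - a minimal sketch that builds the truncated Carleman matrix of exp at 0 from its entries j^k/k! and takes the exact determinant, which should come out as 1 for every dim, in line with the argument above:

Code:
{for(dim=4, 12,
   C = matrix(dim, dim, j, k, (j-1)^(k-1)/(k-1)!);  \\ exact rational entries j^k/k!
   print(dim, "  det = ", matdet(C))                \\ expected: 1 for each dim
)}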
(08/28/2009, 09:10 PM)Gottfried Wrote:
(08/28/2009, 08:58 PM)bo198214 Wrote: Hm, I don't know whether it has a deeper meaning, it's just an observation :)

Hmm, this means the determinant of this matrix is 1 for finite dimension - which then extends also to its iterates/powers.
That the determinant is 1 for finite dimension also follows from the determinants of its factors, the Stirling- and the Pascal-matrix. Both are triangular and have units on the diagonal, so
det(fS2F) = det(P) = 1 and det(B) = det(fS2F * P~) = det(fS2F)*det(P~) = 1*1 = 1.
It is also interesting to contrast this with the version whose power series is developed at the first complex fixpoint of exp(x), x0 = 0.318131505205 + 1.33723570143*I.
With dim=64 I get - at least for the integer iterates -1, 1, 2 - the expected values to 10 digits accuracy. However, the sum of the logarithms of the eigenvalues (which there are the powers x0^0, x0^1, x0^2, ...) should then simply, but consequently, be
log(x0^0) + log(x0^1) + log(x0^2) + ...
= log(x0)*(0 + 1 + 2 + 3 + ...) =???= log(x0) * zeta(-1) = -0.0265109587671 - 0.111436308453*I
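A minimal gp check of that last value, assuming only that the primary fixpoint satisfies x0 = log(x0) (so iterating log from a complex seed converges to it) and that zeta(-1) = -1/12:

Code:
x0 = 1.0 + I;
for(i=1, 500, x0 = log(x0));        \\ converges to 0.318131505... + 1.337235701...*I
print("x0               = ", x0);
print("log(x0)*zeta(-1) = ", log(x0) * zeta(-1))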

....
well, better not to ride the horse to death... ;)

Gottfried