Eigenvalues of the Carleman matrix of b^x
#1
I don't really know whether my earlier approach to find a diagonalization for the b^x problem is really a proof, but it may be a base for one.



Assume the carleman-matrix Bb of infinite size for a base b, 1 < b < e^(1/e), using the letters c = log(b), b = t^(1/t), u = log(t),
written as (showing its top-left edge; entry (r,k) is (k*c)^r / r!)

Bb =
  [ 1    1          1           1          ... ]
  [ 0    c          2c          3c         ... ]
  [ 0    c^2/2!     (2c)^2/2!   (3c)^2/2!  ... ]
  [ 0    c^3/3!     (2c)^3/3!   (3c)^3/3!  ... ]
  [ ...                                        ]

to the effect that

  V(x) * Bb = V(b^x)
where, as usual, V(x) denotes a vandermonde-vector [1, x, x^2, x^3, ...] of a variable parameter x. But differing from my usual notation, let's assume V(x) to be a row vector in this sequel, to reduce notation overhead.
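For readers who want to experiment: here is a small numerical sketch (Python; the helper names, the truncation size N and the sample values are my choices, not from the thread) of the truncated Carleman matrix in this row-vector convention, checking V(x) * Bb ≈ V(b^x):

```python
import math

# Truncated Carleman matrix of f(x) = b^x in the row-vector convention
# V(x) * Bb = V(b^x): entry (r, k) is (k*c)^r / r! with c = log(b).
def carleman(b, N):
    c = math.log(b)
    return [[(k * c) ** r / math.factorial(r) for k in range(N)]
            for r in range(N)]

def V(x, N):
    # Vandermonde row vector [1, x, x^2, ...]
    return [x ** k for k in range(N)]

def row_times_matrix(v, M):
    return [sum(v[r] * M[r][k] for r in range(len(v)))
            for k in range(len(M[0]))]

u = 0.5
t = math.exp(u)        # fixpoint, b^t = t
b = t ** (1 / t)       # then 1 < b < e^(1/e)
N = 40                 # truncation size (my choice)

Bb = carleman(b, N)
lhs = row_times_matrix(V(0.3, N), Bb)[:10]
rhs = V(b ** 0.3, 10)
err = max(abs(p - q) for p, q in zip(lhs, rhs))
print(err)   # only truncation error remains
```

The error shrinks quickly with N because each column sum is a truncated exponential series.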

Now, if we look for a diagonalisation of Bb, this would have the form

  Bb = W^-1 * D * W

(W and W^-1 exchanged from my usual notation elsewhere)

The diagonalization-theorems for finite matrices then say

  W * Bb = D * W
so each of the rows W_r of W becomes a d_r-multiple of itself by such a transformation, W_r * Bb = d_r * W_r, where d_r denotes the r'th eigenvalue contained in the diagonal of D, and r is the row-number beginning at zero.

Then, obviously, if we use a fixpoint t of x -> b^x (so b^t = t) we have, for instance,

  V(t) * Bb = V(b^t) = V(t)

and V(t) satisfies the condition to be an eigenvector for the eigenvalue 1. So assume W_0 = V(t) and d_0 = 1.

Next one can show (I'll add the proof later) that another vector E_1(t) also satisfies the eigenvector-condition,

such that

  E_1(t) * Bb = u * E_1(t)

so we have W_1 = E_1 and d_1 = u.

It is difficult to get an idea about E_2 by sheer inspection of example data; but it seems that an extrapolation makes sense: we may rescale W by dV(1/t), with the effect that the descriptions of E_0 and E_1 (and hopefully all E_k) reduce to their numeric coefficients, independent of t.

So we restate the diagonalization in the following form:

  W  = X * dV(t)
  Bb = dV(1/t) * X^-1 * D * X * dV(t)

and investigate X instead of W.

Second, the base-equation for a parameter x changes.
Let's call dV(t) * Bb * dV(1/t) = Bb_1.
The parameter x has now to be divided by t and the result has to be multiplied by t:

  V(x/t) * Bb_1 = V(b^x / t)

so that b^x = t * f_1(x/t), where f_1 denotes the function associated with Bb_1.
Note that in the above definition of Bb we have the constant factor c^r in each row, which is now multiplied by t^r due to the premultiplication by dV(t). But c = log(b) = u/t, and we have
t^r * c^r = (t*c)^r = u^r
so the row-multiplicator of dV(t) * Bb * dV(1/t) is now dV(u).
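A quick sketch of this rescaling (Python; the truncation size and sample values are my choices): Bb_1 = dV(t) * Bb * dV(1/t) has entries t^r * (k*c)^r/r! * t^(-k) = (k*u)^r/r! * t^(-k), and satisfies V(x/t) * Bb_1 = V(b^x / t):

```python
import math

# Entries of Bb_1 = dV(t) * Bb * dV(1/t): t^r * (k*c)^r/r! * t^(-k)
# = (k*u)^r / r! * t^(-k), since t*c = u.  Check V(x/t) * Bb_1 = V(b^x / t).
u = 0.5
t = math.exp(u)
b = t ** (1 / t)
N = 40          # truncation size (my choice)

Bb1 = [[(k * u) ** r / math.factorial(r) / t ** k for k in range(N)]
       for r in range(N)]

x = 0.8
lhs = [sum((x / t) ** r * Bb1[r][k] for r in range(N)) for k in range(10)]
rhs = [(b ** x / t) ** k for k in range(10)]
err = max(abs(p - q) for p, q in zip(lhs, rhs))
print(err)
```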

Well, let's go back to the previous.
First we have

  X_0 = W_0 * dV(1/t) = V(t) * dV(1/t) = [1, 1, 1, 1, ...]
  X_1 = W_1 * dV(1/t) = [0, 1, 2, 3, ...]

and we may assume that X_2, X_3, ... follow a simple scheme.
One assumption is that X could come out as a composition of the pascal-matrix with another matrix S.
The interesting thing - and maybe the base for a final proof - is now that, with this assumption, S can be found by an iterative process if we assume that d_r = u^r. The iterative process requires an eigensystem-solution for each row in X, but it requires only the results of the previous steps and leads to a triangular solution S (which comes out to be the Ut-matrix, btw.)

Because the latter solution is a) solvable and b) not arbitrary under the assumption of d_r = u^r, I think this may be a path for the proof. However, even if the solution is unique under this assumption, one may find other solutions with another assumption.
Gottfried Helms, Kassel
#2
Just found a message in sci.math.research of last year where I looked at the trace of the carleman-matrix. This may be another element of a proof.


If Bb = W^-1 * D * W, where D is diagonal, containing [1, u, u^2, u^3, ...],
then it must hold (as for finite matrices): trace(Bb) = trace(D)

We have:

  trace(D) = 1 + u + u^2 + u^3 + ... = 1/(1-u)    (for 0 < u < 1)

The structure of Bb is simple; the sum of its diagonal elements is (using c = log(b) = u/t)

  trace(Bb) = sum_{k=0..inf} (k*c)^k / k!

some numerical checks:
Code:
[     u        ,    t         , b            ,  trace(D)    ,  trace(Bb)   , trace(D)-trace(Bb)]
[0.500000000000, 1.64872127070, 1.35427374603, 2.00000000000, 2.00000000000, 4.31794217767 E-60]
[0.600000000000, 1.82211880039, 1.38997669617, 2.50000000000, 2.50000000000, 2.412917620 E-86]
[0.700000000000, 2.01375270747, 1.41567962006, 3.33333333333, 3.33333333333, 3.217223494 E-86]
[0.800000000000, 2.22554092849, 1.43256016868, 5.00000000000, 5.00000000000, 6.43444698 E-86]
[0.900000000000, 2.45960311116, 1.44182935647, 10.0000000000, 10.0000000000, 1.608611747 E-85]
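The table can be reproduced with a few lines (Python; the function name and the number of series terms are my choices, not from the post):

```python
import math

# trace(Bb) = sum_k (k*c)^k / k! with c = u/t = u*e^(-u); compare with
# trace(D) = 1/(1-u).  Terms are computed in log-space to avoid overflow
# for the slowly converging cases (u close to 1).
def trace_Bb(u, terms=2000):
    c = u * math.exp(-u)
    s = 1.0                               # k = 0 term (0^0 = 1)
    for k in range(1, terms):
        s += math.exp(k * math.log(k * c) - math.lgamma(k + 1))
    return s

for u in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(u, trace_Bb(u), 1 / (1 - u))
```

For u near 1 the series converges slowly (the terms behave like (c*e)^k / sqrt(k)), which is why a generous number of terms is used.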
Gottfried Helms, Kassel
#3
Re first part:
Perhaps I just don't understand your explanation, but for my taste you work too much with infinite matrices. We have already seen that they may not even have a unique inverse, nor a unique diagonalization. We can not carry over the rules of finite matrices to infinite matrices. Also the fixed point choice is not unique; there are infinitely many complex fixed points.

Re second part:
The trace is indeed an interesting invariant; here the trace of the infinite matrix is really the limit of the traces of the finite truncations:

  trace(Bb) = lim_{n->inf} trace(Bb_n) = sum_{k=0..inf} (k*c)^k / k!.

So what you are saying is (?): if the eigenvalues of the truncated matrices do converge to the powers of the logarithm u of the lower fixed point, then trace(D) = 1 + u + u^2 + ... = 1/(1-u).

So if trace(Bb) has indeed this value then this is a "good sign".

So the question then remains whether

  sum_{n=0..inf} (n*c)^n / n! = 1/(1-u).

Right?
Lets transform this a bit:

  sum_n (n*c)^n / n! = sum_n n^n/n! * c^n

This looks already damn like the power series of the LambertW function, which is

  W(x) = sum_n (-n)^(n-1)/n! * x^n.
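As a numeric sanity check (Python; helper names and the sample point are mine, and the series is taken from n = 1, where it converges for |x| < 1/e), the series can be compared against a direct Newton solve of w*e^w = x:

```python
import math

# Series for the principal branch of LambertW (summation starting at n = 1)
# versus a Newton iteration for w*e^w = x.
def w_series(x, terms=80):
    return sum((-n) ** (n - 1) / math.factorial(n) * x ** n
               for n in range(1, terms))

def w_newton(x, iters=60):
    w = 0.0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1 + w))  # Newton step for w*e^w - x = 0
    return w

x = 0.2
print(w_series(x), w_newton(x))
```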

Hence

  sum_{n=1..inf} n^n/n! * c^n = c * W'(-c)

And we want to show that c * W'(-c) = 1/(1-u).
Ok, we have W(x)*e^W(x) = x. Take the derivative:

  W'(x)*e^W(x) + W(x)*e^W(x)*W'(x) = 1

now e^W(x) = x/W(x), so

  W'(x) = W(x) / (x*(1 + W(x)))

and thus c * W'(-c) = -W(-c)/(1 + W(-c)) = u/(1-u).
Hm, this is not quite so - where is the error?
#4
bo198214 Wrote:Re first part:
Perhaps I just don't understand your explanation, but for my taste you work too much with infinite matrices. We have already seen that they may not even have a unique inverse, nor a unique diagonalization. We can not carry over the rules of finite matrices to infinite matrices. Also the fixed point choice is not unique; there are infinitely many complex fixed points.
Well, one more try:


a) (first row, r = 0, of W in the matrix-equation W * Bb = dV(u) * W)
Proposal: for all columns k it holds

  sum_{j=0..inf} t^j * (k*c)^j / j! = t^k

Proof:

  sum_{j=0..inf} (t*k*c)^j / j! = e^(t*k*c) = b^(t*k) = (b^t)^k = t^k

b) (second row, r = 1, of W in the matrix-equation)
Proposal: for all columns k it holds

  sum_{j=0..inf} j * t^j * (k*c)^j / j! = u * (k * t^k)

Proof:

  sum_{j=0..inf} j * (t*k*c)^j / j! = (t*k*c) * e^(t*k*c) = (k*u) * t^k = u * (k * t^k)    (since t*c = u)

c) (third and consecutive rows) --- here we had to insert a valid assumption about the coefficients x_k at x_k*t^k

Proposal: for all columns k it holds

  sum_{j=0..inf} x_j * t^j * (k*c)^j / j! = u^r * x_k * t^k

where the x_k are to be determined.
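Proposals a) and b) can be checked numerically (Python sketch; truncation sizes are my choices, and for b) the row is spelled out as E_1(t) = [0, t, 2*t^2, 3*t^3, ...], which is what the proposal amounts to):

```python
import math

# Left-eigenvector check on the truncated Carleman matrix Bb[r][k] = (k*c)^r/r!:
# W_0 = [t^r]      should satisfy  W_0 * Bb = 1 * W_0,
# W_1 = [r * t^r]  should satisfy  W_1 * Bb = u * W_1.
u = 0.5
t = math.exp(u)
b = t ** (1 / t)
c = math.log(b)
N = 60          # rows kept in the truncation (my choice)
cols = 8        # columns actually compared (my choice)

Bb = [[(k * c) ** r / math.factorial(r) for k in range(cols)]
      for r in range(N)]

W0 = [t ** r for r in range(N)]
W1 = [r * t ** r for r in range(N)]

def left_mul(v, M):
    return [sum(v[r] * M[r][k] for r in range(len(v)))
            for k in range(len(M[0]))]

err0 = max(abs(p - q) for p, q in
           zip(left_mul(W0, Bb), [t ** k for k in range(cols)]))
err1 = max(abs(p - q) for p, q in
           zip(left_mul(W1, Bb), [u * k * t ** k for k in range(cols)]))
print(err0, err1)
```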

--------------------

Quote:Re second part:
...
Hm this is not quite so where is the error?

Nice - I'll try to help locate it, possibly tomorrow.
Gottfried Helms, Kassel
#5
Gottfried Wrote:(...)
c) (third and consecutive rows) --- here we had to insert a valid assumption about the coefficients x_k at x_k*t^k

Proposal:

x_c are to be determined

--------------------

I've applied the eigensystem-solution. This gives as possible solutions the x_k for the third row:

Code:
row 2 (third row of W), invariant under transformation by Bb except scaling by eigenvalue u^2
-------------------------------------------
                0
              u*t
      (3*u-2)*t^2
      (6*u-6)*t^3
    (10*u-12)*t^4
    (15*u-20)*t^5
    (21*u-30)*t^6
    (28*u-42)*t^7
    (36*u-56)*t^8
    (45*u-72)*t^9
   (55*u-90)*t^10
  (66*u-110)*t^11
    ...
all with denominator (u-2)

Begin of decoding:
x0  x1  x2  x3  x4  x5  x6   x7  
--------------------------------------
(0   1   3   6  10  15  21  28  ...) *u
-(0   0   1   3   6  10  15  21  ...) *2  
--------------------------------------
  / (u-2)
=====================================================================

This should be a solution for the fourth row; the vector of entries is invariant except scaling by eigenvalue u^3:
row 3 (fourth row of W):
-------------------------------------------
                                   0
                       (u^3+6*u^2)*t
             (5*u^3+12*u^2-18*u)*t^2
         (13*u^3+15*u^2-60*u+18)*t^3
        (26*u^3+12*u^2-132*u+72)*t^4
              (45*u^3-240*u+180)*t^5
       (71*u^3-24*u^2-390*u+360)*t^6
      (105*u^3-63*u^2-588*u+630)*t^7
    (148*u^3-120*u^2-840*u+1008)*t^8
   (201*u^3-198*u^2-1152*u+1512)*t^9
  (265*u^3-300*u^2-1530*u+2160)*t^10
  (341*u^3-429*u^2-1980*u+2970)*t^11
    ...
all with denominator  (u^3-3*u^2-6*u+18) = (u-3)*(u^2-6)

Begin of decoding:
x0  x1  x2  x3  x4  x5  x6   x7   x8   x9
--------------------------------------------------------------------
(0   1   4  10  20  35  56   84  120  165 ...          )*1*u^3
(0   0   1   3   6  10  15   21   28   36 ...          )*1*u^3

+(0   2   4  5   4   0 - 8 - 21 - 40 - 66 -100 -143 ...)*3*u^2
-(0   0   3 10  22  40  65   98  140  192  255  330 ...)*6*u
+(0   0   0  1   4  10  20   35   56   84  120  165 ...)*18
---------------------------------------------------------------------
   / (u-3)/(u^2-6)
=====================================================================

The meaning of this is: if we assume a set of eigenvalues [1, u, u^2, u^3, ...], then we can determine such invariant vectors for each row of W (and then check numerically or analytically); thus we can construct this part of the diagonalization (diagonal D, matrix W)
Gottfried Helms, Kassel
#6
bo198214 Wrote:Lets transform this a bit:

  sum_n (n*c)^n / n! = sum_n n^n/n! * c^n

This looks already damn like the power series of the LambertW function, which is

  W(x) = sum_n (-n)^(n-1)/n! * x^n.

index-error: we start at n=1

  W(x) = sum_{n=1..inf} (-n)^(n-1)/n! * x^n, while the trace starts at n=0, so its n=0 term 1 has to be carried separately.

Quote:Hence

  sum_{n=1..inf} n^n/n! * c^n = c * W'(-c)

Then:

  -W(-x) = sum_{n=1..inf} n^(n-1)/n! * x^n

rewrite derivative of Lambert-W

  W'(x) = W(x) / (x*(1 + W(x)))
  c * W'(-c) = c * W(-c)/((-c)*(1 + W(-c))) = -W(-c)/(1 + W(-c))

rewrite trace
let c = u/e^u ; then -c = (-u)*e^(-u) and W(-c) = -u, so

  trace(Bb) = 1 + sum_{n=1..inf} n^n/n! * c^n
            = 1 - W(-c)/(1 + W(-c))
            = 1/(1 + W(-c))
            = 1/(1 - u)
I cannot follow the formal derivative at the moment,

Quote:And we want to show that c * W'(-c) = 1/(1-u).
Ok, we have W(x)*e^W(x) = x. Take the derivative:

  W'(x)*e^W(x) + W(x)*e^W(x)*W'(x) = 1

now e^W(x) = x/W(x), so

  W'(x) = W(x) / (x*(1 + W(x)))

Hm, this is not quite so - where is the error?

but the last identity helps...

  c * W'(-c) = -W(-c)/(1 + W(-c)) = u/(1-u)

and the missing piece is just the n=0 term of the trace:

  trace(Bb) = 1 + u/(1-u) = 1/(1-u).
Then it would be interesting how this looks:
a) for higher powers of the carleman-matrix
b) for different fixpoints / different branches of W
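The identity trace(Bb) = 1/(1 + W(-c)) = 1/(1-u) used above can be checked numerically (Python; the Newton helper and its parameters are mine, not forum code):

```python
import math

# With c = u*e^(-u) we have W(-c) = -u on the principal branch (0 < u < 1),
# hence trace(Bb) = 1/(1 + W(-c)) = 1/(1 - u).
def lambert_w0(x, iters=80):
    w = 0.0 if x > -0.3 else -0.5   # rough start on the principal branch
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1 + w))  # Newton step for w*e^w = x
    return w

for u in (0.5, 0.7, 0.9):
    c = u * math.exp(-u)
    w = lambert_w0(-c)
    print(u, 1 / (1 + w), 1 / (1 - u))
```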
Gottfried Helms, Kassel
#7
Gottfried thanks for making this derivation work!

Quote:I cannot follow the formal derivative at the moment,

Quote:Ok, we have W(x)*e^W(x) = x. Take the derivative:

this is just the multiplication rule applied together with the chain rule, (e^W(x))' = e^W(x) * W'(x).
#8
I just found that the trace appears (up to sign) as the coefficient of x^(n-1) in the characteristic polynomial of the n x n truncation.
As we want to show that the roots of the sequence of characteristic polynomials converge to the powers u^k, maybe it suffices already to show that the coefficients of the sequence of characteristic polynomials converge to the coefficients of

  p_n(x) = prod_{k=0..n-1} (x - u^k).

And indeed the coefficient of x^(n-1) in p_n(x) is -(1 + u + ... + u^(n-1)), which is -1/(1-u) in the limit.
So we have already one element of a proof, i.e.

the difference of the x^(n-1) coefficient of the characteristic polynomial of the n x n truncation of Bb and the x^(n-1) coefficient of p_n(x) converges to 0 for n -> inf.

Now we need a similar result for all the other coefficients...
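A numeric illustration of this coefficient comparison (Python; sizes and the sample u are my choices), using that the x^(n-1) coefficient of the characteristic polynomial is -trace(Bb_n):

```python
import math

# Compare trace(Bb_n), i.e. minus the x^(n-1) coefficient of the
# characteristic polynomial of the n x n truncation, with
# 1 + u + ... + u^(n-1), the corresponding value for prod_{k<n} (x - u^k).
u = 0.5
c = u * math.exp(-u)

def coeff_diff(n):
    tr = sum((k * c) ** k / math.factorial(k) for k in range(n))
    geo = (1 - u ** n) / (1 - u)
    return abs(tr - geo)

for n in (5, 10, 20, 40):
    print(n, coeff_diff(n))
```

The difference shrinks as n grows, as both sums approach 1/(1-u).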




