spectrum of Carleman matrix
#1
Hey Gottfried,

did you ever think about the spectrum of the Carleman matrix of \( \exp_b \)?
For finite matrices the spectrum is just the set of eigenvalues. However, for an infinite matrix, or more generally for a linear operator \( A \) on a Banach space, the spectrum is defined as the set of all values \( \lambda \) such that \( A-\lambda I \) is not invertible.

We saw that the eigenvalues of the truncated Carleman matrices of \( \exp \) somehow diverge. So it would be very interesting to know the spectrum of the infinite matrix, as this also has consequences for taking non-integer powers of those matrices.
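That divergence is easy to reproduce; here is a minimal sketch (my own illustration, assuming the row convention \( A[j,k] = j^k/k! \), i.e. row j holds the Taylor coefficients of \( \exp(x)^j = e^{jx} \)):

```python
# Minimal sketch: spectral radius of truncated Carleman matrices of exp.
# Assumed convention: A[j,k] = j^k / k! (row j = coefficients of e^{jx}).
import math
import numpy as np

def carleman_exp(n):
    """n x n truncation of the Carleman matrix of exp (base e)."""
    return np.array([[j ** k / math.factorial(k) for k in range(n)]
                     for j in range(n)])

# The spectral radius of the truncations grows without bound:
for n in (6, 10, 14):
    radius = max(abs(np.linalg.eigvals(carleman_exp(n))))
    print(f"n = {n:2d}: spectral radius = {radius:.4g}")
```

Since the truncations are nonnegative matrices and each is a principal submatrix of the next, the Perron root can only grow with the truncation size, consistent with the observed divergence.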

Or is the spectrum just all of \( \mathbb{C} \), because \( A \) is not invertible itself?
#2
bo198214 Wrote:Hey Gottfried,

did you ever think about the spectrum of the Carleman matrix of \( \exp_b \)?

... oh, so many times ...

Quote:For finite matrices the spectrum is just the set of eigenvalues. However, for an infinite matrix, or more generally for a linear operator \( A \) on a Banach space, the spectrum is defined as the set of all values \( \lambda \) such that \( A-\lambda I \) is not invertible.

We saw that the eigenvalues of the truncated Carleman matrices of \( \exp \) somehow diverge. So it would be very interesting to know the spectrum of the infinite matrix, as this also has consequences for taking non-integer powers of those matrices.

Or is the spectrum just all of \( \mathbb{C} \), because \( A \) is not invertible itself?

Well, I've had at most a hundred expository pages about infinite matrices in my hands, so I cannot cite an authoritative statement about the cardinality of the spectrum, nor about the exact character of the non-invertibility.

But here are the hypotheses I decided to work with:
1)
I have never discussed a single eigenvalue or some arbitrarily formed set. I always used the restriction that a) an infinite set of eigenvalues exists, which also has a unique structure: it is given by the consecutive powers of one base parameter,
and that b) there are *infinitely many* such sets of eigenvalues, according to the infinitude of fixpoints: each fixpoint defines one such set.

While one single eigenvalue only states that \( A-\lambda I \) is non-invertible, the definition of such a set is more informative, since with it we can write

\( A * W = W * dV(\lambda) \)

where dV(x) denotes, as usual, a diagonal matrix of the consecutive powers of x, and each of these powers is an eigenvalue, so that
\( A-\lambda^0 I \), \( A-\lambda^1 I \), \( A-\lambda^2 I \), \( A-\lambda^3 I \), ... are all non-invertible.
The complete set of eigenvalues I "know of" is thus the set of all fixpoints and all of their (nonnegative) integer powers, and is thus indexed by N x N.

And W is an infinite matrix of invariant column vectors (eigenvectors), one for each choice of lambda and its power.

There may be other eigenvalues, but I haven't thought about this yet.
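The "consecutive powers of one base parameter" structure can be seen exactly in finite truncations once a fixpoint sits at 0, because the Carleman matrix is then triangular. A sketch with a toy polynomial of my own choosing (for \( \exp_b \) one would first have to shift a fixpoint to 0, the fixpoint-shift mentioned below); the base parameter is \( \lambda = f'(0) \):

```python
# Sketch (toy example, not from the post): for f with fixpoint 0,
# f(x) = lam*x + ..., the truncated Carleman matrix is triangular and
# its eigenvalues are exactly lam^0, lam^1, lam^2, ...
from fractions import Fraction

def carleman(coeffs, n):
    """n x n Carleman matrix: row j = first n Taylor coeffs of f(x)^j."""
    def mul(p, q):  # polynomial product, truncated at degree n-1
        r = [Fraction(0)] * n
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                if i + j < n:
                    r[i + j] += a * b
        return r
    rows, power = [], [Fraction(1)] + [Fraction(0)] * (n - 1)  # f^0 = 1
    for _ in range(n):
        rows.append(power[:])
        power = mul(power, coeffs)
    return rows

lam = Fraction(1, 2)
f = [Fraction(0), lam, Fraction(1, 3)]   # f(x) = x/2 + x^2/3, fixpoint 0
M = carleman(f, 6)
diag = [M[j][j] for j in range(6)]
print(diag)  # [1, 1/2, 1/4, 1/8, 1/16, 1/32] = powers of lam
```

Because f^j has lowest-order term \( \lambda^j x^j \), the matrix is triangular with \( \lambda^j \) on the diagonal, so in this special case the truncated eigenvalues are exactly the claimed powers rather than diverging.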
--------------------------------------------
2)
The idea that the spectrum is all of \( \mathbb{C} \) because of the non-invertibility seems to be a non sequitur.

I see one single reason for the non-invertibility of A. First: A is decomposable into two well-known triangular matrices, the binomial matrix P and the Stirling-kind-2 matrix S2 (factorially scaled).
Both factors P and S2 are invertible, so

\( A = S2 * P\sim \)

and formally the inverse is possible:

\( A^{-1} = P^{-1}\sim * S2^{-1} = P^{-1}\sim * S1 \)

where S1 is the matrix of Stirling numbers of the 1st kind, also factorially scaled.
The reason why A is not invertible is that in \( P^{-1}\sim * S1 \) the dot product of the first row by the second column is infinite; it gives exactly log(0).

If by some means this multiplication can be avoided in a more complex matrix operation, then the whole formula behaves as if A^{-1} existed (we discussed this in the context of the fixpoint shift).
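The decomposition itself holds exactly for every finite truncation; a sketch of my own, in the row convention \( A[j,k]=j^k/k! \) (the transpose of the column convention used above), where \( e^x = (x+1)\circ(e^x-1) \) gives A = P · B with B the factorially scaled Stirling-kind-2 matrix:

```python
# Check (transposed convention): A[j,k] = j^k/k!, P[j,k] = C(j,k),
# B[m,k] = m! * S2(k,m) / k!  (row m = Taylor coeffs of (e^x - 1)^m).
# Then exp = (x+1) o (e^x - 1) implies A = P * B exactly per truncation.
from fractions import Fraction
from math import comb, factorial

def stirling2(n, k):
    """Stirling numbers of the 2nd kind via the standard recurrence."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

n = 7
A = [[Fraction(j ** k, factorial(k)) for k in range(n)] for j in range(n)]
P = [[Fraction(comb(j, k)) for k in range(n)] for j in range(n)]
B = [[Fraction(factorial(m) * stirling2(k, m), factorial(k))
      for k in range(n)] for m in range(n)]
PB = [[sum(P[j][m] * B[m][k] for m in range(n)) for k in range(n)]
      for j in range(n)]
assert PB == A  # the truncated decomposition holds exactly
```

Note that every truncation of P and B has unit diagonal and is invertible, so the obstruction to \( A^{-1} \) described above really is a phenomenon of the infinite matrices (a divergent dot product), not of the finite truncations.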


Gottfried
Gottfried Helms, Kassel
#3
Gottfried Wrote:I see one single reason for the non-invertibility of A. First: A is decomposable into two well-known triangular matrices, the binomial matrix P and the Stirling-kind-2 matrix S2 (factorially scaled).
Both factors P and S2 are invertible, so

\( A = S2 * P\sim \)

and formally the inverse is possible:

\( A^{-1} = P^{-1}\sim * S2^{-1} = P^{-1}\sim * S1 \)

where S1 is the matrix of Stirling numbers of the 1st kind, also factorially scaled.
The reason why A is not invertible is that in \( P^{-1}\sim * S1 \) the dot product of the first row by the second column is infinite; it gives exactly zeta(1).

I think I read somewhere that for infinite matrices not even \( A(BC)=(AB)C \) is valid, perhaps exactly because one of the limits does not exist.
#4
bo198214 Wrote:I think I read somewhere that for infinite matrices not even \( A(BC)=(AB)C \) is valid, perhaps exactly because one of the limits does not exist.

Sure; I think that is the basic reason, and I would like to see such a statement made explicit somewhere, with a specific description of the range of applicability/non-applicability. For instance, in all my experience with these formulae we can even use (AB)C = A(BC) in some instances: if the implicit dot products are Cesàro- or Euler-summable, that is, if the infinite series are for instance alternating Dirichlet or geometric series. Some additional restrictions occurred as well, but I don't have them systematically.
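For illustration, the simplest case of such a summable divergent series (a textbook example, not taken from the discussion above): the Cesàro (C,1) means of Grandi's series 1 - 1 + 1 - 1 + ... converge to 1/2, so a dot product producing this series can still be assigned a finite value:

```python
# Cesàro (C,1) summation sketch: average the partial sums of the series.
def cesaro_mean(terms):
    """(C,1) mean: the average of the partial sums of `terms`."""
    partial, total = 0.0, 0.0
    for t in terms:
        partial += t
        total += partial
    return total / len(terms)

grandi = [(-1) ** n for n in range(10000)]  # 1 - 1 + 1 - 1 + ...
print(cesaro_mean(grandi))  # 0.5
```

The partial sums oscillate between 1 and 0 and never converge, but their averages settle at 1/2, which is exactly the kind of regularized value the matrix dot products need.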

Moreover, even if (AB)C =/= A(BC) in many cases, this does not mean that we cannot deal meaningfully with the matrix concept in infinite dimension at all. If we have triangular matrices, all either row-finite or column-finite, then we can get quite far, and no implicit infinite dot product occurs.

One example which I like very much (it was the first I saw and translated into the matrix formulation) is that of Helmut Hasse, who used the Worpitzky formula, proved the connection between the Bernoulli numbers and the zeta values, and introduced a special summation procedure for the zeta function, which in principle exploited that composition S2*P~ which I mentioned in my previous post.

A whole family of procedures in the theory of divergent summation deals automatically with infinite matrices: the Hausdorff means. They unite Cesàro and Euler summation by the definition P * D * P^-1 (where D is diagonal); the characteristic of D then determines the type of divergent summation.
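Hasse's globally convergent series is easy to check numerically; a sketch evaluating it at s = 2 only, with exact rationals because the inner alternating sum cancels catastrophically in floating point:

```python
# Hasse's globally convergent series for the zeta function:
#   zeta(s) = 1/(s-1) * sum_{n>=0} 1/(n+1)
#                     * sum_{k=0}^{n} (-1)^k C(n,k) (k+1)^{1-s}
# Checked here only at s = 2, where the factor 1/(s-1) is 1 and all
# terms are rational; exact arithmetic avoids float cancellation.
from fractions import Fraction
from math import comb, pi

def hasse_zeta2(N):
    """Partial sum (n < N) of Hasse's double series at s = 2."""
    total = Fraction(0)
    for n in range(N):
        inner = sum(Fraction((-1) ** k * comb(n, k), k + 1)
                    for k in range(n + 1))
        total += Fraction(1, n + 1) * inner
    return total

approx = float(hasse_zeta2(300))
print(approx, pi ** 2 / 6)  # differ only by the truncation tail, ~1/300
```

At s = 2 the inner sum collapses to 1/(n+1), so the partial sums approach \( \zeta(2)=\pi^2/6 \) with an error of roughly 1/N; the same double sum converges for every s with Re(s) > 0, s ≠ 1, which is what makes it a genuine summation procedure.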

Possibly the concise compilation of the conditions and requirements for the algebra of infinite matrices is actually a job that some competent author should take on...

Gottfried
Gottfried Helms, Kassel

