I don't really know whether my earlier approach to finding a diagonalization for the b^x problem is really a proof, but it may be a base for one.

Assume the Carleman matrix of infinite size for a base b, 1<b<e^(1/e), using the letters c=log(b), b=t^(1/t), u=log(t),

written as

Bb = (its top-left edge)

  [ 1    1         1           1          ... ]
  [ 0    c         2c          3c         ... ]
  [ 0    c^2/2!    (2c)^2/2!   (3c)^2/2!  ... ]
  [ 0    c^3/3!    (2c)^3/3!   (3c)^3/3!  ... ]
  [ ...  ...       ...         ...        ... ]

so the entry in row r, column j is (j*c)^r / r!, to the effect that

V(x) * Bb = V(b^x)

where, as usual, V(x) denotes a Vandermonde vector [1,x,x^2,x^3,...] of a variable parameter x. But, differing from my usual notation, let's assume V(x) to be a row vector for this sequel, to reduce notation overhead.
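As a small numerical illustration of this setup (only a sketch: I use a finite N x N truncation of the infinite matrix, and sqrt(2) as an arbitrary example base in the allowed range 1 < b < e^(1/e)):

```python
import math

# N x N truncation of the Carleman matrix for x -> b^x.
# Column j holds the power-series coefficients of (b^x)^j = exp(j*c*x),
# so the entry in row r, column j is (j*c)^r / r! .
N = 24
b = 2 ** 0.5                  # example base, 1 < b < e^(1/e)
c = math.log(b)

Bb = [[(j * c) ** r / math.factorial(r) for j in range(N)] for r in range(N)]

def V(x):
    """Truncated Vandermonde row vector [1, x, x^2, ...]."""
    return [x ** k for k in range(N)]

# check V(x) * Bb = V(b^x) in the leading entries, where the
# truncation error of the exponential series is negligible
x = 0.5
lhs = [sum(V(x)[r] * Bb[r][j] for r in range(N)) for j in range(N)]
rhs = V(b ** x)
```

The leading entries of lhs and rhs agree to roughly machine precision; only the far-right columns feel the truncation.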

Now, if we look for a diagonalization of Bb, this would have the form

Bb = W^-1 * D * W

(W and W^-1 exchanged from my usual notation elsewhere.)

The diagonalization theorems for finite matrices say then

W * Bb = D * W

so each of the rows of W becomes a d_r-multiple of itself by such a transformation, where d_r denotes the r'th eigenvalue contained in the diagonal of D, and r is the row number, beginning at zero.

Then, obviously, if we use a fixpoint t of Bb (so that b^t = t), we have, for instance,

V(t) * Bb = V(b^t) = V(t)

and V(t) satisfies the condition to be an eigenvector for the eigenvalue 1. So assume W_0 = V(t) and d_0 = 1.
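A quick numeric spot check of this eigenvector claim (a sketch; b = sqrt(2) is an example base with fixpoint t = 2, since sqrt(2)^2 = 2):

```python
import math

N = 24
b = 2 ** 0.5                  # example base
t = 2.0                       # fixpoint: b^t = t
c = math.log(b)

# truncated Carleman matrix, entry (r, j) = (j*c)^r / r!
Bb = [[(j * c) ** r / math.factorial(r) for j in range(N)] for r in range(N)]

Vt = [t ** k for k in range(N)]                   # V(t) as a row vector
row = [sum(Vt[r] * Bb[r][j] for r in range(N)) for j in range(N)]

# V(t) * Bb = V(b^t) = V(t): V(t) is fixed by Bb, i.e. an eigenvector
# for eigenvalue 1 (leading entries only; truncation spoils the far
# right columns of the finite matrix)
```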

Next one can show (I'll add the proof later) that another vector E_1(t) also satisfies the eigenvector condition,

E_1(t) * Bb = u * E_1(t)

so we have W_1 = E_1 and d_1 = u.

It is difficult to get an idea about E_2 by sheer inspection of example data; but it seems that an extrapolation makes sense: we may rescale W by dV(1/t), with the effect that the descriptions of E_0 and E_1 (and hopefully of all E_k) reduce to their numeric coefficients, independent of t.

So we restate the diagonalization in the following form:

Bb = dV(1/t) * X^-1 * D * X * dV(t)     (where W = X * dV(t))

and investigate X instead of W.

Second, the base equation for a parameter x changes.

Let's call dV(t) * Bb * dV(1/t) = Bb_1 .

The parameter x now has to be divided by t, and the result has to be multiplied by t:

V(x/t) * Bb_1 = V(b^x / t)
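A truncated check of this rescaled mapping (sketch, with the example values b = sqrt(2) and its fixpoint t = 2): applying Bb_1 to V(x/t) should return V(b^x / t).

```python
import math

N = 24
b = 2 ** 0.5                  # example base
t = 2.0                       # its fixpoint (b^t = t)
c = math.log(b)

Bb = [[(j * c) ** r / math.factorial(r) for j in range(N)] for r in range(N)]

# Bb_1 = dV(t) * Bb * dV(1/t): row r scaled by t^r, column j by 1/t^j
Bb1 = [[t ** r * Bb[r][j] / t ** j for j in range(N)] for r in range(N)]

x = 0.5
vin = [(x / t) ** k for k in range(N)]            # V(x/t)
out = [sum(vin[r] * Bb1[r][j] for r in range(N)) for j in range(N)]
target = [(b ** x / t) ** k for k in range(N)]    # V(b^x / t)
```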

Note that in the above definition of Bb we have the constant factor c^r in each row, which is now multiplied by t^r due to the premultiplication by dV(t). But c = log(b) = u/t, and we have

t^r * c^r = (t*c)^r = u^r

so the row-multiplicator of dV(t) * Bb * dV(1/t) is now dV(u).
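This is easy to verify entrywise on a truncation (sketch, example values b = sqrt(2), t = 2, u = log 2): the transformed entry is t^r * (j*c)^r / r! / t^j = (j*u)^r / r! / t^j, so the row factor c^r of Bb has indeed become u^r.

```python
import math

N = 16
b = 2 ** 0.5                  # example base
t = 2.0                       # fixpoint of b
c = math.log(b)               # c = log(b) = u/t
u = math.log(t)               # u = log(t) = t*c

Bb  = [[(j * c) ** r / math.factorial(r) for j in range(N)] for r in range(N)]
Bb1 = [[t ** r * Bb[r][j] / t ** j for j in range(N)] for r in range(N)]

# every entry of Bb_1 matches (j*u)^r / r! / t^j, i.e. the row
# multiplicator of Bb_1 is dV(u) where that of Bb was dV(c)
err = max(abs(Bb1[r][j] - (j * u) ** r / math.factorial(r) / t ** j)
          for r in range(N) for j in range(N))
```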

Well, let's go back to the previous considerations.

First we have

X_0 = V(t) * dV(1/t) = V(1) = [1, 1, 1, 1, ...]     and     X_1 = E_1(t) * dV(1/t)

and we may assume that X_2, X_3, ... follow a simple scheme.
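The first of these is easy to check on a truncation (a sketch with the example values b = sqrt(2), t = 2): X_0 = V(1) = [1,1,1,...] is, up to truncation error, a left eigenvector of Bb_1 = dV(t)*Bb*dV(1/t) for the eigenvalue 1.

```python
import math

N = 24
b = 2 ** 0.5                  # example base
t = 2.0                       # fixpoint of b
c = math.log(b)

Bb  = [[(j * c) ** r / math.factorial(r) for j in range(N)] for r in range(N)]
Bb1 = [[t ** r * Bb[r][j] / t ** j for j in range(N)] for r in range(N)]

# X_0 = V(1) = [1, 1, 1, ...]; its image under Bb_1 is the vector of
# column sums, which should again be [1, 1, 1, ...] in the leading entries
row = [sum(Bb1[r][j] for r in range(N)) for j in range(N)]
```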

One assumption is that this could come out as a composition involving the Pascal matrix P and some matrix S.

The interesting thing - and maybe the base for a final proof - is now that, with this assumption, S can be found by an iterative process, if we assume that d_r = u^r. The iterative process requires an eigensystem solution for each row in X, but it requires only the results of the previous steps and leads to a triangular solution S (which comes out to be the Ut-matrix, btw.)

Because the latter solution is a) solvable and b) not arbitrary under the assumption of d_r = u^r, I think that this may be a path for the proof. However - even if the solution is unique under this assumption - one may find other solutions with another assumption.

Gottfried Helms, Kassel