andydude Wrote: Gottfried Wrote: I get for the sum of both by my matrix-method

Code:

Sb(x) + Rb(x) = V(x)~ * Mb[,1] + V(x)~ * Lb[,1]

= V(x)~ * (Mb + Lb)[,1]

= V(x)~ * I [,1]

= V(x)~ * [0,1,0,0,...]~

= x

Sb(x) + Rb(x) = x

This is what I have the most trouble understanding. First, what does your [,1] notation mean? I understand that "~" is transpose, and that Bb is the Bell matrix. Second, what I can't see, or at least is not obvious to me, is why Mb + Lb = I.

Is there any reason why this should be so? Can this be proven?

Wait, I just implemented it in Mathematica, and you're right! (as right as can be without a complete proof). Cool! This may just be the single most bizarre theorem in the theory of tetration and/or divergent series.

Andrew Robbins

Hi Andrew -

first: I appreciate your excitement! Yepp! :-)

second:

(The notation B[,1] refers to the second column of a matrix B; columns are indexed from 0, so [,1] is column index 1.)

Yes, I just posed the question whether (I + B)^-1 + (I + B^-1)^-1 = I in the sci.math newsgroup. But the proof for finite dimensions is simple.

You need only factor out B or B^-1 in one of the expressions.

Say C = B^-1 for brevity.

Code:

(I + B)^-1 + (I + C)^-1

= (I + B)^-1 + (CB + C)^-1        (since CB = I)

= (I + B)^-1 + (C*(B + I))^-1

= (I + B)^-1 + (B + I)^-1 * C^-1

= (I + B)^-1 + (B + I)^-1 * B     (since C^-1 = B)

= (I + B)^-1 * (I + B)

= I

As long as we deal with truncations of the infinite B, and these are well conditioned, we can see this identity in Pari or Mathematica to a good approximation.
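A minimal NumPy sketch (my own check, not the Pari/Mathematica code mentioned above) confirms the finite-dimensional identity. Any invertible B with I + B invertible will do; here a diagonally dominant random matrix stands in for a well-conditioned truncation:

```python
# Numerical check of the finite-dimensional identity
#   (I + B)^-1 + (I + B^-1)^-1 = I
# for an arbitrary well-conditioned invertible matrix B.
import numpy as np

rng = np.random.default_rng(1)
n = 6
# 3*I plus a small perturbation: eigenvalues stay near 3, so B, I + B
# and I + B^-1 are all safely invertible.
B = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

I = np.eye(n)
lhs = np.linalg.inv(I + B) + np.linalg.inv(I + np.linalg.inv(B))
assert np.allclose(lhs, I)
```

The residual is at machine-precision level, as expected from the exact algebraic cancellation in the derivation above.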

However, B^-1 in the infinite case is usually not defined, since it implies the inversion of the Vandermonde matrix, which is not possible.

On the other hand, for infinite lower *triangular* matrices a reciprocal is defined.

The good news is that B can be factored into two triangular matrices:

B = S2 * P~

where P is the Pascal matrix and S2 contains the Stirling numbers of the second kind, similarity-scaled by factorials:

S2 = dF^-1 * Stirling2 * dF

(dF is the diagonal matrix of factorials, diag(0!, 1!, 2!, ...).)
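Under one plausible reading of the conventions (base e, so that B[i][j] = j^i / i! and V(x)~ * B = V(e^x)~; the orientation is my assumption, not fixed by the post), the factorization can be verified exactly on a finite truncation. Because S2 is lower triangular and P~ upper triangular, the truncated product agrees with the infinite one entry by entry:

```python
# Exact check of the factorization B = S2 * P~ on an n x n truncation,
# assuming the base-e Bell matrix convention B[i][j] = j^i / i!.
from fractions import Fraction
from math import comb, factorial

def stirling2(i, j):
    # Stirling numbers of the second kind via the standard recurrence.
    if i == 0 and j == 0:
        return 1
    if i == 0 or j == 0:
        return 0
    return j * stirling2(i - 1, j) + stirling2(i - 1, j - 1)

n = 7
# S2 = dF^-1 * Stirling2 * dF, i.e. S2[i][j] = j! * S(i, j) / i!
S2 = [[Fraction(factorial(j) * stirling2(i, j), factorial(i))
       for j in range(n)] for i in range(n)]
# P~ = transposed (upper triangular) Pascal matrix, P~[i][j] = C(j, i)
Pt = [[Fraction(comb(j, i)) for j in range(n)] for i in range(n)]
# Assumed Bell matrix: B[i][j] = j^i / i!
B = [[Fraction(j ** i, factorial(i)) for j in range(n)] for i in range(n)]

prod = [[sum(S2[i][k] * Pt[k][j] for k in range(n))
         for j in range(n)] for i in range(n)]
assert prod == B  # the factorization holds exactly on the truncation
```

The underlying scalar identity is the classical j^i = sum_k S(i, k) * k! * C(j, k).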

Then, formally, B^-1 can be written

B^-1 = P~^-1 *S2^-1 = P~^-1 * S1

(where S1 contains the Stirling numbers of the first kind, analogously rescaled by factorials; S1 = S2^-1 holds even in the infinite case)
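The inverse pair S1 = S2^-1 can likewise be checked exactly on any finite truncation, since both matrices are lower triangular and no divergent sums arise. A sketch, again under my assumed indexing conventions (signed Stirling numbers of the first kind):

```python
# Exact check that the factorial-rescaled Stirling matrices are inverses:
# S1 * S2 = I on every finite lower-triangular truncation.
from fractions import Fraction
from math import factorial

def stirling2(i, j):
    if i == 0 and j == 0:
        return 1
    if i == 0 or j == 0:
        return 0
    return j * stirling2(i - 1, j) + stirling2(i - 1, j - 1)

def stirling1(i, j):
    # Signed Stirling numbers of the first kind.
    if i == 0 and j == 0:
        return 1
    if i == 0 or j == 0:
        return 0
    return stirling1(i - 1, j - 1) - (i - 1) * stirling1(i - 1, j)

n = 7
S2 = [[Fraction(factorial(j) * stirling2(i, j), factorial(i))
       for j in range(n)] for i in range(n)]
S1 = [[Fraction(factorial(j) * stirling1(i, j), factorial(i))
       for j in range(n)] for i in range(n)]

prod = [[sum(S1[i][k] * S2[k][j] for k in range(n))
         for j in range(n)] for i in range(n)]
I = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
assert prod == I  # S1 = S2^-1, exactly, on the truncation
```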

B^-1 cannot be computed explicitly, because the sums for all its entries (rows of P~^-1 times columns of S1) diverge, and thus it is not defined.

However, in the above formulae for finite matrices we may rewrite C in terms of its factors P~^-1 and S1, work with those decomposition factors only, and arrive at the desired result. (I've not done this yet, pure laziness...)

third:

This immediately suggests new proofs for some subjects I've already dealt with, namely all functions that are expressed by matrix operators and infinite series of such matrix operators.

For instance, I derived the ETA matrix (containing the values of the alternating zeta function at negative exponents) from the matrix expression

Code:

ETA = (P^0 - P^1 + P^2 - ...)

= (I + P)^-1
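Assuming the lower-triangular orientation P[i][j] = C(i, j) (my guess at the convention, not stated in the post), the first column of (I + P)^-1 does reproduce the values eta(0), eta(-1), ... of the alternating zeta function, up to a sign (-1)^n. A small exact check with hard-coded known eta values:

```python
# First column of (I + P)^-1 for the lower-triangular Pascal matrix,
# compared against known alternating-zeta values eta(-n).
from fractions import Fraction
from math import comb

n = 6
# M = I + P, lower triangular with diagonal entries 2.
M = [[Fraction(comb(i, j) + (1 if i == j else 0)) for j in range(n)]
     for i in range(n)]

# Solve M x = e_0 by forward substitution; x is the first column of M^-1.
x = []
for i in range(n):
    rhs = Fraction(1) if i == 0 else Fraction(0)
    s = sum((M[i][j] * x[j] for j in range(i)), Fraction(0))
    x.append((rhs - s) / M[i][i])

# Known values eta(0), eta(-1), ..., eta(-5):
eta = [Fraction(1, 2), Fraction(1, 4), Fraction(0),
       Fraction(-1, 8), Fraction(0), Fraction(1, 4)]
assert x == [(-1) ** k * eta[k] for k in range(n)]
```

With this orientation the match holds only up to the alternating sign; a different row/column convention may absorb it.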

Yes, this is a very beautiful and far-reaching fact, I think ...

Gottfried

Gottfried Helms, Kassel