Tetra-series
#1
I have another strange, but again very basic, result for the alternating series of power towers of increasing height (I call it a Tetra-series; see also my first conjecture in the thread "alternating tetra-series").

Assume a base b, and consider the alternating series

Code:
.    
  Sb(x) = x - b^x + b^b^x - b^b^b^x +... - ...

and for a single term, with h for the integer height (which may also be negative)
Code:
.
  Tb(x,h) = b^b^b^...^x     \\ b occurs h-times

which, if h is negative, actually means (where lb(x) = log(x)/log(b))
Code:
.
  Tb(x,-h) = lb(lb(...(lb(x))...))   \\ lb occurs h-times
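
For readers who want to play with these definitions numerically, here is a minimal Pari/GP sketch (the helper names lb, Tb, Sbpartial are mine, nothing canonical). In the convergent range e^(-e) < b < e^(1/e) the terms Tb(x,h) tend to the lower fixed point of t -> b^t, so the plain partial sums of Sb(x) oscillate; averaging two consecutive partial sums is used below as a crude stand-in for Euler summation.

Code:
.
  \\ minimal GP sketch; helper names are mine, nothing canonical
  b = sqrt(2);                      \\ a base inside the convergent range e^(-e) < b < e^(1/e)
  lb(x) = log(x)/log(b);            \\ logarithm to base b
  Tb(x,h) = {
    my(t = x);
    if(h >= 0,
       for(k=1,  h, t = b^t),       \\ positive height: stack b's on top of x
       for(k=1, -h, t = lb(t)));    \\ negative height: iterated base-b logarithm
    t
  }
  Sbpartial(x,N) = sum(h=0, N, (-1)^h * Tb(x,h));
  x0 = 0.5;
  \\ the terms approach the fixed point 2 of sqrt(2)^t, so average two consecutive partial sums:
  print( (Sbpartial(x0,200) + Sbpartial(x0,201)) / 2 );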

-------------------------------------------------------

My first result was that these series have "small" values and can be summed even if b > e^(1/e) (which is not possible with conventional summation methods). For the usual convergent case e^(-e) < b < e^(1/e) the results can be checked by Euler summation, and they agree perfectly with the results obtained by my matrix method (see image below).


Code:
matrix-notation
Sb(x) = (V(x)~ * (I - Bb + Bb^2 - Bb^3 + ... - ...)  )[,1]
       = (V(x)~ * (I + Bb)^-1 )   [,1]
       =  V(x)~ * Mb[,1]                \\ (at least) for all b>1
       = sum r=0..inf  x^r * mb[r,1]  

serial notation
       = sum h=0..inf  (-1)^h* Tb(x,h)  \\ only possible for e^(-e) < b < e^(1/e)
                                        \\ Euler-summation required
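
Concretely, here is a rough truncation sketch of this in Pari/GP. The entries used for Bb, ((c-1)*log(b))^(r-1)/(r-1)!, are just the coefficients of x^(r-1) in b^((c-1)*x), i.e. the Bell matrix of x -> b^x in this row-vector convention; how well a truncation of size n reproduces the Euler-summed value depends on n, b and x.

Code:
.
  \\ truncation sketch (a quick reading of the notation above); GP indices are 1-based,
  \\ so Mb[,2] below is the "[,1]"-column of the 0-based notation in the text
  n = 32;  b = sqrt(2);  L = log(b);
  Bb = matrix(n,n, r,c, ((c-1)*L)^(r-1) / (r-1)! );   \\ coeff of x^(r-1) in b^((c-1)*x)
  Mb = (matid(n) + Bb)^(-1);
  Vx(x) = vector(n, r, x^(r-1));                      \\ V(x)~ as a row vector
  x0 = 0.5;
  matval = Vx(x0) * Mb[,2];
  \\ compare with the (averaged) alternating series of power towers:
  Ser(x,N) = { my(t = x, s = 0.); for(h=0, N, s += (-1)^h * t; t = b^t); s }
  serval = (Ser(x0,200) + Ser(x0,201)) / 2;
  print([matval, serval]);                            \\ should roughly agree for this b and x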


-------------------------------------------------------


Now I extend the series Sb(x) to the left, using lb(x) = log(x)/log(b) for the logarithm of x to base b, and define

Code:
.
   Rb(x) = x - lb(x) + lb(lb(x)) - lb(lb(lb(x))) + ... - ...

This may be computed, analogously to the formula for Mb above, from the inverse of Bb:
Code:
.
   Lb = (I + Bb^-1)^-1
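
Just to see the mechanics, the identity Mb + Lb = I can be checked on a finite truncation, where it holds exactly whenever the truncated Bb is invertible; by itself this says nothing about the infinite matrices, which is where the delicate part lies. A small GP sketch:

Code:
.
  \\ finite-truncation check of Mb + Lb = I; the printed deviation is limited only
  \\ by the working precision, since the identity is exact for invertible truncations
  n = 16;  b = sqrt(2);  L = log(b);
  Bb = matrix(n,n, r,c, ((c-1)*L)^(r-1) / (r-1)! );
  Mb = (matid(n) + Bb)^(-1);
  Lb = (matid(n) + Bb^(-1))^(-1);
  dev = vecmax( apply(abs, Mb + Lb - matid(n)) );     \\ largest entry of the deviation
  print(dev);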

I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x
or, what looks even stranger (but is even more basic):

Code:
.
  0 = ... lb(lb(x)) - lb(x) + x - b^x + b^b^x - ... + ...

x cannot assume the value 0, 1, or any integer-height power tower b, b^b, b^b^b, ..., since then at a certain position a term lb(0) occurs, which introduces a singularity.


Using the Tb()-notation for brevity, the result is

  sum h=-inf..+inf  (-1)^h * Tb(x,h)  =  0

and this is a very interesting one for anyone dedicated to tetration...

Gottfried
-------------------------------------------------------

An older plot (attached); I used AS(s) with x=1, s=b for Sb(x) there (a bigger version is in the attachment "AS").
Gottfried Helms, Kassel
#2
Have you tried computing or plotting these yet? I would do this, but I don't have any code for AS(x) yet, and I'm lazy.

Andrew Robbins
#3
Gottfried Wrote:I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x

This is what I have the most trouble understanding. First, what does your [,1] notation mean? I understand that "~" is transpose and that Bb is the Bell matrix. Second, what I can't see, or at least is not obvious to me, is why:

  (I + Bb)^-1 + (I + Bb^-1)^-1 = I

Is there any reason why this should be so? Can this be proven?

Wait, I just implemented it in Mathematica, and you're right! (as right as can be without a complete proof). Cool! This may just be the single most bizarre theorem in the theory of tetration and/or divergent series.

Andrew Robbins
#4
andydude Wrote:
Gottfried Wrote:I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x

This is what I have the most trouble understanding. First, what does your [,1] notation mean? I understand that "~" is transpose and that Bb is the Bell matrix. Second, what I can't see, or at least is not obvious to me, is why:

  (I + Bb)^-1 + (I + Bb^-1)^-1 = I

Is there any reason why this should be so? Can this be proven?

Wait, I just implemented it in Mathematica, and you're right! (as right as can be without a complete proof). Cool! This may just be the single most bizarre theorem in the theory of tetration and/or divergent series.

Andrew Robbins

Hi Andrew -
first: I appreciate your excitement! Yepp! :-)

second:
(The notation B[,1] refers to the second column of a matrix B)

Yes, I just posed the question whether (I+B)^-1 + (I+B^-1)^-1 = I in the sci.math newsgroup. But the proof for finite dimension is simple.

You need only factor out B or B^-1 in one of the expressions.
Say C = B^-1 for brevity
Code:
.
   (I + B)^-1 + (I + C)^-1  
= (I + B)^-1 + (CB + C)^-1
= (I + B)^-1 + (C(B + I))^-1
= (I + B)^-1 + (B + I)^-1*C^-1
= (I + B)^-1 + (B + I)^-1*B
= (I + B)^-1 *(I + B)
= I

As long as we deal with truncations of the infinite B, and these are well conditioned, we can see this identity in Pari or Mathematica to good approximation.
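
For a concrete, guaranteed-invertible example the check is even exact in GP's rational arithmetic; the Hilbert matrix below is just a convenient test case (any matrix for which B, I+B and I+B^-1 are invertible would do):

Code:
.
  \\ exact check of the finite-size identity with an arbitrary invertible test matrix
  n = 5;
  B = mathilbert(n);                 \\ positive definite, so B, I+B and I+B^-1 are all invertible
  C = B^(-1);
  print( (matid(n) + B)^(-1) + (matid(n) + C)^(-1) == matid(n) );   \\ prints 1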

However, B^-1 in the infinite case is usually not defined, since it implies the inversion of the Vandermonde matrix, which is not possible.

On the other hand, for infinite lower *triangular* matrices a reciprocal is defined.


The good news is that B can be factored into two triangular matrices, like

B = S2 * P~

where P is the Pascal matrix and S2 contains the Stirling numbers of the 2nd kind, similarity-scaled by factorials:

S2 = dF^-1 * Stirling2 * dF
(dF is the diagonal of factorials diag(0!,1!,2!,...) )

Then, formally, B^-1 can be written

B^-1 = P~^-1 *S2^-1 = P~^-1 * S1
(where S1 contains the Stirling numbers of the 1st kind, analogously rescaled by factorials, and S1 = S2^-1 even in the infinite case)
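
Both statements can be checked exactly on a finite truncation in GP; the check is exact because the Stirling factors are triangular, so the truncated products coincide with the truncations of the infinite products. I write it here for base e (for a general base b a diagonal factor of powers of log(b) enters on the left, as far as I can see):

Code:
.
  \\ exact check of B = S2 * P~ and S1 = S2^-1 on an n x n truncation (base e)
  n  = 6;
  B  = matrix(n,n, r,c, (c-1)^(r-1) / (r-1)! );                     \\ coeff of x^(r-1) in exp((c-1)*x)
  S2 = matrix(n,n, r,c, stirling(r-1, c-1, 2) * (c-1)! / (r-1)! );  \\ dF^-1 * Stirling2 * dF
  S1 = matrix(n,n, r,c, stirling(r-1, c-1, 1) * (c-1)! / (r-1)! );  \\ dF^-1 * Stirling1 * dF (signed)
  P  = matrix(n,n, r,c, binomial(r-1, c-1) );                       \\ lower triangular Pascal matrix
  print( B == S2 * P~ );                                            \\ prints 1
  print( S2 * S1 == matid(n) );                                     \\ prints 1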

B^-1 cannot be computed explicitly due to divergent sums for all entries (rows of P~^-1 by columns of S1), and thus is not defined.

However, in the above formulae for finite matrices we may rewrite C in terms of its factors P and S1, deal with those decomposition factors only, and arrive at the desired result (I've not done this yet, pure laziness...)

third:
This immediately suggests new proofs for some subjects I've already dealt with, namely for all functions which are expressed by matrix operators and infinite series of such matrix operators.
For instance, I derived the ETA-matrix (containing the values of the alternating zeta function at negative exponents) from the matrix expression
Code:
.
ETA = (P^0 - P^1 + P^2 - P^3 + ... - ...)
    = (I + P)^-1
If I add the analogous expression for the inverse of P, I arrive at a new proof of the fact that eta(-2k) must equal 0 for every k>0.
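
For the triangular Pascal matrix at least the matrix identity behind this can be checked exactly at any truncation (this verifies only the matrix identity, not the identification of the entries with the eta-values):

Code:
.
  \\ exact check: the identity also holds for truncations of the triangular Pascal matrix
  n   = 8;
  P   = matrix(n,n, i,j, binomial(i-1, j-1));
  ETA = (matid(n) + P)^(-1);
  print( ETA + (matid(n) + P^(-1))^(-1) == matid(n) );   \\ prints 1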

Yes- this is a very beautiful and far-reaching fact, I think ...

Gottfried
Gottfried Helms, Kassel
#5
Hej Gottfried,

I would sincerely like to understand more about these matrices; I have a feeling it's important. But I could not find in your texts what I is. I suppose it is the identity matrix, but what does it look like?

Best regards,

Ivars
#6
Ivars Wrote:Hej Gottfried,

I would sincerely like to understand more about these matrices; I have a feeling it's important. But I could not find in your texts what I is. I suppose it is the identity matrix, but what does it look like?

Best regards,

Ivars

Hi Ivars -
Just the matrix containing 1's on its diagonal (and 0 everywhere else). So multiplying by it doesn't change a matrix, just as multiplication by 1 does not change the multiplicand.

Gottfried
Gottfried Helms, Kassel
#7
Gottfried Wrote:Yes- this is a very beautiful and far-reaching fact, I think ...
I've just received an answer in the newsgroup sci.math from Prof. G. A. Edgar, who finds a numerical discrepancy between my matrix-based conjecture and the termwise evaluation of the series.

I cannot resolve the problem completely. It doesn't affect the Mb-matrix related conjectures (which are also of earlier date), but it does affect the representation of the alternating series of powers of the reciprocal of Bb by the analogous expression. Currently I have no idea how to cure this or how to correctly adapt my conjecture, so (sigh) I have to retract it for the moment.

[update] I should mention that this concerns only the Bb-matrix, which is not simply invertible. The application of the idea of the formula to other matrix operators may still be valid; especially for triangular matrices like P the observation still holds, and I assume it is also valid for the U-iteration x -> exp(x)-1, since its matrix operator is the triangular Stirling matrix. I'll check that today. [/update]

[update2] The problem also occurs with the U-iteration and its series of negative heights. It looks like the reciprocal matrix needs some more consideration. [/update2]

Gottfried

P.S. I'll add the conversation here later as an attachment.

[update3] A graph (attached) which shows a perfect match between serial and matrix-method summation for the Tb-series, and periodic differences for the Rb-series. [/update3]
Gottfried Helms, Kassel
#8
Ok, so the identity

  (I + B)^-1 + (I + B^-1)^-1 = I

holds true for all matrices, not just the Bell matrix of exponentials.

Good to know.

Andrew Robbins
#9
andydude Wrote:Ok, so the identity

  (I + B)^-1 + (I + B^-1)^-1 = I

holds true for all matrices, not just the Bell matrix of exponentials.

Good to know.

Andrew Robbins

For regularly invertible matrices (for instance of finite size); and I think that even infinite matrices can be included if they are triangular (row- or column-finite) or, if not triangular, if at least some other condition holds (on their eigenvalues or the like). I'll have to perform some more tests...
Gottfried Helms, Kassel
#10
Gottfried Wrote:You need only factor out B or B^-1 in one of the expressions.
Say C = B^-1 for brevity
Code:
.
   (I + B)^-1 + (I + C)^-1  
= (I + B)^-1 + (CB + C)^-1
= (I + B)^-1 + (C(B + I))^-1
= (I + B)^-1 + (B + I)^-1*C^-1
= (I + B)^-1 + (B + I)^-1*B
= (I + B)^-1 *(I + B)
= I

This completes the proof in my view. Good job.

Andrew Robbins

