12/28/2009, 05:21 PM

There is one open question for me with this computation; maybe it is dealt with elsewhere.


(In my matrix notation) the coefficients for the slog function to some base b are taken from the SLOGb vector according to the idea

(I - Bb)*SLOGb = [0,1,0,0,...]

where I is the identity operator and Bb is the operator which performs x -> b^x.

Because the matrix (I - Bb) is not invertible, Andrew's proposal is to remove the empty first column and the last(!) row to make it invertible; let's call the result "corrected(I - Bb)".

The coefficients for the slog power series are then taken from SLOGb, computed from a sufficiently accurate finite-dimensional solution of

SLOGb = (corrected(I - Bb))^-1*[0,1,0,0,...]

and because the coefficients stabilize with increasing dimension, the finite SLOGb is taken as a meaningful and valid approximation to the true (infinite-dimensional) SLOGb.

Btw, this approach nicely resembles the problem of the iteration series for power towers: (I - Bb)^-1 would be a representation of I + Bb + Bb^2 + Bb^3 + ..., which could then be used for the iteration series of b^^h with increasing heights h. Obviously such a discussion would need some more consideration, because we deal with a nastily divergent series here, so let's leave this detail aside.

The detail I want to point out is the following.

Consider the coefficients in the SLOGb vector. If we use a "nice" base, say b = sqrt(2), then for dimension n the coefficients at k = 0..n-1 decrease as k approaches n-2; but finally, at k = n-1, one relatively big coefficient follows, which then supplies the value needed for a good approximation by the order-n slog polynomial. A suspicious effect!

This can also be seen in the partial sums: for slog_b(b) - slog_b(1) we should get partial sums which approach 1. Here I document the deviation of the partial sums from the final value 1 at the last three terms of the n'th-order slog_b polynomial.

(For a crosscheck see the Pari/GP excerpt at the end of the msg.)

Examples: in each case the values ("partial sum" - 1) at the terms k = n-3, k = n-2, k = n-1 are given, for several dimensions n.

Code:
dim n=4
  ...
  -0.762957567623
  -0.558724904310
  -0.150078240781

dim n=8
  ...
  -0.153309829172
  -0.120439792559
  -0.00882912480664

dim n=16
  ...
  -0.00696424577339
  -0.00629653092984
  -0.0000322687018600

dim n=32
  ...
  -0.0000228720888610
  -0.0000223192966457
  -0.000000000473007074189

dim n=64
  ...
  -0.000000000331231525320
  -0.000000000330433387110
  -0.000000000000000000108

While we generally see nice convergence with increasing dimension, there is a "step" effect at the last partial sum (which also reflects the unusually big last term).
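The step can be reproduced with a short self-contained check. Same caveats as before: this is a float re-derivation of the truncated system described above, not the original Pari/GP run, so the digits will only roughly match the tables.

```python
import math
import numpy as np

b, n = math.sqrt(2.0), 16               # dim n = 16, as in the third table above
m, lb = n - 1, math.log(b)
k = np.arange(m).reshape(-1, 1)         # kept rows    k = 0..n-2 (last row dropped)
j = np.arange(1, m + 1).reshape(1, -1)  # kept columns j = 1..n-1 (empty column dropped)
fact = np.array([math.factorial(i) for i in range(m)], float)
Bb = (j * lb) ** k / fact.reshape(-1, 1)      # Carleman matrix of x -> b^x
A = Bb - np.eye(m, k=-1)                      # "corrected" truncated system
c = np.r_[-1.0, np.linalg.solve(A, np.r_[1.0, np.zeros(m - 1)])]  # c_0 = -1

# partial sums of slog_b(b) = sum_k c_k*b^k, minus the final value 1
dev = np.cumsum(c * b ** np.arange(n)) - 1.0
print(dev[-3:])   # two nearly equal deviations, then the sudden jump down
```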

Looking at the deviations at some more of the last terms, for dim n = 64, we see the following

Code:
...
-0.000000000626200198250
-0.000000000492336933075
-0.000000000417440371765
-0.000000000376261655863
-0.000000000354008626669
-0.000000000342186109314
-0.000000000336009690814
-0.000000000332835946403
-0.000000000331231525320
-0.000000000330433387110
- - - - - - - - - - - - - - - -
-0.000000000000000000108   (= -1.08608090167E-19)

What does this mean as dimension n -> infinity? Then, somehow, the correction term "is never reached"?

Well, the deviation of the partial sums from 1 decreases too, so on a rigorous view we may find that this effect can indeed be neglected.
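That reading can be checked numerically: both the pre-jump and the post-jump deviation shrink as the dimension grows, so the step effect vanishes in the limit. A self-contained float sketch (same caveats as before; small dimensions only, to stay within double precision):

```python
import math
import numpy as np

def last_devs(b, n):
    """Deviation from 1 of the last two partial sums of slog_b(b),
    for the dimension-n truncated system (float re-derivation)."""
    m, lb = n - 1, math.log(b)
    k = np.arange(m).reshape(-1, 1)
    j = np.arange(1, m + 1).reshape(1, -1)
    fact = np.array([math.factorial(i) for i in range(m)], float)
    A = (j * lb) ** k / fact.reshape(-1, 1) - np.eye(m, k=-1)
    c = np.r_[-1.0, np.linalg.solve(A, np.r_[1.0, np.zeros(m - 1)])]
    dev = np.cumsum(c * b ** np.arange(n)) - 1.0
    return dev[-2], dev[-1]

b = math.sqrt(2.0)
for n in (4, 8, 16):
    print(n, last_devs(b, n))   # both entries shrink as n grows
```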

But I'd say that this also makes a qualitative difference relative to the finite-dimension-based approximations for the superlog/iteration height obtained by the other known methods for tetration and its inverse.

What do you think?

Gottfried

Code:
b = sqrt(2)
N = 64
\\ (...) computation of Bb
\\ dV(), VE(), V() and DR are helper routines from my matrix toolbox
tmp = Bb - dV(1.0);
corrected = VE(tmp, N-1, 1-N);  \\ keep first N-1 rows and last N-1 columns

\\ computation for some dimension n <= N
n = 64;
tmp = VE(corrected, n-1)^-1;    \\ inverse of the top-left segment of "corrected"
SLOGb = vectorv(n, r, if(r==1, -1, tmp[r-1,1]))  \\ shift resulting coefficients; set SLOGb[1] = -1
partsums = VE(DR, n)*dV(b, n)*SLOGb  \\ DR provides the partial summing
disp = partsums - V(1, n)            \\ partial sums - 1; last three entries documented above