Andrew Robbins' Tetration Extension
#31
There is one open question for me with this computation; maybe it has been dealt with elsewhere.
(In my matrix-notation) the coefficients for the slog-function to some base b are taken from the SLOGb-vector according to the idea

(I - Bb)*SLOGb = [0,1,0,0,...]

where I is the identity operator and Bb is the operator which performs x -> b^x.

Because the matrix (I - Bb) is not invertible, Andrew's proposal is to remove the empty first column - and the last(!) row - to make it invertible; let's call the result "corrected(I - Bb)".

Then the coefficients for the slog power series are taken from SLOGb, from a sufficiently accurate finite-dimensional solution of

SLOGb = (corrected(I - Bb))^-1*[0,1,0,0,...]

and because the coefficients stabilize with increasing dimension, the finite SLOGb is taken as a meaningful and valid approximation to the true (infinite-dimensional) SLOGb.

Btw., this approach resembles the problem of the iteration series for power towers in a nice way: (I - Bb)^-1 would be a representation of I + Bb + Bb^2 + Bb^3 + ..., which could then be used for the iteration series of b^^h with increasing heights h. Obviously such a discussion would need some more consideration, because we are dealing with a nasty divergent series here, so let's leave this detail aside.
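For readers who want to experiment: here is a minimal numerical sketch of the construction in Python/numpy (my own translation, not Andrew's original code; the sign and index conventions below are my assumption, chosen so that the functional equation slog(b^x) - slog(x) = 1 comes out self-consistent). The Carleman-type matrix of x -> b^x has entries (k*ln b)^m / m!; the empty first column and the last row are dropped, and the remaining square system is solved:

```python
import numpy as np
from math import factorial

def slog_coeffs(b, n):
    """Approximate slog_b Taylor coefficients s_1..s_{n-1} (constant term left free).

    Row m, column k of (Bb - I): coeff of x^m in (b^x)^k, minus delta_{mk}.
    The functional equation slog(b^x) - slog(x) = 1 then reads
    (Bb - I) s = [1, 0, 0, ...]^T.  The column k = 0 is identically zero,
    so it is removed together with the last row, as in Andrew's proposal.
    """
    lnb = np.log(b)
    A = np.array([[(k * lnb) ** m / factorial(m) - (1.0 if m == k else 0.0)
                   for k in range(1, n)]      # drop the empty first column
                  for m in range(n - 1)])     # drop the last row
    rhs = np.zeros(n - 1)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)            # s[i] ~ coefficient of x^(i+1)

b, n = np.sqrt(2.0), 32
s = slog_coeffs(b, n)
# normalization check: slog_b(b) - slog_b(1) = sum s_k (b^k - 1) should be 1
total = float(np.dot(s, b ** np.arange(1, n) - 1.0))
print(total)
```

With base b = sqrt(2) and n = 32 the check value comes out very close to 1, in line with the convergence discussed below.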


The detail I want to point out is the following.

Consider the coefficients in the SLOGb vector. If we use a "nice" base, say b = sqrt(2), then for dimension n the coefficients at k = 0..n-1 decrease as k approaches n-2, but finally, at k = n-1, one relatively big coefficient follows, which then supplies the value needed for a good approximation by the order-n polynomial for the slog - a suspicious effect!

This can also be seen in the partial sums: for slog_b(b) - slog_b(1) we should get partial sums which approach 1. Here I document the deviation of the partial sums from the final value 1 at the last three terms of the nth-order slog_b polynomial.

(For a crosscheck see the Pari/GP excerpt at the end of this message.)

Examples: in each case the values ("partial sum" - 1) at the terms k = n-3, k = n-2, k = n-1 are given, for several dimensions n
Code:
dim n=4
  ...
    -0.762957567623
    -0.558724904310
    -0.150078240781
dim n=8
  ...
    -0.153309829172
    -0.120439792559
    -0.00882912480664
dim n=16
  ...
    -0.00696424577339
    -0.00629653092984
    -0.0000322687018600
dim n=32
  ...
    -0.0000228720888610
    -0.0000223192966457
    -0.000000000473007074189
dim n=64
  ...
    -0.000000000331231525320
    -0.000000000330433387110
    -0.000000000000000000108

While we generally see nice convergence with increasing dimension, there is a "step" effect at the last partial sum (which also reflects an unusually big last term).
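This step effect is easy to reproduce numerically. A short numpy sketch (my own reconstruction of the scheme described in this post; the truncation conventions, first column and last row removed, are my assumption) prints the last three deviations for n = 32:

```python
import numpy as np
from math import factorial

b, n = np.sqrt(2.0), 32
lnb = np.log(b)
# truncated (Bb - I): entries (k ln b)^m/m! - delta_{mk},
# with the empty first column (k=0) and the last row removed
A = np.array([[(k * lnb) ** m / factorial(m) - (1.0 if m == k else 0.0)
               for k in range(1, n)] for m in range(n - 1)])
rhs = np.zeros(n - 1)
rhs[0] = 1.0
s = np.linalg.solve(A, rhs)          # coefficients s_1 .. s_(n-1)
# partial sums of slog_b(b) - slog_b(1) = sum s_k (b^k - 1), deviation from 1
dev = np.cumsum(s * (b ** np.arange(1, n) - 1.0)) - 1.0
print(dev[-3:])                      # the very last term causes the "jump"
```

The second-to-last deviation stays at the 1e-5 level, while the final term collapses it by several orders of magnitude, matching the table for dim n=32 above.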

Looking at some more of the last coefficients for dim n=64, we see the following
Code:
...
  -0.000000000626200198250
  -0.000000000492336933075
  -0.000000000417440371765
  -0.000000000376261655863
  -0.000000000354008626669
  -0.000000000342186109314
  -0.000000000336009690814
  -0.000000000332835946403
  -0.000000000331231525320
  -0.000000000330433387110
  - - - - - - - - - - - - - - - -
  -0.000000000000000000108 (=  -1.08608090167E-19)
Here we nearly get convergence to an erroneous result (off by about 3e-10), which stabilizes over many terms and is only corrected by a jump due to the very last coefficient.

What does this mean as the dimension n -> infinity: is the correction term then, somehow, "never reached"?



Well, the deviation of the partial sums from 1 decreases too, so from a rigorous point of view we may find that this effect can indeed be neglected.
But I'd say that this still makes a qualitative difference compared to the finite-dimension-based approximations of the superlog/iteration height by the other known methods for tetration and its inverse.

What do you think?

Gottfried



Code:
b = sqrt(2)
N=64
\\ (...) computation of Bb
tmp = Bb-dV(1.0) ;
corrected = VE(tmp,N-1,1-N); \\ keep first n-1 rows and last n-1 columns

\\ computation for some dimension n<=N
n=64;
tmp=VE(corrected,n-1)^-1;   \\ inverse of the dim-top/left segment of "corrected"

SLOGb = vectorv(n,r,if(r==1,-1,tmp[r-1,1])) \\ shift resulting coefficients; also set "-1" in SLOGb[1]
partsums =  VE(DR,n)*dV(b,n)*SLOGb \\ DR provides partial summing

disp = partsums - V(1,n) \\ partial sums - 1 : document the last three entries for msg
Gottfried Helms, Kassel
#32
I have said this before, but ...

Let's consider the system of equations from the OP.

We need to find v_n.

INSTEAD of truncating to n x n systems and letting n grow, I consider it differently.

We want the radius of convergence to be as large as possible.
So we minimize v_0^2 + v_1^2 + ...
and expect a radius up to the fixpoints of exp.

We truncate to an n x (n+1) system and minimize the sum of squares above over the relevant v_k.

So we take 9 equations in (v_1 ... v_10) and solve for the minimum of v_1^2 + ... + v_10^2.

Then we proceed by adding 11 variables (v_11, ..., v_22) and solve that system, with the previous values v_j plugged in and 10 equations, again minimizing the sum of squares.

Then repeat ...

So

v_1 .. v_10, then v_11 .. v_21, v_22 .. v_33, etc.

So eventually all v_j get solved.

And the equations hold approximately, at a triangular-number distance of iterations.
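A sketch of the first batch of this scheme, under my own assumptions (base e, and the same Carleman-style truncation of (Bb - I) as in post #31, with the empty first column dropped): numpy's lstsq returns the minimum-norm solution of an underdetermined full-row-rank system, which is exactly the minimizer of v_1^2 + ... + v_10^2 subject to the 9 equations.

```python
import numpy as np
from math import factorial

neq, nvar = 9, 10                       # 9 equations in v_1 .. v_10, as above
# row m, column k: coeff of x^m in (e^x)^k minus delta_{mk}  (ln b = 1 for base e)
A = np.array([[float(k) ** m / factorial(m) - (1.0 if m == k else 0.0)
               for k in range(1, nvar + 1)]
              for m in range(neq)])
rhs = np.zeros(neq)
rhs[0] = 1.0
# minimum-norm least-squares solution: among all exact solutions of the
# underdetermined system, this one minimizes sum v_k^2
v, *_ = np.linalg.lstsq(A, rhs, rcond=None)
resid = float(np.linalg.norm(A @ v - rhs))
print(v, resid)
```

Subsequent batches would append further columns and rows and re-solve with the earlier v_j held fixed; I have not carried that out here.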

Regards

Tommy1729
#33
Maybe this was already addressed elsewhere; if so, perhaps a kind reader can link to that entry.

Playing again with the structures of Andy's slog, I came to the following observation (so far with base e only, but I think it's trivial to extend).

Consider the power series for slog(z) as given by Andy's description, and define slog0(z) by inserting zero as the constant term instead of the -1 in slog(z). Then slog0(e^^h) gives h+1 for the argument e^^h.

Moreover, the series for slog0(z) can now formally be inverted.
One can observe that the coefficients of the inverted series are near those of the series for log(1+z), so let's define the inverse of the slog function as a tetration function T0 via

taylorseries(T0) = serreverse(slog0) - taylorseries(log(1+x))

Then the coefficients of T0() decrease nicely, and we can compute e^^h (best for fractional h in the range -1 < h < 0) via

e^^h = T0(1+h) + log(2+h)

The series for T0() looks much nicer than that of the slog(), but of course its coefficients depend directly on the accuracy of the coefficients of the slog() function, so I still used the slog with, say, a matrix size of 96 or 128 to get a handful of correct digits.
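For readers without Pari/GP: serreverse (power-series reversion) is easy to re-implement term by term, using the triangular system given by the powers of f. The sketch below is my own routine, checked self-containedly by reverting exp(x) - 1, whose reverse is log(1+x). Feeding it the slog0 coefficients from post #31 instead would give the T0 series above; I have not repeated that computation here.

```python
import numpy as np

def revert(a):
    """Coefficients g[0..N] of the reverse series g with g(f(x)) = x + O(x^(N+1)),
    given f's coefficients a[0..N] with a[0] = 0 and a[1] != 0 (cf. Pari's serreverse)."""
    N = len(a) - 1
    # P[j][m] = coefficient of x^m in f(x)^j; note P[j][m] = 0 for m < j
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = 1.0
    for j in range(1, N + 1):
        for m in range(j, N + 1):
            P[j, m] = sum(P[j - 1, m - i] * a[i] for i in range(1, m - j + 2))
    # solve sum_j g_j P[j][m] = delta_{m,1} for g, in increasing m (triangular)
    g = np.zeros(N + 1)
    for m in range(1, N + 1):
        g[m] = ((1.0 if m == 1 else 0.0)
                - sum(g[j] * P[j, m] for j in range(1, m))) / P[m, m]
    return g

# check: the reverse of f(x) = exp(x) - 1 is log(1+x), i.e. g_k = (-1)^(k+1)/k
from math import factorial
N = 12
a = [0.0] + [1.0 / factorial(k) for k in range(1, N + 1)]
g = revert(a)
err = max(abs(g[k] - (-1.0) ** (k + 1) / k) for k in range(1, N + 1))
print(err)
```

The subtraction of the log(1+x) coefficients, as in the definition of T0, is then a one-liner on the resulting vector.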

I've never worked with Jay D. Fox's extremely precise solutions for the slog matrix, so I cannot say how the coefficients of T0() would change. Would somebody like to check this?

Gottfried

Appendix: 32 terms of T0(h), taken from slog0(z) with a matrix size of 64
Code:
T0(h) =                     0
         +  0.0917678575394 *h
         +   0.175505903737 *h^2
         +  0.0164995173026 *h^3
         +  0.0191448752458 *h^4
         + 0.00133512590560 *h^5
         + 0.00231496855708 *h^6
       - 0.0000239484773943 *h^7
        + 0.000304699490128 *h^8
       - 0.0000364374120911 *h^9
       + 0.0000455639411655 *h^10
       - 0.0000114433561149 *h^11
      + 0.00000769463425171 *h^12
      - 0.00000262533621718 *h^13
      + 0.00000145886258700 *h^14
     - 0.000000604261454214 *h^15
    +  0.000000301260921078 *h^16
     - 0.000000129633563113 *h^17
    + 0.0000000606527811916 *h^18
    - 0.0000000293680769725 *h^19
    + 0.0000000147234175905 *h^20
   - 0.00000000637812232351 *h^21
   + 0.00000000263672711471 *h^22
   - 0.00000000137069796843 *h^23
  + 0.000000000858320261831 *h^24
  - 0.000000000396925930057 *h^25
        + 7.48739318521E-11 *h^26
        - 1.71265782834E-11 *h^27
        + 7.18443441580E-11 *h^28
        - 5.88967800348E-11 *h^29
        - 7.64569615630E-12 *h^30
        + 2.78278848579E-11 *h^31  
     +O(h^32)

Appendix 2:
Jay D. Fox has provided very accurate coefficients for the slog function. Using the first 128 of those leading coefficients to recompute T0(h), I arrive at a remarkable result for e^^pi, where 20 digits match Jay's best estimate:
Code:
37149801960.55698549 914478420500428635881  \\ using T0() with n=160 terms: e^^Pi = e^^(3+frac(Pi)) = e^e^e^[T0 (1+ frac(Pi)  )  + log( 2 + frac(Pi))    ]
37149801960.55698549 914478420500428635881  \\ Using T0() with n=128 terms
37149801960.55698549 872339920573987 \\ J. D. Fox, to about 25 digits precision; difference at the 20th decimal digit
thread see at http://math.eretrandre.org/tetrationforu...php?tid=63 "Improving convergence of Andrew's slog"
post see at http://math.eretrandre.org/tetrationforu...920#pid920
The data of the Taylor series for the slog with 700 terms are also in that thread.
Gottfried Helms, Kassel

