Andrew Robbins' Tetration Extension

Gottfried, Ultimate Fellow, Posts: 767, Threads: 119, Joined: Aug 2007
12/28/2009, 05:21 PM (This post was last modified: 12/28/2009, 07:19 PM by Gottfried.)

There is one question still open for me with this computation; maybe it is dealt with elsewhere.

(In my matrix notation) the coefficients of the slog function to some base b are taken from the SLOGb vector according to the idea

    (I - Bb) * SLOGb = [0,1,0,0,...]

where I is the identity operator and Bb the operator which performs x -> b^x. Because the matrix (I - Bb) is not invertible, Andrew's proposal is to remove the empty first column and the last(!) row to make it invertible; let's call the result "corrected(I - Bb)". The coefficients for the slog power series are then taken from SLOGb, from a sufficiently accurate finite-dimensional solution

    SLOGb = corrected(I - Bb)^-1 * [0,1,0,0,...]

and because the coefficients stabilize with increasing dimension, the finite SLOGb is taken as a meaningful and also valid approximation to the true (infinite-dimensional) SLOGb.

Btw., this approach resembles the problem of the iteration series for power towers in a nice way: (I - Bb)^-1 would be a representation of I + Bb + Bb^2 + Bb^3 + ..., which could then be used for the iteration series of b^^h with increasing height h. Obviously such a discussion would need some more consideration, because we are dealing with a nasty divergent series here, so let's leave that detail aside.

The detail I want to point out is the following. Consider the coefficients in the SLOGb vector. If we use a "nice" base, say b = sqrt(2), then for dimension n the coefficients at k = 0..n-1 decrease as k approaches n-2, but finally, at k = n-1, one relatively big coefficient follows, which then supplies the value needed for a good approximation by the order-n polynomial for the slog: a suspicious effect! This can also be seen in the partial sums; for slog_b(b) - slog_b(1) we should get partial sums which approach 1.
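In Python/numpy the same computation can be sketched like this. This is only my rough translation for illustration, not Andrew's code: the matrix convention M[i][j] = (j*ln b)^i / i! for the operator x -> b^x and all names are my own assumptions for the sketch. It builds the truncated (I - Bb), drops the empty first column and the last row, solves for the coefficients, and forms the partial sums of slog_b(b) - slog_b(1) = sum_k v_k (b^k - 1).

```python
import numpy as np
from math import log, factorial, sqrt

def slog_coeffs(b, n):
    # M[i][j] = [x^i] (b^x)^j = (j*ln b)^i / i!, the truncated operator for x -> b^x
    lb = log(b)
    M = np.array([[(j * lb) ** i / factorial(i) for j in range(n)]
                  for i in range(n)])
    A = M - np.eye(n)
    # the first column of A is identically zero, so (as in Andrew's proposal)
    # drop it together with the last row to obtain an invertible system
    rhs = np.zeros(n - 1)
    rhs[0] = 1.0
    v = np.empty(n)
    v[0] = -1.0                      # normalization: slog_b(0) = -1
    v[1:] = np.linalg.solve(A[:n - 1, 1:], rhs)
    return v

b = sqrt(2)
n = 16
v = slog_coeffs(b, n)
# partial sums of slog_b(b) - slog_b(1) = sum_k v_k*(b^k - 1), approaching 1
dev = np.cumsum(v * (b ** np.arange(n) - 1.0)) - 1.0
print(dev[-3:])   # last three deviations, cf. the dim n=16 row of the table
```

The "step" effect shows up here as well: the last deviation is orders of magnitude smaller than the one before it.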
Here I document the deviation of the partial sums from the final value 1 at the last three terms of the n-th-order slog_b polynomial (for a crosscheck see the Pari/GP excerpt at the end of this message). In each example the values of ("partial sum" - 1) at the terms k = n-3, k = n-2, k = n-1 are given, for some dimensions n.

Code:```
dim n=4    ...   -0.762957567623           -0.558724904310           -0.150078240781
dim n=8    ...   -0.153309829172           -0.120439792559           -0.00882912480664
dim n=16   ...   -0.00696424577339         -0.00629653092984         -0.0000322687018600
dim n=32   ...   -0.0000228720888610       -0.0000223192966457       -0.000000000473007074189
dim n=64   ...   -0.000000000331231525320  -0.000000000330433387110  -0.000000000000000000108
```

While we generally see nice convergence with increasing dimension, there is a "step" effect at the last partial sum (which also reflects an unusually big last term). Looking at some more of the last coefficients at dim n=64 we see the following:

Code:```
...
-0.000000000626200198250
-0.000000000492336933075
-0.000000000417440371765
-0.000000000376261655863
-0.000000000354008626669
-0.000000000342186109314
-0.000000000336009690814
-0.000000000332835946403
-0.000000000331231525320
-0.000000000330433387110
- - - - - - - - - - - - - - - -
-0.000000000000000000108   (= -1.08608090167E-19)
```

where we nearly get convergence to a wrong value (off by about 3e-10), which stabilizes over many terms and is only corrected by a jump due to the very last coefficient. What does this mean as the dimension n -> infinity: is the correction term then, somehow, "never reached"? Well, the deviation of the partial sums from 1 decreases too, so from a rigorous viewpoint we may find that this effect can indeed be neglected. But I'd say it still makes a qualitative difference compared with the finite-dimension-based approximations of the superlog/iteration height by the other known methods for tetration and its inverse. What do you think?
Gottfried

Code:```
b = sqrt(2)
N = 64
\\ (...) computation of Bb
tmp = Bb - dV(1.0);
corrected = VE(tmp, N-1, 1-N);   \\ keep first N-1 rows and last N-1 columns

\\ computation for some dimension n <= N
n = 64;
tmp = VE(corrected, n-1)^-1;     \\ inverse of the top/left n-1 segment of "corrected"
SLOGb = vectorv(n, r, if(r==1, -1, tmp[r-1, 1]))  \\ shift resulting coefficients; also set "-1" in SLOGb[1]
partsums = VE(DR, n) * dV(b, n) * SLOGb   \\ DR provides partial summing
disp = partsums - V(1, n)        \\ partial sums - 1; document the last three entries for the msg
```

Gottfried Helms, Kassel

tommy1729, Ultimate Fellow, Posts: 1,372, Threads: 336, Joined: Feb 2009
08/18/2016, 12:29 PM

I said this before, but let's consider the system of equations from the OP. We need to find the v_n. Instead of truncating to n x n systems and letting n grow, I consider it differently. We want the radius of convergence to be as large as possible, so we minimize v_0^2 + v_1^2 + ... and expect a radius up to the fixpoints of exp.

We truncate to an n x (n+1) system and minimize the sum of squares above over the relevant v_k. So we take 9 equations in (v_1 ... v_10) and solve for the minimum of v_1^2 + ... + v_10^2. Then we proceed by adding 11 variables (v_11, ..., v_21) and solve that system, plugging in the previous values v_j and using the next equations, again minimizing the sum of squares. Then we repeat: first v_1 .. v_10, then v_11 .. v_21, then v_22 .. v_33, etc. So eventually all v_j get solved, and the equations hold almost up to a triangular-number distance of iterations.

Regards

tommy1729

Gottfried, Ultimate Fellow, Posts: 767, Threads: 119, Joined: Aug 2007
08/22/2016, 04:19 PM (This post was last modified: 08/22/2016, 09:50 PM by Gottfried.)

Maybe this was already addressed elsewhere; if so, some kind reader may please link to that entry. Playing again with the structures of Andy's slog I came to the following observation (so far with base e only, but I think it is trivial to extend).
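As an aside on Tommy's proposal above: its first stage amounts to a minimum-norm least-squares solve, which can be sketched in Python/numpy. This is my own illustration only; it assumes the same truncated system (M - I)v = e_0 with M[i][j] = (j*ln b)^i/i! as in Andrew's method, does only the first 9 x 10 stage, and omits the staged continuation with further variable blocks.

```python
import numpy as np
from math import log, factorial, sqrt

b = sqrt(2)
lb = log(b)
rows, cols = 9, 10           # 9 equations in the 10 unknowns v_1 .. v_10
# truncated system (M - I) v = e_0, restricted to rows 0..8 and columns 1..10
A = np.array([[(j * lb) ** i / factorial(i) - (1.0 if i == j else 0.0)
               for j in range(1, cols + 1)] for i in range(rows)])
rhs = np.zeros(rows)
rhs[0] = 1.0
# for an underdetermined system, lstsq returns the minimum-norm solution,
# i.e. the one minimizing v_1^2 + ... + v_10^2 subject to the 9 equations
v, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(np.linalg.norm(A @ v - rhs))   # residual ~ 0: the equations hold
```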
Consider the power series for slog(z) as given by Andy's description, and define slog0(z) by inserting zero as the constant term instead of the -1 in slog(z). Then slog0(e^^h) gives h+1 for the argument e^^h. But moreover, the series for slog0(z) can now formally be inverted. One can observe that its coefficients are close to those of the series for log(1+z), so let's define the inverse of the slog function as the tetration function

    taylorseries(T0) = serreverse(slog0) - taylorseries(log(1+x))

Then the coefficients of T0() decrease nicely, and we can compute e^^h (best for fractional h in the range -1
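The serreverse step can be sketched in plain Python with a generic power-series reversion. The routine below is my own, not the Pari/GP built-in; as a check it is applied to log(1+z), whose reversion is exp(z) - 1, the very series subtracted in the definition of T0.

```python
from math import factorial

def serreverse(a, n):
    # formal reversion of f(z) = a[1] z + a[2] z^2 + ...  (a[0] = 0, a[1] != 0):
    # returns g with g(f(z)) = z + O(z^(n+1)), like Pari/GP's serreverse
    a = a + [0.0] * (n + 1 - len(a))
    fpow = [[0.0] * (n + 1) for _ in range(n + 1)]   # fpow[k] = coeffs of f^k
    fpow[0][0] = 1.0
    for k in range(1, n + 1):
        for i in range(n + 1):
            fpow[k][i] = sum(fpow[k - 1][i - j] * a[j] for j in range(i + 1))
    g = [0.0] * (n + 1)
    g[1] = 1.0 / a[1]
    for m in range(2, n + 1):
        # [z^m] of sum_{k=1}^m g[k] f(z)^k must vanish; solve for g[m]
        c = sum(g[k] * fpow[k][m] for k in range(1, m))
        g[m] = -c / fpow[m][m]                       # fpow[m][m] = a[1]^m
    return g

n = 8
log1p = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, n + 1)]  # log(1+z)
g = serreverse(log1p, n)
print(g[1:4])   # coefficients of exp(z)-1: 1, 1/2, 1/6
```

Applying the same routine to the coefficients of slog0 (with its constant term zero) would give the series whose difference from log(1+x) is T0.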

