[Update] Comparision of 5 methods of interpolation to continuous tetration
#10
Hi Sheldon,

First, to your second question, number 1): yes, the function is real on the real axis, because we use just the eigendecomposition of a real matrix - no complex coefficients anywhere.
About number 2) I have no idea yet; perhaps it's too late for me tonight...
I'll come back to this tomorrow.

We do not have a true Schröder function and inverse Schröder function, because the diagonal after the diagonalization (the set of eigenvalues) does not contain the consecutive powers of the log of the fixpoint; its entries are more or less arbitrary, depending on the size of the matrix. So I don't see at the moment how to model the sexp() and inverse sexp() functionality.


Gottfried

Here is code to reproduce the behaviour - no optimization is made, just the basic functionality. If you want to exceed the range of convergence of the "power series", you should add explicit iteration over the integer part of the heights (see the sketch after the code block below).
Code:
n=24   \\ set size for matrices, note this needs ~ 200 digits real precision,
       \\ n=64 needs about 4000 digits precision and 2 hours to be computed!
default(precision,400) \\ when n~ 24

\\ procedures ===========================================================
\\    several global variables/matrices will be created/changed
{init(base,matsize,prec) = local(a);
   n=matsize;  \\ set size for matrices, note size=24x24 needs ~ 200 digits real precision,
               \\ n=64 needs about 4000 digits precision and 2 hours to be computed!
   default(precision,prec,1); \\ when n~ 24 then prec ~ 200



      b = base;bl=log(b); \\ sets the base b for the map x -> b^x
                    \\ bl must have the required precision depending on matrix-size n
                    \\ thus must be recomputed if size and precision is increased!

      B = matrix(n,n,r,c,((c-1)*bl)^(r-1)/(r-1)!) ;
                 \\ the Carlemanmatrix for x-> b^x
                 \\ the resulting coefficients for the power-series
                 \\ are in the second column

     \\ Diagonalization such that B = M * D * W (where W=M^-1)
      M = mateigen(B);
      W = M^-1 ;
      a = W * B * M ;
      D = vector(n, r, a[r,r]) ;  \\ the diagonal (the eigenvalues) extracted as a vector
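      \\ optional sanity check (just a rough sketch, left commented out): the
      \\ reconstruction error of B from the decomposition should be tiny
      \\ print( norml2( M * matdiagonal(D) * W - B ) );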

      POLY= M * matdiagonal(W[,2]) ; \\ this matrix is a "kernel" which needs only the parameter
                    \\ for the height to give the associated power series

      print("matrices are created");
   return(1);}


f(x) = sum(k=0,n-1,x^k * B[1+k,2])  \\ just to show how this works for the
                                    \\ function f(x) itself
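\\ for instance, after init(4,24,200) one should get f(0.5) ~ 4^0.5 = 2,
\\ up to the truncation error of the n-term series:
\\ f(0.5) - 2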



\\ the h'th fractional power of B can then be approximated by M * D^h * W (= B^h)
make_powerseries(h) = return( POLY * vectorv(n,r, D[r]^h) );

{ tet_polynomial(x,h) = local(pc, result);
       pc = POLY*vectorv(n, r , D[r]^h ) ;  \\ creates coefficients for power series
       result = sum(k=0,n-1, x^k * pc[1+k]);  \\ evaluates the powerseries
       return(result);}



\\ note that M and W do not really implement the Schröder-functions because
\\ the entries in D are not consecutive powers of one argument
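\\ for a genuine Schröder decomposition at a fixpoint t of b^x (so b^t = t) the
\\ eigenvalues would be the consecutive powers log(t)^k, with log(t) = log(b)*t;
\\ printing D (e.g. print(D)) shows that the entries here follow no such pattern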


\\ then you start:=================================================================

init(4,24,200) \\ for matrixsize 24x24 you'll need at least 200 digits precision

x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference  
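\\ a further quick check (relying on the real coefficients discussed above):
\\ a real argument should give a numerically real result, e.g.
\\ imag( tet_polynomial(0.5, 0.5) )   \\ should be ~ 0 up to the working precision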

\\ bigger matrixsize
init(4,32,600) \\ for matrixsize 32x32 you'll need at least 600 digits precision

x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference  


\\ bigger matrixsize
init(4,64,4000) \\ for matrixsize 64x64 you'll need at least 4000 digits precision
                 \\ this needs 2 hours for computation!
x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference
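
As said above, to go beyond the range of convergence of the truncated power series one would combine the fractional step with explicit iterations of x -> b^x over the integer part of the height. The following is only a minimal sketch of that idea (the name tet_frac is arbitrary, and it assumes heights h >= 0), not an optimized routine:
Code:
{ tet_frac(x,h) = local(m, y);
     m = floor(h);                    \\ integer part of the height
     y = tet_polynomial(x, h - m);    \\ fractional remainder via the truncated power series
     for(k=1, m, y = b^y);            \\ integer part done by explicit iteration of x -> b^x
     return(y); }

\\ for instance tet_frac(0.1*I, 2.5) should agree with b^(b^(tet_polynomial(0.1*I, 0.5)))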


Gottfried Helms, Kassel