[Update] Comparison of 5 methods of interpolation to continuous tetration
#1
I've just updated my discussion from 2010, where I provided pictures and short commentary as a basic introduction to different interpolation proposals for tetration. I have now also included the comparison with the Kneser method, for which I used one of the Pari/GP scripts that Sheldon has kindly provided here.
Here is the link:

http://go.helms-net.de/math/tetdocs/Comp...ations.pdf

I'll also attach it here, in case the website ever goes down...
[updates]: included comparisons of the Kneser method with the polynomial 32x32, polynomial 48x48 and polynomial 64x64 interpolations.
Impression/conclusion: the bigger the matrix size, the better the Kneser solution is approximated.
[/end update]

Ah, PS: I would very much like to include material from someone else who has another practical procedure and can provide data for the same setting (base b=4, iteration heights in steps of 1/20 to 1/40, with the given initial values), so that I can include it in my Excel tables for plotting.


Have fun -

Gottfried

For more material see: http://go.helms-net.de/math/tetdocs/


Attached Files
.pdf   ComparisionOfInterpolations.pdf (Size: 352.81 KB / Downloads: 884)
Gottfried Helms, Kassel
#2
Gottfried, your math looks difficult to understand for an amateur in continuous functional iteration, but it has very interesting pictures nonetheless. I never was good at complex analysis; otherwise I might have a chance to understand it. The presentation in your papers is very nice :)
#3
(10/14/2013, 10:42 AM)MikeSmith Wrote: Gottfried, your math looks difficult to understand for an amateur in continuous functional iteration, but it has very interesting pictures nonetheless. I never was good at complex analysis; otherwise I might have a chance to understand it. The presentation in your papers is very nice :)

Hi Mike - thank you for your kind response. Well, it sounds strange to me to hear that my math is too complex. You know, I started all this as a complete amateur, quite accidentally, when I looked at properties of the Pascal matrix, and powers of the Pascal matrix, and... Then, through a very natural and naive interest in the question of what happens if I apply the same procedures to the matrix of Stirling numbers, I fell into... yes, the area of power towers. And from there, by asking how in the world fractional powers of such matrices could be constructed, I came to power towers of fractional height, a.k.a. tetration.

Perhaps after so many years of intense work with it my language has become more sophisticated and/or overly jargon-laden... I have always wanted to make those steps into the matter transparent for other interested people, and as joyful/exciting as they were for me. If you (or someone else here) are interested in a conversation to make my approach more understandable for an amateur, then let me know. Possibly it only needs a rewriting of my first articles, or a collaborative rewriting of them?

Well, Mike, just share your thoughts -

Gottfried
Gottfried Helms, Kassel
#4
(10/14/2013, 01:19 PM)Gottfried Wrote: [...] If you (or someone else here) are interested in a conversation to make my approach more understandable for an amateur, then let me know. [...]

Thanks for explaining some of the prerequisite knowledge.
I should spend some time on this Wikipedia page
http://en.wikipedia.org/wiki/List_of_matrices
to get more familiar with ideas about matrices
#5
(10/13/2013, 02:20 PM)Gottfried Wrote: [...] I have now also included the comparison with the Kneser method [...]

http://go.helms-net.de/math/tetdocs/Comp...ations.pdf

Gottfried,
I was able to reproduce your graph on page 9 by plotting sexp(z+k), where \( k=1+\text{slog}(0.1i) \approx -0.002+0.107i \), and z varied from -7 to 1. You might try other values of k. As imag(k) increases, the Kneser technique behaves more and more like the Schroeder function. For example, try k=0.5i, graphing sexp(z+k) from z=-7 to z=1, and you see a very nice, well-defined spiral towards the fixed point, just like the Schroeder-function solution. Here interpolation works very well, since interpolation naturally approaches the Schroeder-function solution.

Closer to the real axis, the singularities at integer values <=-2 become more and more pronounced, which causes problems for interpolation.
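A minimal sketch of this experiment in Pari/GP, assuming sexp() and slog() for the base used in the paper (b=4) are provided by a loaded Kneser script (the function names are assumptions here):

Code:
\\ sketch: sample sexp(z+k) along z = -7 .. 1, assuming sexp()/slog() come
\\ from a loaded Kneser script for base 4 (names are assumptions)
k = 1 + slog(0.1*I)                              \\ ~ -0.002 + 0.107*I
pts = vector(161, j, sexp(-7 + (j-1)/20 + k));   \\ points to plot
\\ now retry with k = 0.5*I: the sampled curve spirals into the fixed point,
\\ closely following the Schroeder-function solution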
- Sheldon
#6
Hi Sheldon -

thanks for your comment. What remains open for me is whether my impression is correct that the Kneser method and the polynomial method ("quick & dirty eigendecomposition") approach each other if I increase the size of the truncation of the Carleman matrix. That would be a really remarkable statement!

It would raise the subsequent question whether the triangular Carleman matrix which results from the fixpoint shift and gives the basis for regular tetration, or its triangular eigenmatrices, simply needs some completion term, perhaps something like the integral in the Ramanujan summation of divergent series, in order to agree with the two other results... I have been searching for such a thing for a long time, but to no avail so far; perhaps the current observation gives a hint where to dig?
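A hedged sketch of how this impression could be checked numerically, assuming sexp() and slog() for base 4 are available from Sheldon's Kneser script, and init()/tet_polynomial() are loaded from the script in my later post:

Code:
\\ sketch: maximal difference between the polynomial interpolation and the
\\ Kneser solution over one unit of height (sexp/slog assumed from Sheldon's
\\ script; init/tet_polynomial from the script in my later post)
init(4, 32, 600);
x0 = 0.1*I; maxdiff = 0;
forstep(h=0, 1, 1/20, maxdiff = max(maxdiff, abs(tet_polynomial(x0,h) - sexp(slog(x0)+h))));
print(maxdiff)   \\ expectation: this shrinks when the matrix size is increased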

Gottfried
Gottfried Helms, Kassel
#7
(10/14/2013, 08:22 PM)MikeSmith Wrote: http://en.wikipedia.org/wiki/List_of_matrices
to get more familiar with ideas about matrices
Hi Mike -

wow, the Wikipedia list is long...
Well, for me / for the Carleman-matrix approach, the following are relevant:

* diagonal,
* subdiagonal,
* triangular and
* square
shape ("which parts of the matrix are not 'systematically' zero?")

Matrices with constant entries:

* triangular with ones filled in,
* Pascal matrix (lower or upper triangular, with binomial entries),
* Stirling matrices S1, S2 (first and second kind),
* standard Vandermonde matrix VZ

Matrices/vectors with variable entries:

* Vandermonde vector of argument x: V(x) = [1,x,x^2,x^3,...], in its diagonal ("dV(x)"), row or column form
* Vandermonde matrix: some collection of Vandermonde vectors, for instance VZ = [V(0),V(1),V(2),V(3),...]
* Z-vector of exponent argument w: Z(w) = [1, 1/2^w, 1/3^w, 1/4^w, ...] for work with Dirichlet series and derivatives
* factorial vector of exponent argument w: F(w) = [0!^w, 1!^w, 2!^w, ...] for work with exponential series

In principle, this is it.
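For concreteness, a minimal Pari/GP sketch of these objects, truncated to a finite size n (in practice everything is a truncation):

Code:
\\ sketch of the basic vectors/matrices, truncated to size n
n = 6;
V(x)  = vector(n, c, x^(c-1));              \\ Vandermonde row vector [1, x, x^2, ...]
dV(x) = matdiagonal(V(x));                  \\ its diagonal form
VZ    = matrix(n, n, r, c, (r-1)^(c-1));    \\ rows are V(0), V(1), V(2), ...
Z(w)  = vector(n, c, 1/c^w);                \\ [1, 1/2^w, 1/3^w, ...] (Dirichlet series)
F(w)  = vector(n, c, ((c-1)!)^w);           \\ [0!^w, 1!^w, 2!^w, ...] (exponential series)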

From this, the relevant Carleman matrices can easily be constructed:

Carleman-type matrices C for some function f(x) have entries such that V(x) * C = V(f(x)).

This means that, unlike an ordinary function (defined by a power series), where a set of coefficients and the consecutive powers of an argument x give one single value (the function value), here we get all consecutive powers of the function value, so that "input" and "output" of the Carleman operation have the same shape: the Vandermonde vector V(.).
Then the operation / the defined function can be iterated:
* V(x)*C = V(f(x)),
* V(f(x))*C=V(f(f(x)))
and so on - just by powers of the Carleman matrix.
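A minimal sketch of this property, using f(x) = a*x, whose Carleman matrix is the diagonal dV(a) (so there is no truncation error):

Code:
\\ sketch: the Carleman property V(x)*C = V(f(x)) and iteration via matrix powers
n = 6; a = 3; x = 2;
V(y) = vector(n, c, y^(c-1));
C = matdiagonal(V(a));    \\ dV(a), the Carleman matrix of f(x) = a*x ("mul")
V(x) * C                  \\ = V(a*x)     = V(f(x))
V(x) * C^2                \\ = V(a^2 * x) = V(f(f(x))) : iteration = matrix power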

Simple Carleman matrices are:

* The identity matrix, for f(x) = x : V(x) * I = V(x) = V(f(x))

* The (upper triangular) Pascal matrix, for f(x) = x+1: V(x) * P = V(x+1)
(its inverse gives V(x) * P^-1 = V(x-1); this is the operation of "inc")
* its powers give the iteration/operation of addition: V(x)*P^h = V(x+h)
(this is the operation of "add")


* any diagonal Vandermonde vector dV(a), for f(x) = a*x: V(x) * dV(a) = V(a*x)
(this is the operation of "mul")

* The S2 matrix, "similarity-scaled" by the factorials, fS2F = dF(-1)*S2*dF(1), for f(x) = exp(x)-1 : V(x) * fS2F = V(exp(x)-1)
(this is the operation of "exp-1")

* The S1 matrix, similarity-scaled like the S2, for f(x) = log(1+x) : V(x) * fS1F = V(log(1+x))
(this is the operation of "log+1")

This is the basic material.
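As a sketch, these matrices can be built directly in Pari/GP and the Carleman property checked on the truncations (P acts exactly, fS2F only up to truncation error):

Code:
\\ sketch: Pascal matrix P and factorially scaled Stirling matrix fS2F
n = 8; x = 0.1;
V(y)  = vector(n, c, y^(c-1));
P     = matrix(n, n, r, c, binomial(c-1, r-1));     \\ upper-triangular Pascal matrix
S2    = matrix(n, n, r, c, stirling(r-1, c-1, 2));  \\ Stirling numbers, 2nd kind
dF(w) = matdiagonal(vector(n, c, ((c-1)!)^w));
fS2F  = dF(-1) * S2 * dF(1);                        \\ Carleman matrix of exp(x)-1

V(x) * P                           \\ equals V(x+1) exactly
V(x) * fS2F                        \\ ~ V(exp(x)-1), up to truncation
vector(n, c, (exp(x)-1)^(c-1))     \\ direct comparison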

Then matrix multiplication, inversion, and diagonalization come into play. Here we must be aware that in practice we work with truncated matrices: their dot products are ideally series, but practically polynomials with finitely many coefficients. And we must take care that our results and generalizations are justified, in the sense that they are arbitrarily accurate approximations of the ideal result that the infinite matrix would give.

And so on... For a longer list of matrices and more of the relevant properties, you could look at my older sub-pages with the compilation of matrices related to binomial and Bernoulli coefficients on my homepage: http://go.helms-net.de/math/binomial_new/index.htm

One more remark: with my newer experience, many formulations there sound awfully naive and imprecise today, but it was pure explorative curiosity when I collected that information. Simply skimming over it today might still give a sense of what I'm doing with those matrices for tetration, even if I am more experienced and sophisticated now.

One of my real achievements back then was rediscovering the Bernoulli polynomials via the matrix approach to the problem of sums of like powers: sums of like powers can be seen as sums of iterates from x^m to (x+1)^m, and the Carleman matrix for this iteration is simply the Pascal matrix P. So simply replacing P by fS2F means carrying the methods for sums of like powers and Bernoulli polynomials over to methods for tetration, using the same scheme... The article about "sums of like powers" is therefore also a good source to see what I'm doing here, and it has the advantage that the infinite iteration, series of iterates, and even fractional iteration can finally be solved there...
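A tiny sketch of that scheme: since V(0)*P^j = V(j), summing powers of the Pascal matrix sums the like powers:

Code:
\\ sketch: sums of like powers via the Pascal matrix; component m+1 of the
\\ result equals 0^m + 1^m + ... + (k-1)^m
n = 6; k = 10;
V(y) = vector(n, c, y^(c-1));
P = matrix(n, n, r, c, binomial(c-1, r-1));
V(0) * sum(j=0, k-1, P^j)                  \\ = [10, 45, 285, 2025, ...]
vector(n, c, sum(i=0, k-1, i^(c-1)))       \\ direct check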

Gottfried
Gottfried Helms, Kassel
#8
Update of the document:
I've just included an image with the comparison of Kneser against the polynomial interpolation based on 48x48 matrices.
Update 2: the document now also includes the even better approximation using 64x64 matrices.

As expected, the bigger size gives a substantially better coincidence of the curves. The updated picture is the last one in the document.

Here is the link again:

http://go.helms-net.de/math/tetdocs/Comp...ations.pdf

Gottfried Helms, Kassel
#9
Gottfried, I think your solution is interesting. I see from your paper that the solution you get is real-valued. Presumably there is some sort of Taylor series representation for your solution. I would be interested in seeing a Taylor series at 0, where sexp(0)=1.

The Kneser solution is defined by two basic properties, which your solution would need to match in order to converge to it:
1) it is real-valued (it sounds like your function is also real-valued);
2) the limiting behavior of sexp in the complex plane, as imag(z) increases, is the same as that of the Schroeder-function solution. At imag(z)=0.75 it is already visually the same, and the convergence improves as imag(z) increases, governed by a 1-cyclic scaling function that goes to a constant as imag(z) increases.
- Sheldon
#10
Hi Sheldon,

First, to your point 1): yes, the function is real on the real axis, because we use just the eigendecomposition of a real matrix; no complex coefficients appear anywhere.
About point 2) I have no idea yet; perhaps it's too late for me tonight...
I'll come back to this tomorrow.

We do not have true Schröder and inverse Schröder functions, because the diagonal after the diagonalization (the set of eigenvalues) does not contain the consecutive powers of the logarithm of the fixpoint; the eigenvalues are more or less arbitrary, depending on the size of the matrix. So at the moment I don't see how to model the sexp() and inverse sexp() functionality.


Gottfried

Here is code to reproduce the behaviour; no optimization is done, just the basic functionality. If you want to exceed the range of convergence of the "power series", you should add explicit iteration over the integer part of the heights.
Code:
n=24   \\ set size for matrices; note this needs ~ 200 digits real precision,
       \\ n=64 needs about 4000 digits precision and 2 hours to compute!
default(realprecision,400) \\ when n ~ 24

\\ procedures ===========================================================
\\    several global variables/matrices will be created/changed
{init(base,matsize,prec) =
   n=matsize;  \\ set size for matrices; size 24x24 needs ~ 200 digits real precision,
               \\ n=64 needs about 4000 digits precision and 2 hours to compute!
   default(realprecision,prec); \\ when n ~ 24 then prec ~ 200



      b = base;bl=log(b); \\ sets the base for tetration x-> 4^x
                    \\ bl must have the required precision depending on matrix-size n
                    \\ thus must be recomputed if size and precision is increased!

      B = matrix(n,n,r,c,((c-1)*bl)^(r-1)/(r-1)!) ;
                 \\ the Carlemanmatrix for x-> b^x
                 \\ the resulting coefficients for the power-series
                 \\ are in the second column

     \\ Diagonalization such that B = M * D * W (where W = M^-1)
      M = mateigen(B);
      W = M^-1 ;
      D = vector(n, j, (W * B * M)[j,j]);   \\ extract the eigenvalues from the diagonal

      POLY= M * matdiagonal(W[,2]);  \\ this matrix is a "kernel" which needs only the parameter
                    \\ for the height to give the associated power series

      print("matrices are created");
   return(1);}


f(x) = sum(k=0,n-1,x^k * B[1+k,2])  \\ just to show how this works for the
                                    \\ function f(x) itself



        \\ then the h'th fractional power of B can be approximated by M * D^h * W (= B^h)
         make_powerseries(h) = return( POLY * vectorv(n,r, D[r]^h) );

{ tet_polynomial(x,h) = local(pc,result);
       pc = POLY*vectorv(n, r , D[r]^h ) ;  \\ coefficients of the power series for height h
       result = sum(k=0,n-1, x^k * pc[1+k]);  \\ evaluate the power series at x
       return(result);}



\\ note that M and W do not really implement the Schröder-functions because
\\ the entries in D are not consecutive powers of one argument


\\ then you start:=================================================================

init(4,24,200) \\ for matrixsize 24x24 you'll need at least 200 digits precision

x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference  

\\ bigger matrixsize
init(4,32,600) \\ for matrixsize 32x32 you'll need at least 600 digits precision

x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference  


\\ bigger matrixsize
init(4,64,4000) \\ for matrixsize 64x64 you'll need at least 4000 digits precision
                 \\ this needs 2 hours for computation!
x0 = 0.1*I
x05= tet_polynomial( x0 , 0.5 )   \\ halfiterate from x0=0.1*I
x1 = tet_polynomial( x05, 0.5 )   \\ halfiterate from x05 should equal one whole iteration:

x1 - b^x0    \\ check the difference
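
\\ finally, a small illustration of the Schroeder-remark above: a true
\\ Schroeder decomposition would need eigenvalues L^0, L^1, L^2, ... for one
\\ single L; the entries of D (whose ordering from mateigen is not canonical)
\\ show no such pattern, and they change with the truncation size n
vector(n-1, r, D[r+1]/D[r])   \\ would be constant for consecutive powers of one L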


Gottfried Helms, Kassel

