[Update] Comparison of 5 methods of interpolation to continuous tetration
#11
(10/15/2013, 07:14 PM)sheldonison Wrote: Gottfried, I think your solution is interesting. I see from your paper that
the solution you get is real valued. Presumably, there is some sort of Taylor
series representation for your solution. I would be interested in seeing a
Taylor series at sexp(0), where sexp(0)=1.

The Kneser solution is defined by two basic things, which your solution would
need to match to converge.
1) it is real valued (it sounds like your function is also real valued).
2) the sexp limiting behavior in the complex plane, as imag(z) increases, is
the same as the Schroeder function solution. At imag(z)=0.75i, it is visually
the same, and convergence gets better as imag(z) increases, as defined by a 1-
cyclic scaling function that goes to a constant as imag(z) increases.
- Sheldon

Hi Sheldon -
I still have no real answer to your questions. However, one more
observation might lead me in that direction.

Using the 64x64 Carleman matrix I got the impression that the profile of the eigenvalues tends to something known: possibly, with index k, they approximate some u^(k^2), with the index running from -infinity to +infinity; and this reminds me very much of your introduction of a theta series into the implementation of the Kneser solution.


The problem, namely that I cannot provide a true implementation of a Schröder function and of its inverse, and thus no true sexp and slog, is better characterized in the following way:

In the regular tetration we generate a triangular Carleman matrix, say C. Diagonalizing it is exact and, more importantly, gives the same result for arbitrary size of the matrix involved. Then we have
C = M * D * W
where D contains on its diagonal the consecutive powers of the multiplier u. M and W are automatically of Carleman type, so a proper Schröder function s(x) can be built from the coefficients of the second column of M.

The inverse of the Schröder function, say si(x), can be built from the second column of W and the consecutive powers of (s(x)*u) - this is just the evaluation of an ordinary power series.
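
To make the triangular case concrete, here is a minimal numerical sketch, in Python/numpy rather than the Pari/GP used for the computations in this thread, and with a toy map instead of a tetration base so that everything can be checked by hand: f(x) = x/(2-x) has the fixpoint 0 with multiplier u = 1/2, its Schröder function is s(x) = x/(1-x) and the inverse is si(y) = y/(1+y), so the second columns of M and W should come out as 0, 1, 1, 1, ... and 0, 1, -1, 1, -1, ... respectively.

Code:
import numpy as np

# Toy map with a closed-form Schroeder function, to check the construction:
#   f(x) = x/(2-x) = x/2 + x^2/4 + x^3/8 + ...   (fixpoint 0, multiplier u = 1/2)
#   s(x) = x/(1-x) solves s(f(x)) = u*s(x),  si(y) = y/(1+y) is its inverse
n = 12
f = np.zeros(n)
f[1:] = 0.5 ** np.arange(1, n)            # power series of f at the fixpoint 0

# Triangular Carleman matrix: C[i,j] = coeff of x^i in f(x)^j, so V(x)*C = V(f(x))
C = np.zeros((n, n))
p = np.zeros(n); p[0] = 1.0               # running truncated power f(x)^j
for j in range(n):
    C[:, j] = p
    p = np.convolve(p, f)[:n]

# Diagonalize: C = M * D * W with W = M^-1; here the eigenvalues are exactly u^k
# and M, W come out of Carleman type again.
lam, M = np.linalg.eig(C)
order = np.argsort(-np.abs(lam))          # reorder as u^0, u^1, u^2, ...
lam, M = lam[order], M[:, order]
M = M / np.diag(M)                        # scale columns so that M[k,k] = 1
W = np.linalg.inv(M)

print(np.round(lam.real, 6))              # 1, 1/2, 1/4, 1/8, ...
print(np.round(M[:, 1].real, 6))          # Schroeder fn s(x):  0, 1, 1, 1, ...
print(np.round(W[:, 1].real, 6))          # inverse si(y):      0, 1, -1, 1, -1, ...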

With the polynomial method we initially do the same: we diagonalize the matrix
C = M * D * W
But now neither M, nor W, nor D is of Carleman type, so we must evaluate each column of M, getting a set of coefficients (not consecutive powers of a single series); D has a set of eigenvalues which seem to be unrelated to each other, and the same is true for W.
So we have neither a Schröder function nor its logarithm, the slog(), and we must evaluate the full matrix product over all columns of M and D and the second column of W.
Perhaps some fine-tuning is possible, though.
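
The same mechanics can be sketched for the polynomial method (again in numpy rather than Pari/GP; the base, the truncation n=16 and the height h=1/2 are only for illustration and will not reproduce the 64x64 coefficients below, the sketch merely shows the full product M*D^h*W at work):

Code:
import numpy as np
from math import log, factorial

b, n, h = 4.0, 16, 0.5                    # base, truncation size, iteration height
lnb = log(b)

# Full (non-triangular) Carleman matrix of f(x) = b^x:
#   C[i,j] = coefficient of x^i in f(x)^j = b^(j*x) = (j*ln b)^i / i!
C = np.array([[(j * lnb) ** i / factorial(i) for j in range(n)] for i in range(n)])

# Numerical diagonalization C = M * D * W, W = M^-1; unlike the triangular case,
# the eigenvalues and eigenvectors now change with the truncation size n.
lam, M = np.linalg.eig(C)
W = np.linalg.inv(M)

# Fractional power C^h = M * D^h * W (principal branch for lam^h).
Ch = M @ np.diag(lam.astype(complex) ** h) @ W

# V(x) * C^h = V(f^h(x)) for the h'th iterate f^h; its second column gives the
# iterate itself, i.e. all columns of M and D and the second column of W are used.
def half_iterate(x):
    V = x ** np.arange(n)
    return (V @ Ch[:, 1]).real

print(half_iterate(0.0), half_iterate(1.0))   # crude half-iterates of 4^x at 0 and 1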



To the other question: here is a quick & dirty version of a power series in h which should give the
h'th iterate starting from z0=0:
f(h) = 0.938382809828*h - 0.167828379563*h^2
+ 0.287374412061*h^3 - 0.151317834929*h^4 + 0.152072891321*h^5 - 0.115906539199*h^6
+ 0.104252971656*h^7 - 0.0896484652321*h^8 + 0.0804335682514*h^9 -
0.0722083698851*h^10 + 0.0657781099848*h^11 - 0.0603034448582*h^12 +
0.0557107445178*h^13 - 0.0517589491520*h^14 + 0.0483386436954*h^15 -
0.0453431498049*h^16 + 0.0426994261734*h^17 - 0.0403475539535*h^18 +
0.0382412646892*h^19 - 0.0363431568004*h^20 + 0.0346230698175*h^21 -
0.0330562751755*h^22 + 0.0316223524107*h^23 - 0.0303042715741*h^24 +
0.0290877202845*h^25 - 0.0279605769637*h^26 + 0.0269125034199*h^27 -
0.0259346231153*h^28 + 0.0250192655171*h^29 - 0.0241597604217*h^30 +
0.0233502707006*h^31 - 0.0225856544388*h^32 + 0.0218613494467*h^33 -
0.0211732744878*h^34 + 0.0205177425741*h^35 - 0.0198913824642*h^36 +
0.0192910652121*h^37 - 0.0187138333754*h^38 + 0.0181568314451*h^39 -
0.0176172372736*h^40 + 0.0170921958071*h^41 - 0.0165787582106*h^42 +
0.0160738313658*h^43 - 0.0155741444362*h^44 + 0.0150762403807*h^45 -
0.0145765005325*h^46 + 0.0140712092755*h^47 - 0.0135566632029*h^48 +
0.0130293249244*h^49 - 0.0124860162007*h^50 + 0.0119241389267*h^51 -
0.0113419065535*h^52 + 0.0107385638363*h^53 - 0.0101145703178*h^54 +
0.00947172342934*h^55 - 0.00881320086624*h^56 + 0.00814350875574*h^57 -
0.00746833136793*h^58 + 0.00679428854523*h^59 - 0.00612861721557*h^60 +
0.00547880187039*h^61 - 0.00485218452897*h^62 + 0.00425558671715*h^63 -
0.00369497416472*h^64 + O(h^65)

Here is the same for the start at z0=1:

f(h) = 1.00000000000 + 1.30087612467*h + 0.613490605591*h^2 + 0.462613730384*h^3 + 0.258026678269*h^4 + 0.163196550177*h^5 + 0.0896232963846*h^6 + 0.0521088351004*h^7 + 0.0278728768121*h^8 + 0.0153988549222*h^9 + 0.00803787843644*h^10 + 0.00428567564173*h^11 + 0.00218991589639*h^12 + 0.00113674868744*h^13 + 0.000570258372500*h^14 + 0.000289775759114*h^15 + 0.000143052447222*h^16 + 0.0000714303563040*h^17 + 0.0000347673520279*h^18 + 0.0000171062518351*h^19 + 0.00000822180736115*h^20 + 0.00000399449536057*h^21 + 0.00000189817814974*h^22 + 0.000000912157817172*h^23 + 0.000000428985593627*h^24 + 0.000000204179146341*h^25 + 0.0000000951111473740*h^26 + 0.0000000448892717431*h^27 + 0.0000000207246449571*h^28 + 0.00000000970923382052*h^29 + 0.00000000444495348205*h^30 + 0.00000000206897297403*h^31 + 0.000000000939569121455*h^32 + 0.000000000434896014592*h^33 + 1.95950745547 E-10*h^34 + 9.02701746055 E-11*h^35 + 4.03579535562 E-11*h^36 + 1.85202423681 E-11*h^37 + 8.21532589062 E-12*h^38 + 3.75895249924 E-12*h^39 + 1.65398790921 E-12*h^40 + 7.55348988152 E-13*h^41 + 3.29536565950 E-13*h^42 + 1.50386648676 E-13*h^43 + 6.50048867954 E-14*h^44 + 2.96864069036 E-14*h^45 + 1.27005340533 E-14*h^46 + 5.81424087541 E-15*h^47 + 2.45836244673 E-15*h^48 + 1.13062762175 E-15*h^49 + 4.71495264346 E-16*h^50 + 2.18452094165 E-16*h^51 + 8.96002942713 E-17*h^52 + 4.19710010287 E-17*h^53 + 1.68674972326 E-17*h^54 + 8.02580312930 E-18*h^55 + 3.14409639911 E-18*h^56 + 1.52906227353 E-18*h^57 + 5.79803950799 E-19*h^58 + 2.90599482712 E-19*h^59 + 1.05635889369 E-19*h^60 + 5.51749837772 E-20*h^61 + 1.89731889651 E-20*h^62 + 1.04844959488 E-20*h^63 + 3.34780766727 E-21*h^64 + O(h^65)

No guarantee that this is correct; I just let Pari/GP expand the symbolic expression of the indicated computation with fixed x=<number> and h indeterminate.
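
A quick, purely arithmetical sanity check: if this series really gives the h'th iterate of x -> 4^x starting from z0=1 (base 4, as in the plots later in this thread), then at h=1 it must return 4^1 = 4. The coefficients up to h^12 already sum to about 3.9977, and the remaining positive terms close most of the rest of the gap:

Code:
# coefficients of the z0=1 series above, up to h^12
c = [1.00000000000, 1.30087612467, 0.613490605591, 0.462613730384,
     0.258026678269, 0.163196550177, 0.0896232963846, 0.0521088351004,
     0.0278728768121, 0.0153988549222, 0.00803787843644, 0.00428567564173,
     0.00218991589639]
print(sum(c))   # about 3.9977, creeping towards 4 as more terms are added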

Gottfried
Gottfried Helms, Kassel
#12
(10/16/2013, 12:54 PM)Gottfried Wrote: ....
Here the same for start at z0=1:

f(h) = 1.00000000000 + 1.30087612467*h + 0.613490605591*h^2 + ... + O(h^65)

No guarantee that this is correct; I just let Pari/GP expand the symbolic expression of the indicated computation with fixed x=<number> and h indeterminate.

Gottfried
The difference between your Taylor series and the Kneser solution's Taylor series is reasonably small: all the coefficients differ by less than 2.5*10^-6. To me, it does indeed look like the Kneser solution, both in the complex plane and on the real axis. Of course, the accuracy is limited to about 10^-6, which makes it difficult to say whether it really is the same function in the limit, but this is much more accurate than some of the other, non-equivalent solutions, like the base-change solution or Tommy's 2sinh solution. I don't have the time right now to dig into your algorithm to understand why it might be the same as the Kneser approach.
- Sheldon
Code:
difference at sexp(0), where sexp(0)=1
{difference=
        0
+x^ 1*  0.00000182148778692911855
+x^ 2*  0.00000236395128074178906
+x^ 3*  0.0000000364681675375429581
+x^ 4* -0.000000488269140029040618
+x^ 5* -0.000000715322531855772069
+x^ 6* -0.000000894405252808757919
+x^ 7* -0.000000604527450774569225
+x^ 8* -0.000000566747124818283086
+x^ 9* -0.000000325020053083984524
+x^10* -0.000000262436257811905389
+x^11* -0.000000138691634039540176
+x^12* -0.000000101836267062431268
}
{kneser=
        1.00000000000000000
+x^ 1*  1.30087430318221307
+x^ 2*  0.613488241639719258
+x^ 3*  0.462613693915832462
+x^ 4*  0.258027166538140029
+x^ 5*  0.163197265499531856
+x^ 6*  0.0896241907898528088
+x^ 7*  0.0521094396278507746
+x^ 8*  0.0278734435592248183
+x^ 9*  0.0153991799422530840
+x^10*  0.00803814087269781191
+x^11*  0.00428581433336403954
+x^12*  0.00219001773265706243
}
{gottfried=
        1.00000000000000000
+x^ 1*  1.30087612467000000
+x^ 2*  0.613490605591000000
+x^ 3*  0.462613730384000000
+x^ 4*  0.258026678269000000
+x^ 5*  0.163196550177000000
+x^ 6*  0.0896232963846000000
+x^ 7*  0.0521088351004000000
+x^ 8*  0.0278728768121000000
+x^ 9*  0.0153988549222000000
+x^10*  0.00803787843644000000
+x^11*  0.00428567564173000000
+x^12*  0.00218991589639000000
}

I also looked at your Taylor series for the start at z0=0, which is only accurate to about 10^-4, so it isn't nearly as accurate. To some extent this might be because the inherent radius of convergence around sexp(-1)=0 is 1, so one must be more careful to get an accurate Taylor series. At sexp(0)=1, the radius of convergence is 2, which is sometimes easier to work with.
- Sheldon
Code:
difference at sexp(-1), where sexp(-1)=0
{differncem1=
        0
+x^ 1*  0.000000356817567666656170
+x^ 2* -0.00000779165247553834254
+x^ 3*  0.0000223778203088194483
+x^ 4* -0.0000392692063114664084
+x^ 5*  0.0000588782841305703862
+x^ 6* -0.0000798939836470416862
+x^ 7*  0.000101728675963082681
+x^ 8* -0.000123762693382960570
+x^ 9*  0.000145474277203847197
+x^10* -0.000166438511665979836
+x^11*  0.000186279179283386476
+x^12* -0.000204687037326049608
}
#13
(10/16/2013, 04:12 PM)sheldonison Wrote: The difference between your Taylor series and the Kneser solution's Taylor series is reasonably small: all the coefficients differ by less than 2.5*10^-6. To me, it does indeed look like the Kneser solution, both in the complex plane and on the real axis. (...)
Well, this is becoming somewhat ironic... That "polynomial interpolation" was my very first and naive approach to tetration (see my first messages here on the board). I then felt its model was too unspecific and went on a big exploration until I arrived at the regular tetration with the Carleman matrix (which I called the "matrix operator", not knowing these were already well studied), including the recentering of the series towards the fixpoint and then the eigendecomposition - just to come back to the beginning and meet ol' Hellmuth Kneser there...

Sometimes things go strange...

Gottfried
Gottfried Helms, Kassel
#14
I thought the Kneser method was a Riemann mapping from a piece of the brown curve in Gottfried's first 2 pics onto the positive real axis.

But those pics show self-crossings!

So what did I miss about the Kneser method?

Of course those self-crossings can be undone by another mapping, but then it seems arbitrary, and I wonder how to keep the property of being analytic.

Hmm, it seems possible to get both by polygon interpolation of the points, then mapping the polygon by a Riemann mapping to the real line, and then sorting out the self-crossings by finding the crossings.

But that seems kind of sloppy.
#15
(10/22/2013, 12:17 PM)tommy1729 Wrote: I thought the Kneser method was a Riemann mapping from a piece of the brown curve in Gottfried's first 2 pics onto the positive real axis.

But those pics show self-crossings!

So what did I miss about the Kneser method?
Hmm, unfortunately I never understood in which way some mapping - be it the Riemann mapping or something else - is introduced into the mathematical model, or even just into the computation of values or of the coefficients of the power series for the regular tetration. (I only found lots of web pages which focus on *the proof* of the mapping theorem in one way or another.) So sorry, I cannot be of help here.

Gottfried
Gottfried Helms, Kassel
#16
Here is a picture which shows the mapping of a small positive part of the imaginary axis in height steps of h=1/20, computed by the "polynomial method" with the diagonalization of the 64x64 Carleman matrix.


The initial line segment consists of 21 coordinates on the imaginary axis between 0 and 0.5*I. To avoid logarithms of zero, the segment is translated a bit by adding 1e-4*I to the coordinates.

Then 20 iterations of height 1/20 each are computed for that whole initial line. This gives the red skewed rectangle, roughly in the area (0+0*I, 0+0.5*I, 1, 1+0.5*I).
Then that whole grid of coordinates is mapped by natural iterations of the power tower with base 4.

The whole segment becomes more and more distorted and shrinks as it is iterated towards the complex fixpoint.

What especially interests me are the "eyes" / the white space in the inner spiral - and whether there would be regions of overlap in there if my initial line segment were longer, say up to 0.8*I. That direction would be related to complex iteration heights - but I do not yet have reliable computations of those heights.
[attachment=1016]
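
For readers without the attachment, here is a very rough reconstruction of that kind of plot in Python/matplotlib (not the original Pari/GP script). One assumption is made explicit in the code: "iterated towards the complex fixpoint" is taken to mean iteration in the negative direction, i.e. repeated principal logarithms to base 4, since the primary fixpoint of 4^z is repelling and only the backward orbit spirals into it. The fractional-height grid between heights 0 and 1 is omitted.

Code:
import numpy as np
import matplotlib.pyplot as plt

# ASSUMPTION: the "natural iterations towards the fixpoint" are taken backwards,
# as repeated principal logarithms to base 4 (the forward map z -> 4^z has a
# repelling primary fixpoint, so only the backward orbit spirals into it).
y = np.linspace(0.0, 0.5, 21) + 1e-4      # shift to avoid a logarithm of zero
z = 1j * y                                # 21 points on the imaginary axis
plt.plot(z.real, z.imag, '.-')
for _ in range(25):
    z = np.log(z) / np.log(4.0)           # one backward iteration of z -> 4^z
    plt.plot(z.real, z.imag, '.-')
plt.title('segment spiraling towards the complex fixpoint of 4^z')
plt.show()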


Gottfried Helms, Kassel
#17
(10/27/2013, 11:40 PM)Gottfried Wrote: Here is a picture which shows the mapping of a small positive part of the imaginary axis .... of the power tower with base 4.

The whole segment becomes more and more distorted and shrinks as it is iterated towards the complex fixpoint.

What especially interests me are the "eyes" / the white space in the inner spiral - and whether there would be regions of overlap in there if my initial line segment were longer, say up to 0.8*I. That direction would be related to complex iteration heights - but I do not yet have reliable computations of those heights.
http://math.eretrandre.org/tetrationforu...hp?tid=358 is a link to Jay's post on the Chi-Star, which is also the distorted part of the limiting behavior of your graph. To get the Chi-Star, one has to take the regular Schroeder function, developed from the complex fixed point, of the real axis of the sexp(z) solution. Then the pattern repeats, scaled. And then each of the "eyes" would extend all the way to infinity in a beautiful, complicated, recursive pattern. Jay's post also shows the relationship between the Chi-Star and the region which gets Riemann mapped in Kneser's construction, which Tommy asked about.
- Sheldon
#18
OK, some guts are needed to admit that I do not fully understand, and to say that I find some things not perfectly well explained.

In particular because it sounds stupid and ungrateful, which I am not.

But it needs to be done.

For Kneser's solution a lot of attention goes to the Riemann mapping, but the "a priori" part is not clear to me.

Maybe I'm getting old and asking questions that I have asked before or understood before, so please forgive me if so.

In my experience it is best to ask very specific questions, so I will point to the post I find most confusing.

But first a probably silly question, which I will probably know the answer to right after asking :p

About the Schroeder equation

F( f(x) ) = q F(x).

Let c be the fixpoint of f(x).

Now it appears to me that F(c) must be either 0 or infinite (plugging x = c into the equation gives F(c) = q F(c), which for q not equal to 1 forces F(c) = 0 or F(c) = oo).

What useful stuff can be said if F(c) = oo? Or is that completely useless?

Second, why do we prefer F'(c) = 1?

I assume it is ONLY for the easy solvability of the Taylor series or of the limit formula for the "principal" Schroeder function.

Having probably answered those questions somewhat myself, let me continue with the MAIN question(s) and the principal Schroeder function.

Since we have F(c) = 0, we have a Taylor series expanded at c.

However, there is a limited radius of convergence.

And the real line is not included within that radius, so the trouble begins.

I was only able to find ONE POST addressing how to continue before the Riemann mapping (all the others seemed to be copies of, or links to, that "mother post").

It is still not clear to me what exactly is mapped, what happens to the singularities, and whether all of that does not lose the property of analyticity.

Also, I'm not sure what kind of "solution" we are supposed to end up with. A sexp that has no singularities for Re > 0?

What properties are claimed for the Kneser solution?
Is it only the property that sexp is analytic near the real line for Re > 0?

The post that failed to enlighten me was this one:

(with all due respect to the poster, of course)

http://math.eretrandre.org/tetrationforu...hp?tid=213

POST NR 3

It is said: we analytically continue to ...

HOW?

What happens to the singularities and the limited radius?

If you use a Taylor series you CANNOT have a single series that converges on the entire upper half plane; a Taylor series ALWAYS converges on a disk!?

It is not said how the continuation is done, how we know it is possible, what series expansion we end up with, etc. etc.

So what to make of that?

I note that mapping a singularity or pole with exp or ln remains a problem?

Then there follows a claim of simple connectedness, which I find a bit handwavy. And what if the region contains singularities?

How is this different from Gottfried's brown curve?

I also note that the Riemann mapping may not change the functional equation.

Those 3 pics do not explain all that and perhaps a longer post should have been made.

With all respect for the efforts, though.

I hope I have sketched what I believe confuses most people about Kneser's method.

regards

tommy1729

#19
The only thing I can come up with is this:

1) We simply take the principal Schroeder function near the fixpoint c.

2) In that domain we use analytic continuation (monodromy), so that by recentering small disks we can reach any point in the upper half plane.

3) In that upper half plane we find (after many continuations) the values equalling the reals between 0 and 1, and those are connected
(for reasons yet to be explained!).

4) The functional equation still holds after the analytic continuation (for reasons yet to be explained).

5) We do a Riemann mapping from those values (the reals between 0 and 1) onto the reals from 0 to 1 (we connect 1 to 1).

6) In 5) the functional equation still holds (again for reasons yet to be explained, since we are outside the radius of convergence around c and did a mapping!).

7) By Schwarz reflection and analytic continuation we get our desired real-to-real Schroeder function.

8) By using the log we get the desired Abel function.

9) Take the functional inverse of the Abel function (series reversion, maybe).

10) We have a sexp analytic near the positive real line.

I probably got it wrong - or didn't I?

And if I got it right, there is still a lot to be explained!!

regards

tommy1729
#20
Another detail is that we need to show the Schroeder function HAS an analytic continuation in the first place - in other words, no essential singularities and no natural boundaries!

But maybe that is the easy part.

regards

tommy1729

