A false interpolation paradigm (?); a reconsideration - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: A false interpolation paradigm (?); a reconsideration (/showthread.php?tid=354)


A false interpolation paradigm (?); a reconsideration - Gottfried - 09/16/2009

Note: the following is a pretty long statement which I also want to post in sci.math, so please forgive that I did not put it into TeX format.

For the computation of tetration to fractional heights (iterates) I employ the diagonalization of operator matrices. This implements well-known manipulations of the coefficients of formal power series; in fact, if the base b for tetration is b = exp(exp(-1)), this can be done by the matrix logarithm, and if 1 < b < exp(exp(-1)) [...] for n -> oo

  f_n(x) = -1.60669515242 + 2*x - 4/9*x^2 + 8/147*x^3 - 16/4725*x^4 + 32/302715*x^5 + O(x^6)

which gives correct results for natural exponents but false results for fractional exponents. The interpolation follows the paradigm of polynomial interpolation, which, even if generalized to polynomials of infinite order, still gives false results for fractional exponents. The matrix method for tetration employs the same interpolation method either directly (see my discussion of "exponential polynomial interpolation", an ugly term, but I did not find a better one) or in an obscured way (one can state an identity between diagonalization and this interpolation method). So, possibly, just as we need a move away from this interpolation paradigm to arrive at a meaningful series for the logarithm, we need a move to arrive at a more meaningful interpolation for fractional tetration. What do you think?
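The interpolation paradigm in question can be sketched numerically. The following is a hedged illustration in the Newton forward-difference form (which bo198214 notes below is equivalent to the Lagrange and matrix formulations); the function names are invented for illustration, and the base b = sqrt(2) <= e^(1/e) is chosen only so that the natural heights converge:

```python
import math

def tet(b, n):
    """Integer tetration height: n-fold iteration of t -> b**t starting at 1."""
    t = 1.0
    for _ in range(n):
        t = b ** t
    return t

def newton_interp(values, x):
    """Newton forward-difference interpolant through (k, values[k]), k = 0..N:
    p(x) = sum_k binom(x, k) * Delta^k f(0)."""
    diffs = list(values)
    deltas = [diffs[0]]            # Delta^k f(0), starting with k = 0
    while len(diffs) > 1:
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        deltas.append(diffs[0])
    total, binom = 0.0, 1.0        # binom tracks binom(x, k)
    for k, d in enumerate(deltas):
        total += binom * d
        binom *= (x - k) / (k + 1)  # binom(x, k+1) from binom(x, k)
    return total

b = math.sqrt(2.0)                 # b <= e^(1/e), so tet(b, n) has a limit
vals = [tet(b, n) for n in range(12)]
# at natural heights the interpolant reproduces the data exactly
assert all(abs(newton_interp(vals, n) - vals[n]) < 1e-6 for n in range(12))
# at a fractional height it produces *a* value; whether it is a meaningful
# half-iterate is exactly the question raised above
half = newton_interp(vals, 0.5)
```

For b > e^(1/e) the heights tet(b, n) diverge and the partial sums fail to settle, which matches the convergence caveat discussed later in the thread.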
Gottfried Helms

Note: The original idea of the "false logarithm" was triggered by the article "How Euler Did It - a false logarithm series" by Ed Sandifer in MAA Online, where he introduces a similar analysis discussed by L. Euler:
http://www.maa.org/editorial/euler/How%20Euler%20Did%20It%2050%20false%20log%20series.pdf
I didn't check the actual relation between the interpolation method here and the Euler series, but I note that the first coefficient -1.60669... also occurs in that article. For the Euler paper see Eneström index E190, "Consideratio quarumdam serierum quae singularibus proprietatibus sunt praeditae" ("Consideration of some series which are distinguished by special properties").


RE: A false interpolation paradigm (?); a reconsideration - bo198214 - 09/16/2009

The interpolation you describe is what Ansus also uses as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or the Lagrange formula, or compute it directly by solving the matrix equation as you do.) I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to the regular iteration at the lower (attracting) fixed point. In so far the interpolation is "correct". There are two problems that occur in your description with the logarithm:
1. The logarithm has no power series at 0 (perhaps better try to interpolate log(x+1)).
2. The interpolation method seems only to converge if the sequence of interpolated values tends to a limit, which the logarithm does not satisfy.


RE: A false interpolation paradigm (?); a reconsideration - Gottfried - 09/16/2009

(09/16/2009, 11:32 AM) bo198214 Wrote: The interpolation you describe is what Ansus also uses as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or the Lagrange formula, or compute it directly by solving the matrix equation as you do.)
I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to the regular iteration at the lower (attracting) fixed point.

Yes, and the wider our "unification" project reaches, the more methods will be addressed by this consideration...

Quote: In so far the interpolation is "correct".

Yes, maybe my wording was not explanatory enough. I did not mean to say it is incorrect in that respect: the "false" logarithm series is likewise "correct so far" within its own set of conditions/requirements; it is completely legitimate in that setting, simply as an interpolation of some function in x. (We have had this argument often: interpolation is only unique modulo some 1-periodic function.) But in the wider view, that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely to obtain log(a*b) = log(a) + log(b), and something more. Since we get some correct interpolated results by regular iteration, they satisfy many conditions correctly; but as I also understood our search for uniqueness, there is something more, for which the described interpolation method is not sufficient. (I hoped Dmitrii's integral approach would answer such wider conditions; perhaps you both get it working soon.)

Quote: There are two problems that occur in your description with the logarithm. 1. the logarithm has no powerseries at 0 (perhaps better try to interpolate log(x+1))

Ok, I also thought about this; I'll try it later.

Quote: 2. the interpolation method seems only to converge if the interpolated values tend to a limit. Which the logarithm does not satisfy.

Hmmm. How far can this argument be carried over to that interpolation method in more generality? Understand me correctly: I am not searching for a new interpolation method for the logarithm, but I want to make things explicit down to the last "quark".
Then I am hoping to find a key to how such a method could be adapted/corrected/transformed to lead to the "correct" result ("correct" here in the sense of the wider scene), for instance in the easier example (with the logarithm), and then to see what such a correction/transformation would do with our regular iteration. Hmm. I hope I did not create more obfuscation than clarity... [update] ... and that I am not merely following some spleen idea... [/update]

Gottfried


RE: A false interpolation paradigm (?); a reconsideration - Gottfried - 09/16/2009

(09/16/2009, 11:32 AM) bo198214 Wrote: There are two problems that occur in your description with the logarithm. 1. the logarithm has no powerseries at 0 (perhaps better try to interpolate log(x+1))

I've got

Code:
f_32(x) = 1.26135017559*x - 0.300463998555*x^2 + 0.0419013176151*x^3 - 0.00288201496505*x^4 + 0.0000960702658021*x^5 + O(x^6)

I don't see anything in the structure of the coefficients (rational numbers, multiples of log(2) and the like).

Gottfried


RE: A false interpolation paradigm (?); a reconsideration - bo198214 - 09/17/2009

(09/16/2009, 01:59 PM) Gottfried Wrote: Quote: In so far the interpolation is "correct". Yes, maybe my wording was not explanatory enough. I did not mean to say it is incorrect in that respect: the "false" logarithm series is likewise "correct so far" within its own set of conditions/requirements; it is completely legitimate in that setting, simply as an interpolation of some function in x. (We have had this argument often: interpolation is only unique modulo some 1-periodic function.) But in the wider view, that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely to obtain log(a*b) = log(a) + log(b), and something more.

With "correct" I meant here the satisfaction of the superfunction equation f(x+1) = b^f(x). Indeed it is a surprise, as I would not expect satisfaction of this equation from some interpolation.
But of course it does not suffice for uniqueness, while your logarithmic equation does.

Quote: Quote: 2. the interpolation method seems only to converge if the interpolated values tend to a limit. Which the logarithm does not satisfy.
Hmmm. How far can this argument be related to that interpolation-method in more generality?

My understanding is the following: you know there is a theorem that an analytic function is uniquely determined by its values on infinitely many arguments that have an accumulation point inside the domain of holomorphy. In our case of the interpolated values having a limit, i.e. lim_{n->oo} f(n) exists (for b <= e^(1/e)), I guess the regular super-exponential is analytic at oo. So under this condition the sequence of natural numbers as arguments has its limit point inside the domain of holomorphy, and hence the function is uniquely determined by its values on the natural numbers. The difficult case is b > e^(1/e), where the super-exponential is not analytic at infinity. However, all methods seem to converge to the same thing in this case, and these methods are all non-interpolative. Here it seems that the function is bounded when approaching infinity within a certain angle/sector: a kind of analyticity not in a whole vicinity of infinity, but only when approaching it through this sector.
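The suggestion earlier in the thread to interpolate log(x+1) at the natural numbers can be probed with the same kind of Newton forward-difference sketch. This is a hypothetical mini-experiment, not the f_32 computation quoted above, and the function names are invented for illustration:

```python
import math

def newton_interp(values, x):
    """Newton forward-difference interpolant through (k, values[k]), k = 0..N:
    p(x) = sum_k binom(x, k) * Delta^k f(0)."""
    diffs = list(values)
    deltas = [diffs[0]]            # Delta^k f(0), starting with k = 0
    while len(diffs) > 1:
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        deltas.append(diffs[0])
    total, binom = 0.0, 1.0        # binom tracks binom(x, k)
    for k, d in enumerate(deltas):
        total += binom * d
        binom *= (x - k) / (k + 1)  # binom(x, k+1) from binom(x, k)
    return total

# interpolate f(n) = log(n+1) at the natural numbers n = 0..12
vals = [math.log(n + 1) for n in range(13)]
# the interpolant reproduces the nodes exactly (up to rounding)
assert all(abs(newton_interp(vals, n) - vals[n]) < 1e-6 for n in range(13))
# a candidate value at the fractional argument x = 1/2
half = newton_interp(vals, 0.5)
```

Here the interpolated values log(n+1) have no finite limit, so, per the convergence caveat discussed above, agreement of such partial sums with log(1+x) away from the nodes is not guaranteed in general.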