A false interpolation paradigm (?); a reconsideration

Gottfried (Ultimate Fellow, Posts: 787, Threads: 121, Joined: Aug 2007)
09/16/2009, 11:14 AM (This post was last modified: 09/16/2009, 02:33 PM by Gottfried.)

Note: the following is a pretty long statement which I also want to post in sci.math, so please forgive that I did not put it into TeX format.

For the computation of tetration to fractional heights (iterates) I employ the diagonalization of operator matrices. This implements well-known manipulations of the coefficients of formal power series; in fact, if the base b for tetration is b = exp(exp(-1)), this can be done by the matrix logarithm. Applying the analogous polynomial interpolation to the logarithm, the interpolating polynomials of increasing order n tend, for n -> oo, to the series

f_n(x) = -1.60669515242 + 2*x - 4/9*x^2 + 8/147*x^3 - 16/4725*x^4 + 32/302715*x^5 + O(x^6)

which gives correct results for natural exponents but false results for fractional exponents.

The method of interpolation follows the paradigm of polynomial interpolation, which - even if generalized to infinite polynomial order - continues to give false results for fractional exponents. The matrix method for tetration employs the same interpolation method either directly (see my discussion of "exponential polynomial interpolation", an ugly term, but I did not find a better one) or in an obscured way (one can express an identity between diagonalization and this interpolation method). So - possibly - just as we need to move away from this interpolation paradigm to arrive at a meaningful series for the logarithm, we need an analogous move to arrive at a more meaningful interpolation for fractional tetration. What do you think?
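The limit series above can be reproduced numerically. This copy of the post does not spell out the interpolation nodes, but interpolating the points (2^k, k) for k = 0..n (i.e. a polynomial candidate for log base 2) reproduces the quoted coefficients; the constant term is then -(1/1 + 1/3 + 1/7 + ...) = -sum 1/(2^j - 1), which is minus the Erdős–Borwein constant 1.60669515... A sketch in Python; the node choice and the function name are my own reading, not taken from the post:

```python
from fractions import Fraction

def interp_coeffs(n):
    """Coefficients [a0, a1, ...] of the degree-n polynomial through the
    points (2^k, k), k = 0..n - a polynomial "logarithm" candidate."""
    xs = [Fraction(2) ** k for k in range(n + 1)]
    # Newton divided differences of the data
    dd = [Fraction(k) for k in range(n + 1)]
    newton = [dd[0]]
    for j in range(1, n + 1):
        dd = [(dd[i + 1] - dd[i]) / (xs[i + j] - xs[i])
              for i in range(len(dd) - 1)]
        newton.append(dd[0])
    # expand the Newton form into monomial coefficients
    poly = [Fraction(0)] * (n + 1)
    basis = [Fraction(1)]                  # running product (x-x_0)...(x-x_{j-1})
    for j, c in enumerate(newton):
        for i, b in enumerate(basis):
            poly[i] += c * b
        nxt = [Fraction(0)] * (len(basis) + 1)
        for i, b in enumerate(basis):      # basis *= (x - x_j)
            nxt[i + 1] += b
            nxt[i] -= xs[j] * b
        basis = nxt
    return poly

a = interp_coeffs(30)
# the leading coefficients stabilize toward the series quoted above
targets = [-1.60669515242, 2.0, -4.0 / 9, 8.0 / 147, -16.0 / 4725]
for got, want in zip(a, targets):
    print(float(got), want)
```

Exact rational arithmetic is essential here: the nodes grow like 2^k, so floating-point interpolation of this data is numerically hopeless. The resulting polynomial returns k exactly at x = 2^k but deviates from log2(x) in between, which is the sense in which the limit series is a "false" logarithm.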
Gottfried Helms

Note: The original idea of the "false logarithm" was triggered by the article "How Euler did it - a false logarithm series" by Ed Sandifer in MAA Online, where he introduces a similar analysis by L. Euler: http://www.maa.org/editorial/euler/How%2...series.pdf
I didn't check the actual relation between the interpolation method here and the Euler series, but I note that the first coefficient -1.60669... also occurs in that article. For the Euler paper see: Eneström index E190, "Consideratio quarumdam serierum quae singularibus proprietatibus sunt praeditae" ("Consideration of some series which are distinguished by special properties").

Gottfried Helms, Kassel

bo198214 (Administrator, Posts: 1,391, Threads: 90, Joined: Aug 2007)
09/16/2009, 11:32 AM

The interpolation you describe is what Ansus also uses, as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or the Lagrange formula or calculate it directly by solving the matrix equation as you do.) I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to the regular iteration at the lower (attracting) fixed point. In so far the interpolation is "correct".

There are two problems that occur in your description with the logarithm:
1. the logarithm has no power series at 0 (perhaps better try to interpolate log(x+1));
2. the interpolation method seems only to converge if the interpolated sequence tends to a limit, which the logarithm does not satisfy.

Gottfried (Ultimate Fellow, Posts: 787, Threads: 121, Joined: Aug 2007)
09/16/2009, 01:59 PM (This post was last modified: 09/16/2009, 02:10 PM by Gottfried.)

(09/16/2009, 11:32 AM)bo198214 Wrote: The interpolation you describe is what Ansus also uses, as Newton/Lagrange interpolation. (The interpolation polynomial is uniquely determined, whether you use the Newton or the Lagrange formula or calculate it directly by solving the matrix equation as you do.)
I showed that this interpolation (which only works for b <= e^(1/e)) is equivalent to the regular iteration at the lower (attracting) fixed point.

Yes, and the wider our "unification" project reaches, the more methods will be addressed by this consideration...

Quote: In so far the interpolation is "correct".

Yes, maybe my wording was not explicit enough. I didn't want to say it was not correct in that sense - the "false" logarithm series is also "so far correct" within its own set of conditions/requirements; it is completely legitimate in that setting, namely as an interpolation of some function in x. (We have had this argument often: interpolation is only unique modulo some 1-periodic function.) But in the wider view that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely to make log(a*b) = log(a) + log(b), and something more. Regular iteration does give us correctly interpolated results, and they satisfy many conditions - but as I understood our search for uniqueness, there is something more, for which the described method of interpolation is not sufficient. (I hoped Dmitrii's integral approach would answer to such wider conditions; perhaps you both get it working soon.)

Quote: There are two problems that occur in your description with the logarithm. 1. the logarithm has no power series at 0 (perhaps better try to interpolate log(x+1))

Ok, I also thought about this; I'll try it later.

Quote: 2. the interpolation method seems only to converge if the interpolated sequence tends to a limit, which the logarithm does not satisfy.

Hmmm. How far can this argument be carried over to that interpolation method in more generality? Understand me correctly: I'm not searching for a new interpolation method for the logarithm, but I want to make things explicit down to the last "quark".
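The parenthetical remark that interpolation is only unique modulo some 1-periodic function can be made concrete with a toy superfunction. For f(x) = 2x one superfunction is F(t) = 2^t; composing the argument with t + eps*sin(2*pi*t) yields a second solution of the same functional equation that agrees with F at every integer. A minimal sketch (function names are mine):

```python
import math

def F(t):
    """A superfunction of x -> 2*x with F(0) = 1: simply F(t) = 2**t,
    so that F(t+1) = 2*F(t)."""
    return 2.0 ** t

def G(t, eps=0.1):
    """F composed with the 1-periodic perturbation t + eps*sin(2*pi*t):
    it satisfies the same functional equation and the same integer values."""
    return F(t + eps * math.sin(2 * math.pi * t))

# both satisfy H(t+1) = 2*H(t) for arbitrary t ...
for H in (F, G):
    for t in (0.0, 0.3, 1.7):
        assert abs(H(t + 1) - 2 * H(t)) < 1e-12
# ... and agree at the integers, yet differ at fractional arguments:
assert abs(F(3) - G(3)) < 1e-12
assert abs(F(0.25) - G(0.25)) > 0.01
```

So interpolating the integer values alone cannot single out F over G; some extra condition (like the log(a*b) = log(a) + log(b) law in the logarithm example) is needed for uniqueness.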
Then I am hoping to find a key for how such a method could be adapted/corrected/transformed to lead to the "correct" result ("correct" here in the sense of the wider scene), for instance in the easier example (with the logarithm), and then to see what such a correction/transformation would do with our regular iteration. Hmm. I hope I did not create more obfuscation than clarity... [update] ... and that I'm not just riding some spleeny idea... [/update]

Gottfried
Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 787, Threads: 121, Joined: Aug 2007)
09/16/2009, 02:28 PM (This post was last modified: 09/16/2009, 02:30 PM by Gottfried.)

(09/16/2009, 11:32 AM)bo198214 Wrote: There are two problems that occur in your description with the logarithm. 1. the logarithm has no power series at 0 (perhaps better try to interpolate log(x+1))

I've got

Code:
f_32(x) = 1.26135017559*x - 0.300463998555*x^2 + 0.0419013176151*x^3 - 0.00288201496505*x^4 + 0.0000960702658021*x^5 + O(x^6)

I don't see anything suggestive in the structure of the coefficients (rational numbers, multiples of log(2), or the like).

Gottfried
Gottfried Helms, Kassel

bo198214 (Administrator, Posts: 1,391, Threads: 90, Joined: Aug 2007)
09/17/2009, 08:17 AM (This post was last modified: 09/17/2009, 08:22 AM by bo198214.)

(09/16/2009, 01:59 PM)Gottfried Wrote:
Quote: In so far the interpolation is "correct".
Yes, maybe my wording was not explicit enough. I didn't want to say it was not correct in that sense - the "false" logarithm series is also "so far correct" within its own set of conditions/requirements; it is completely legitimate in that setting, namely as an interpolation of some function in x. (We have had this argument often: interpolation is only unique modulo some 1-periodic function.) But in the wider view that set of conditions/requirements is not useful for the original intention, which rested on the conceptually wider idea of a *logarithm*, namely to make log(a*b) = log(a) + log(b), and something more.
With "correct" I meant here satisfaction of the superfunction equation F(x+1) = b^F(x). Indeed it is a surprise, as I wouldn't expect satisfaction of this equation from some interpolation. But of course it does not suffice for uniqueness, while your logarithmic equation does.

Quote:
Quote: 2. the interpolation method seems only to converge if the interpolated sequence tends to a limit, which the logarithm does not satisfy.
Hmmm. How far can this argument be carried over to that interpolation method in more generality?

My understanding is the following: you know there is a theorem that an analytic function is uniquely determined by its values on infinitely many arguments that have an accumulation point inside the domain of holomorphy. In our case the interpolated values F(n) tend to a limit, namely the lower fixed point, and I guess the regular super-exponential is then analytic at infinity. Under this condition the sequence of natural numbers as arguments has its limit point inside the domain of holomorphy, and hence F is uniquely determined by its values on the natural numbers.

The difficult case is where the super-exponential is not analytic at infinity. However, all methods seem to converge to the same thing in this case, and these methods are all non-interpolative. Here it seems that F stays bounded when approaching infinity within a certain angle/sector - kind of an analyticity not in a whole vicinity of infinity, but only when approaching through this sector.
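bo198214's equivalence claim (Newton-type interpolation = regular iteration at the lower fixed point, for b <= e^(1/e)) can at least be checked numerically for b = sqrt(2), whose lower fixed point is 2 with multiplier ln(2). The following sketch compares the truncated Newton forward-difference series for the superfunction F (with F(0) = 1, F(n) = b^^n) against a crude Koenigs/Schroeder construction of the regular superfunction; the function names, truncation depths, and tolerances are my own choices, and the agreement is only checked to loose numerical precision:

```python
import math

B = math.sqrt(2)        # base b = sqrt(2) <= e**(1/e)
LAM = math.log(2.0)     # multiplier g'(2) = ln(2) at the lower fixed point 2

def g(x):               # g(x) = b**x
    return B ** x

def sexp_newton(t, terms=40):
    """Newton forward-difference interpolation of F(n) = b^^n, F(0) = 1."""
    vals = [1.0]
    for _ in range(terms):
        vals.append(g(vals[-1]))
    # forward differences Delta^k F(0)
    diffs, row = [], vals[:]
    for _ in range(terms + 1):
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    # F(t) = sum_k binomial(t, k) * Delta^k F(0)
    total, binom = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += binom * d
        binom *= (t - k) / (k + 1)
    return total

def sexp_regular(t, depth=30):
    """Regular iteration at the attracting fixed point 2 (Koenigs map)."""
    # Schroeder value of the start point: psi(1) = lim (g^n(1) - 2)/LAM**n
    x = 1.0
    for _ in range(depth):
        x = g(x)
    psi1 = (x - 2.0) / LAM ** depth
    # invert: F(t) = psi^{-1}(LAM**t * psi1), via the linearization near 2
    # followed by depth steps of g^{-1}(x) = log_b(x) back out
    y = 2.0 + LAM ** depth * LAM ** t * psi1
    for _ in range(depth):
        y = math.log(y) / math.log(B)
    return y
```

At integer arguments the Newton series reproduces the tower values exactly (e.g. sexp_newton(1.0) gives sqrt(2)), and at half-integer arguments the two constructions agree to several digits, which is consistent with the claimed equivalence without of course proving it.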

