Fractional iteration of x^2+1 at infinity and fractional iteration of exp

bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007
06/08/2011, 09:55 AM (This post was last modified: 06/08/2011, 06:57 PM by bo198214.)

Hey guys,

a polynomial pendant of moving from base e over eta to sqrt(2) may be moving from x^2+1 to x^2 to x^2-1, i.e. from no fixpoint to one fixpoint to two fixpoints on the real axis.

Now for polynomials there is a technique to iterate them that is not applicable to exp(x), which I will describe here applied to the example f(x) = x^2 + 1. It is the iteration at infinity. To see what's happening there, one moves the fixpoint at infinity to 0 by conjugating with 1/x:

    g(x) = 1/f(1/x) = x^2/(1 + x^2)

g can be developed into a power series at 0, knowing that 1/(1+y) = 1 - y + y^2 - y^3 + ...:

    g(x) = x^2 - x^4 + x^6 - x^8 + ...

g has a so-called super-attracting fixpoint at 0, which means that g'(0) = 0. In this case one can solve the Böttcher equation

    β(g(x)) = β(x)^2

(where 2 is the first power with a non-zero coefficient in the power series of g). Iterating can then be done similarly to the Schröder iteration:

    g^t(x) = β^{-1}(β(x)^(2^t)).

Again similar to the Schröder case, we have an alternative expression:

    g^t(x) = lim_{n->oo} g^{-n}((g^n(x))^(2^t)).

If we even roll back our conjugation with 1/x we get:

    f^t(x) = lim_{n->oo} f^{-n}((f^n(x))^(2^t)).

Numerically this also looks very convincing: the following is the half-iterate h of x^2+1, accompanied by the identity function and x^2+1 itself, computed with n=9. [plot omitted] And the verification h(h(x)) - (x^2+1). [plot omitted]

This may lead into a new way of computing fractional iterates of exp, because we just approximate exp(x) with polynomials and approximate the half-iterate of exp with the half-iterate of these polynomials.

Gottfried Ultimate Fellow Posts: 767 Threads: 119 Joined: Aug 2007
06/08/2011, 01:09 PM (This post was last modified: 06/08/2011, 01:10 PM by Gottfried.)

(06/08/2011, 09:55 AM)bo198214 Wrote: This may lead into a new way of computing fractional iterates of exp, because we just approximate exp(x) with polynomials and approximate the half-iterate of exp with the half-iterate of these polynomials.
As I read somewhere, the best mathematical solutions are the simple, elegant ones. It would be great if this proves to be such an approach... This'll give me stuff to chew on for the weekend :-) Gottfried

Gottfried Helms, Kassel

tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009
06/08/2011, 01:18 PM

(06/08/2011, 09:55 AM)bo198214 Wrote: It is the iteration at infinity. To see what's happening there one moves the fixpoint at infinity to 0, by conjugating with 1/x.

in thread tid 403 I, Bo, Mike and Ben already discussed moving fixpoints, in particular - as the title says - f(f(x)) = exp(x) + x. this thread seems similar. however, i consider this slightly different. exp(x) + x has a "true" fixpoint at oo. with "true" i mean that it touches the id(x) line. in this case of x^2 + 1, i would not call oo a fixpoint, but rather say that Bo uses so-called "linearization":

    g^[-1](f^[n](g(x))) = [g^[-1](f(g(x)))]^[n]

Quote: [the limit formula] - you take lim n -> oo ... and then i see no n. so this needs a correction.

Quote: Again similar to the Schröder case, we have an alternative expression ... If we even roll back our conjugation with 1/x we get ...

i dont get how you arrive at this ... g and f on both sides?

Quote: This may lead into a new way of computing fractional iterates of exp, because we just approximate exp(x) with polynomials and approximate the half-iterate of exp with the half-iterate of these polynomials.

hmm ... i wonder if (x^2 + 1)^[h] is analytic at x = 0 for small h. the reason is that lim h->0 (x^2 + 1)^[h] = abs(x), and abs(x) is not analytic at 0. im not sure about it being analytic elsewhere either, though maybe Lévy, Böttcher, Schröder imply so (i still do not know enough about them). also, approximating exp with polynomials might be troublesome; does the n'th approximation of the half-iterate converge when n -> oo? do we really get an analytic function at n = oo?
as a bad example that does not make sense in the case of exp(x) but gives an idea of what i mean (i dont have a good one for the moment): for instance, if an n'th polynomial satisfies f(x) = f(-x) we are in trouble. polynomials also have zeros, and those zeros will need to drift towards oo fast if the sequence of polynomials wants to approximate exp(x) well. so i suggest working with the (n^2)'th polynomial approximations, rather than comparing the n'th with the (n+1)'th. despite much comment, dont get me wrong, i like this idea. regards tommy1729

JmsNxn Long Time Fellow Posts: 291 Threads: 67 Joined: Dec 2010
06/08/2011, 07:26 PM

To be honest, this sounds like a very tangible approach to solving for the half-iterate of exp, one that I can actually understand straight-up (even though I have a poor understanding of how Schröder's method works, I still understand that it works). I wonder if this method agrees with any other methods of extending tetration? Have you been able to compute anything yet involving exp, or is it all still theoretical? I'm very interested in seeing how this evolves.

bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007
06/08/2011, 07:59 PM

(06/08/2011, 01:18 PM)tommy1729 Wrote: in thread tid 403 I ,

I consider it standard on the forum to *link* to posts/threads. Isn't your reader worth this little extra effort?

Quote: Bo, Mike and Ben already discussed moving fixpoints, in particular - as the title says - f(f(x)) = exp(x) + x. this thread seems similar.

So? You mean I violated copyright law?! I think there is enough original, never here discussed stuff in my post.

Quote: however, i consider this slightly different. exp(x) + x has a "true" fixpoint at oo. with "true" i mean that it touches the id(x) line. in this case of x^2 + 1, i would not call oo a fixpoint, but rather say that Bo uses so-called "linearization".

In standard literature points z which satisfy f(z) = z are called fixpoints, so I do here: f(oo) = oo.
Via the fixpoint exchange 0 <-> oo you can regard this fixpoint also as the real point 0.

Quote: g^[-1](f^[n](g(x))) = [g^[-1](f(g(x)))]^[n]

Dunno what that has to do with "fixpoint", nor what it has to do with "linearity".

Quote: you take lim n -> oo ... and then i see no n. so this needs a correction.

Ya, I corrected that.

Quote: Again similar to the Schröder case, we have an alternative expression ... If we even roll back our conjugation with 1/x we get ... - i dont get how you arrive at this ... g and f on both sides?

Sure, we are about to determine the fractional iterates f^t, but we know the function f and its integer iterates. Note that the Kuczma convention is: if you apply a power to a function, they mean the iterate of the function, e.g. f^n; if however they apply a power to a number, then they mean the multiplicative power, e.g. f(x)^n.

Quote: hmm ... i wonder if (x^2 + 1)^[h] is analytic at x = 0 for small h.

Yes, exactly, that's a topic, and it may also shine some light on the iterates of exp you defined with the help of the iterates of sinh. It also uses the fixpoint at infinity. If not analytic at 0, then I guess it's at least meromorphic everywhere else on the complex sphere, because the Böttcher iteration yields analytic functions in the vicinity of the fixpoint.

Quote: the reason is that lim h->0 (x^2 + 1)^[h] = abs(x) and abs(x) is not analytic at 0.

Why is that so? Oh, you mean you verified it numerically - hm, interesting observation.

Quote: also approximating exp with polynomials might be troublesome; does the n'th approximation of the half-iterate converge when n -> oo? do we really get an analytic function at n = oo?

That needs to be investigated. It's also possible to fail terribly!
It was just an idea at the end of explaining a new method to iterate polynomials.

Quote: polynomials also have zeros and those zeros will need to drift towards oo fast if the sequence of polynomials wants to approximate exp(x) well.

You mean the fixpoint of the odd-degree polynomial will need to drift quickly towards -oo? Ya, perhaps; the Böttcher iterate is quite probably not analytic at the fixpoint. But in how far that invalidates the approximation, I have no idea; no numerical experiments made yet.

mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009
06/08/2011, 09:31 PM (This post was last modified: 06/08/2011, 10:06 PM by mike3.)

(06/08/2011, 09:55 AM)bo198214 Wrote: Again similar to the Schröder case, we have an alternative expression ... If we even roll back our conjugation with 1/x we get ... Numerically this also looks very convincing.

Something seems really wrong. These limit formulas are giving me what appears to be abs(x), regardless of the value of t I use.

bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007
06/08/2011, 09:48 PM

(06/08/2011, 09:31 PM)mike3 Wrote: Something seems really wrong. These limit formulas are giving me what appears to be abs(x), regardless of the value of t I use.

Regardless of the value of t you use??? It looks the same for all t? Could you reproduce my picture for t=1/2? I can reproduce Tommy's assertion that for t -> 0 it tends to abs. Well, for t -> 0 the exponent 2^t -> 1; we have then lim_{n->oo} f^{-n}(f^n(x)). The effect might be quite similar to sqrt(x^2), except you use the right branch of the inverse.

mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009
06/08/2011, 10:08 PM (This post was last modified: 06/08/2011, 10:09 PM by mike3.)

(06/08/2011, 09:48 PM)bo198214 Wrote: Regardless of the value of t you use??? It looks the same for all t?
Could you reproduce my picture for t=1/2?

Nope. Setting t = 1/2 and taking your formula for f^t yields what appears to be abs(x) as n -> oo. The g formula doesn't work either.

bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007
06/08/2011, 10:12 PM

(06/08/2011, 10:08 PM)mike3 Wrote: then taking your formula for f^t

It's a power, not a multiplication: f^{-n}(f^n(x)^{\sqrt{2}})

mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009
06/08/2011, 10:58 PM (This post was last modified: 06/08/2011, 11:03 PM by mike3.)

(06/08/2011, 10:12 PM)bo198214 Wrote: It's a power, not a multiplication: f^{-n}(f^n(x)^{\sqrt{2}})

Oh, well, duh, thanks. Now it works better. Guess I just didn't notice the superscripting. Anyway, I think this is not analytic at 0. The iterates of g so formed have a branch point at 0, and also a complementary one at infinity (note that if there is a BP at 0, there must be one at inf, since "circling about inf" is equivalent to circling about 0). The conjugation with 1/x simply exchanges these two branch points. This would explain how it can approach abs(x) as t -> 0.
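[Editor's note] The finite-n version of the limit formula discussed in this thread, f^t(x) ≈ f^{-n}((f^n(x))^(2^t)) for f(x) = x^2 + 1, is easy to try numerically. The following is a minimal sketch (the function names and the choice n=8 are illustrative, not from the thread); it uses the right branch sqrt(y-1) of the inverse, which is valid here because the forward orbit stays >= 1:

```python
import math

def f(x):
    return x * x + 1.0

def f_inv(y):
    # right (positive) branch of the inverse of x^2 + 1;
    # valid on the forward orbit, which stays >= 1
    return math.sqrt(y - 1.0)

def f_iter(x, t, n=8):
    """Approximate the t-th iterate of f(x) = x^2 + 1 via the
    Boettcher-style limit formula  f^t(x) ~ f^{-n}((f^n(x))^(2^t))."""
    y = x
    for _ in range(n):          # push x towards the super-attracting fixpoint at oo
        y = f(y)
    y = y ** (2.0 ** t)         # the t-th iterate of the pure power map z -> z^2
    for _ in range(n):          # pull back with the inverse branch
        y = f_inv(y)
    return y

def h(x):
    # half-iterate (t = 1/2, i.e. exponent 2^(1/2) = sqrt(2))
    return f_iter(x, 0.5)

for x in [0.3, 0.7, 1.0, 1.5]:
    # the semigroup check from the thread: h(h(x)) should be close to x^2 + 1
    print(x, h(h(x)), f(x))
```

Note that n cannot be pushed much higher for larger x before f^n(x) overflows a double; keeping x in roughly [0, 2] with n = 8 stays within floating-point range while the convergence is already far below float precision.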

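[Editor's note] tommy1729's observation that the iterate tends to abs(x) as t -> 0, and mike3's symptom of getting abs(x), can also be checked directly: the forward orbit of x and of -x coincide after one step, and the positive inverse branch never restores the sign. A small self-contained check (names and tolerances are mine, not from the thread):

```python
import math

def f(x):
    return x * x + 1.0

def f_inv(y):
    # right (positive) branch of the inverse of x^2 + 1; this branch
    # choice is what discards the sign of x
    return math.sqrt(y - 1.0)

def f_iter(x, t, n=8):
    # finite-n version of the limit formula f^t(x) ~ f^{-n}((f^n(x))^(2^t))
    y = x
    for _ in range(n):
        y = f(y)
    y = y ** (2.0 ** t)
    for _ in range(n):
        y = f_inv(y)
    return y

# f(-x) == f(x), so the formula cannot distinguish x from -x at all:
print(f_iter(-0.8, 0.5), f_iter(0.8, 0.5))

# as t -> 0 the exponent 2^t -> 1 and the value drifts towards abs(x),
# matching the abs(x)-like limit discussed in the thread
for t in [0.5, 0.1, 0.001]:
    print(t, f_iter(-0.8, t))
```

This makes the branch-point explanation at the end of the thread concrete: for x < 0 the formula returns the value of the positive branch, so the t -> 0 limit is |x| rather than x.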