[exercise] fractional iteration of f(z)= 2*sinh (log(z)) ?
#1
Hi,     
 in a thread on math.stackexchange.com ("limit of a recursive function") I came across the question of what could be a closed form for

$$ x_{n+1} \;=\; x_n - \frac{1}{x_n}. $$

 I fiddled a bit with it and with its inverse operation, finding interesting properties, for instance the existence of periodic points of any order, after another contributor showed that the only 1-periodic point (= fixpoint) would be infinity. (See https://math.stackexchange.com/a/4056192/1714.)

To extend my knowledge about this sequence/function beyond that MSE discussion, I pondered the possibility of fractional iteration (or, as one might say, indefinite summation), but couldn't find a promising ansatz to establish such a routine. However, further thinking showed that the recursive expression could just as well be written as iteration of the function g(z) = 2*sinh(log(z)), and since we have had discussions here about the iteration of 2*sinh(z), there might as well be an idea for the fractional iteration of g(z).
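Spelling out the identity behind that rewrite (a quick check):

$$ 2\sinh(\log z) \;=\; e^{\log z} - e^{-\log z} \;=\; z - \frac{1}{z}, $$

so the recursion above is exactly $x_{n+1} = g(x_n)$.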

Someone out here with an idea? (Feel free to contribute to the thread in MSE)

Gottfried
Gottfried Helms, Kassel
#2
(03/12/2021, 12:53 PM)Gottfried Wrote: [...] Someone out here with an idea? (Feel free to contribute to the thread in MSE)

Convert the fixed point at infinity to a fixed point at zero via the conjugation

$$ w \;=\; \frac{1}{z}, \qquad h(w) \;=\; \frac{1}{g(1/w)}. $$

So that we have a new sequence,

$$ y_{n+1} \;=\; \frac{1}{x_{n+1}} \;=\; \frac{1}{x_n - \tfrac{1}{x_n}} \;=\; \frac{y_n}{1 - y_n^{2}}, \qquad y_n = \frac{1}{x_n}. $$

Call,

$$ h(w) \;=\; \frac{w}{1 - w^{2}} \;=\; w + w^{3} + w^{5} + \cdots $$

So we have a neutral fixed point at zero ($h(0) = 0$, $h'(0) = 1$). At this point, the very stable route would be to produce an Abel function using Ecalle's method, which will converge not in a full neighborhood of zero but on a petal at zero. This will give a fractional iteration and, furthermore, one which is unique. Then conjugate back to get the original sequence.
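As a minimal computational sketch of what a fractional iterate looks like at this parabolic point (assuming the conjugated map $h(w) = w/(1-w^{2})$ from above; the truncation degree below is an arbitrary choice, and the resulting series is in general only asymptotic, which is exactly why Ecalle's petal construction is the rigorous route), one can compute the formal half-iterate by matching coefficients in $f(f(w)) = h(w)$:

```python
# Formal half-iterate of h(w) = w/(1 - w^2) = w + w^3 + w^5 + ...
# by matching coefficients in f(f(w)) = h(w).  Truncation degree N is arbitrary.

N = 16

def compose(outer, inner, n=N):
    """Coefficients (index = degree) of outer(inner(w)), truncated at degree n.
    Assumes inner has zero constant term."""
    result = [0.0] * (n + 1)
    power = [0.0] * (n + 1)
    power[0] = 1.0                          # inner(w)^0 = 1
    for k, a in enumerate(outer):
        if k > 0:                           # power <- power * inner, truncated
            new = [0.0] * (n + 1)
            for i, p in enumerate(power):
                if p:
                    for j, q in enumerate(inner):
                        if q and i + j <= n:
                            new[i + j] += p * q
            power = new
        if a:
            for i, p in enumerate(power):
                result[i] += a * p
    return result

# Target series: h(w) = w + w^3 + w^5 + ...
h = [0.0] * (N + 1)
for d in range(1, N + 1, 2):
    h[d] = 1.0

# Solve f(f(w)) = h(w) degree by degree, with f(w) = w + c_3 w^3 + c_5 w^5 + ...
f = [0.0] * (N + 1)
f[1] = 1.0
for d in range(2, N + 1):
    ff = compose(f, f)
    f[d] = (h[d] - ff[d]) / 2.0             # c_d enters the degree-d coefficient of f(f(w)) with factor 2

print(f[:6])   # [0.0, 1.0, 0.0, 0.5, 0.0, 0.125]  ->  f(w) = w + w^3/2 + w^5/8 + ...
```

The first nontrivial coefficient comes out as $1/2$, i.e. $f(w) = w + \tfrac{1}{2}w^{3} + \tfrac{1}{8}w^{5} + \cdots$.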

Although I haven't published anything on this, there are situations where you can use a different method. It's difficult when the fixed point is neutral, but it seems to work in the cases I've tried (for instance on the real line, in a real neighborhood of the fixed point).

That would be to use Ramanujan's master theorem, writing the iterates in the form that theorem interpolates.

This form has the added benefit of being the exact mechanism I use to solve the indefinite sum. Now I'd tread carefully here, because the fixed point is neutral--I'm only able to do this absolutely rigorously when the fixed point is geometrically attracting; in some cases I have managed to use a sequence of functions with geometric (attracting) fixed points to converge towards a neutral solution (as I managed to do elsewhere, which largely convinced me of the result).

Quite frankly, I would be much more confident in this expression if we took an attracting perturbation of the map and instead talked about iterating that perturbation, whereby we can fractionally iterate using the above method--but taking the limit back to the neutral case proves to be tricky, especially when we talk about where the fractional iterates live (luckily, for this function we know the real line will be invariant under the process, so we can just stick to the real line). In the general case it's necessary to discuss how the basins of attraction deform as we take the limit; and, even more troubling, we can't simply restrict ourselves to a neighborhood of the fixed point, because the iteration will necessarily diverge for some points of that neighborhood (that's the nature of neutral fixed points: they are not attracting on a full neighborhood).
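For reference, Ramanujan's master theorem itself says that, under suitable growth and analyticity conditions on $\phi$,

$$ F(x) \;=\; \sum_{n=0}^{\infty} \phi(n)\,\frac{(-x)^{n}}{n!} \qquad\Longrightarrow\qquad \int_{0}^{\infty} x^{s-1} F(x)\,dx \;=\; \Gamma(s)\,\phi(-s), $$

and taking $\phi(n) = h^{\circ n}(w_0)$, the integer iterates along an orbit, is the kind of interpolation being invoked here.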

This method is also the basis for constructing bounded tetration for bases $1 < b \le e^{1/e}$; where at $b = e^{1/e}$ we do much the same thing: iterate at a neutral fixed point using Ramanujan's Master Theorem, by approximating uniformly with maps that have geometric (attracting) fixed points. Though again, this is a special case, because the exponential map is just so pretty (as is the function discussed here).
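For context, a quick check that $b = e^{1/e}$ is exactly the neutral case: the fixed point of $z \mapsto b^{z}$ is $z = e$, and its multiplier is $1$,

$$ b^{e} \;=\; \left(e^{1/e}\right)^{e} \;=\; e, \qquad \frac{d}{dz}\, b^{z}\Big|_{z=e} \;=\; (\ln b)\, b^{e} \;=\; \frac{1}{e}\cdot e \;=\; 1. $$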

All in all, I would say the safest bet is to use Ecalle's method of constructing an Abel function on a petal at the fixed point--but the Ramanujan method does work in certain instances; just be careful when discussing domains.
#3
Ha, interestingly enough, Milnor has an exercise in his book on complex dynamics about this very map, in the section dedicated to parabolic fixed points and Abel functions. Although he doesn't provide a proof, he asks that one prove that

$$ f(z) \;=\; z - \frac{1}{z} $$

has a neutral fixed point at $\infty$, that the Julia set is the real line including infinity, and that the parabolic basins are the upper half-plane and the lower half-plane. This would imply we can define two Abel functions $\alpha^{+}$ and $\alpha^{-}$, holomorphic on the upper and the lower half-plane (respectively), in which

$$ \alpha^{\pm}\!\big(f(z)\big) \;=\; \alpha^{\pm}(z) + 1. $$
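A quick verification of the half-plane claim (an elementary check, not part of the quoted exercise): writing $z = x + iy$,

$$ \operatorname{Im}\!\left(z - \frac{1}{z}\right) \;=\; y\left(1 + \frac{1}{|z|^{2}}\right) \;>\; y \qquad (y > 0), $$

so the upper half-plane maps into itself and imaginary parts increase along orbits, consistent with it being a parabolic basin of the fixed point at infinity (and symmetrically for the lower half-plane).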



Also, since in either half-plane we know that an inverse exists in a neighborhood of infinity, call it $\alpha^{-1}$, we can define

$$ f^{\circ s}(z) \;=\; \alpha^{-1}\!\big(\alpha(z) + s\big), $$

which is holomorphic for $z$ in the half-plane (choosing the branch of the inverse appropriately) and at least for $s$ in a right half-plane. We can expect that this tends to the fixed point at infinity as $\Re(s) \to \infty$, and therefore,



Converges for $s$ and $z$ in suitable domains. Which would imply the Ramanujan method does work in this case (though I'm not sure how to specify those domains exactly, but it shouldn't be hard).
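And a rough numerical sketch of such an Abel function on the upper half-plane (assuming the map in question is $g(z) = z - 1/z$; under the substitution $u = -z^{2}/2$ the map becomes exactly $u \mapsto u + 1 + \tfrac{1}{4u}$, so $u - \tfrac14\log u - n$ converges along orbits; the iteration count below is just an accuracy knob):

```python
import cmath

def g(z):
    """The map under discussion: g(z) = 2*sinh(log z) = z - 1/z."""
    return z - 1.0 / z

def abel(z, n=20000):
    """Approximate an Abel (Fatou) coordinate for g on the upper half-plane,
    where the parabolic fixed point sits at infinity.  With u = -z^2/2 the map
    becomes exactly u -> u + 1 + 1/(4u), so u - (1/4)*log(u) - n converges
    along the orbit to a function alpha satisfying alpha(g(z)) = alpha(z) + 1."""
    w = z
    for _ in range(n):
        w = g(w)
    u = -w * w / 2.0
    return u - 0.25 * cmath.log(u) - n

z0 = 0.3 + 0.7j                    # any point in the upper half-plane
print(abel(g(z0)) - abel(z0))      # should be very close to 1
```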

Edit:

We could also modify the domains (which I guess is practical), keeping the two half-plane cases separate rather than amalgamating them into their maximum as above. That would work too, and is a tad more general. Recall we are thinking of the point at infinity in the Riemann-sphere sense. I'm also curious whether we could pull this back to its maximal domain--I'm not sure what it would be. I'm guessing the domain in $s$ is either a half-plane (where the Ramanujan theorem would hold) or the plane minus various branch cuts including a half-plane (though Ramanujan's form will not converge on all of it; it necessarily can only converge in a half-plane, by the nature of the Mellin transform).
#4
When I came up with the 2sinh method I thought about similar things.

But I believe all iterations of type f(v,z)= 2*sinh^[v] (log^[v](z)) for nonzero v are problematic.
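Just to pin down the notation for integer v (a small sketch; ^[v] here means v-fold composition, and the helper names are arbitrary):

```python
import cmath

def iterate(fn, v, z):
    """v-fold composition fn^[v](z), for integer v >= 0."""
    for _ in range(v):
        z = fn(z)
    return z

def f(v, z):
    """f(v, z) = 2*sinh^[v]( log^[v](z) ), integer v only."""
    w = iterate(cmath.log, v, z)
    return iterate(lambda u: 2.0 * cmath.sinh(u), v, w)

print(f(1, 3.0))    # 2*sinh(log 3) = 3 - 1/3 = 2.666...
print(f(2, 10.0))   # the v = 2 case
```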

For all positive integer v this seems to be the case anyway, and noninteger v seems even worse at first.

Also, v = 1 is the only case where f(v,z) is a rational function (namely z - 1/z).

These are still very different from ln^[v](2sinh^[t](exp^[v](z))) or log^[v+a+b](2sinh^[v+c](z)), so they are far from the 2sinh method or anything similar.

With "problematic" I mean things like divergence, failure to be analytic, and similar problems.

However, iterations of g(v,z) = 2*sinh^[v](h^[v](z)), or the function g(v,z) itself, may be interesting.

For instance if h is the logarithm base eta (= e^(1/e)) and z is large. That might be related to the base change method.

I'm not saying that it all works fine and easily.

Just a little comment. :)

regards

tommy1729
#5
(03/14/2021, 05:00 PM)tommy1729 Wrote: [...] For instance if h is the logarithm base eta and z is large. That might be related to the base change method. [...]

We could for instance arrive at a "hyperbolic base change constant". If that almost equals the normal base change constant, then this might be used to show that there is a "problem"?!

Where "problem" could be many things, like "ill-defined", "just an approximation", or "not analytic".

I have not investigated hyperbolic base change constants.

Just one of the possible directions ...

regards

tommy1729




