using sinh(x) ?
I shared a related idea :

https://math.eretrandre.org/tetrationfor...p?tid=1356

regards

tommy1729
see posts 8 and 9 here :

https://math.eretrandre.org/tetrationfor...1#pid10731

( copied )

f^[s+t](z) = f^[s](f^[t](z)) = f^[t](f^[s](z))

Actually Paul Lévy [1] showed how to obtain an iteration of e^x if we have an iteration of e^x-1.
Say \(\beta\) is an Abel function of \(e^x - 1\), then

$$ \alpha(x) = \lim_{n\to\infty}\Big(\beta\big(\exp^{[n]}(x)\big) - \beta\big(\exp^{[n]}(x_0)\big)\Big) $$

is an Abel function of \(e^x\) (for a fixed base point \(x_0\)). This should also work for \(\beta\) being the Abel function of \(2\sinh\).

This approach is actually equivalent to the "change of base" approach we considered here on the forum; Walker [2] also used a similar method. (But at the moment I am too lazy to detail exactly how they imply each other.) It is still open whether the result is analytic, but it is proven to be infinitely differentiable in [2].

[1] Lévy, P. (1927). Sur l'itération de la fonction exponentielle. C. R., 184, 500–502.
[2] Walker, P. L. (1991). Infinitely differentiable generalized logarithmic and exponential functions. Math. Comput., 57(196), 723–733.

Ok, first let us verify that \(f^{[t]}(x) := \lim_{n\to\infty} \ln^{[n]}\!\big(2\sinh^{[t]}(\exp^{[n]}(x))\big)\) is indeed an iteration of exp, i.e. that it satisfies
\(f^{[1]}(x) = e^x\) and \(f^{[s]}(f^{[t]}(x)) = f^{[s+t]}(x)\).

Neglecting some rules of properly evaluating limits ;) we get

$$ f^{[s]}(f^{[t]}(x)) = \lim_{n\to\infty} \ln^{[n]}\!\Big(2\sinh^{[s]}\big(\exp^{[n]}(f^{[t]}(x))\big)\Big) = \lim_{n\to\infty} \ln^{[n]}\!\Big(2\sinh^{[s]}\big(2\sinh^{[t]}(\exp^{[n]}(x))\big)\Big) = f^{[s+t]}(x), $$

and

$$ f^{[1]}(x) = \lim_{n\to\infty} \ln^{[n]}\!\big(2\sinh(\exp^{[n]}(x))\big) = e^x, $$

because towards infinity \(2\sinh\) gets arbitrarily close to \(\exp\).

Basically that's the iteration equivalent of the Abel function Lévy proposes:

$$ \alpha(x) = \lim_{n\to\infty}\Big(\beta\big(\exp^{[n]}(x)\big) - \beta\big(\exp^{[n]}(x_0)\big)\Big), $$

where \(\beta\) is the Abel function of \(2\sinh\) (or in Lévy's case of \(e^x - 1\)).

The superfunction is then (the inverse of \(\alpha\)):

$$ \alpha^{-1}(t) = \lim_{n\to\infty} \ln^{[n]}\!\big(2\sinh^{[t]}(\exp^{[n]}(x_0))\big), $$

which is the same as Tommy's superfunction.
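
As a quick sanity check of the \(f^{[1]}\) claim, here is a minimal Python sketch (not from the original posts; the names are ad hoc) showing how fast \(\ln^{[n]}(2\sinh(\exp^{[n]}(x)))\) settles onto \(e^x\):

Code:
# Minimal check: ln^[n]( 2sinh( exp^[n](x) ) ) should approach exp(x) very fast,
# since 2sinh(y) = e^y - e^(-y) differs from e^y only by the tiny term e^(-y).
import math

def f1_approx(x, n):
    y = x
    for _ in range(n):           # y = exp^[n](x)
        y = math.exp(y)
    y = 2.0 * math.sinh(y)       # apply 2sinh once in the middle
    for _ in range(n):           # undo with n logarithms
        y = math.log(y)
    return y

x = 0.5
for n in (1, 2, 3):              # n = 4 already overflows double precision
    print(n, f1_approx(x, n), f1_approx(x, n) - math.exp(x))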

***

tommy1729
JmsNxn
So, this is actually a defining property of the standard Schröder iteration. But it's a little difficult to fully flesh out why. Now, to begin, I'll construct an arbitrary iteration which has a constant noodle, and show there are many of them.

If you iterate locally about a fixed point, and your solution satisfies \(f^t(p) = p\), then the iteration is expressible via Schröder iteration (provided that \(|f'(p)| \neq 0,1\)).  For convenience, assume that \(|f'(p)| < 1\).

You can actually prove this pretty fast. Assume that \(f^t(x)\) is a super function in \(t\) about a fixed point \(p\), and \(x\) is in the neighborhood of \(p\). Assume that \(f^t(p) = p\), and let \(\Psi\) be the Schröder function of \(f\) at \(p\), i.e. \(\Psi(f(x)) = \lambda \Psi(x)\) with \(\lambda = f'(p)\).  Well then:

$$
\Psi(f^{t+1}(x)) = \lambda \Psi(f^t(x))\\
$$

So the ratio

$$
\theta(t) = \frac{\Psi(f^t(x))}{\lambda^{t}\,\Psi(x)}\\
$$

is 1-periodic in \(t\); that is, there must be some 1-periodic function \(\theta(t)\) such that:

$$
f^t(x) = \Psi^{-1}\left(\lambda^{t}\theta(t) \Psi(x)\right)\\
$$

In fact, any 1-periodic \(\theta\) will work fine here: the result will still have a constant noodle and will still be a super function, but it will not be the Schröder iteration.

Now let's add one more constraint: let's say that \(f^{t}(f^{s}(x)) = f^{t+s}(x)\). Well then, \(\theta\) must be constant, by which we are guaranteed that it's the Schröder iteration...


So to clarify: any fractional iteration with a constant noodle is a Schröder iteration. But there are plenty of superfunctions of \(f\) which have a constant noodle and yet aren't fractional iterations (they don't satisfy the semi-group law).
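
To make the \(\theta\) argument concrete, here is a minimal Python sketch (my own illustration, not JmsNxn's code) using \(g(x) = \operatorname{asinh}(x/2)\), the inverse of \(2\sinh\), which has an attracting fixed point at 0 with \(\lambda = 1/2\). It approximates the Koenigs/Schröder function \(\Psi\) by its defining limits, builds \(g^t(x) = \Psi^{-1}(\lambda^t \theta(t) \Psi(x))\), and checks that a non-constant 1-periodic \(\theta\) still gives a superfunction but breaks the semigroup law:

Code:
# Sketch: Schroeder iteration of g(x) = asinh(x/2) at its fixed point 0 (lambda = 1/2).
# With theta == 1 we get the Schroeder iteration (semigroup law holds);
# with a non-constant 1-periodic theta we still have g^(t+1) = g(g^t), but not the semigroup law.
import math

LAM = 0.5          # g'(0) = 1/2
N   = 60           # iterations used to approximate the Koenigs limits

def g(x):                      # inverse of 2*sinh
    return math.asinh(x / 2.0)

def koenigs(x):                # Psi(x) = lim g^[n](x) / LAM^n
    for _ in range(N):
        x = g(x)
    return x / LAM**N

def koenigs_inv(y):            # Psi^(-1)(y) = lim (2 sinh)^[n]( LAM^n * y )
    y *= LAM**N
    for _ in range(N):
        y = 2.0 * math.sinh(y)
    return y

def g_iter(t, x, theta=lambda t: 1.0):
    return koenigs_inv(LAM**t * theta(t) * koenigs(x))

theta = lambda t: 1.0 + 0.2 * math.sin(2 * math.pi * t)   # 1-periodic, non-constant

x, s, t = 1.0, 0.4, 0.3
print("superfunction law :", g_iter(t + 1, x, theta), g(g_iter(t, x, theta)))
print("semigroup, theta=1:", g_iter(s + t, x), g_iter(s, g_iter(t, x)))
print("semigroup, theta  :", g_iter(s + t, x, theta), g_iter(s, g_iter(t, x, theta)))

With the default constant \(\theta \equiv 1\) the two semigroup values agree to roughly machine precision; with the perturbed \(\theta\) only the superfunction line still matches.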


This is not acceptable to me.
Not formal, detailed and general enough.

It is actually quite simple, even without fixpoints.

Suppose

\(f^{t}(f^{s}(x)) = f^{t+s}(x)\)

and let \(\theta(v)\) be a 1-periodic function.

so do we get

\(f^{t+\theta(t)}(f^{s+\theta(s)}(x)) = f^{t+s+\theta(s+t)}(x)\) ?

By the semigroup law the left-hand side equals \(f^{t+s+\theta(t)+\theta(s)}(x)\), so that would require

\(\theta(v+1) = \theta(v)\)  (so that the reindexed family is still an iteration of the same \(f\))

and

\(\theta(t+s) = \theta(t) + \theta(s),\)

so \(\theta\) must be constant (in fact identically zero).

keywords : Cauchy functional equation, axiom of choice, linear function, non-constructive, non-periodic.
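
Spelling out the last step (my addition, not in the original post), for \(\theta\) that is at least continuous or measurable:

$$ \theta(t+s) = \theta(t) + \theta(s) \;\Rightarrow\; \theta(t) = \theta(1)\,t, \qquad \theta(t+1) = \theta(t) \;\Rightarrow\; \theta(1) = 0 \;\Rightarrow\; \theta \equiv 0. $$

The keywords refer to the fact that without such a regularity assumption the Cauchy equation also admits non-constructive (axiom of choice / Hamel basis) solutions that are not of the form \(c\,t\).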


So we get a (local ?) uniqueness criterion.

This has many consequences.

(For instance: if you use iterations of another function to get to iterations of yours, it is probably required that the other function's iteration has the semi-group property.)

So we have uniqueness up to convergence speed, of course.

This should be on page 1 of any dynamics book !


regards

tommy1729

So we want f^[s+t](z) = f^[s](f^[t](z)) = f^[t](f^[s](z))

2sinh^[s+t](x) has this property for real x. Or 2sinh^[s+t](z) has this property for complex z around the real axis. 

That is, if we use the Koenigs function around the fixpoint 0 of 2sinh (where the derivative is 2 > 1) for the construction.

It is easy to show that ln ( 2sinh^[s+t](exp(x)) ) has the same property.

and by induction ln^[n] ( 2sinh^[s+t](exp^[n](x)) ) for any positive integer n also has the property.

notice ln^[n] ( 2sinh^[s+t](exp^[n](x)) ) is also analytic on the real line.

Letting n go to +oo gives the solution on the real line :

exp^[s+t](x) = lim_n  ln^[n] ( 2sinh^[s+t](exp^[n](x)) )

and this limit has the same property !!!
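
To tie this together, here is a Python sketch of the whole construction as I read it: the Koenigs iteration 2sinh^[t] at the repelling fixpoint 0 (multiplier 2), conjugated by exp^[n] and ln^[n] for a fixed small n. It is only a finite-n approximation with ad hoc names, not a reference implementation:

Code:
# Sketch of the 2sinh method: exp^[t](x) ~= ln^[n]( 2sinh^[t]( exp^[n](x) ) ),
# where 2sinh^[t] is the Koenigs (Schroeder) iteration of 2sinh at its repelling fixpoint 0.
import math

LAM = 2.0            # (2 sinh)'(0) = 2, the multiplier at the fixpoint 0
N_KOENIGS = 60       # iterations used to approximate the Koenigs limits
N_CONJ = 3           # depth n of the ln^[n](... exp^[n]) conjugation (float64-safe)

def koenigs(x):
    # Psi(x) = lim 2^k * g^[k](x), with g(x) = asinh(x/2) the inverse of 2sinh
    for _ in range(N_KOENIGS):
        x = math.asinh(x / 2.0)
    return x * LAM ** N_KOENIGS

def koenigs_inv(y):
    # Psi^(-1)(y) = lim (2sinh)^[k]( y / 2^k )
    y /= LAM ** N_KOENIGS
    for _ in range(N_KOENIGS):
        y = 2.0 * math.sinh(y)
    return y

def sinh2_iter(t, x):
    # fractional iterate 2sinh^[t](x) = Psi^(-1)( 2^t * Psi(x) ), for real x > 0
    return koenigs_inv(LAM ** t * koenigs(x))

def exp_iter(t, x, n=N_CONJ):
    # exp^[t](x) ~= ln^[n]( 2sinh^[t]( exp^[n](x) ) )
    for _ in range(n):
        x = math.exp(x)
    x = sinh2_iter(t, x)
    for _ in range(n):
        x = math.log(x)
    return x

x, s, t = 0.5, 0.4, 0.3
print(exp_iter(s + t, x), exp_iter(s, exp_iter(t, x)))   # semigroup property
print(exp_iter(1.0, x), math.exp(x))                     # t = 1 recovers exp

Note that exp^[4](0.5) is already around 1e78, so N_CONJ = 4 makes the intermediate 2sinh^[t] value overflow double precision; a serious computation would use arbitrary precision (e.g. mpmath). But n = 3 is already deep in the regime where 2sinh and exp agree to far below double precision.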


see also the Lévy / Walker quote above.



So my 2sinh solution has this uniqueness criterion making it quite special.

Other methods I proposed that are similar (real entire functions with a unique real fixpoint at 0, real derivative larger than 1 there, fast asymptotics to exp(x), and some minor details) also carry this property ....

SO they are unique !!

This also explains that using functions other than 2sinh(x) must give the same result. THEREFORE the nonreal fixpoints of 2sinh are not really an issue.
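
As a hypothetical example of such an alternative (my choice, not one proposed in the thread): \(h(x) = e^x - e^{-2x}\) is real entire, has 0 as its only real fixpoint, \(h'(0) = 3 > 1\), and \(h(x) - e^x \to 0\) exponentially fast. A sketch of how one could plug it into the code above (its inverse has no closed form, so use Newton):

Code:
# Hypothetical alternative base function satisfying the stated criteria:
# h(x) = exp(x) - exp(-2x): real entire, unique real fixpoint 0, h'(0) = 3 > 1,
# and h(x) - exp(x) -> 0 exponentially fast. Its inverse is computed by Newton's method.
import math

def h(x):
    return math.exp(x) - math.exp(-2.0 * x)

def h_inv(y, tol=1e-14):
    x = math.log(y) if y > 1.0 else y / 3.0          # rough starting guess
    for _ in range(100):
        step = (h(x) - y) / (math.exp(x) + 2.0 * math.exp(-2.0 * x))
        x -= step
        if abs(step) < tol:
            break
    return x

# In the earlier sketch one would replace
#   math.asinh(x / 2.0)  ->  h_inv(x)
#   2.0 * math.sinh(y)   ->  h(y)
#   LAM = 2.0            ->  LAM = 3.0   (the multiplier h'(0))
# and, if the uniqueness argument is right, expect the same exp^[t] in the limit.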

Also, 2sinh(x) is easy to handle since its Maclaurin series \(2\sinh(x) = 2\sum_{k\ge 0} x^{2k+1}/(2k+1)!\) has only nonnegative coefficients.


***

Even more interesting is that the 2sinh method can be extended to bases > exp(1/2).

And probably, by analytic continuation, to all bases larger than eta = e^(1/e).

But for bases <= exp(1/2) we get issues with multiple fixpoints or with the derivatives at the fixpoints.
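
Presumably the base-\(b\) analogue is \(f_b(x) = 2\sinh(x \ln b)\), which approximates \(b^x = e^{x \ln b}\) for large \(x\); that is my reading of the base restriction, the post does not spell it out. The fixpoint 0 then has multiplier

$$ f_b'(0) = 2\ln b, $$

which exceeds 1 exactly when \(b > e^{1/2}\). For \(1 < b < e^{1/2}\) the multiplier at 0 drops below 1 and additional real fixpoints appear, and at \(b = e^{1/2}\) the multiplier equals 1, so the Koenigs construction at 0 no longer applies directly; presumably these are the "issues with multiple fixpoints or derivatives" meant here.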

THAT is also the reason why I proposed alternative similar methods.

Since those alternatives agree on bases larger than exp(1/2) by the uniqueness criterion, they must be the 2sinh method extended to lower bases !!

I hope that is clear to everyone.


Regards

tommy1729

PS: wiki will not accept the 2sinh method in the tetration section, even though it does mention non-C^oo solutions.
meh !