The nature of g(exp(f(s)))
#1
Let f(s) be one of those recent compositional asymptotics of tetration.

Let g(s) be its functional inverse.

Now consider the (imho interesting) equation :

h(s) = g(exp(f(s))).

We know that h(s) must be close to the successor function s + 1 for large real s.

We have that f(h(s)) = f(g(exp(f(s)))) = exp(f(s)), since f(g(x)) = x.

I feel like studying this is an important and logical step.

Especially for nonreal s, or for small s.

One of the proposed solutions was/is then :



or lim n to oo : 



( for some fixed k , using appropriate ln branches )

Both should compute the same function (?!)...
but with different practical considerations.

Error terms such as O(exp(-s)) would be useful too, of course.

However, do not forget possible singularities of these functions, which make things harder, or properties that only hold locally.
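For concreteness, here is a minimal numerical sketch of the direct recipe h(s) = g(exp(f(s))) in Python. It does not use the actual compositional asymptotic; f is replaced by a crude stand-in (the piecewise "linear approximation" tetration base e) and inverted by bisection, and the names tet, slog, h are just placeholders. With this exact stand-in the identity h(s) = s + 1 holds on the nose; with an asymptotic f one would only see h(s) close to s + 1, up to the error terms above.

import math

def tet(s):
    # Stand-in tetration f(s) for real s > -1: the linear piece 1 + s on (-1, 0],
    # extended upward by tet(s) = exp(tet(s - 1)).  NOT the compositional asymptotic.
    if s > 0.0:
        return math.exp(tet(s - 1.0))
    return 1.0 + s

def slog(x):
    # Numerical inverse g of the stand-in (bisection); tet is strictly increasing.
    lo, hi = -0.999999, 1.0
    while tet(hi) < x:               # widen the bracket until tet(hi) >= x
        hi += 1.0
        if hi > 3.5:                 # the crude stand-in overflows quickly, keep s modest
            raise OverflowError("stand-in tetration only usable for small s")
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tet(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h(s):
    # h(s) = g(exp(f(s))) with the stand-ins above.
    return slog(math.exp(tet(s)))

for s in (0.25, 0.5, 1.0, 1.5, 1.9):
    print(s, h(s), h(s) - (s + 1.0))   # expect h(s) extremely close to s + 1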

Regards

tommy1729
#2
Interesting, Tommy. I'd like to add some points.

If we consider then,



So we should expect,




The value,



So, we can shorten this to,



This seems like a very good way to get the super-logarithm. That's something that's been bugging me a bit: finding a way to take a limit towards the super-logarithm. Which is to say, this could definitely aid in showing the limit,



Where  is the tetration function associated to the multiplier , even though that function is an absolute monster that I can't imagine evaluating numerically.

But using this idea of the function, we get that,



And this should be bounded by a nice sum of exponentials which should converge as ... I believe...

Regards, James


Edit:

Please, Tommy, take a look at my Pari-GP code. This isn't a proposed "maybe it works" solution: I can produce accuracy of 200 decimal places.

The function pretty much produces , minus some caveats. This function is absolutely holomorphic. The trouble I'm having with  is only the real tetration, which is the real tetration we care about. But  is absolutely holomorphic.
#3
I have to add this.

Consider h(s).

Maybe h(s) behaves a lot like s + 1/(exp(-s) + 1) for modest positive s.

But more interestingly: what if we take h(s) to behave like s + 1 + 1/f(s) ?!

This gives us, for some real a, b, the following equation :

f( s + 1 + a/f(s) + b/f(s)^2 ) = exp(f(s)).

And maybe this f is of the infinite-composition type, with

f(s+1) = exp(f(s)) * t(s),

where t(s) goes to 1 for s with large positive real part.

This seems to make sense.
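To connect the two guesses, here is the purely formal bookkeeping (no convergence claimed, with a and b as above). Expanding f around s + 1,

f( s + 1 + a/f(s) + b/f(s)^2 + ... ) = f(s+1) + f'(s+1) * a/f(s) + (higher order terms).

Setting this equal to exp(f(s)) and writing f(s+1) = exp(f(s)) * t(s) gives

t(s) = 1 - a * f'(s+1) / ( f(s) * exp(f(s)) ) + (higher order terms),

so t(s) goes to 1 exactly when f'(s+1) / ( f(s) * exp(f(s)) ) goes to 0.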

So can we find such f(s) or t(s) ??

regards

tommy1729
#4
Also, I want to point out, for iterations of exp :

In the neighbourhood of a periodic point with period A there are points that are not periodic with period A and that tend to go to oo chaotically.

However, I believe it is true that for nonreal s :

In the infinitesimal neighbourhood of any nonreal s there is a point with some period B (where B is not the period of s, if s is periodic).

That might be an issue, right?

regards

tommy1729
#5
(05/12/2021, 12:28 PM)tommy1729 Wrote: Also, I want to point out, for iterations of exp :
(...)
However, I believe it is true that for nonreal s :

In the infinitesimal neighbourhood of any nonreal s there is a point with some period B (where B is not the period of s, if s is periodic).
(...)

Hmm, it seems that the n-periodic points are very "dense" (in the visual sense) in the right half-plane, but I don't know about the left half-plane. At least I didn't come across 2-, 3- or 4-periodic points (for the exp() function to base e) in the left half-plane (I don't have a proof, positive or negative, so far).
Gottfried Helms, Kassel
#6
(05/12/2021, 02:07 PM)Gottfried Wrote:

Hmm, it seems that the n-periodic points are very "dense" (in the visual sense) in the right half-plane, but I don't know about the left half-plane. At least I didn't come across 2-, 3- or 4-periodic points (for the exp() function to base e) in the left half-plane (I don't have a proof, positive or negative, so far).

Hey, Gottfried.


You will not get periodic points in the left half-plane, because necessarily they would be attracting, and exp has no attracting periodic points. Suppose that

exp(z_0) = z_1, exp(z_1) = z_2, ..., exp(z_{n-1}) = z_0, with Re(z_j) < 0 for every j.

Then the multiplier of this n-cycle (the derivative of the n-th iterate of exp along the cycle) is

exp(z_0) * exp(z_1) * ... * exp(z_{n-1}) = exp(z_0 + z_1 + ... + z_{n-1}).

And

|exp(z_0 + ... + z_{n-1})| = exp( Re(z_0) + ... + Re(z_{n-1}) ) < 1,

so the whole cycle would be attracting. But the n-th iterate of exp has no attracting fixed points, because exp has no attracting cycles at all: an attracting cycle would have to attract the singular value 0, and the orbit of 0 under exp (0, 1, e, e^e, ...) escapes to infinity. So this is impossible.

The only way you can have a periodic point in the left half-plane is if some elements of its cycle are in the right half-plane. This won't happen much, though, and such points will be far less numerous than the cycles lying entirely in the right half-plane. Which, yes, look almost dense. Keep in mind, though, that the periodic points form a countable set, hence a set of Lebesgue measure zero; numerous, but thin in the measure-theoretic sense.
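Just to put numbers on Gottfried's observation, here is a throwaway Newton search for period-n points of exp (a quick sketch of mine, nothing to do with the tetration code). It solves exp^n(z) = z starting from a grid of seeds in the upper-right quadrant; by the chain rule the derivative of the n-th iterate at z is the product w_1 * ... * w_n of the orbit points, which on a cycle is exactly the multiplier exp(z_0 + ... + z_{n-1}) from the argument above, so everything this finds should report |multiplier| > 1. The grid, cutoffs and tolerances are arbitrary, and for n = 2 the fixed points of exp solve the same equation, so the code flags them separately.

import cmath

def orbit(z, n):
    # Return [z, exp(z), ..., exp^n(z)], or None if the orbit gets too large to handle.
    w = [z]
    for _ in range(n):
        if w[-1].real > 300.0 or abs(w[-1]) > 1e8:
            return None
        w.append(cmath.exp(w[-1]))
    return w

def newton_periodic(z, n, steps=80, tol=1e-10):
    # Newton's method on F(z) = exp^n(z) - z.
    # Chain rule: (exp^n)'(z) = w_1 * w_2 * ... * w_n, with w_k = exp^k(z).
    for _ in range(steps):
        w = orbit(z, n)
        if w is None:
            return None
        dF = 1.0 + 0.0j
        for k in range(1, n + 1):
            dF *= w[k]
        dF -= 1.0
        if abs(dF) < 1e-30:
            return None
        z = z - (w[n] - z) / dF
    w = orbit(z, n)
    if w is None or abs(w[n] - z) > tol:
        return None
    return z

def search(n, seeds):
    found = []
    for z0 in seeds:
        z = newton_periodic(z0, n)
        if z is None or any(abs(z - p) < 1e-6 for p in found):
            continue
        found.append(z)
        w = orbit(z, n)
        mult = 1.0 + 0.0j
        for k in range(1, n + 1):      # cycle multiplier = w_1*...*w_n = exp(z_0 + ... + z_{n-1})
            mult *= w[k]
        kind = "fixed point" if abs(w[1] - z) < 1e-8 else "genuine cycle point"
        print(n, kind, round(z.real, 6), round(z.imag, 6), "|multiplier| =", round(abs(mult), 3))

seeds = [complex(0.25 * a, 0.25 * b) for a in range(13) for b in range(1, 25)]
search(1, seeds)   # fixed points of exp: all in the right half-plane, all repelling
search(2, seeds)   # the period-2 equation; fixed points of exp reappear here too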

Regards, James
#7
I hesitated to post this because it's another set of crazy ideas.
But it might inspire you.

I talked about getting h(s) from f(s).
But of course, similar ideas can give an f(s) from h(s).

Another idea is getting h from f (or f from h) again via infinite functional composition.
But how ?

Keeping these in mind.

A more logical estimate for h(s) for Re(s) < 1 might be 

h(s) = T(s)

where

T(s) = (s+1) / (exp(-s) + 1)

Remember that f(s) usually has the property that f(s) goes to zero as Re(s) goes to -oo.
Hence f(h(s)) should be getting close to f(0).

T(s) more or less does that.

Another idea is that h(s) is actually the superfunction of T(s) !!
That might be a better estimate also when Re(s) is a large positive real.

In BOTH cases we wonder about iterations of T(s), or equivalently its super.
And the iterations of the super of T(s), OR the super of the super of T(s).

So where to start ?

Well, the most logical place to start would be the primary fixpoints of T(s).

But wait a minute.

T(s) = s has the same solutions as exp(s) = s !!
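Spelling out the algebra behind that claim: T(s) = s means s + 1 = s*(exp(-s) + 1) = s*exp(-s) + s, hence s*exp(-s) = 1, i.e. exp(s) = s.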

I notice that not much has been said about what happens to the fixpoints of exp for f,g,h.
( for instance at + oo i ?? )

Since T(s) = h(s) = s, we arrive at f(h(s)) = f(s) and more:

f(s) = exp(f(s)), so f(s) is itself a fixpoint of exp; plausibly f(s) = exp(s) = s at that primary fixpoint of exp !



***

Better estimates remain important, of course...


But let's consider further.

I do not think the superfunction of T(s) has ever been considered.

I find it interesting because T(s) is close to the successor function.
And that inspired me to consider this as part of a hyperoperator family:
zeration, Ackermann, that kind of idea, with T(s).

Therefore I considered the slight generalization :

 T_v(s) = (s+v) / (exp(-s) + v).
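The same algebra as before gives the fixed points of T_v : T_v(s) = s means s + v = s*exp(-s) + v*s, i.e. s*exp(-s) = s + v*(1 - s); for v = 1 this reduces to s*exp(-s) = 1, i.e. exp(s) = s, as above.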

T_e(s) is then an attractive idea.

What is T_v(s) a superfunction of ?

Is creating a super of T_v(s) simpler than for exp ? 

I considered Carleman matrices, but they did not illuminate me.

regards

tommy1729
#8
Allow me to use .

Well, since,



We're solving a conjugate equation. This is difficult to solve with infinite compositions. This is precisely the thing Mphlee and I are talking about. If I have ; we can get . But if we have ; how do we get ?

This is precisely the trouble we're having, solving conjugate equations arbitrarily. The closest I can think of is,



Which if it converges, satisfies,



But then we're entering into: where the hell does this converge? Additionally, what if we stick a function in between,



How do we derive convergence of this? How is it related to ?

Using infinite compositions for conjugate equations is out of the question. You can't solve these equations, as they equate to superfunction equations. You have to use a limiting process that is separate from an infinite composition. To explain this better,

If 



The trouble is: you are trying to solve an equation where ; that is, constant in the first argument. Infinite compositions are nearly useless here, unless you use the above limiting process and guess a solution using infinite compositions.

It's important to remember that infinite compositions are perfect for



But if it's constant, there's no such luck. It's the same reason there is no infinite composition that produces tetration (without introducing a limit of some kind after the infinite composition). It's very much the equivalent of trying to make sense of ; it just diverges.
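A throwaway toy (mine, not from my actual construction) to make the contrast concrete: when the kernel genuinely depends on s and the tail terms decay, the backward infinite composition converges and solves f(s) = q(s, f(s+1)) outright; here q(s, x) = x/2 + 2^(-s), whose exact solution is f(s) = (4/3)*2^(-s). Drop the s-dependence (say q(x) = exp(x)) and the very same backward composition is just exp iterated on the seed, which is exactly the divergence described above.

def q(s, x):
    # A toy kernel that genuinely depends on s: q(s, x) = x/2 + 2^(-s).
    return 0.5 * x + 2.0 ** (-s)

def f_by_composition(s, depth=60, seed=0.0):
    # Backward composition q(s, q(s+1, ... q(s+depth, seed) ... )).
    x = seed
    for k in range(depth, -1, -1):
        x = q(s + k, x)
    return x

s = 1.7
print(f_by_composition(s))               # the composed value
print(q(s, f_by_composition(s + 1)))     # same thing: it satisfies f(s) = q(s, f(s+1))
print((4.0 / 3.0) * 2.0 ** (-s))         # the exact solution (4/3)*2^(-s)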

I am totally lost as to how to solve equivariant maps arbitrarily, unless of course we open up a fixed-point discussion. As the main purpose of my constructed tetration is to avoid relying on a fixed point (as with Kneser), we only want to focus on behaviour at infinity. Also, I'm fairly confident that  as . We have no normality like with Kneser's, which tends to a fixed point or its conjugate. That is the feature I want from my tetration: no normality conditions as . I have a somewhat shoddy proof at the moment; I haven't had the eureka moment yet; but the numbers are implying a lack of normality. And there are good heuristic arguments that it is not normal at .

Regards, James
#9
related : https://math.eretrandre.org/tetrationfor...p?tid=1326

