# Tetration Forum

Full Version: Bounded Analytic Hyper operators
Hey everyone, check out my paper! It's written formally and rigorously, in a purist complex-analysis format. It's about the bounded analytic hyper operators, those with a base between 1 and eta, and fractional iteration through the lens of fractional calculus. I've centered it around Ramanujan's master theorem, and using this theorem as a base gives some very advantageous results. I give an expression obtained using Ramanujan's master theorem. This is not as easy as it sounds offhand, but I do believe it has all been proved. I've uploaded it to arXiv; I've just been waiting out the hold period.
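For readers who haven't met it, Ramanujan's master theorem says that if f(x) = sum_n phi(n) (-x)^n / n!, then the Mellin integral of f equals Gamma(s) phi(-s). Here is a quick numerical sanity check; the choice phi(n) = 1/(n+1) is my own illustrative example, not one from the paper:

```python
import math

# Ramanujan's master theorem:
#   integral_0^inf x^(s-1) * sum_n phi(n) (-x)^n / n! dx = Gamma(s) * phi(-s)
# Illustrative choice phi(n) = 1/(n+1): the series sums to
# f(x) = (1 - e^{-x}) / x, and phi(-s) = 1/(1-s).  We check s = 1/2.

s = 0.5

def integrand(u):
    # Substituting x = u^2 removes the integrable singularity at 0.
    # (Specialized to s = 1/2: x^(s-1) * f(x) * 2u = 2(1 - e^{-u^2})/u^2.)
    if u == 0.0:
        return 2.0                        # limiting value as u -> 0
    return 2.0 * (-math.expm1(-u * u)) / (u * u)

# Composite Simpson's rule on [0, U], plus the exact tail
# integral_U^inf 2/u^2 du = 2/U (the e^{-u^2} part is negligible at U = 8).
U, n = 8.0, 4000
h = U / n
total = integrand(0.0) + integrand(U)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(i * h)
lhs = total * h / 3 + 2.0 / U

rhs = math.gamma(s) / (1.0 - s)           # Gamma(s) * phi(-s)
print(lhs, rhs)                           # both ≈ 3.5449 (= 2*sqrt(pi))
```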

Thank you if you do read it; I really appreciate any support, comments, or suggestions. I am trying to get this published, and I would never have been able to do this without this community. It has knocked me on the head so many times that I eventually got the rigor and formality my thoughts have now. I really hope this paper makes up for all my stupidities from when I posted here. This is really a big step lol.
OMG ... I'm reading page 4... and it looks so amazing... Finally I understand why the auxiliary functions... this is really "as beautiful as the way the Gamma function interpolates the factorial" (as you told me)...!!!
I guess the time has come for me to study complex analysis! That is a fantastic paper!

I have an idea on how to extend this to real bases greater than eta.
Define, for integer n, the super root as the inverse of tetration in the base, so that applying it to the height-n tower x^x^...^x returns x.
Then we simply apply your method to factor this in n:

Then we can simply invert in z.
...unfortunately, there does not seem to be a nice recursion relation between super roots that would force the result to be a tetration.
But it seems to converge numerically to the super root for your tetration.
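The integer super roots in fivexthethird's construction can be sketched numerically; the function names and the bisection bracket below are my own choices, not anything from the paper:

```python
# Minimal numerical sketch of integer super roots: srt_n(m) is the
# x > 1 whose height-n power tower x^x^...^x equals m.  Bisection
# works because the tower is increasing in x for x > 1.

def tower(x, n):
    """Evaluate the height-n power tower x^x^...^x (right-associated)."""
    t = x
    for _ in range(n - 1):
        t = x ** t
    return t

def super_root(m, n, lo=1.0, hi=1.8):
    """Solve tower(x, n) = m for x by bisection on [lo, hi].

    The bracket [1.0, 1.8] is an illustrative choice that already
    covers m up to about 10^6 for n = 5."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if tower(mid, n) < m:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = super_root(10.0, 5)
print(x, tower(x, 5))   # tower(x, 5) ≈ 10
```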
Is there any way to deduce from it, given n and x, what m is as a function of n (and x)?
@Marraco: read JmsNxn's paper, page 15, theorem 4.1. It gives a closed form for complex tetration, but only for a small set of bases.
@JmsNxn
I don't yet understand all the analysis behind it, but I'm trying to follow the logical implications of the lemmas. If I get it, it is Ramanujan's master theorem that does all the work (but we have the exp-bound requirement).

The other interesting point is the differential equation that holds for the auxiliary function (it seems crazy). And this is the really interesting trick, imho. But please tell me if I got it right (I started studying calculus just 1 month ago).

We have a power series

and its derivative becomes the following (D distributes over addition, commutes with products, and by applying the power rule we get a "shift" of indices in the series):

Now comes the trick... (am I right?) Define the coefficients in the following form

that becomes the following. Replace this in the previous series, and

So if we define

We could assume we have the following

and that

and we hope that

At this point you set (in other notation, you're using its xi-based superfunction), so you can send differentiation in w to iteration of the transfer function (or application of the right-composition operator).
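If I'm reading the stripped formulas right, the trick being described is the standard exponential-generating-function fact that differentiation shifts the coefficient index: if f(w) = sum_n a(n) w^n / n!, then f'(w) = sum_n a(n+1) w^n / n!. A quick numerical check, with the illustrative (my own) choice a(n) = 2^n:

```python
import math

# If f(w) = sum_n a(n) w^n / n!, differentiation shifts the index:
#   f'(w) = sum_n a(n+1) w^n / n!
# Check numerically with the illustrative choice a(n) = 2^n,
# for which f(w) = e^{2w} and f'(w) = 2 e^{2w}.

def a(n):
    return 2.0 ** n

def series(coeff, w, terms=40):
    """Truncated exponential generating function sum coeff(n) w^n / n!."""
    return sum(coeff(n) * w ** n / math.factorial(n) for n in range(terms))

w = 0.3
shifted = series(lambda n: a(n + 1), w)   # index-shifted series
deriv   = 2.0 * math.exp(2.0 * w)         # exact derivative of e^{2w}
print(shifted, deriv)                     # both ≈ 3.6442
```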

Seeing this, I now understand why you told me, long ago, that this can be used for the fractional ranks... but now you talk only about linear operators, and the recursion (I mean the b-based superfunction operator) and the antirecursion/subfunction operator aren't linear. Also, just before Lemma 3.1, you say that this is not yet proved for operators other than left-composition. Why?

Why can't we just repeat the trick using other sequences, like a sequence of values of your bounded hyperoperators indexed by the rank?
We just need it to be exponentially bounded... or have I missed something important somewhere in your paper?

For example, we could try to put one of these two sequences into it:

Given an invertible function and a in its domain

define the direct antirecursion sequence of and its inverse antirecursion sequence

and

and

Those two sequences of functions give us two sequences of real numbers and for a fixed

By the way, those sequences satisfy these recurrences:

I haven't read the whole paper, but it seems good.
Is it really new? Why didn't Ramanujan see this?

1

I really like the idea for the super root.

I wonder if it agrees with other methods.

2

For a sequence of iterates going to a hyperbolic fixed point,
does this agree with Koenigs?

I think so.

I think proving that in the paper would be impressive.
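For what it's worth, the Koenigs comparison can be tested numerically on a toy map. Everything below (the map f, the multiplier L, the bisection inverse) is my own illustrative choice, not JmsNxn's construction:

```python
import math

# Koenigs' construction of a fractional iterate near a hyperbolic
# fixed point.  f is an illustrative map with fixed point 0 and
# multiplier L = f'(0) = 1/2; the Koenigs function is
#   psi(z) = lim_n f^n(z) / L^n,
# and the half-iterate is h = psi^{-1}( sqrt(L) * psi(z) ).

L = 0.5

def f(x):
    return x / 2 + x * x / 8

def psi(z, n=60):
    """Koenigs function via its defining limit (truncated at n steps)."""
    for _ in range(n):
        z = f(z)
    return z / L ** n

def psi_inv(w, lo=0.0, hi=0.5):
    """Invert psi by bisection; psi is increasing on [0, 0.5]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if psi(mid) < w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def half_iterate(z):
    return psi_inv(math.sqrt(L) * psi(z))

z = 0.1
print(half_iterate(half_iterate(z)), f(z))   # both ≈ 0.05125
```

By construction psi(f(z)) = L psi(z), so composing the half-iterate with itself recovers f exactly; the printout checks that numerically.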

3

I think this can be applied to the unbounded operators with some tricks.

Regards

Tommy1729
Off topic: @Tommy, do you mean non-linear operators? From the wiki I see that boundedness comes with the definition of a norm... what is the relationship between bounded operators and linear ones? The wiki page about operators is a little bit chaotic.
What I meant is that you have tetration!

Not just for the bases between 1 and eta but for all real bases larger than 1.

Not sure if you realise it yet.

Here is a sketchy way to show it :

In short, since you can analytically interpolate x^x^... = m

where ... are integer iterations and m is a real > e ...

You can THUS solve for the x in tet_x(t) = m for a given m > e and t > 0.

( x is the base ).

But this also means that you can solve for t, since you can set up the equation RAM(m,t) = x for any desired x.

When you have this t, you have found tet_base_x(t) = m for a given m.

In other words from the relation tet_x(t) = m you can solve for either x or t.

therefore you can solve sexp_x(t) = m

which is slog_x(m).

Then invert this function and you have sexp for any base > 1.

Since all of this is done analytically, you have found tetration.

And it seems simpler than some other methods, like Kneser or Cauchy.

Hope this is clear enough.
I can explain more if required.
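A toy sketch of the "solve for t" direction, for integer heights only (the analytic fractional case is the whole point of the method; this just counts logarithms, and the function name is my own):

```python
import math

# For integer heights, the number of base-x logarithms needed to bring
# m down to 1 recovers t in tet_x(t) = m.  This only handles the
# integer case; the paper's contribution is the fractional one.

def integer_height(x, m):
    """Height n of the power tower x^x^...^x that equals m."""
    n = 0
    t = m
    # The small tolerance absorbs floating-point drift in log ratios.
    while t > 1.000000001:
        t = math.log(t) / math.log(x)
        n += 1
    return n

print(integer_height(2.0, 65536.0))   # 2^2^2^2 = 65536, so height 4
```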

So JmsNxn finally has his own method, with credit to the brilliant comment of fivexthethird.

(I'm thinking of a variant of this method too.)

I just wonder what this will be called ...

JMS method? JN method? Jms5x3 method? Jms5x31729 method? I already started calling it in my head the "Ramanujan-Lagrange method".
The reason seems clear: Ramanujan's master theorem and Lagrange's inversion theorem.

For those unfamiliar :

http://en.wikipedia.org/wiki/Lagrange_inversion_theorem
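A classic worked instance of Lagrange inversion (illustrative, not taken from the paper): inverting w = z e^z term by term gives the Lambert W series W(w) = sum_{n>=1} (-n)^(n-1) w^n / n!, convergent for |w| < 1/e.

```python
import math

# Lagrange inversion applied to w = z e^z yields the Lambert W series:
#   W(w) = sum_{n>=1} (-n)^(n-1) w^n / n!   for |w| < 1/e.

def lambert_w_series(w, terms=25):
    """Truncated Lagrange-inversion series for the inverse of z e^z."""
    return sum((-n) ** (n - 1) * w ** n / math.factorial(n)
               for n in range(1, terms + 1))

w = 0.1
z = lambert_w_series(w)
print(z, z * math.exp(z))   # z * e^z recovers w = 0.1
```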

regards

tommy1729
It seems that this method is equivalent to the method of Newton series.
To see this, note that

where
So we can rewrite the integral as

Since the power series defines an entire function we can exchange the integral and sum so that we have

where is the falling factorial.
This is just the Newton series of f around 0.
TPID 13 is thus solved, as the function in question satisfies the bounds required for Ramanujan's master theorem to apply.
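fivexthethird's equivalence can be checked numerically on a toy function: build the Newton series from the integer samples alone and evaluate it between the integers. The choice f(x) = 1/(x+1) and the truncation point are mine; this f comfortably satisfies the growth bound needed for Newton-series convergence.

```python
# Newton (forward-difference) series of f around 0:
#   f(z) ≈ sum_{k=0}^{N} C(z, k) * Δ^k f(0)
# built only from the integer samples f(0), f(1), ..., f(N).
# Illustrative slowly-growing choice: f(x) = 1/(x+1).

def f(x):
    return 1.0 / (x + 1.0)

N = 50                      # kept modest: higher-order float differences
samples = [f(n) for n in range(N + 1)]   # get swamped by rounding noise

# Forward differences Δ^k f(0), by repeated first differences.
diffs = []
row = samples[:]
for k in range(N + 1):
    diffs.append(row[0])
    row = [row[i + 1] - row[i] for i in range(len(row) - 1)]

def newton_series(z):
    total = 0.0
    binom = 1.0                           # C(z, 0)
    for k in range(N + 1):
        total += binom * diffs[k]
        binom *= (z - k) / (k + 1)        # C(z, k+1) from C(z, k)
    return total

print(newton_series(0.5), f(0.5))   # both close to 2/3 (convergence is slow)
```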

@tommy: I don't see how inverting around t helps us recover tetration from the interpolated super root... it gets us the slog, yes, but in either case we just need to invert around m to get tetration.
Or am I misinterpreting your post?