Posts: 977
Threads: 114
Joined: Dec 2010
07/23/2021, 11:18 PM
(This post was last modified: 07/25/2021, 05:48 AM by JmsNxn.)
This is why, again, I think the beta methods all produce the same tetration. Choose a strip . Suppose we have a family of functions , such that for all there exists an such that:
And is holomorphic for ; such that we can topologically assign,
I believe that all approaches which start from this hypothesis produce . Which means,
Your function is one such .
Mind you, this is just a conjecture. But it's been numerically evaluated fairly well.
EDIT:
I also forgot to mention that as , as , we still get . Which is the statement that on the boundaries it approaches infinity; and when we do the pull-back this will construct a tetration with . I mean to say that all such tetration functions should be identical. Kneser satisfies all the above conditions except this one I just added. Forgot to mention: non-normality at .
Posts: 1,700
Threads: 374
Joined: Feb 2009
(07/23/2021, 04:13 PM)JmsNxn Wrote:
(07/22/2021, 12:13 PM)tommy1729 Wrote:
(07/22/2021, 02:21 AM)JmsNxn Wrote: You really don't have to go too much into depth in choosing your branch of logarithm. The principal branch is good enough if you add a function.
If you write,
And,
And construct a sequence where,
where is very small for large . So you are effectively calculating a for small, as opposed to a where is large. The branching won't be an issue at all.
Where then,
Remember Tommy that,
Even though there are dips to zero this is still the asymptotic behaviour.
I take those dips very seriously.
If s is close to the real line that might work.
But for Im(s) substantially high things are more complicated I think.
For real s we can take the principal branch , in fact we must.
And then of course, by analytic continuation, we do not need to bother with the other branches.
This indeed suggests we might not need to look at other branches, but for a proof I was pessimistic.
Let me put it like this:
suppose we have a tetration tet(s).
ln(ln(tet(s+2))) = tet(s) ONLY works with the correct branches.
For example, tet(s) = 1 + 400 pi i for some s.
regards
tommy1729
Absolutely Tommy!
But the branches are chosen by,
So whichever branch satisfies this equation is the correct branch.
So, when you make a sequence of error functions ; given by,
you are choosing these logarithms already; because they're the only ones which satisfy the equation,
If you were to write it the naive way,
Then, yes, choosing the branch is necessary. But if you write it the former way, there's no such problem.
As to the dips to zero: they produce the most trouble when trying to numerically evaluate. But in the math they don't pose much of a problem, because we know that .
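To make the branch issue in this exchange concrete, here is a minimal sketch (plain Python, standard library only) of Tommy's example value tet(s) = 1 + 400 pi i: applying the principal logarithm twice to exp(exp(s)) silently discards the branch information.

```python
import cmath

# Tommy's example value: tet(s) = 1 + 400*pi*i for some s.
s = 1 + 400 * cmath.pi * 1j

# Push forward two steps, then pull back with the PRINCIPAL log twice.
w = cmath.exp(cmath.exp(s))
pulled_back = cmath.log(cmath.log(w))

# exp(s) = e * exp(400*pi*i) = e, so both principal logs land on the
# real line: the 400*pi*i of branch information is lost.
print(pulled_back)   # ~ (1+0j), nowhere near s
```

This is exactly why ln(ln(tet(s+2))) = tet(s) only holds with the correct branches, and why James's formulation, which only ever exponentiates a small correction, sidesteps the problem.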
I want/wanted to prove boundedness without any assumptions such as analyticity, continuity, smoothness, etc.
IN A NUMERICAL WAY !!
If we start from the real line and assume analyticity, then the logs do not need branches.
The definition log(f(z)) = integral of f'(z)/f(z) for any analytic f(z) (nowhere equal to 0) will do.
Or the Taylor series does the work too.
The Taylor series of exp(exp(ln(ln(x)))) or ln(ln(exp(exp(x)))) is simply x, without worrying about taking branches.
But clearly ln(exp(x)) does not equal x if we take the principal value of ln ... since exp is periodic.
(a function of a periodic function is normally periodic)
So the Taylor series "automatically" picks the correct branches.
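As a small illustration of the integral definition above (plain Python; the straight-line path and sample count are arbitrary illustrative choices), integrating f'(z)/f(z) along a path continues the logarithm without ever consulting a branch cut:

```python
import cmath

def log_along_path(f, df, path, n=4000):
    """Continue log(f) along a sampled path by integrating f'(z)/f(z) dz
    (trapezoid rule), starting from the principal log at the path's start."""
    zs = [path(k / n) for k in range(n + 1)]
    total = cmath.log(f(zs[0]))
    for a, b in zip(zs, zs[1:]):
        total += 0.5 * (df(a) / f(a) + df(b) / f(b)) * (b - a)
    return total

# For f = exp we have f'/f == 1, so the continued log of exp(z) along a
# straight path from 0 to z1 is exactly z1, even when Im(z1) > pi,
# where the principal branch folds the imaginary part back into (-pi, pi].
z1 = 1 + 7j
continued = log_along_path(cmath.exp, cmath.exp, lambda t: t * z1)
principal = cmath.log(cmath.exp(z1))
print(continued)   # ~ (1+7j)
print(principal)   # ~ (1 + (7 - 2*pi)j): the branch information is folded away
```

This is the same mechanism by which the Taylor series "automatically" picks the correct branches: both track the function continuously instead of snapping to the principal value.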
But we cannot assume analyticity without proving it first !! A critic would call that circular logic !
Also, I do not want to use statistical or intuitive arguments, because that is not a formal proof.
So I considered a more or less worst-case scenario from the perspective of numerical math.
(ignoring decimal precision, of course)
That worst-case numerical method shows a clear boundary.
That boundary can then be used to construct stronger proofs, such as convergence.
(fixed point methods or similar methods as the bounds)
It is well known that if a sequence of analytic functions f1, f2, f3, ... converges locally uniformly, then the limit is also analytic !!
And thus we would have a formal proof that the Gaussian method is analytic near the positive real line.
I know you understand the difference between analytic and numerical very well.
But just to be clear (to all), I felt the need to reply with this.
I know you are already convinced it is analytic, but we need to get formal.
regards
tommy1729
Posts: 1,700
Threads: 374
Joined: Feb 2009
I would like to clarify a bit why there is at most one branch jump.
Apart from log(1/x) = -log(x) and other sign arguments or "small number" arguments (which only give at most a 2 pi branch jump or 2 pi abs change anyway, as desired ! *which is essentially the same !!*), there is also this:
Consider a sequence of continuous (on the complex plane, thus 2D continuous) functions f1, f2, ...; then jumping 2 branches would be impossible !
If we would jump 2* branches at a point s (*with respect to its neighbourhood), then we must have a "created" discontinuous point.
It would mean manually creating a discontinuous jump of 2 pi i (at least in the subcomputation).
This is because this point s is no longer connected by an angle (at most 2 pi) to its neighbourhood.
This is similar to ln(exp(s)): you jump at most 1 branch for the ln with respect to the neighbourhood; otherwise it is not a smooth path on the Riemann surface and hence not analytic.
In other words, you cannot naturally get a sudden jump of 2 branches, unless you are not trying to interpret it analytically but rather deliberately trying to suggest it is not analytic.
Since if for FINITE n : f_n(s) is analytic, so is f_(n+1)(s+1), since it is just a log of an analytic function that is nowhere 0.
We cannot use induction here to imply it for n = oo, but we can say that for n+1 we also get a continuous function(*).
(A minor detail is that I LEFT OUT THE CONDITION THAT THE DERIVATIVES DO NOT GO TO oo, but that is easy to overcome ! (in particular for finite n and n+1))
Notice the log is a very smooth function, not having many bumps or anything.
regards
tommy1729
Posts: 1,700
Threads: 374
Joined: Feb 2009
An interesting idea is this :
Are all these " beta methods " equivalent ? as James asks.
And
Is there a way to accelerate the convergence of the iterations?
Series acceleration is well known, but iteration acceleration not so much.
Those are 2 nice questions, but what is the interesting idea, you might ask??
Well that those 2 questions are related !!
let
f_1(s) = exp( t(1·s) * f_1(s-1) )
f_2(s) = exp( t(2·s) * f_2(s-1) )
and the respective analytic tetrations from them: F1(s) and F2(s).
Remember that tetration(s + theta(s)) is also a tetration, where theta(s) is a suitable analytic real 1-periodic function.
so F2(s) = F1(s + theta1(s)), F1(s) = F2(s + theta2(s)).
BUT THIS ALSO IMPLIES THAT
f*_2(s) = exp( t( 2·(s + theta2(s)) ) * f*_2(s-1) ) ... RESULTING IN: F2_*(s) is actually equal to
F2_*(s) = F2(s + theta2(s)) = F1(s)
HENCE USING t(1·s) = t(s) is the same as using t( 2s + 2·theta2(s) ) !!
So this relates to the main questions posed :
when are 2 solutions equal ?
How to accelerate convergence?
As for the acceleration, of course the complexity and difficulty of theta and of computing theta are key.
But numerically it is expected that using t( 2s + 2·theta2(s) ) converges faster. (because using t(2s) does converge faster than using t(s))

Tom(s,v) = exp( t(v·s) * Tom(s-1,v) )
resulting in (in the limit)
tet(s+1,v) = exp( tet(s,v) ).
I like that notation.
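As a rough numerical sketch of the recursion f_v(s) = exp( t(v·s) · f_v(s-1) ) above (plain Python; the logistic multiplier t(s) = 1/(1+exp(-s)) is my own illustrative stand-in, not necessarily the exact multiplier intended here):

```python
import math

# Hypothetical multiplier: logistic, -> 0 far to the left, -> 1 to the right.
def t(s):
    return 1.0 / (1.0 + math.exp(-s))

def tom(s, v, depth=50):
    """Approximate Tom(s, v) = exp(t(v*s) * Tom(s-1, v)) by recursing to
    the left, where t(v*s) ~ 0 and therefore Tom ~ exp(0) = 1."""
    if depth == 0:
        return 1.0
    return math.exp(t(v * s) * tom(s - 1, v, depth - 1))

# The defining relation holds by construction:
#   tom(s+1, v) = exp(t(v*(s+1)) * tom(s, v))
print(tom(0.5, 1.0), tom(0.5, 2.0))
```

In the region where t(v·s) is essentially 1 the recursion becomes Tom(s+1) = exp(Tom(s)), which is where the functional equation tet(s+1, v) = exp(tet(s, v)) comes from; larger v pushes the multiplier to 1 faster, matching the acceleration idea above.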

regards
tommy1729
Posts: 977
Threads: 114
Joined: Dec 2010
I understand what you are saying Tommy, but it seems like extra work for no reason. The whole point of having an asymptotic solution is that we can write the tetration as,
Where . In your case, the error will look like at least. It's much easier to just find this small number than to worry about taking certain branches of logs on large numbers. It's the difference between calculating for large and for small . The latter way is much simpler.
I can prove the Tommy method converges on ; it's just a slight adaptation of showing that the beta method converges. I'll do a quick write-up; a lot of it is line for line from my paper. I'm just concerned with whether it's the beta method or not. If it is, it's a much more efficient error term than the beta method's, and a huge quality-of-life upgrade.
Regards,
Posts: 1,700
Threads: 374
Joined: Feb 2009
Posts: 977
Threads: 114
Joined: Dec 2010
07/26/2021, 10:24 PM
(This post was last modified: 07/26/2021, 11:28 PM by JmsNxn.)
(07/25/2021, 11:58 PM)tommy1729 Wrote: An interesting idea is this :
Are all these " beta methods " equivalent ? as James asks.
And
Is there a way to accelerate the convergeance of the iterations ?
Series acceleration is well known but iteration acceleration not so much.
Those are 2 nice questions , but what is the interesting idea you might ask ??
Well that those 2 questions are related !!
let
f_1(s) = exp( t(1 s) * f(s1))
f_2(s) = exp( t(2 s) * f(s1))
and the resp analytic tetrations from them : F1(s) and F2(s).
Remember that tetration(s + theta(s)) is also tetration where theta(s) is a suitable analytic real 1periodic function.
so F2(s) = F1(s + theta1(s)) , F1(s) = F2(s + theta2(s)).
BUT THIS ALSO IMPLIES THAT
f*_2(s) = exp( t( 2*(s + theta2(s)) ) * f(s1)) .. RESULTING IN F2_*(s) is actually equal to
F2_*(s) = F2(s + theta(s)) = F1(s)
HENCE USING t(1 s) = t(s) is the same as using t( 2 s + 2 theta(s)) !!
So this relates to the main questions posed :
when are 2 solutions equal ?
How to accelerate convergeance ?
As for the acceleration , ofcourse the complexity and difficulty of theta and computing theta are key.
But numerically it is expected using t( 2 s + 2 theta(s) ) converges faster. ( because using t(2s) does converge faster than using t(s) ) .

Tom(s,v) = exp( t(v * s) * exp(Tom(s1,v)) )
resulting in
tet(s+1,v) = exp( tet(s,v) ).
I Like that notation.

regards
tommy1729
Oh you must've posted this right as I posted mine, I missed it.
Yes, I agree with you entirely here. I think it's similar to what Kouznetsov did when he constructed his general form of the superfunction equation. Where Kouznetsov chose an asymptotic function,
Where was just a degree of "how well we are approximating." But it had no effect on the final tetration; it still created Kneser.
I think we are in a similar situation here. Where all these asymptotic tetrations are going to be . And they are characterized by the fact . I can't think of an obvious uniqueness condition though. Kneser has the benefit of being normal at infinity; non-normality tends to mean there's lots of room for errors and slight adjustments. Plus, we don't have the added benefit of a unique Fourier theta mapping, where we can just call on the uniqueness of Fourier coefficients (like what Paulsen and Cowgill did).
My only thought on how we might do this hit a dead end as I tried to write it up. It doesn't feel natural. But if we talk about,
Which is the unique tetration with period and holomorphy on an almost cylinder (which just means ). And then using a different kind of mapping we can transform between tetrations by creating a periodic function ; then,
Is a tetration function; and we'd be able to find a for Kneser, or any tetration really. But I can't think of a uniqueness condition that would guarantee there exists a unique such that,
I just made it to the point where there exists which are holomorphic in the upper/lower half planes (resp.) in which,
So that we have a Fourier series,
I couldn't think of any obvious arguments that the sequence is unique though... So I gave up on that paper and focused on better investigating the programming. And no matter how you change the initial asymptotic tetration function, they all seem to give . So I'm at least reinforcing the numerical evidence, lol.
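The theta-mapping mechanism itself is easy to demonstrate numerically. Here is a minimal sketch (plain Python; the crude base function and the particular theta are my own illustrative choices, not Kneser or anyone's actual tetration): composing any solution of the tetration equation with s + theta(s), for a real 1-periodic theta, yields another solution.

```python
import math

# Crude stand-in "tetration": linear on (-1, 0], extended to the right by
# the functional equation tet(s+1) = exp(tet(s)).  This toy is only C^0;
# it is NOT Kneser, the beta method, or anyone's actual tetration.
def tet(s):
    n = 0
    while s > 0:
        s -= 1
        n += 1
    val = s + 1          # tet(s) = s + 1 on (-1, 0]
    for _ in range(n):
        val = math.exp(val)
    return val

# Any real 1-periodic theta turns one solution into another:
# T(s) = tet(s + theta(s)) again satisfies T(s+1) = exp(T(s)).
theta = lambda s: 0.05 * math.sin(2 * math.pi * s)
T = lambda s: tet(s + theta(s))

s = 0.3
print(T(s + 1), math.exp(T(s)))   # agree up to rounding
```

This is exactly why a uniqueness condition is needed: the functional equation alone cannot distinguish tet from T.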
Regards, James.
Posts: 1,700
Threads: 374
Joined: Feb 2009
Again I want to give some more details about my thoughts.
considering 2 things.
First a note on functional inverse.
I estimated the bounds of ln ln ... a^b^...(A)
but let B = ln ln ... a^b^...(A)
then A = ln_ln(...)(...ln_ln(b)(ln_ln(a)(exp(exp...exp(B))))))
notice how this gives similar bounds by the same method !!
so the function is bounded and its inverse is also bounded !!
That is a powerful thing !!
Also, this is a nice alternative way of looking at it, which might convince people who were perhaps confused or were arguing "chaos" as an objection, for one.
***
The second thing is this thought experiment:
Let abs(s) < sqrt(2).
Then how does ln^[n](exp^[n](s) + C) behave?
It turns out that for real s this converges nicely, and for any real C > 0 it converges fast.
However, when s or C are complex, things are slightly different!
This is hard to test numerically because of overflow, or even with calculus.
But it is clear that when C goes to zero (as a function of n) FAST ENOUGH, then it converges to s.
so how fast is fast ??
Well for abs(s) < sqrt(2) and sufficiently large n (usually 2, 3 or 4 will already do):
ln^[n](exp^[n](s) + exp(-n^2)) = s + o(exp(-n^2)) + o(exp(-n^2)) i
(if we take the correct branches).
Now this looks familiar, no?
This strengthens my previous ideas and bound arguments, of course.
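The thought experiment is easy to check at small depth (plain Python; n = 2 is used because exp^[3] already overflows double precision for most s, which is exactly the overflow problem mentioned above):

```python
import math

def exp_iter(s, n):
    # exp applied n times
    for _ in range(n):
        s = math.exp(s)
    return s

def log_iter(s, n):
    # ln applied n times (real positive values here, so no branch issue)
    for _ in range(n):
        s = math.log(s)
    return s

n = 2                     # exp^[3] already overflows doubles for most s
s = 1.0                   # abs(s) < sqrt(2)
C = math.exp(-n ** 2)     # perturbation decaying like exp(-n^2)
recovered = log_iter(exp_iter(s, n) + C, n)
print(recovered - s)      # small positive error, of order exp(-n^2)
```

The error is roughly C divided by the derivative of exp^[n] at s, so a perturbation decaying like exp(-n^2) is indeed fast enough for the pull-back to converge.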
regards
tommy1729
Posts: 977
Threads: 114
Joined: Dec 2010
07/28/2021, 12:24 AM
(This post was last modified: 07/28/2021, 02:57 AM by JmsNxn.)
(07/28/2021, 12:02 AM)tommy1729 Wrote: Again I want to give some more details about my thoughts.
considering 2 things.
First a note on functional inverse.
I estimated the bounds of ln ln ... a^b^...(A)
but let B = ln ln ... a^b^...(A)
then A = ln_ln(...)(...ln_ln(b)(ln_ln(a)(exp(exp...exp(B))))))
notice how this gives similar bounds by the same method !!
so the function is bounded and its inverse is also bounded !!
That is a powerful thing !!
Also, this is a nice alternative way of looking at it, which might convince people who were perhaps confused or were arguing "chaos" as an objection, for one.
***
The second thing is this thought experiment:
Let abs(s) < sqrt(2).
Then how does ln^[n](exp^[n](s) + C) behave?
It turns out that for real s this converges nicely, and for any real C > 0 it converges fast.
However, when s or C are complex, things are slightly different!
This is hard to test numerically because of overflow, or even with calculus.
But it is clear that when C goes to zero (as a function of n) FAST ENOUGH, then it converges to s.
so how fast is fast ??
Well for abs(s) < sqrt(2) and sufficiently large n (usually 2, 3 or 4 will already do):
ln^[n](exp^[n](s) + exp(-n^2)) = s + o(exp(-n^2)) + o(exp(-n^2)) i
(if we take the correct branches).
Now this looks familiar, no?
This strengthens my previous ideas and bound arguments, of course.
regards
tommy1729
Yes Tommy;
and for large we'll get that
And so we should get that,
Or something that looks like this; these are just rough estimates. Still haven't found a good way to compare them.
I'm glad you're on the same page as me. I do feel your approximation is definitely better; but I'm of the view that this is all just the beta method. The trouble with your method though is that it doesn't create the periodic tetrations, which I think are very important and can be used to classify tetrations.
I love the idea of using a Gaussian as opposed to a logistic approach.
Regards, James
Posts: 1,700
Threads: 374
Joined: Feb 2009
I want to point out that the maximum modulus principle, Jensen's formula and Nevanlinna theory are strongly related and important here.
Why? Because by those we can bound the value of log(z) by the value of the log in the neighbourhood: log(z+y).
Jensen also confirms that the multiplicity does not go crazy locally in the limit process.
This all helps to prove boundedness, convergence and analyticity.
Remember that log(0) and 0 do not occur when taking log^[n], hence making these theorems valid !!
And therefore helping in both the inner and outer proof methods.
(by induction, assuming it starts as true, which we can (very likely) check numerically)
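As a small numerical sanity check of Jensen's formula (plain Python; the function f(z) = z - 1/2 and the radius r = 1 are arbitrary illustrative choices, not tied to the tetration iterates):

```python
import cmath, math

# Jensen's formula for f with zeros a_k in |z| < r and f(0) != 0:
#   (1/2pi) * integral_0^{2pi} log|f(r e^{it})| dt
#     = log|f(0)| + sum_k log(r / |a_k|)
# Here f(z) = z - 1/2: one zero at 1/2, radius r = 1.
f = lambda z: z - 0.5

N = 20000   # rectangle rule: spectrally accurate for smooth periodic integrands
lhs = sum(math.log(abs(f(cmath.exp(2j * math.pi * k / N))))
          for k in range(N)) / N
rhs = math.log(abs(f(0))) + math.log(1 / 0.5)
print(lhs, rhs)   # both ~ 0: the zero exactly offsets log|f(0)|
```

This is the sense in which the average of log|f| on a circle controls, and is controlled by, the zeros inside: since 0 never occurs under log^[n], the formula applies at every stage of the iteration.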
regards
tommy1729
Tom Marcel Raes
