Tommy's Gaussian method.
#11
This is why, again, I think the beta methods all produce the same tetration. Choose a strip \( \mathcal{S}_{a,b} = \{ s \in \mathbb{C} \mid a \le \Im(s) \le b\} \). Suppose we have a family of functions \( f \in \mathcal{B} \), such that for all \( [a,b] \) there exists an \( X \) such that:

\(
\log f(s+1) - f(s) \to 0 \,\,\text{as}\,\,\Re(s) \to \infty\\
\)

And \( f(s) \) is holomorphic on \( \mathcal{S}_{a,b} \cap \{\Re(s) > X\} \); so that we can topologically assign,

\(
\log f(\infty) - f(\infty) = 0\\
\)

I believe that all approaches which start from this hypothesis produce \( \text{tet}_\beta \). Which means,

\(
\text{tet}_\beta(s + x_f) = \lim_{n\to\infty} \log^{\circ n} f(s+n)\\
\)

Your function \( \text{Tom}_A(s) = \log \text{Tom}(s+1) = A(s) \text{Tom}(s) \) is such an \( f \).


Mind you, this is just a conjecture. But it has held up fairly well numerically.
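Just to make the pull-back concrete, here is a rough Python/mpmath sketch. I'm using \( A(s) = (1+\mathrm{erf}(s))/2 \) purely as a stand-in Gaussian-style multiplier (the exact \( A \) of the Gaussian method may differ), and I'm not solving for \( x_f \); this only checks that the iterated logs stabilize and satisfy the tetration equation on the real line, where the principal branch is the correct one.

```python
# Sketch only: pull-back  tet(s + x_f) ~ log^{on}( Tom_A(s + n) )  for an asymptotic Tom_A.
# Assumption: A(s) = (1 + erf(s))/2 is a stand-in multiplier, not necessarily Tommy's exact A.
from mpmath import mp, mpf, erf, exp, log

mp.dps = 30

def A(s):
    return (1 + erf(s)) / 2

def Tom(s, depth=60):
    # Tom(w) = exp(A(w-1) * Tom(w-1)), iterated upward from Tom(s - depth) ~ 0
    z = mpf(0)
    for j in range(depth, 0, -1):
        z = exp(A(s - j) * z)
    return z

def TomA(s):
    return A(s) * Tom(s)          # so that log Tom(s+1) = Tom_A(s) exactly

def pullback(s, n):
    z = TomA(s + n)
    for _ in range(n):
        z = log(z)                # principal logs; fine for real positive s
    return z

s = mpf('0.25')
for n in range(1, 5):
    print(n, pullback(s, n))      # values stabilize as n grows

# tetration equation check at depth 3: exp(F(s)) should be close to F(s+1)
print('residual:', abs(exp(pullback(s, 3)) - pullback(s + 1, 3)))
```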


EDIT:

I also forgot to mention that, as \( a,b \to \infty \), we still get \( f \to \infty \) on the boundary lines \( \Im(s) = a \) and \( \Im(s) = b \) as \( \Re(s) \to \infty \). This is the statement that the function approaches infinity on the boundaries; and when we do the pull-back this will construct a tetration with \( \lim_{\Im(s) \to \infty} \text{tet}(s) = \infty \). I mean to say that all such tetration functions should be identical. Kneser satisfies all the above conditions except this last one I just added. I also forgot to mention: non-normality at \( \Im(s) = \pm \infty \).
#12
(07/23/2021, 04:13 PM)JmsNxn Wrote:
(07/22/2021, 12:13 PM)tommy1729 Wrote:
(07/22/2021, 02:21 AM)JmsNxn Wrote: You really don't have to go too much into depth in choosing your branch of logarithm. The principal branch is good enough if you add a \( \rho \) function.

If you write,

\(
\text{Tom}(s) = \Omega_{j=1}^\infty e^{A(s-j)z}\,\bullet z\\
\)

And,

\(
\text{Tom}_A(s) = A(s)\text{Tom}(s) = \log \text{Tom}(s+1)\\
\)

And construct a sequence \( \rho^n(s) \) where,

\(
\rho^{n+1}(s) = \log(1+\frac{\rho^n(s+1)}{\text{Tom}_A(s+1)}) + \log A(s+1)\\
\)

Here \( \frac{\rho^n(s+1)}{\text{Tom}_A(s+1)} \) is very small for large \( \Re(s) \). So you are effectively calculating \( \log(1+\Delta) \) for \( \Delta \) small, as opposed to \( \log(X) \) where \( X \) is large. The branching won't be an issue at all.

Where then, with \( \rho(s) = \lim_{n\to\infty} \rho^n(s) \),

\(
\text{tet}_{\text{Tom}}(s + x_0) = \text{Tom}_A(s) + \rho(s)\\
\)


Remember Tommy that,

\(
\lim_{\Re(s) \to \infty} \text{Tom}_A(s) = \infty\\
\)

Even though there are dips to zero this is still the asymptotic behaviour.

I take those dips very seriously.

If s is close to the real line that might work.

But for Im(s) substantially high things are more complicated I think.

For real s we can take the principal branch; in fact we must.
And then of course by analytic continuation we do not need to bother with the other branches.

This indeed suggests we might not need to look at other branches, but for a proof I was pessimistic.

Let me put it like this:

suppose we have a tetration tet(s).

ln(ln(tet(s+2))) = tet(s) ONLY works with the correct branches.

Example: tet(s) = 1 + 400 pi i for some s.

regards

tommy1729

Absolutely Tommy!

But the branches are chosen by,

\(
\log \beta (s+1) = \beta(s) + \text{err}(s)\\
\)

So whichever branch satisfies this equation is the correct branch.

So, when you make a sequence of error functions \( \tau^n(s) \) (with limit \( \lim_{n\to\infty} \tau^n(s) \)), given by,

\(
\tau^{n+1}(s) = \log(1+\frac{\tau^n(s+1)}{\beta(s+1)}) + \text{err}(s)\\
\)

you are choosing these logarithms already; because they're the only ones which satisfy the equation,

\(
\lim_{\Re(s) \to \infty} \tau^n(s) = 0\\
\)


If you were to write it the naive way,

\(
\lim_{n\to\infty} \log^{\circ n} \beta(s+n)\\
\)

Then, yes, choosing the branch is necessary. But if you write it the former way, there's no such problem.


As to the dips to zero: they produce the most trouble when trying to numerically evaluate. But in the math they don't pose that much of a problem, because we know that \( \lim_{\Re(s) \to \infty} \beta(s) = \infty \).

I want/wanted to prove boundedness without any assumptions such as analytic, continuous, smooth, etc.

IN A NUMERICAL WAY!!

If we start from the real line and assume analytic, then the logs do not need branches.

The definition log(f(z)) = integral of f'(z)/f(z), for any analytic f(z) that is nowhere 0, will do.
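In formula form (just restating the definition being referred to), for analytic \( f \) with no zeros along the path of integration,

\(
\log f(z) = \log f(z_0) + \int_{z_0}^{z} \frac{f'(w)}{f(w)}\,dw\\
\)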

Or the Taylor series does the work too.

The Taylor series of exp(exp(ln(ln(x)))) or ln(ln(exp(exp(x)))) is simply x, without worrying about taking branches.
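A quick sanity check of that (numerically, with mpmath's taylor helper; real base points only, so the principal branch is automatically the right one):

```python
# Numerical Taylor coefficients of the composed functions; expect [x0, 1, 0, 0, ...].
from mpmath import mp, mpf, exp, log, taylor

mp.dps = 30

f = lambda x: log(log(exp(exp(x))))   # fine near x = 0, everything real and positive
g = lambda x: exp(exp(log(log(x))))   # needs x > 1 so that log(log(x)) is defined

print(taylor(f, mpf(0), 4))   # ~ [0, 1, 0, 0, 0]
print(taylor(g, mpf(2), 4))   # ~ [2, 1, 0, 0, 0]
```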

But clearly ln(exp(x)) does not equal x if we take the principal value of ln, since exp is periodic
(a function of a periodic function is normally periodic).

So the Taylor series "automatically" picks the correct branches.

But we cannot assume analyticity without proving it first!! A critic would call that circular logic!

Also I do not want to use statistical or intuitive arguments, because that is not a formal proof.

So I considered a more or less worst-case scenario from the perspective of numerical math
(ignoring decimal precision of course).

That worst-case numerical method gives a clear bound.

That bound can then be used to construct stronger proofs, such as convergence
(fixed-point methods or similar methods using the bounds).

It is well known that if a sequence of analytic functions f1, f2, f3, ... converges (uniformly on compact sets), then the limit is also analytic!!

And thus we would have a formal proof that the Gaussian method is analytic near the positive real line.

I know you understand the difference between analytic and numerical very well.
But just to be clear (to all), I felt the need to say this.

I know you are already convinced it is analytic, but we need to get formal.



regards

tommy1729
#13
I would like to clarify a bit why there is at most one branch jump.

Apart from log(1/x) = -log(x) and other sign arguments or "small number" arguments (which only give at most a 2 pi branch jump or a 2 pi change in absolute value anyway, as desired!, which is essentially the same!!), there is also this:

Consider a sequence of functions f1, f2, ... that are continuous on the complex plane (thus 2D continuous); then jumping 2 branches would be impossible!

If we were to jump 2 branches at a point s (with respect to its neighbourhood), then we must have a "created" discontinuous point.

It would mean manually creating a discontinuous jump of (at least in the subcomputation) 2 pi i.

This is because this point s is no longer connected by an angle (at most 2 pi) to its neighbourhood.

This is similar to ln(exp(s)): you jump at most 1 branch for the ln with respect to the neighbourhood; otherwise it is not a smooth path on the Riemann surface and hence not analytic.

In other words you cannot naturally get a sudden jump of 2 branches, and you are then not "trying to interpret it analytically, rather deliberately trying to suggest it is not analytic".

Since if for FINITE n, f_n(s) is analytic, so is f_(n+1)(s), since it is just a log of an analytic function that is not 0 anywhere.
We cannot use induction here to imply it for n = oo, but we can say that for n+1 we also get a continuous function(*).

(A minor detail is that I LEFT OUT THE CONDITION THAT THE DERIVATIVES ARE NOT GOING TO oo, but that is easy to overcome! In particular for finite n and n+1.)

Notice the log is a very smooth function, not having many bumps or anything.


regards

tommy1729
#14
An interesting idea is this:

Are all these "beta methods" equivalent, as James asks?

And

Is there a way to accelerate the convergence of the iterations?

Series acceleration is well known, but iteration acceleration not so much.

Those are 2 nice questions, but what is the interesting idea, you might ask??

Well, that those 2 questions are related!!

Let

f_1(s) = exp( t(1*s) * f_1(s-1) )

f_2(s) = exp( t(2*s) * f_2(s-1) )

and the respective analytic tetrations from them: F1(s) and F2(s).

Remember that tetration(s + theta(s)) is also a tetration, where theta(s) is a suitable analytic real 1-periodic function.

So F2(s) = F1(s + theta1(s)), F1(s) = F2(s + theta2(s)).

BUT THIS ALSO IMPLIES THAT

f*_2(s) = exp( t( 2*(s + theta2(s)) ) * f*_2(s-1) ), resulting in a tetration F*_2(s) which is actually equal to

F*_2(s) = F2(s + theta2(s)) = F1(s).

HENCE USING t(1*s) = t(s) is the same as using t( 2*s + 2*theta2(s) )!!

So this relates to the main questions posed:

When are 2 solutions equal?

How to accelerate convergence?

As for the acceleration, of course the complexity and difficulty of theta and of computing theta are key.

But numerically it is expected that using t( 2*s + 2*theta(s) ) converges faster (because using t(2s) does converge faster than using t(s)).
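As a tiny numerical illustration of that last point (assuming, purely as a stand-in, a Gaussian-style transition t(s) = (1 + erf(s))/2; the exact t of the Gaussian method may differ), the tail 1 - t(2s) dies off far faster than 1 - t(s), which is the heuristic reason the t(2s) iteration should need fewer steps:

```python
# Stand-in comparison: how fast t(s) and t(2s) approach 1, with t(s) = (1 + erf(s))/2.
from math import erfc

def tail(s):
    return erfc(s) / 2   # equals 1 - t(s) for t(s) = (1 + erf(s))/2

for s in (1, 2, 3, 4):
    print(s, tail(s), tail(2 * s))
```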

---

Tom(s,v) = exp( t(v*s) * exp(Tom(s-1,v)) )

resulting in 

tet(s+1,v) = exp( tet(s,v) ).

I Like that notation.

---

regards

tommy1729
#15
I understand what you are saying Tommy, but it seems like extra work for no reason. The whole point of having an asymptotic solution is that we can write the tetration as,

\(
\text{tet}_{\text{Tom}}(s+x_T) = \text{Tom}_A(s) + \tau(s)\\
\)

Where \( \lim_{\Re(s) \to \infty} \tau(s) = 0 \). In your case, the error will look like \( \mathcal{O}(e^{-s^2/2}) \) at least. It's much easier to just find this small number than to worry about taking certain branches of logs on large numbers. It's the difference between calculating \( \log(X) \) for large \( X \) and \( \log(1+\Delta) \) for small \( \Delta \). The latter way is much simpler.
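For what it's worth, here is a rough Python/mpmath sketch of that \( \log(1+\Delta) \) recursion for \( \beta \) with \( \lambda = 1 \). I'm assuming the normalization \( \beta(s+1) = e^{\beta(s)}/(1+e^{-(s+1)}) \), so that \( \text{err}(s) = -\log(1+e^{-(s+1)}) \); conventions for \( \beta \) vary, so treat the constants as illustrative only.

```python
# Sketch only: tet_beta(s + x0) ~ beta(s) + tau(s), with tau built from log(1 + Delta) terms.
from mpmath import mp, mpf, exp, log

mp.dps = 30

def beta(s, depth=80):
    # assumed normalization: beta(w) = exp(beta(w-1)) / (1 + exp(-w)), from beta(s-depth) ~ 0
    z = mpf(0)
    for j in range(depth, 0, -1):
        z = exp(z) / (1 + exp(-(s - j + 1)))
    return z

def err(s):
    # err(s) = log beta(s+1) - beta(s) = -log(1 + exp(-(s+1))) under the normalization above
    return -log(1 + exp(-(s + 1)))

def tau(s, depth):
    # tau^1(s) = err(s);  tau^{n+1}(s) = log(1 + tau^n(s+1)/beta(s+1)) + err(s)
    if depth <= 1:
        return err(s)
    return log(1 + tau(s + 1, depth - 1) / beta(s + 1)) + err(s)

def F(s, depth=4):
    return beta(s) + tau(s, depth)

s = mpf('0.5')
for d in range(1, 5):
    print(d, F(s, d))                            # converges very quickly in d
print('residual:', abs(exp(F(s)) - F(s + 1)))    # tetration equation check
```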

I can prove the Tommy method converges on \( \mathbb{C} \setminus (-\infty,-2] \); it's just a slight adaptation of showing that the beta method converges. I'll do a quick write-up; a lot of it is line for line from my paper. I'm just concerned with whether it's the beta method or not. If it is, it's a much more efficient error than the beta method--and a huge quality of life upgrade.

Regards,
#16
(07/25/2021, 11:59 PM)JmsNxn Wrote: I understand what you are saying Tommy, but it seems like extra work for no reason. [...]

Check out my other answer too :)

regards

tommy1729
#17
(07/25/2021, 11:58 PM)tommy1729 Wrote: An interesting idea is this: Are all these "beta methods" equivalent, as James asks? And is there a way to accelerate the convergence of the iterations? [...]

Oh you must've posted this right as I posted mine, I missed it.

Yes, I agree with you entirely here. I think it's similar to what Kouznetsov did when he constructed his general form of the superfunction equation. There, Kouznetsov chose an asymptotic function,

\(
f_M(z) = L + \sum_{n=1}^M a_n e^{nLz}\\
\text{tet}_{K}(z) = \lim_{n\to\infty} \exp^{\circ n} f_M(z-n)\\
\)

Where \( M \) was just a degree of "how well we are approximating." But, it had no effect on the final tetration--it still created Kneser.
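A minimal numerical sketch of that construction with \( M = 1 \) and \( a_1 = 1 \) (a normalization choice on my part): take \( L \) to be a complex fixed point of exp, set \( f_1(z) = L + e^{Lz} \), and push forward with exp. This only checks that the exp-pullback stabilizes and satisfies the superfunction equation (it is the regular superfunction at \( L \)); how it relates to Kneser is exactly the theta-mapping question under discussion.

```python
# Sketch only (M = 1, a_1 = 1):  F(z) = lim_n exp^{on}( L + e^{L(z-n)} ).
import cmath

# fixed point L = e^L, found by Newton's method from a nearby starting guess
L = 0.3 + 1.3j
for _ in range(50):
    L -= (cmath.exp(L) - L) / (cmath.exp(L) - 1)

def F(z, n):
    w = L + cmath.exp(L * (z - n))   # the asymptotic approximation far to the left
    for _ in range(n):
        w = cmath.exp(w)             # push forward n times
    return w

for n in (20, 40, 80):
    print(n, F(0, n))                # stabilizes as n grows

# superfunction equation check: F(1) ~ exp(F(0))
print(abs(F(1, 80) - cmath.exp(F(0, 80))))
```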

I think we are in a similar situation here, where all these asymptotic tetrations are going to be \( \text{tet}_\beta \). And they are characterized by the fact \( \lim_{\Im(s) \to \infty} \text{tet}(s) = \infty \). I can't think of an obvious uniqueness condition though. Kneser has the benefit of being normal at infinity; non-normality tends to mean there's lots of room for errors and slight adjustments. Plus, we don't have the added benefit of a unique Fourier theta mapping--where we can just call on the uniqueness of Fourier coefficients (like what Paulsen and Cowgill did).


My only thought on how we might do this hit a dead end as I tried to write it up. It doesn't feel natural. But if we talk about,

\(
F_\lambda(s) = \lim_{n\to\infty} \log^{\circ n} \beta_\lambda(s+n)\\
\)

Which is the unique tetration with period \( 2\pi i / \lambda \) and holomorphy on an almost cylinder \( \mathbb{T} \) (which just means \( \overline{\mathbb{T}} \simeq \mathbb{C}/2\pi\mathbb{Z} \)). And then using a different kind of mapping we can transform between tetrations by creating a \( 1 \)-periodic function \( \lambda(s+1) = \lambda(s) \); then,

\(
\text{tet}_{WEIRD}(s) = F_{\lambda(s)}(s)\\
\)

Is a tetration function; and we'd be able to find a \( \lambda \) for Kneser, or any tetration really. But I can't think of a uniqueness condition that would guarantee that there exists a unique \( \lambda \) such that,

\(
\text{tet}(s) = F_{\lambda(s)}(s)\\
\lim_{\Im(s) \to \infty} \text{tet}(s) = \infty\\
\)

I just made it to the point where there exists \( \lambda^+, \lambda^- \) which are holomorphic in the upper/lower half planes (resp.) in which,

\(
\text{tet}_\beta(s) = F_{\lambda^+(s)}(s)\,\,\text{for}\,\,\Im(s) > 0\\
\text{tet}_\beta(s) = F_{\lambda^-(s)}(s)\,\,\text{for}\,\,\Im(s) < 0\\
\lim_{|\Im(s)| \to \infty} \lambda^{\pm}(s) = 0\\
\)

So that we have a Fourier series,

\(
\lambda^+(s) = \sum_{k=1}^\infty c_k e^{2\pi i ks}\\
\lambda^-(s) = \sum_{k=1}^\infty \overline{c_k} e^{-2\pi i ks}\\
\)

I couldn't think of any obvious arguments that the sequence \( c_k \) is unique though... So I gave up on that paper and focused on better investigating the programming. And no matter how you change the initial asymptotic tetration function--they all seem to give \( \text{tet}_\beta \). So I'm at least reinforcing the numerical evidence, lol.

Regards, James.
#18
Again I want to give some more details about my thoughts, considering 2 things.

First, a note on the functional inverse.

I estimated the bounds of ln ln ... a^b^...(A).

But let B = ln ln ... a^b^...(A);

then A = ln_ln(...)(...ln_ln(b)(ln_ln(a)(exp(exp...exp(B)))))).

Notice how this gives similar bounds by the same method!!

So the function is bounded and its inverse is also bounded!!

That is a powerful thing!!

Also, this is a nice alternative way of looking at it, which might convince people who were perhaps confused or raising "chaos as an objection", for one.

***

The second thing is this thought experiment:

Let abs(s) < sqrt(2).

Then how does ln^[n](exp^[n](s) + C) behave?

It turns out that for real s this converges nicely, and for any real C > 0 it converges fast.

However, when s or C are complex, things are slightly different!

This is hard to test numerically because of overflow, and even with calculus it is hard.

But it is clear that when C goes to zero (as a function of n) FAST ENOUGH, then it converges to s.

So how fast is fast??

Well, for abs(s) < sqrt(2) and sufficiently large n (usually 2, 3 or 4 will already do):

ln^[n](exp^[n](s) + exp(-n^2)) = s + o(exp(-n^2)) + o(exp(-n^2)) i

(if we take the correct branches).
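For what it's worth, a quick mpmath check of this for real s (where the principal branches are automatically the correct ones; for complex s one would have to track branches, as noted):

```python
# Check ln^[n]( exp^[n](s) + exp(-n^2) ) ~ s for real s and small n.
from mpmath import mp, mpf, exp, log

mp.dps = 50

def roundtrip_error(s, n):
    z = s
    for _ in range(n):
        z = exp(z)               # exp^[n](s)
    z = z + exp(-mpf(n) ** 2)    # perturb by exp(-n^2)
    for _ in range(n):
        z = log(z)               # ln^[n]( ... ), principal branches
    return abs(z - s)

s = mpf('1.3')
for n in (2, 3, 4):
    print(n, roundtrip_error(s, n), exp(-mpf(n) ** 2))
```

For real s the recovered value agrees with s to well within exp(-n^2), consistent with (in fact much stronger than) the estimate above.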

Now this looks familiar, no??



This strengthens my previous ideas and bound arguments, of course.



regards

tommy1729
#19
(07/28/2021, 12:02 AM)tommy1729 Wrote: Again I want to give some more details about my thoughts, considering 2 things. [...]

Yes Tommy;

and for large \( s \) we'll get that

\(
u(s) = \text{Tom}_A(s) - \beta(s) = \mathcal{O}(e^{-\sqrt{s}});
\)

And so we should get that,

\(
\log^{\circ n} \text{Tom}_A(s+n) = \log^{\circ n} \beta(s+n) + u(s+n) = \log^{\circ n} (\beta(s+n)) + e^{-\sqrt{n}} \mathcal{O}(e^{-\sqrt{s}})\\
\)

Or something that looks like this; these are just rough estimates. I still haven't found a good way to compare.

I'm glad you're on the same page as me. I do feel your approximation is definitely better, but I'm of the view that this is all just the beta method. The trouble with your method though is that it doesn't create the periodic tetrations, which I think are very important--and can be used to classify tetrations.

I love the idea of using a Gaussian as opposed to a logistic approach.

Regards, James
#20
I want to point out that the maximum modulus principle, Jensen's formula and Nevanlinna theory are strongly related and important here.

Why? Because by those we can bound the value of log(z) by the value of the log in the neighbourhood: log(z+y).
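For reference, the standard statement I have in mind: if f is holomorphic on the closed disk of radius r about 0, f(0) is not 0, and a_1, ..., a_n are the zeros of f in |z| < r (with multiplicity), then Jensen's formula reads,

\(
\log |f(0)| = \sum_{k=1}^{n} \log \frac{|a_k|}{r} + \frac{1}{2\pi}\int_0^{2\pi} \log |f(re^{i\theta})|\,d\theta\\
\)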

Jensen also confirms that multiplicity does not go crazy locally in the limit process.

This all helps to prove boundedness, convergence and analyticity.

Remember that log(0) and 0 do not occur when taking the log^[n], hence making these theorems valid!!

And therefore helping in both the inner and outer proof methods
(by induction, assuming it starts as true, which we can numerically check, very likely).



regards

tommy1729

Tom Marcel Raes

