# Tetration Forum

Full Version: Tommy's Gaussian method.
Pages: 1 2 3 4
Hello everyone !

Time to get serious.

Another infinite composition method.

This time I took care of unnecessary complications such as branch points, singularities, etc.

Periodic points remain a topic however.

A sketch of the idea :

Let oo denote real infinity.

Basically it combines these 3 :

1) https://math.eretrandre.org/tetrationfor...p?tid=1320

2) https://math.eretrandre.org/tetrationfor...p?tid=1326

3) And most importantly the following f(s):

Tommy's Gaussian method :

f(s) = exp(t(s) * f(s-1))

t(s) = (erf(s)+1)/2

Notice that t(s - oo) = 0 and t(s + oo) = 1 for all (finite complex) s; that is, t tends to 0 under left shifts and to 1 under right shifts.

In particular
IF Re(w)^2 > Im(w)^2 + 2
THEN t(w) is close to 0 or 1, since |exp(-w^2)| = exp(Im(w)^2 - Re(w)^2) is small exactly when Re(w)^2 exceeds Im(w)^2.
Even more so when Im(w)^2 is small compared to Re(w)^2.
The continued fraction for t(s) gives a good idea of how it behaves on the real line; it approaches 0 or 1 at a rate of about exp(-x^2).
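
As a quick sanity check of that decay rate, here is a sketch in Python using only the standard library's `math.erf`; the helper name `t` is from this thread, the rest is my own.

```python
import math

def t(x: float) -> float:
    """Tommy's weight: t(x) = (erf(x) + 1) / 2."""
    return (math.erf(x) + 1.0) / 2.0

# On the real line 1 - t(x) = erfc(x)/2, and the classical asymptotic
# erfc(x) ~ exp(-x^2) / (x*sqrt(pi)) shows t reaches 0 or 1 at speed exp(-x^2).
for x in (1.0, 2.0, 3.0):
    tail = 1.0 - t(x)
    leading = math.exp(-x * x) / (2.0 * x * math.sqrt(math.pi))
    print(f"x={x}: 1-t(x)={tail:.3e}, exp(-x^2) estimate={leading:.3e}")
```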

A visual of t(w) would demonstrate that it converges fast to 0 and 1 in (respectively) the left and right triangles of an X-shaped region.
That X shape is essentially defined by Re(w)^2 = Im(w)^2, thus making four approximately 90 degree angles at the origin and consisting only of straight lines.

Therefore we can consistently define, for all s, without singularities or poles (hence t(s) and f(s) are entire!)

f(s) = exp( t(s) * exp( t(s-1) * exp( t(s-2) * ...

thereby making f(s) an entire function !
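
A minimal numerical sketch of this truncation (Python; the cutoff depth N and the seed value 1 are my own choices — since t(s-j) dies off like exp(-j^2), the tail of the composition is effectively the identity):

```python
import math

def t(x: float) -> float:
    return (math.erf(x) + 1.0) / 2.0

def f(s: float, N: int = 25) -> float:
    """Truncated tower f(s) = exp(t(s) * exp(t(s-1) * exp(t(s-2) * ...))).

    We cut at depth N, where t(s - N) is essentially 0, so the seed
    value hardly matters."""
    val = 1.0
    for j in range(N, -1, -1):   # build from the innermost factor outward
        val = math.exp(t(s - j) * val)
    return val

# The defining recursion f(s) = exp(t(s) * f(s-1)) holds up to the truncation error:
print(f(2.0), math.exp(t(2.0) * f(1.0)))
```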

Now we pick a point, say e.

And we can try the ideas from

1) https://math.eretrandre.org/tetrationfor...p?tid=1320

2) https://math.eretrandre.org/tetrationfor...p?tid=1326

to consistently define

exp^[s](e)

and then by analytic continuation from e to z ;

exp^[s](z).

We know this analytic continuation exists because f(s) is entire and for some appropriate q we must have exp^[q](e) = z.

By picking the correct branch we also get the slog function.

It should be as simple as (using little-o notation)

lim n to +oo , Re( R ) > 0 ;

exp^[R](z) = ln^[n] ( f( g(z) + n + R) ) + o( t(-n+R) )

and of course using the appropriate branches of ln and g.

regards

tommy1729

Tom Marcel Raes
Just want to be clear; we're choosing,

$
\text{Erf}(s) = \frac{2}{\sqrt{\pi}} \int_0^s e^{-x^2}\,dx\\
t(s) = (\text{Erf}(s)+1)/2\\
$

And then we want to construct,

$
f(s) = \Omega_{j=1}^\infty e^{t(s-j)z}\,\bullet z\\
$

Which absolutely is entire in $s$. I thought I'd translate what you are saying a bit more clearly--just so I understand you well enough.

First of all, if $1 \in \mathcal{N}$, where $\mathcal{N}$ and $\mathcal{S}$ are domains,

$
\sum_{j=1}^\infty ||e^{t(s-j)z} - 1||_{s \in \mathcal{S}, z \in \mathcal{N}} < \infty\\
$

This is absolutely an entire function. So you're right, Tommy.
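
That summability is easy to check numerically; a small sketch (Python, restricted to real $s$ and $z$ for simplicity; the function names are mine):

```python
import math

def t(x: float) -> float:
    return (math.erf(x) + 1.0) / 2.0

def partial_sum(s: float, z: float, J: int) -> float:
    """Partial sums of sum_{j>=1} |exp(t(s-j)*z) - 1|; their convergence
    is what makes the infinite composition well defined."""
    return sum(abs(math.exp(t(s - j) * z) - 1.0) for j in range(1, J + 1))

# The terms decay like exp(-(s-j)^2), so the partial sums stabilize almost at once:
print(partial_sum(0.0, 1.0, 10), partial_sum(0.0, 1.0, 30))
```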

It satisfies the asymptotic equation,

$
\log f(s+1) = t(s)f(s)\\
$

But since $t(s) \to 1$ as $\Re(s) \to \infty$, it will produce the asymptotic,

$
\log f(s+1) \sim f(s) \,\,\text{as}\,\,\Re(s) \to \infty\\
$

So much of my theorem work could be ported over to this problem; I think it'd be tricky. But it's probably doable.

Tommy, I'd like to propose we call this approach the "multiplicative approach" and mine "the additive approach". Based solely on what the error terms look like in the equation,

$
\log f(s+1) = A(s)f(s)\,\,\text{is the multiplicative equation}\\
\log f(s+1) = f(s) + B(s)\,\,\text{is the additive equation}\\
$

I do believe your method will work. And I do believe if you take functions which are similar to the error function they'll produce the same tetration. That's something really surprising I've found from my work: the logistic function is almost arbitrary; any sort of similar mapping will produce the same final tetration function.

I guess my biggest question.... Imagine this produces a THIRD tetration!

I can't really see how to hammer out the kinks in this method. You have to somehow invent a good Banach fixed point argument. But on the real line it converges absolutely; I'd bet a million it's analytic AT LEAST on $\mathbb{R}$.

Hmmm, Tommy. I'd need you to elaborate to get it all.
I thought I'd write a quick proof sketch that Tommy's method will probably work. There are obviously kinks and gaps which need to be worked out, but I think I'll use this as further evidence. As far as I'm concerned I'm just translating Tommy, I take no real credit. I'm just trying to write it in a more approachable manner, similar to the $\beta$-tetration.

We begin with the function,

$
A(s) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^s e^{-x^2}\,dx = \frac{1}{\sqrt{\pi}}\int_{-\infty}^0 e^{-(s+x)^2}\,dx\\
$

Therefore,

$
\sum_{j=1}^\infty |A(s-j)| < \infty\\
$

$
\sum_{j=1}^\infty |e^{A(s-j)} - 1| < \infty\\
$

So that,

$
\sum_{j=1}^\infty ||e^{A(s-j)z} - 1 || < \infty\\
$

For a supremum norm across arbitrary compact sets $s \in \mathcal{S}$ and $z \in \mathcal{K}$. By my work in infinite compositions, this means that,

$
\text{Tom}(s) = \Omega_{j=1}^\infty e^{A(s-j)z}\,\bullet z\\
$

converges to a holomorphic function in $s \in \mathbb{C}$ and $z \in \mathbb{C}$, where the function is constant in $z$ so we can drop the variable. This function is very important, because it's an ENTIRE asymptotic solution to tetration. I've only ever managed to make an almost everywhere holomorphic asymptotic solution (a similar kind of function, but littered with singularities). This means that,

$
\log \text{Tom}(s+1) \sim \text{Tom}(s)\,\,\text{as}\,\,|s|\to\infty\,\,\text{while}\,\, |\arg(s)| < \pi/4\\
$

$
\text{Tom}(s+1) = e^{\displaystyle A(s)\text{Tom}(s)}\\
$

Now, there definitely is a secret lemma, something I haven't found, which makes all of this work. But for the moment, remember that,

$
\log\beta_\lambda(s+1) \sim \beta_\lambda(s)\,\,\text{as}\,\,|s|\to\infty\,\,\text{while}\,\, |\arg(\lambda s)| < \pi/2\\
$

For my proof of the beta method, this is 90% of the work/requirements, in which we can describe a tetration,

$
F_\lambda(s) = \lim_{n\to\infty} \log^{\circ n}\beta_\lambda(s+n)\\
$

So, assuming this is the crucial ingredient, this description is enough to expect that,

$
\text{tet}_{\text{Tom}}(z+x_0) = \lim_{n\to\infty} \log^{\circ n}\text{Tom}(z+n)\\
$

will converge, and definitely should.
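
A rough numerical sketch of this limit on the real line (Python; unnormalized, and with one overflow dodge of my own: the first two logarithms are expanded in closed form via $\log \text{Tom}(s+1) = A(s)\text{Tom}(s)$, so the astronomically large values are never formed):

```python
import math

def A(x: float) -> float:
    # A(s) = (1/sqrt(pi)) * int_{-inf}^{s} e^{-x^2} dx = (erf(s) + 1)/2
    return (math.erf(x) + 1.0) / 2.0

def Tom(s: float, N: int = 25) -> float:
    """Truncated composition, with Tom(s+1) = exp(A(s) * Tom(s))."""
    val = 1.0
    for j in range(N, 0, -1):
        val = math.exp(A(s - j) * val)
    return val

def tet_approx(z: float, n: int) -> float:
    """log^n Tom(z+n), the approximant of the (unnormalized) tetration.

    The first two logs are taken in closed form:
    log^2 Tom(z+n) = log A(z+n-1) + A(z+n-2) * Tom(z+n-2)."""
    val = math.log(A(z + n - 1)) + A(z + n - 2) * Tom(z + n - 2)
    for _ in range(n - 2):
        val = math.log(val)
    return val

# Successive approximants already agree to several digits:
print(tet_approx(-1.0, 4), tet_approx(-1.0, 5))
```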

My God Tommy! We're really opening Pandora's box with these tetrations. I don't want to prove that this thing converges because I believe it's yours to show. But I can see it clearly! Fantastic work, Tommy! Absolutely Fantastic!

Deep Sincere Regards, James

PS

Absolutely fucking fantastic, Tommy!

Hmmmmmmmmmm, I think this might be the beta method in disguise the more I think about it,

$
y(s) = \log \text{Tom}(s+1) = A(s)\text{Tom}(s)\\
$

Then,

$
\log y(s+1) = y(s) + \log A(s+1)\\
$

So I believe that the multiplicative/additive cases should be isomorphic. The question is more: how much does,

$
\log A(s+1)\,\,\text{looks like}\,\,-\log(1+e^{-s/\sqrt{1+s}})\\
$

If the error decays well enough; you are just reproducing the $\beta$ method, Tommy.

Hmmmmmm. This is still beyond beautiful.

Tommy, the more I think about it; this is the $\beta$ method. And I think I can prove it...
I've attached here a quick patchwork code of Tommy's Gaussian method. I'm currently compiling some graphs. I wrote this a little fast and loose, so it's only accurate for about 15 digits or so.

Code:
/* This is a quick write up of Tommy's method. It's very slow and I haven't optimized it yet.
   It's only good for about 15 digits. It's really rough around the edges but it's getting the
   job done. This seems a lot simpler to code than the beta method. I do still believe this is
   the beta method though; I'm making graphs to double check */

/* Err(z) = (erf(z)+1)/2; note the 2/sqrt(Pi) normalization of the integral */
Err(z) =
{
    (1 + (2/sqrt(Pi))*intnum(X=0, z, exp(-X^2)))/2;
}

/* these if statements are basically just to catch overflow errors
   and to exit if the values get too large */
Tom(z) =
{
    my(val = 0);
    for(i = 0, 20,
        if(abs(val) <= 1E4,
            val = exp(Err(z-21+i)*val),
            if(abs(val) <= 1E8,
                val = exp(Err(z-21+i)*val),
                return(val)
            )
        )
    );
    val;
}

/* This turns the multiplicative case of Tommy into the additive case I'm more used to */
Conv_Tom(z) =
{
    Err(z)*Tom(z);
}

/* This is the error term between the converted Tommy function and tetration */
tau(z, {count=0}) =
{
    if(real(Tom(z)) < 1E4 && count < 6,
        count++;
        log(1 + tau(z+1, count)/Conv_Tom(z+1)) + log(Err(z+1)),
        log(Err(z+1))
    );
}

/* This is the tetration function; it's not normalized yet, though */
Tet_Tom(z) =
{
    Conv_Tom(z) + tau(z);
}
This is an example of it converging on the real line. I'll update this post with contour plots. Unfortunately the code is really slow, so it might be a day.

Unnormalized, the domain is $X \in [-1,3]$
[attachment=1544]

A slightly more accurate representation is given here, where you can discern it's a tetration function. Here, $X = [-1.5,2]$--again, it's not normalized.

[attachment=1545]

Here's a good amount of evidence that Tommy's method is holomorphic. Again, this code isn't perfect; also, there are some exponent overflows in the process, which force the couple of hairs you see. This is over $0 \le \Re(z) \le 1$ and $-0.5 \le \Im(z)\le 0.5$--again, unnormalized. Please, ignore the hairs; we need a matrix add-on to avoid this.

[attachment=1546]

Tommy's method is absolutely holomorphic!
Let f(h(s)) = exp(f(s)) for 4 < Re(s) << Im(s).

Now I wonder where the singularities are for h(s).

since h(s) = inv.f(exp(f(s)))

I consider ( as a subquestion ) solutions v such that f ' (v) = 0.

This matters for the "quality" of h(s) approximating s+1 when Im(s) =/= 0.

Although this might not be necessary for a proof, it would help for the creation of multiple proofs.

In particular my view on a proof is that there are basically 2 kinds, "internal" and "external":

for 4 < Re(s) << Im(s) :
tet(s) = f(s) + error1(s)

or

for 4 < Re(s) << Im(s) :
tet(s) = f(s + error2(s))

where error2(s) is related to the questions about h(s).

I believe error2(s) implies error1(s); in other words, the harder (internal) proof implies the easier (external) one.

***

arg(h(s)) is also very interesting and useful to understand.

in particular  for 4 < Re(s) << Im(s).

***

I think it is also possible to prove (or decide) the (usual) base change to be nowhere analytic, since it strongly relates.

***

regards

tommy1729
(07/21/2021, 05:29 PM)tommy1729 Wrote: Let f(h(s)) = exp(f(s)) for 4 < Re(s) << Im(s). [...]

I meant arg(h(s) - s) is also very interesting and useful to understand.

Plotting h(s) - s would create a "visual proof".

regards

tommy1729
For the external proof with error1(s) I have the idea to mainly use the absolute value.

To manage the imaginary parts, the idea is that we take the correct log branches in a consistent way and then we always take the same branch for the same neighbourhood... thus a branch jump of at most 1 down or 1 up from the infinitesimal neighbourhood.

This leads to the partial error term o(L) (little-o notation for an absolute value bound)

where L satisfies L - 2 pi = ln(L)

(follows from L = 2 pi + ln(2 pi + ln(2 pi + ...)))

L = 8.4129585032844...
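
The fixed point equation L = 2 pi + ln(L) is easy to confirm numerically (a small sketch; the iteration converges since the map has derivative 1/L, well below 1 near the root):

```python
import math

# Iterate L -> 2*pi + ln(L); this contracts toward the larger root of L - ln(L) = 2*pi.
L = 8.0
for _ in range(60):
    L = 2.0 * math.pi + math.log(L)

print(L)   # the constant L quoted in the post, approximately 8.413
```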

Notice that ln(1) is never 0 , it must be another branch.
ln(0) never occurs.

ln(z) for abs(z) < 1 is just - ln(1/z) and ln(ln(z)) = ln(- ln(1/z) ) = ln(ln(1/z)) + pi i
Just to show that the imaginary parts or error parts can also come from the (positive ) reals or the absolute values.

***

This gives the first estimate of tetration for suitable s (Re(s) > 4, Re(s) >> Im(s))

lim n to +oo

abs(tet(s + s_0))  = abs ( ln^[n] ( Tom(s+n) ) ) < (o(A) + o(1/A)) * ( abs(Tom(s)) + abs(1/Tom(s)) + o(B) + o(L) + 1 ) + o(B) + o(L).

where o(L) is as above and

A = abs( t(s)*t(s+1)*t(s+2)*...*t(s+n) )

B = abs ( ln(t(s)) + ln(t(s+1)) +ln(t(s+2)) + ... + ln(t(s+n)) )

s_0 is the usual suitable constant.

As James noted, A and B converge (fast) for n going to +oo, so that is good.

This is a sketch of the proof.

Notice v1 = exp(u) and v2 = exp(u * t(s)) have the same branch as inverse:

v1 = ln(u) , v2 = ln(u)^(1/t(s))

where the ln's have the same branches.

Since t(s) is close to 1 and 1/t(s) is valid, it follows that v1 is relatively close to v2.

ln(v1) = ln ( ln(u) ) , ln(v2) = ln( ln(u) )/t(s)

ln(ln(v1)) = ln^[3](u) , ln(ln(v2)) = ln^[3](u) - ln(t(s)).

etc
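
These branch bookkeeping identities can be sanity-checked numerically (a sketch with sample values u = 10 and t(s) = 0.99 of my own choosing, both in the range where all the logs are real):

```python
import math

u, ts = 10.0, 0.99                 # ts stands in for t(s), which is close to 1

v2 = math.log(u) ** (1.0 / ts)     # v2 = ln(u)^(1/t(s))

# ln(v2) = ln(ln(u)) / t(s)
lhs1, rhs1 = math.log(v2), math.log(math.log(u)) / ts

# ln(ln(v2)) = ln^[3](u) - ln(t(s))
lhs2 = math.log(math.log(v2))
rhs2 = math.log(math.log(math.log(u))) - math.log(ts)

print(lhs1 - rhs1, lhs2 - rhs2)    # both differences sit at machine precision
```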

Of course abs(a - b) <= abs(a) + abs(b).

This should help you understand the proof, the branches and the absolute value error terms.

regards

tommy1729
You really don't have to go too much into depth in choosing your branch of logarithm. The principal branch is good enough if you add a $\rho$ function.

If you write,

$
\text{Tom}(s) = \Omega_{j=1}^\infty e^{A(s-j)z}\,\bullet z\\
$

And,

$
\text{Tom}_A(s) = A(s)\text{Tom}(s) = \log \text{Tom}(s+1)\\
$

And construct a sequence $\rho^n(s)$ where,

$
\rho^{n+1}(s) = \log(1+\frac{\rho^n(s+1)}{\text{Tom}_A(s+1)}) + \log A(s+1)\\
$

where $\frac{\rho^n(s+1)}{\text{Tom}_A(s+1)}$ is very small for large $\Re s$. So you are effectively calculating a $\log(1+\Delta)$ for $\Delta$ small, as opposed to a $\log(X)$ where $X$ is large. The branching won't be an issue at all.
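
To make this concrete, here is a small Python sketch of the $\rho$ recursion (my own transcription, mirroring the GP code posted earlier in the thread: the recursion is cut off once Tom exceeds 1E4, where the $\log(1+\Delta)$ correction is negligible and only $\log A(s+1)$ survives):

```python
import math

def A(x: float) -> float:
    return (math.erf(x) + 1.0) / 2.0

def Tom(s: float, N: int = 25) -> float:
    """Truncated composition with Tom(s+1) = exp(A(s) * Tom(s)); returns inf
    instead of overflowing."""
    val = 1.0
    for j in range(N, 0, -1):
        a = A(s - j) * val
        if a > 700.0:          # exp would overflow a double beyond ~709
            return math.inf
        val = math.exp(a)
    return val

def Tom_A(s: float) -> float:
    return A(s) * Tom(s)

def rho(s: float, depth: int = 6) -> float:
    """rho^{n+1}(s) = log(1 + rho^n(s+1)/Tom_A(s+1)) + log A(s+1).
    Every log here is log(1 + small) or log(A), so the principal branch suffices."""
    if depth == 0 or Tom(s + 1) > 1e4:
        return math.log(A(s + 1))
    return math.log(1.0 + rho(s + 1, depth - 1) / Tom_A(s + 1)) + math.log(A(s + 1))

def tet(s: float) -> float:
    """Unnormalized tetration candidate tet(s + x_0) = Tom_A(s) + rho(s)."""
    return Tom_A(s) + rho(s)

# tet should satisfy exp(tet(s)) = tet(s+1) up to the truncation error:
print(tet(0.0), math.exp(tet(0.0)), tet(1.0))
```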

Where then,

$
\text{tet}_{\text{Tom}}(s + x_0) = \text{Tom}_A(s) + \rho(s)\\
$

Remember Tommy that,

$
\text{Tom}_A(s) \to \infty \,\,\text{as}\,\, \Re(s) \to \infty\\
$

Even though there are dips to zero this is still the asymptotic behaviour.
(07/22/2021, 02:21 AM)JmsNxn Wrote: You really don't have to go too much into depth in choosing your branch of logarithm. The principal branch is good enough if you add a $\rho$ function. [...]

I take those dips very seriously.

If s is close to the real line that might work.

But for Im(s) substantially high things are more complicated I think.

For real s we can take the principal branch, in fact we must.
And then of course by analytic continuation we do not need to bother with the other branches.

This indeed suggests we might not need to look at other branches, but for a proof I was pessimistic.

Let me put it like this:

suppose we have tetration tet(s)

ln (ln ( tet(s+2) )) = tet(s) ONLY works with the correct branches.

Example: tet(s) = 1 + 400 pi i for some s.

regards

tommy1729
(07/22/2021, 12:13 PM)tommy1729 Wrote: I take those dips very seriously. [...]

Absolutely Tommy!

But the branches are chosen by,

$
\log \beta (s+1) = \beta(s) + \text{err}(s)\\
$

So whichever branch satisfies this equation is the correct branch.

So, when you make a sequence of error functions $\tau^n(s)$, given by,

$
\tau^{n+1}(s) = \log(1+\frac{\tau^n(s+1)}{\beta(s+1)}) + \text{err}(s)\\
$

you are choosing these logarithms already; because they're the only ones which satisfy the equation,

$
\lim_{\Re(s) \to \infty} \tau^n(s) = 0\\
$

If you were to write it the naive way,

$
\lim_{n\to\infty} \log^{\circ n} \beta(s+n)\\
$

Then, yes, choosing the branch is necessary. But if you write it the former way, there's no such problem.

As to the dips to zero: they produce the most trouble when trying to numerically evaluate. But in the math they don't pose that much of a problem, because we know that $\lim_{\Re(s) \to \infty} \beta(s) = \infty$.