Reducing beta tetration to an asymptotic series, and a pull back
#1
I thought I'd point out the way I've been programming lately, and the fact that my method of programming doesn't work perfectly. Since pari-gp can't handle exponentials much larger than \( \exp(1\text{E}6) \), I've had to shortcut my code: the recursive process hits values far beyond that before it becomes accurate. For that reason I'm beginning to alter my approach.

My first way of doing this was by creating a matrix add-on, where we calculate the Taylor series of \( \varphi \). This means we create a small INIT.dat file which stores all the coefficients \( a_k \),

\(
\beta_\lambda(s) = \sum_{k=1}^\infty a_k e^{k\lambda s} \,\,\text{for}\,\,\Re(s) < 1\\
\)
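In gp, once the \( a_k \) are read in from INIT.dat into a vector, evaluating the series is a one-liner. A minimal sketch (the names beta_series and a are just for illustration, not my actual add-on):

\\ Sketch: evaluate the exponential series for beta_lambda(s) from a
\\ vector a of precomputed coefficients a_1, a_2, ..., e.g. as read
\\ in from INIT.dat.  Only meaningful where the series converges.
beta_series(s, lam, a) = sum(k = 1, #a, a[k] * exp(k * lam * s));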

The second breakthrough came when looking at Tommy's Gaussian method. Tommy's method admits no exponential series like this, but what starts to happen for large values is something magical. I never noticed it before, and it's exactly what I was missing.

Recall that,

\(
\log\beta_\lambda(s+1) = \beta_\lambda(s)-\log(1+e^{-\lambda s})\\
\)
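As a crude alternative to the series, one can evaluate \( \beta_\lambda \) straight from this functional equation by seeding the orbit far to the left, where \( \beta_\lambda \approx 0 \), and pushing forward. A sketch (beta_l is an illustrative helper of mine, not the matrix add-on):

\\ Sketch: iterate beta(t+1) = exp(beta(t))/(1 + exp(-lam*t)), seeded
\\ with beta ~ 0 at t = s - N.  This is exactly where pari's exp(1E6)
\\ ceiling bites: for lam = 1 it already overflows around s = 6.
beta_l(s, lam, N = 100) = {
  my(b = 0.);
  for(j = 1, N, b = exp(b) / (1 + exp(-lam * (s - N + j - 1))));
  b;
}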

And that,

\(
\tau_\lambda^{n}(s) = \log^{\circ n} \beta_\lambda(s+n) - \beta_\lambda(s)\\
\)

which satisfy the recursion,

\(
\tau_\lambda^{n+1}(s) = \log(1 + \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}) - \log(1+e^{-\lambda s})\\
\)
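Transcribed directly (a sketch seeded with \( \tau^0 = 0 \), which reproduces \( \tau^1(s) = -\log(1+e^{-\lambda s}) \); it leans on the beta_l helper above, so the same overflow caveats apply):

\\ Sketch: tau^n(s) via the recursion above, seeded with tau^0 = 0.
tau_n(s, lam, n) = if(n == 0, 0,
  log(1 + tau_n(s + 1, lam, n - 1) / beta_l(s + 1, lam))
    - log(1 + exp(-lam * s)));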

Now, for large \( s \) we can expand,

\(
\log\left(1 + \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}\right) = \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)} + \mathcal{O}\left(\left(\frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}\right)^2\right)\\
\)

Even better, this expansion works so long as \( \beta_\lambda(s+1) \) is large. And \( \beta_\lambda \) is an asymptotic solution to tetration; \( \beta_\lambda(4) \) is already astronomical. So let's just scrap the \( \log(1+x) \) altogether! This gives us the chance to make an asymptotic approximation that always works.

\(
\tau_\lambda^1(s) = -\log(1+e^{-\lambda s})\\
\tau_\lambda^{n+1}(s) \sim \frac{\tau_\lambda^{n}(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)

BUT THIS JUST PRODUCES AN ASYMPTOTIC SERIES!!!!!!!!!!!!!!!!!!!

So effectively it constructs an asymptotic solution to the \( \beta \)-method; pulling back with logs is even easier now. We can choose the depth of the asymptotic series. This is very reminiscent of Kouznetsov; but again, this isn't Kneser's solution.


I'll derive the asymptotic series here. This is mostly important for bypassing errors with iterated logarithms; furthermore, it is meant for high-speed calculation of the \( \beta \)-method far out in the complex plane.

Starting with the recursion,



\(
\rho_\lambda^1(s) = -\log(1+e^{-\lambda s})\\
\rho_\lambda^{n+1}(s) = \frac{\rho_\lambda^{n}(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)

We get that,

\(
\rho_\lambda(s) = \lim_{n\to\infty} \rho_\lambda^n(s) = -\sum_{k=0}^\infty \dfrac{\log(1+e^{-\lambda(s+k)})}{\prod_{j=1}^k \beta_\lambda(s+j)}\\
\)
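Truncating the sum is the workhorse. A minimal sketch, again with the illustrative beta_l helper from above:

\\ Sketch: the truncated sum.  The denominators grow tetrationally, so
\\ n = 3 already saturates ordinary working precision; push n (or s)
\\ much further and pari overflows computing beta_l(s+j), at which
\\ point the k = 0 term alone is accurate to absurdly many digits.
rho_n(s, lam, n = 3) =
  -sum(k = 0, n,
       log(1 + exp(-lam * (s + k))) / prod(j = 1, k, beta_l(s + j, lam)));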

which satisfies the functional equation,

\(
\rho_\lambda(s) = \frac{\rho_\lambda(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)

Which means that,

\(
\tau_\lambda \sim \rho_\lambda\,\,\text{as}\,\,\Re(s) \to \infty\\
\)

Which further means that,

\(
\text{tet}_\beta(s+x_0) = \beta_\lambda(s) + \tau_\lambda(s) \sim \beta_\lambda(s) + \rho_\lambda(s)\,\,\text{as}\,\,\Re(s) \to \infty\\
\)
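Modulo the normalizing shift \( x_0 \), the approximation is then one line (a sketch; I simply drop \( x_0 \) here for illustration):

\\ Sketch: the asymptotic approximation to tet_beta, ignoring the
\\ normalization shift x0.
tet_beta_asym(s, lam, n = 3) = beta_l(s, lam) + rho_n(s, lam, n);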



This is very important because \( \rho_\lambda = \tau_\lambda \) up to a hundred digits once \( \Re(s) > 5 \) or so, but \( \rho \) is so much easier to calculate. Furthermore, I believe this method will work with Tommy's Gaussian choice, where we swap \( -\log(1+e^{-\lambda s}) \) with \( \log(A(s+1)) \), where,

\(
A(s) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^s e^{-x^2}\,dx\\
\)
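Conveniently, pari can evaluate this cumulative Gaussian directly through erfc, since \( A(s) = \tfrac{1}{2}\,\mathrm{erfc}(-s) \):

\\ Tommy's Gaussian weight: A(s) = (1/sqrt(Pi)) * int_{-oo}^s exp(-x^2) dx,
\\ which equals erfc(-s)/2.
A(s) = erfc(-s) / 2;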

And furthermore, since all of these choices are asymptotic to each other: Tommy's Gaussian method equals the beta method!


I'm planning to set this in stone in my next paper: construct an arbitrarily accurate asymptotic to tetration (à la beta).

Regards, James


tl;dr

\(
\tau_\lambda(s) = -\sum_{k=0}^n \frac{\log(1+e^{-\lambda(s+k)})}{\prod_{j=1}^k\beta_\lambda(s+j)} + \mathcal{O}(1/\beta_\lambda(s+n+1))\,\,\text{as}\,\,\Re(s) \to \infty\\
\)
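As a quick sanity check at the gp prompt (with the illustrative helpers above, \( \lambda = 1 \)):

\p 100
rho_n(2, 1, 2) - rho_n(2, 1, 3)
\\ recovers the k = 3 term, of size ~ 1/(beta_l(3)*beta_l(4)*beta_l(5)),
\\ in line with the O(1/beta_lambda(s+n+1)) error term above.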
#2
That seems correct and logical (imo).

Originally I considered taking the lambertW function for approximations.
Not sure if that still relates much.

However a tiny remark.

division by tet(s) is not necessarily small, even for Re(s) large, because tet(s) can be close to 0!

This (chaos) complicates matters, despite perhaps still being true ... just not so trivially ...

regards

tommy1729
#3
(07/21/2021, 05:48 PM)tommy1729 Wrote: That seems correct and logical (imo).

Originally I considered taking the lambertW function for approximations.
Not sure if that still relates much.

However a tiny remark.

division by tet(s) is not necessarily small, even for Re(s) large, because tet(s) can be close to 0!

This (chaos) complicates matters, despite perhaps still being true ... just not so trivially ...

regards

tommy1729

You're correct, Tommy.

But if \( \Im(s) = A \) is fixed, then eventually \( \beta_\lambda(s) \to \infty \) as \( \Re(s) \to \infty \), and the values aggregate to the orbit \( 0,1,e,e^e,e^{e^e},\ldots \). So EVENTUALLY on each such line it will be tiny. That's more so what I meant. But if we vary \( \Im(s) \) while we vary \( \Re(s) \), that's where the trouble happens.

Regards

