Arguments for the beta method not being Kneser's method
#51
This discussion continues its focus on the \(2\pi i\) periodic version of \(\beta(z,1)\), and the resulting Tet(z) function.  Let's center the following function so that the resulting Tet(0)=1, with k≈1.74480157761534:
\[\begin{align}
f_0(z)=\ln(\beta(z+1))=\beta(z)-\ln(1+\exp(-z))\\
f_n(z)=\ln^{\circ n}f_0(z+n)\\
\text{tet}(z)=\lim_{n \to \infty}f_n(z+k)\\
\end{align} \]
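The definitions above can be sketched numerically.  The following is a minimal Python sketch, not the attached beta_tau.gp code: it assumes \(\beta\) can be approximated on the real line by iterating the functional equation \(\beta(z+1)=e^{\beta(z)}/(1+e^{-z})\) forward from a point far to the left where \(\beta\approx 0\), and the choice N=80 is an illustrative truncation.

```python
import math

def beta(z, N=80):
    """Approximate the 2*pi*i periodic beta(z) on the real line by iterating
    beta(z+1) = exp(beta(z)) / (1 + exp(-z)) forward from z - N, where
    beta is vanishingly small (beta(z) decays like exp(z) as Re(z) -> -inf)."""
    w = 0.0
    for j in range(N):
        x = z - N + j                      # argument where w approximates beta
        w = math.exp(w) / (1.0 + math.exp(-x))
    return w                               # ~ beta(z)

def f(n, z):
    """f_n(z) = ln applied n times to f_0(z+n), with f_0(z) = ln(beta(z+1)).
    Only small n is usable in double precision, since beta(z+n+1) grows
    tetrationally and quickly overflows."""
    w = math.log(beta(z + n + 1))          # f_0(z+n)
    for _ in range(n):
        w = math.log(w)
    return w

k = 1.74480157761534
# successive approximations f_0(k), f_1(k), f_2(k) of tet(0) = 1
print([f(n, k) for n in range(3)])
```

The successive values f_n(k) should already agree with tet(0)=1 to many digits at n=2, since the later \(\rho_n\) corrections are astronomically small.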

Next, lets define a sequence of rho functions where
\[\begin{align}
\rho_0(z)=f_0(z)-\beta(z)=-\ln(1+\exp(-z))\\
\rho_n(z)=f_n(z)-f_{n-1}(z)\\
\tau_n(z)=\sum_{i=0}^{n}\rho_i(z);\;\;\;f_n(z)=\beta(z)+\tau_n(z)\\
\text{There is also a useful recursive equation for }\rho_n\text{, taking }f_{-1}(z)=\beta(z)\text{ for the }n=1\text{ case:}\\
\rho_n(z)=\ln\left(1+\frac{\rho_{n-1}(z+1)}{f_{n-2}(z+1)}\right)\\
\text{Tet}(z)=\beta(z+k)+\tau(z+k)=\beta(z+k)+\sum_{n=0}^{\infty}\rho_n(z+k)\\
\end{align} \]
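The recursive \(\rho_n\) identity follows from \(f_n(z)=\ln(f_{n-1}(z+1))\), and it can be sanity-checked numerically.  This is a hedged sketch, again using a naive forward-iteration approximation of \(\beta\) (not the attached beta_tau.gp routines), comparing the direct difference \(f_n-f_{n-1}\) against the recursive formula at n=2:

```python
import math

def beta(z, N=80):
    """Forward iteration of beta(z+1) = exp(beta(z)) / (1 + exp(-z)),
    starting from beta(z-N) ~ 0."""
    w = 0.0
    for j in range(N):
        w = math.exp(w) / (1.0 + math.exp(-(z - N + j)))
    return w

def f(n, z):
    """f_n(z) = ln^(n) of f_0(z+n), with f_0(z) = ln(beta(z+1)); small n only."""
    w = math.log(beta(z + n + 1))
    for _ in range(n):
        w = math.log(w)
    return w

def rho_direct(n, z):
    """rho_n(z) = f_n(z) - f_{n-1}(z)."""
    return f(n, z) - f(n - 1, z)

def rho_recursive(n, z):
    """rho_n(z) = ln(1 + rho_{n-1}(z+1) / f_{n-2}(z+1)), valid for n >= 2."""
    return math.log(1.0 + rho_direct(n - 1, z + 1) / f(n - 2, z + 1))

k = 1.74480157761534
print(rho_direct(2, k), rho_recursive(2, k))
```

The two values agree to roughly double-precision accuracy, and both are small and negative, matching the picture of rapidly shrinking \(\rho_n\) corrections.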




So we are interested in \( \rho_4(z) \), and in the earlier work, \( \rho_3(z) \); see my previous post #38 earlier in this thread for the graph of the zeros, and post #48 for the graph of \( f_3(z) \).

Here is a plot of \( \rho_3(z+k) \), centered at the origin for Tet(0), which shows a blow-up of the nearest singularity for the \( \rho_3 \) iteration.
   

Here is a plot of the much quieter \( \rho_4(z+k) \) nearest singularity, which is actually a singularity wall.  The amplitude here at the singularity radius is on the order of \(2\times 10^{-8}\), though of course the logarithm is unbounded if you get close enough to the singularity.  The radius of convergence is approximately 0.03468.  The routine used to calculate these values is logrho(rr+z,4), which returns \( \ln(-\rho_4(z)) \); you can see \(-\exp(\text{logrho})\) at the radius of convergence, where logrho has its largest value.  At 99% of the radius of convergence, the maximum of logrho is around -67700, so \( \rho_4 \) is already vanishingly small there, with a maximum amplitude of \(-\exp(-67700)\).  At the origin, \( \rho_4(0)\approx -\exp(-3814303.7) \).
   

What is most interesting to me is how "quiet" the \( \rho_4 \) Taylor series coefficients are.  I wrote a routine that approximates the log of the nth Taylor series coefficient, logcoeff(rr,n,4), accurate to a couple of decimal digits or so.  It works from about the 10th Taylor series coefficient up to the 1.3 millionth coefficient.  I didn't use this routine for \( \rho_3 \), where only the first 7 or 8 coefficients would be relevant and those are easily computable directly; instead I'm just approximating the \( \rho_3 \) coefficients as \( \ln(a_n)\approx -n\cdot\ln(0.48964) \), since we know the coefficients eventually grow as \( (\frac{1}{r})^n \), where \(r\approx 0.48964\) is the radius of convergence.  So what I see is that \( \rho_4 \) starts out with Taylor series coefficients of around \(\exp(-4000000)\), and all of the coefficients with index below roughly 570,000 have absolute value less than 1; most of them are vanishingly small.
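The post does not give the equations behind logcoeff, so the following is not that algorithm.  It is a generic, standard alternative for the same kind of question: Cauchy's estimate bounds \(\ln|a_n|\) from samples of the function on a circle, without ever computing the coefficients themselves.  The test function g, the pole location r0, and the sample radii are all illustrative assumptions, not \(\rho_4\):

```python
import cmath, math

def log_coeff_bound(g, n, s, samples=2048):
    """Cauchy's estimate for the nth Taylor coefficient of g about 0:
    |a_n| <= max_{|z|=s} |g(z)| / s^n, returned in log form:
    ln|a_n| <= (max of ln|g| on the circle |z|=s) - n*ln(s)."""
    mx = max(math.log(abs(g(s * cmath.exp(2j * math.pi * j / samples))))
             for j in range(samples))
    return mx - n * math.log(s)

# illustrative test function with a single pole at r0 (not rho_4 itself);
# r0 plays the role of the radius of convergence
r0 = 0.48964
g = lambda z: 1.0 / (r0 - z)      # a_n = (1/r0)^(n+1), so ln(a_n) = -(n+1)*ln(r0)
n = 50
true_log = -(n + 1) * math.log(r0)
# take the best bound over a few sample radii inside the radius of convergence
bound = min(log_coeff_bound(g, n, frac * r0) for frac in (0.8, 0.9, 0.95))
print(true_log, bound)
```

For this g the bound overshoots the true \(\ln|a_n|\) by only a handful of nats at n=50, which is the kind of "couple of decimal digits" accuracy the post describes, though the author's logcoeff may work quite differently.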

The \( f_4(z) \) approximation does not change any of the first half million derivatives enough to show up in any normal computer representation of these numbers!  The crossover in magnitude doesn't occur until around the 670,000th Taylor series coefficient, where the \( \rho_4 \) function's coefficients finally become larger in magnitude than the \( \rho_3 \) coefficients.

I haven't explained any of the equations behind the algorithm for logcoeff, or why it works, since that would be a lengthy detour.  Mostly, one can see that the Tetration function at the origin has pretty much converged by the time you get to \( \rho_3 \), and the work I've done explains how the \( \rho_4 \) term of the conjectured nowhere analytic sum behaves.  From the recursive equation, we can derive that \( \rho_4 \) at the origin is "approximately" \( -\exp(-e\uparrow\uparrow 3) \), and \( \rho_5 \) would be an even tinier function whose amplitude at the origin would probably be "approximately" \( -\exp(-e\uparrow\uparrow 4) \); the first Taylor series coefficient of \( \rho_5 \) with amplitude >1 would occur at a value of n exponentially larger than for \( \rho_4 \).
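As a quick arithmetic check of the "approximately \( -\exp(-e\uparrow\uparrow 3) \)" claim: \(e\uparrow\uparrow 3 = e^{e^e}\approx 3.814\times 10^6\), which matches the observed exponent 3814303.7 in \( \rho_4(0)\approx -\exp(-3814303.7) \) to better than 0.1%.

```python
import math

# e ↑↑ 3 = e^(e^e); compare with the exponent 3814303.7 observed
# in rho_4(0) ≈ -exp(-3814303.7)
tower = math.exp(math.exp(math.e))
print(tower, abs(tower - 3814303.7) / tower)
```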
   

Attached file: beta_tau.gp (8.5 KB)
- Sheldon


Messages In This Thread
RE: Arguments for the beta method not being Kneser's method - by sheldonison - 10/19/2021, 02:43 AM
