Arguments for the beta method not being Kneser's method
#51
This discussion continues its focus on the \(2\pi i\) periodic version of \(\beta(z,1)\), and the resulting Tet(z) function.  Let's center the following function so that the resulting Tet(0)=1, with \(k\approx 1.74480157761534\):
$$\begin{align}
f_0(z)=\ln(\beta(z+1))=\beta(z)-\ln(1+\exp(-z))\\
f_n(z)=\ln^{\circ n}f_0(z+n)\\
\text{tet}(z)=\lim_{n \to \infty}f_n(z+k)\\
\end{align} $$
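For readers who want to experiment with this limit numerically, here is a rough double-precision Python sketch (my own illustrative translation, not the pari-gp beta_tau.gp code; the seeding depth is an assumption). It builds \(\beta\) from its functional equation \(\beta(z+1)=e^{\beta(z)}/(1+e^{-z})\), seeded near zero far to the left, and then applies the repeated logarithms:

```python
import math

K = 1.74480157761534  # the centering constant k quoted above, so Tet(0)=1

def beta(z, depth=60):
    # 2*pi*i periodic beta (lambda=1): beta(z+1) = exp(beta(z))/(1+exp(-z)).
    # Since beta(z) -> 0 as Re(z) -> -infinity, seed with 0 far to the left.
    x = 0.0
    for j in range(depth, 0, -1):
        x = math.exp(x) / (1.0 + math.exp(-(z - j)))
    return x  # approximately beta(z)

def f(z, n):
    # f_n(z) = ln applied n times to f_0(z+n), with f_0(z) = ln(beta(z+1))
    x = beta(z + n) - math.log(1.0 + math.exp(-(z + n)))  # f_0(z+n)
    for _ in range(n):
        x = math.log(x)
    return x

def tet(z, n=3):
    # tet(z) = lim_{n->inf} f_n(z+k); n=3 is about all doubles can hold,
    # since f_0(z+n) grows tetrationally in n
    return f(z + K, n)
```

In doubles, tet(0) already agrees with 1 to many digits at n=3, which is the point made below: the remaining corrections rho_4, rho_5, ... are unimaginably small.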

Next, let's define a sequence of \(\rho\) functions where
$$\begin{align}
\rho_0(z)=f_0(z)-\beta(z)=-\ln(1+\exp(-z))\\
\rho_n(z)=f_n(z)-f_{n-1}(z)\\
\tau_n(z)=\sum_{i=0}^{n}\rho_i(z);\;\;\;f_n(z)=\beta(z)+\tau_n(z)\\
\text{There is also a good recursive equation for }\rho_n\text{:}\\
\rho_n(z)=\ln\left(1+\frac{\rho_{n-1}(z+1)}{f_{n-2}(z+1)}\right)\\
\text{Tet}(z)=\beta(z+k)+\tau(z+k)=\beta(z+k)+\sum_{n=0}^{\infty}\rho_n(z+k)\\
\end{align} $$
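The telescoping and the recursion can be sanity-checked numerically. Here is a minimal Python sketch (again my own double-precision translation, reusing the functional-equation construction of \(\beta\); note the recursion needs \(n\ge 2\), since it references \(f_{n-2}\)):

```python
import math

K = 1.74480157761534

def beta(z, depth=60):
    # beta(z+1) = exp(beta(z))/(1+exp(-z)), seeded with 0 far to the left
    x = 0.0
    for j in range(depth, 0, -1):
        x = math.exp(x) / (1.0 + math.exp(-(z - j)))
    return x

def f(z, n):
    # f_n(z) = ln^n f_0(z+n), with f_0(z) = beta(z) - ln(1+exp(-z))
    x = beta(z + n) - math.log(1.0 + math.exp(-(z + n)))
    for _ in range(n):
        x = math.log(x)
    return x

def rho(z, n):
    # rho_n as the difference of successive f's (the definition above)
    if n == 0:
        return -math.log(1.0 + math.exp(-z))
    return f(z, n) - f(z, n - 1)

def rho_rec(z, n):
    # the recursive equation: rho_n(z) = ln(1 + rho_{n-1}(z+1)/f_{n-2}(z+1))
    return math.log(1.0 + rho(z + 1, n - 1) / f(z + 1, n - 2))
```

Both forms agree to roughly machine precision at z = k for n = 2 and n = 3.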




So we are interested in these functions and in the earlier work; see my previous post #38 earlier in this thread for the graph of the zeros, and post #48 for the graph of \(f_3(z)\).

Here is a plot, centered at the origin for Tet(0), which shows a blow-up of the nearest singularity for the iteration.

Here is a plot of the much quieter nearest singularity, which is actually a singularity wall.  The amplitude here at the singularity radius is on the order of \(2\times10^{-8}\), though of course the logarithm is unbounded if you get close enough to the singularity.  The radius of convergence is approximately 0.03468; the routine used to calculate these values is logrho(rr+z,4), which returns \(\ln(-\rho_4(rr+z))\).  You can see -exp(logrho) at the radius of convergence, where logrho has its largest value.  At 99% of the radius of convergence, the maximum of logrho is around -67700, so rho_4 is already vanishingly small, with a maximum amplitude of \(-\exp(-67700)\) at 99% of the radius of convergence.

What is most interesting to me is how "quiet" the rho4 Taylor series coefficients are.  I wrote a routine that approximates the log of the nth Taylor series coefficient, logcoeff(rr,n,4), accurate to a couple of decimal digits or so.  It works from about the 10th Taylor series coefficient up to the 1.3 millionth coefficient.  I didn't use this routine for rho3, since it would only work for the first 7 or 8 coefficients, which are easily computable, so I'm just approximating rho3's coefficients from its nearest singularity, since we know how the coefficients eventually grow.  So what I see is that rho4 starts out with Taylor series coefficients of around \(\exp(-4000000)\), and all of the coefficients with index less than ~570,000 have absolute value less than 1; most of them are vanishingly small.

The approximation does not change any of the first half million derivatives enough to show up in any normal computer representation for these numbers!  The crossover in magnitude doesn't occur until around the 670000th Taylor series coefficient, where the rho4 function's coefficients are finally larger in magnitude than the rho3 coefficients!

I haven't explained any of the equations behind the algorithm for logcoeff, or why it works, since that would be a lengthy detour.  Mostly, one can see that the Tetration function at the origin has pretty much converged by the time you get to rho3, and the work I've done explains how the rho4 iteration of the conjectured nowhere analytic sum behaves.  From the recursive equation, we can derive an approximation for rho4 at the origin, and rho5 would be an even tinier function; rho5's first Taylor series coefficient with amplitude >1 would occur for a value of n that is exponentially larger than for rho4.

.gp   beta_tau.gp (Size: 8.5 KB / Downloads: 19)
- Sheldon
#52
Hey Sheldon,

Do you mind if I refer to your error terms as \(\rho\) as opposed to \(\tau\)? The reason being, I had considered a similar \(\rho\) beforehand; yours is algebraically more clever, though. I had tried to reduce it into a sum of error terms, and I had tried it with the variable \(\rho\) as opposed to \(\tau\). This is much more consistent with my notation when doing infinite compositions, where we compound errors as \(\sum_j \rho_j\). In that sense, I've reserved \(\rho\) for compositions mapped to additions, which keeps in tone with a lot of my previous papers.

That is to say:

$$
\begin{align}
\rho^0_\lambda(s) &= -\log(1+\exp(-\lambda s))\\
\tau^n_\lambda(s) &= \sum_{j=0}^{n-1} \rho_\lambda^j(s)\\
\end{align}
$$

I had considered these earlier; but couldn't make heads or tails of it. I never thought:

$$
\begin{align}
\rho_\lambda^n(s) &= \log\left(1+\frac{\rho_\lambda^{n-1}(s+1)}{\beta_\lambda(s+1) + \tau_\lambda^{n-1}(s+1)}\right)\\
&= \log\left(1+\frac{\rho_\lambda^{n-1}(s+1)}{\beta_\lambda(s+1) + \sum_{j=0}^{n-2} \rho_\lambda^j(s+1)}\right)
\end{align}
$$

Which is the quintessential speed up you are employing.



Nonetheless, is it okay if we refer to these as \(\rho\) as opposed to \(\tau\)? Because I have some good asymptotics of \(\rho\) if we talk about it like this. To me, \(\tau\) is the direct recursion, and \(\rho\) is reducing it into a summation, upon which I have many tools to handle this sum asymptotically. And it fits very well with the notation I used to prove \(\beta\) is holomorphic in the first place. It makes the notation more consistent.

To me, notationally, \(\rho\) implies we are creating a summative bound of a sequence of compositions. To me \(\rho\) means a bounding map from \(\Omega \to \sum\).

Regards, James
#53
(10/20/2021, 05:13 AM)JmsNxn Wrote: Hey Sheldon,

Do you mind if refer to your error terms as \(\rho\) as opposed to \(\tau\)...
Hey James,

That is done.  I redid all of post #51 with the changes from tau to rho, including the diagrams and the updated pari-gp beta_tau.gp program.  In the text I added the tau/rho relationship too.  The updated pari-gp program beta_tau.gp is also in post #51, and that beta_tau.gp code applies to this post as well.
$$\begin{align}
\tau_n(z)=\sum_{i=0}^{n}\rho_i(z)\\
\end{align}$$

This post attempts to explain how I made the graph of the approximation of \(a_n\) for the first million or so rho_4 Taylor series coefficients.  Let's start with the pari-gp program rho(z,n), where I'm most interested in rho(rr,4).  Here are some of my assumptions:
  • rho(rr,4) has a nearest singularity, with an approximate radius of convergence of 0.034681
  • within that radius of convergence rho(rr,4) is non-zero, and has a logarithm
  • Approximately the first 1332000 Taylor series coefficients are negative for even indices and positive for odd indices
  • it is much easier to do calculations with logrho(z,4) than rho(z,4)
  • From Taylor series n=10...1332000 we can approximate the behavior on a circle of radius r(n) as Gaussian.  More pictures and details will follow below.
  • above that, for a_n with n~>1332000, the maximum is no longer on the real axis, and it switches to a complex conjugate pair of maxima that approach the singularity wall as n grows larger; these coefficients can probably still be approximated by this pair of points, maybe up to n = 6000000..7000000.  I haven't studied that maximum yet, but as we approach the singularity, other approximation techniques are required; eventually, you can just use the radius of convergence approximation ...
Let's plot logrho(rr+z,4) at about 40% of the radius of convergence, using r = 0.01384115.  We want to take the exponential of this function to see if we can learn more about rho_4.  We are plotting logrho(rr+0.01384115*exp(t*I),4), where t varies from 0..2pi.

But the real part of logrho is humongously negative, with a maximum value of -2086301.8 at logrho(rr-0.01384115,4).  This is a hugely negative number that we want to take the exponential of...  So we could subtract off that huge negative value by plotting as follows:
exp(logrho(rr-0.0173405*exp(t*I),4)-logrho(rr-0.0173405,4)).  Graphing from -pi to pi, we would have a spike at 0.  We could zoom into the spike by plotting from -0.01 to +0.01.  We're making progress, but it would still be a high frequency mess.

So, we would scale the graph by exp(-1220000*t*I)!!  That's because at this radius, the graph is dominated by the coefficient term a_n*x^n for n=1220000!!  And then we get this beautiful, approximately Gaussian distribution.  Here we graph -exp(logrho(rr+0.01384115*exp(t*I),4))*exp(-1220000*t*I), for t from pi-0.01 to pi+0.01.

The Gaussian approximation is so good, that it is probably accurate to several decimal digits for the envelope approximation of 0.0006076.  So how do we calculate the rn=1220000?  And how do we calculate the area of the Gaussian envelope?

This is the pari-gp equation I use:
Code:
logrho(rr+exp(log(-0.01384115)+x+O(x^3)),4)
and the output is as follows, where the polcoeff of the zero-order term is the magnitude reported earlier.  The first derivative \(a_1\) is approximately 1220000, and the x^2 coefficient \(a_2\) is half of the 2nd derivative.
Code:
-2086301.83826486 + 1219999.96585252*x + 215528.202158871*x^2 + O(x^3)
$$\text{envelope} \approx \frac{1}{\sqrt{4\cdot a_2\pi}}\approx 0.0006076$$
$$\text{logcoeff}(rr,rn,4)\approx\ln{|b_{rn}|}\approx  \Re(a_0) - rn\cdot\ln(r)+ \ln(\text{env})\approx 3135424.03$$
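To illustrate where the envelope formula comes from, here is a hedged Python sketch of the same recipe (my own stand-in, not Sheldon's logcoeff routine): Newton-solve for the radius where \(d\ln f/d\ln r = n\), read off \(a_0, a_2\), and apply \(\ln|b_n|\approx a_0 - n\ln r + \ln(\text{env})\). Applied to \(f(x)=e^x\), whose coefficients \(b_n=1/n!\) are known exactly, it reproduces Stirling's formula:

```python
import math

def log_saddle_coeff(logf, n, r_guess, h=1e-4):
    # Estimate ln|b_n| for f(x) = sum_n b_n x^n, given logf(r) = ln f(r), r > 0.
    # Expand logf(exp(s+u)) ~ a0 + a1*u + a2*u^2 around s = ln(r); at the
    # radius where a1 = n, ln|b_n| ~ a0 - n*s + ln(1/sqrt(4*pi*a2)).
    s = math.log(r_guess)
    a2 = 1.0
    for _ in range(60):
        lf = lambda u: logf(math.exp(s + u))
        a1 = (lf(h) - lf(-h)) / (2.0 * h)                      # d logf / d ln r
        a2 = (lf(h) - 2.0 * lf(0.0) + lf(-h)) / (2.0 * h * h)  # half the 2nd derivative
        if abs(a1 - n) < 1e-6 * max(1.0, float(n)):
            break
        s -= (a1 - n) / (2.0 * a2)  # Newton step: d a1 / d s = 2*a2
    a0 = logf(math.exp(s))
    return a0 - n * s + math.log(1.0 / math.sqrt(4.0 * math.pi * a2))

# For exp(x): ln f(r) = r, the saddle radius is r = n, and ln|b_n| = -ln(n!)
n = 1000
est = log_saddle_coeff(lambda r: r, n, r_guess=800.0)
```

Here est agrees with -math.lgamma(n+1) to about 1/(12n), i.e. the first neglected Stirling correction, which is the kind of accuracy claimed for logcoeff.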

The logcoeff routine uses Newton's method to quickly find the optimal radius, at which the series is dominated by the b_rn Taylor series coefficient, by calling logrho iteratively until the derivative matches the desired rn value.  The logcoeff result is returned as a vector; z=logcoeff(rr,dn,4); z[1].  The envelope approximation term was 0.06076% of the value of rho(rr-0.01384115), so the other 99.94% of the value of rho comes from nearby Taylor series coefficients.  Here is the approximately Gaussian distribution of the contributions for n from 1215000 to 1225000, or 1220000+/-5000, showing each b_n's contribution at the radius of interest.  Since the Fourier transform of a Gaussian is ... another Gaussian, we might expect that these two graphs are both approximately Gaussian.  Notice that the b_n for n=1220000 makes approximately the largest contribution, which is the maximum at approximately 0.0006076.  By the time we get to coefficient 1215000 or 1225000, the contribution is less than 1.8*10^-16, and the sum over these 10000 relative contributions is very nearly 1, also as expected.
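The exp(-1220000*t*I) rescaling is Cauchy's coefficient formula in disguise: \(b_n r^n\) is the n-th Fourier coefficient of \(f(re^{it})\), and near the optimal radius the integrand is exactly the Gaussian bump being plotted. A quick illustrative check in Python (on \(e^x\), not on rho_4):

```python
import cmath
import math

def taylor_coeff(fn, n, r, samples=256):
    # b_n = (1/(2*pi*r^n)) * integral_0^{2pi} fn(r*e^{it}) * e^{-i*n*t} dt,
    # approximated with the trapezoid rule (spectrally accurate on the circle)
    total = 0.0 + 0.0j
    for j in range(samples):
        t = 2.0 * math.pi * j / samples
        total += fn(r * cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return total / (samples * r ** n)

# coefficient b_10 of exp(x) is 1/10!; sample at the dominant radius r = n = 10
b10 = taylor_coeff(cmath.exp, 10, 10.0)
```

Sampling at the saddle radius r = n makes the integrand concentrate into a Gaussian bump around t = 0, just as in the plots above.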
- Sheldon
#54
Very god damned fascinating, Sheldon. I'm going to have to read back up on Gaussian quadrature and all that nonsense (because I'm pretty sure that's what you're using; I just forget the right word for it). I never cared for any of the math behind Gaussian speed-ups and clever integral representations, because I always focused on algebraic representations.

I'm a little dumbfounded by how you are calculating logrho so fast about the singularity--but it makes sense for the most part. We are centering at a value \(\log\rho^n(z_0) = 0\) which has a convergence radius for r; and then we sample a circle about the radius. Then we do some Gaussian magic (lol, this is what I need a better explanation of; but it's definitely a symptom of my lack of understanding Gaussian/Riemann quadrature integral speed-up black magic!)

I've already started going into salvage mode. I'm looking at how well of an asymptotic approximation this beast really is. And I think I might have an alternative approach which best describes how the asymptotic scenario really works. And I think, much of this is becoming avoidable; but only when \(\lambda\to 0\) and we do it cleverly enough.

I'm going to work on the claim that \(\text{tet}_\lambda(s)\) is holomorphic everywhere on \(\mathbb{C}\) up to a set \(\mathbb{E}_\lambda\) in which:

$$
\int_{\mathbb{E}_\lambda} 1\cdot dA = 0\\
$$

Which is a better way of saying my original claim. Second of all, I'm fortifying it as more of an asymptotic solution. Last of all, I'm making sure the same argument works for all bases/multipliers. This still allows for singularity walls, weird fuck ups, and all sorts of Taylor series shenanigans. This doesn't affect the first 30 pages of my paper too much; it just requires me to choose better language.

I'm still on the fence on the last part of my paper; but I feel that as \(\lambda \to 0\) the set \(\mathbb{E}_\lambda \to (-\infty,-2]\) and \(\text{tet}_\lambda \to \text{tet}_K\). All my numbers and thoughts and proof sketches point towards a normality in the left half plane as we limit \(\lambda \to 0\). This is in tune with the later parts of my paper; but I definitely need to write this out deeper.

Also, I'm beginning to understand why \(\mathbb{R} \subset \mathbb{E}_\lambda\) in a good topological way. I can't really explain this yet; the words are on the tip of my tongue; but I don't have them just yet.

Regards, James



Essentially, I've begun looking at \(q(z) = \text{tet}_1(\log(z))\), which satisfies \(q(e\cdot z)=e^{q(z)}\)--where no such analytic solution can exist on \(\mathbb{R}\). This function is holomorphic almost everywhere for \(\Im(z) > 0\)--but that's all we can say. If we try to compare the difference along the border of \(\Im(z) >0 \) and \(\Im(z)< 0\), we get a bunch of fractal errors. A similar result holds for \(q_\lambda(e^\lambda \cdot z) = e^{q_\lambda(z)}\); with the normality conditions I have, they must be non-analytic on \(\mathbb{R}\). The only equivalent language I can think of is that we are asking the Schröder functions about the fixed points \(L,L^*\) to magically agree on \(\mathbb{R}\). You and I both know that's nonsense.

But! \(q_\lambda(z)\) will be somewhat holomorphic for \(0 < \Im(z) < 2\pi i/\lambda\); with fractal properties near the boundary.

BUT! As \(\lambda \to 0\) this equation already diverges. And we're asking a divergent Schröder function to equal a divergent Schröder function on the real line. This has much more luck; wayyyyyyy more luck; seeing as this thing still converges.

To such an extent that as \(\Re(z) \to - \infty\) while \(\lambda\to0\), we approach functions holomorphic for \(\Im(z) >0\) and \(\Im(z) < 0 \)--but they agree on the real line. And they look like Kneser, because as \(|z| \to \infty\) for \(\pi/2 < \arg(z) <\pi\), the function \(\lim_{\lambda \to 0} \text{tet}_\lambda(z) \to L\)--and similarly in the lower half plane. This is a uniqueness condition per Paulsen & Cowgill.

Additionally, the more I graph the solutions as \(\lambda \to 0\), the more we decay to the fixed points \(L,L^*\) geometrically with \(\lambda\). And if you thought these singularities were quiet, I suggest looking at mult = 0.001 in my code (making sure to do about 1000 iterations); we have a bunch of fractals near \(\mathbb{R}\), but we get a huge area of convergence towards \(L\). I think it's because we're pushing closer and closer to Kneser.

Regards.
#55
(10/22/2021, 03:54 AM)JmsNxn Wrote: ...  fascinating, Sheldon.... I'm a little dumbfounded by how you are calculating logrho so fast about the singularity--but it makes sense for the most part.

Hey James,

Now let's define a function \(\text{logrho}(z)=\ln(-\rho(z))\), where I'll use the shorthand notation \(l\rho(z)\) for the remainder of this post.  Let's start with the following from my previous post; again, this is for the \(2\pi i\) periodic \(\beta(z,1)\).

$$\begin{align}
f_0(z)=\beta(z)-\ln(1+\exp(-z));\;\;\; f_n(z) = \ln^{\circ n}f_0(z+n)\\
\rho_0(z)=-\ln(1+\exp(-z))\\
\rho_n(z)=\ln\left(1+\frac{\rho_{n-1}(z+1)}{f_{n-2}(z+1)}\right)\\
\end{align}$$

Now lets change the recursive equation for \(\rho\) to a recursive equation for \(l\rho\)
$$\begin{align}
l\rho_0(z)=\ln\Big(\ln\big(1+\exp(-z)\big)\Big)\\
l\rho_n(z)=\ln\left(-\ln\left(1+\frac{\rho_{n-1}(z+1)}{f_{n-2}(z+1)}\right)\right)\\
l\rho_n(z)=\ln\left(-\ln\left(1+\frac{-\exp(l\rho_{n-1}(z+1))}{\exp(f_{n-1}(z))}\right)\right)\\
l\rho_n(z)=\ln\bigg(-\ln\Big(1-\exp\big( l\rho_{n-1}(z+1) - f_{n-1}(z)  \big) \Big) \bigg)\\
\end{align} $$


Next I implemented in pari-gp a routine I called loglogmexp(z) which implements the following:
$$\begin{align}
\text{loglogmexp}(y)=\ln\Big(-\ln\big(1-\exp(y)\big)\Big)\\
l\rho_n(z)=\text{loglogmexp}\big( l\rho_{n-1}(z+1) - f_{n-1}(z)\big);\;\;\; y=l\rho_{n-1}(z+1)-f_{n-1}(z)\\
\end{align}$$
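In ordinary double precision the same routine needs a little care: for y below about -37, the naive 1-exp(y) rounds to exactly 1 and the formula collapses, which is exactly where the exp(y) shortcut takes over. An illustrative Python version (my own; the actual pari-gp routine works at hundreds of digits and has its own series switch-over):

```python
import math

def loglogmexp(y):
    # loglogmexp(y) = ln(-ln(1 - exp(y))), for real y < 0
    if y < -40.0:
        # -ln(1-exp(y)) = exp(y) + exp(2y)/2 + ..., so the outer ln is
        # approximately y itself: this is the shortcut described below
        return y + math.exp(y) / 2.0
    # log1p keeps 1 - exp(y) accurate where forming it directly loses digits
    return math.log(-math.log1p(-math.exp(y)))
```

At y = -50 the naive math.log(-math.log(1 - math.exp(-50.0))) fails outright in doubles (1 - exp(-50) rounds to exactly 1.0, and log(0.0) raises), while the shortcut branch returns -50 up to a term of order exp(-50).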

Now, oftentimes \(\Re(y)\) is negative enough that we can replace the innermost \(-\ln\big(1-\exp(y) \big)\) with the approximation \(\exp(y)\)!!  If we are closer to the singularity, then I implemented either a more exact series, or else directly implemented the exponents and logarithms.  But for n=4, in most cases this is an extremely accurate approximation: accurate to ~60 or more decimal digits at a radius of less than 99.998% of the radius of convergence!
$$\begin{align}
l\rho_n(z) \approx   l\rho_{n-1}(z+1) - f_{n-1}(z)  \\
l\rho_n(z) \approx  \ln\Big(\ln\big(1+\exp(-z-n)\big)\Big)-\sum_{i=1}^{n}f_{i-1}(z+n-i)\\
\end{align} $$
edit and update: The equation above is dominated by \(f_0(z+n-1)\), or if centering at Tet(0), \(e\uparrow\uparrow(z+n-1)\).  In my program, \(f_n(z)\) is called beta_tau(z,n).  You can see the individual contributions by running "logrho_n(rr,4)" instead of logrho(rr,4).
Code:
z=logrho_n(rr,4);
 -5.74639913386489   log(log(1+exp(-z-4)))
 -3814279.10476022  -beta_tau(z+3,0)
 -15.1542622414793  -beta_tau(z+2,1)
 -2.71828182845905  -beta_tau(z+1,2)
 -1.00000000000000  -beta_tau(z+0,3)
z=-3814303.72370342;
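As a cross-check, the five contributions in that listing can be reproduced (to double precision only) with a rough Python translation of my own, assuming the functional-equation construction of \(\beta\); the reference values below come from the Code output above:

```python
import math

rr = 1.74480157761534  # centering constant, so that f_n(rr) -> Tet(0) = 1

def beta(z, depth=60):
    # beta(z+1) = exp(beta(z))/(1+exp(-z)), seeded with 0 far to the left
    x = 0.0
    for j in range(depth, 0, -1):
        x = math.exp(x) / (1.0 + math.exp(-(z - j)))
    return x

def f(z, n):
    # f_n(z) = ln^n f_0(z+n), with f_0(z) = beta(z) - ln(1+exp(-z))
    x = beta(z + n) - math.log(1.0 + math.exp(-(z + n)))
    for _ in range(n):
        x = math.log(x)
    return x

# lrho_4(rr) ~ ln(ln(1+exp(-rr-4))) - sum_{i=1..4} f_{i-1}(rr+4-i)
terms = [math.log(math.log(1.0 + math.exp(-(rr + 4.0))))]
terms += [-f(rr + 4 - i, i - 1) for i in range(1, 5)]
approx = sum(terms)  # dominated by -f_0(rr+3), total near -3814303.7237
```

Doubles reproduce the listing's -3814279.1048 and the -3814303.7237 total to well under a part per million, which is as much as 53-bit floats can promise here.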

.gp   beta_tau.gp (Size: 8.79 KB / Downloads: 17)
- Sheldon

