The beta method thesis
#1
Hey, everyone!

Although I haven't been too active on the forum of late, it's with very good reason. Sheldon and I have done a bunch of Zoom calls and a bunch of experiments to draw out exactly how the \(\beta\) method works. From this we've derived a whole plethora of results and limitations of the beta method. For instance, I've been able to show that for \(b > e^{1/e}\) the \(\beta\) method can at best create an asymptotic series. This means all the fancy graphs for these bases are at best tetration up to a \((s-s_0)^m\) term for arbitrary \(m\) and \(s_0\); consequently, they cannot be holomorphic. Additionally, this shows that they are exactly \(\mathcal{C}^\infty\) on the real line.

This led me to devote the majority of the paper to the Shell-Thron region. It is in this region that we see holomorphy much more obviously, and additionally we can "move the period around", at the cost only of creating different fractals in the graph. For example, if \(b = \sqrt{2}\), we can be holomorphic in the right half plane only if the period of the tetration is \(-2\pi i/\log \log(2)\). We can shrink this period, but if we do we gain a bunch of branching/singularities.

I've been able to determine EXACTLY where and when a \(\beta\) function can create a tetration for arbitrary \(\lambda\) and \(b = e^{\mu}\), where again the period of this tetration will be \(\ell = 2 \pi i/ \lambda\). This goes as far as showing that a natural boundary appears as \(2 \pi i/\lambda \to P\), where \(P\) is the period of the regular iteration. By regular iteration I mean the \(\sqrt{2}\)-style tetration done through the Schröder function with trivial theta mapping. I have a lot to share, and much of it settles the questions we've had about the \(\beta\) method.

Ironically enough, this does not disprove the earlier paper I had written; it shows that there are some errors in it (largely the assumption of holomorphy rather than an asymptotic series), but the main result is largely still possible--it just needs to be reworded. And how it needs to be reworded is pretty simple: we can approximate Kneser with \(\beta\) functions, but we cannot create a new tetration. Additionally, it needs to be done more carefully than I had done. This portion will not appear in the paper; instead I focus specifically on understanding where the \(\beta\) method can be holomorphic (or produce an asymptotic series) with arbitrary period.

This report is clocked at 90 pages at the moment, and I'm just polishing up the final chapter, which compares other possible beta methods and relates them to the beta method itself. It is intended to describe how the beta method with period \(\ell\) sits among other periodic solutions of tetration with period \(\ell\), which is basically just a controlled discussion of \(\theta\) mappings.

There are still some loose bolts in this paper. There are still things I may not have made air-tight, but I can make it all air-tight. Sheldon and I have talked and worked through a lot of code, and every result I've displayed is backed by hard empirical data, so forgive me if some results appear a tad under-explained.


I have incorporated every piece of the \(\beta\) method and have fleshed out everything we've discussed here. This is pretty much the summation of all my research pertaining to infinite compositions and tetration. I have included all the code which confirms every facet of the paper as well. I will upload the code soon; I just haven't gotten around to double and triple checking that everything works--and writing a readme.txt file.

I am also waiting to finish the second part of the appendix, which is Sheldon's analysis of the case \(\lambda=1,\,\mu =1\) (\(b=e\)). This is mostly just a large empirical justification of one of the main results in this paper.

Anyway, as I'm mostly just fixing typos at this point, I figure it'd be okay to post the pdf! 

Thanks a lot for the support guys! I hope this paper meets the standard of the forum!


.pdf   Asymptotic_Solutions_Of_The_Tetration_Equation_In_The_Style_Of_Sterling.pdf (Size: 7.05 MB / Downloads: 99)

EDIT:

I've attached the link to the GitHub library; it has a readme which breaks down in more detail how to use the program. I am trying to add a helper protocol, but I'm not sure what the right way to do it would be; I'm not too much of a fan of Sheldon's.

https://github.com/JmsNxn92/The-Beta-Method-Thesis

I've also made the code more conducive to grabbing Taylor series; I forgot to add some code in the version I uploaded. For that reason I'm reuploading and attaching it to the main post, and deleting the post I made at the beginning of this thread.

I've also made a thread in computing to store this code; it can be found here.

The readme can be found there also;

Regards, James
#2
I've been playing "Elden Ring" for the past few days, so I haven't been on the forums to check the latest news. Anyway, fuck the war, fuck COVID, and peace to all of you.

And then, yes: congratulations on finally finishing your thesis!

I'm sharing the conclusions of my cursory reading here, and I hope I haven't missed key information.

1. The weak Fatou set is the complement of the weak Julia set.
2. The beta method is holomorphic on the weak Fatou set, and not on the weak Julia set.
3. The beta method works well on the Shell-Thron region (or at least its interior).
3.1. It can induce a unique Kneser tetration.
3.2. And there the weak Julia set is measure zero.
4. Every periodic solution to the tetration equation whose period is not that of the regular iteration must have singularities in the right half plane beyond the singularities of the regular iteration.

And then I have some questions in case I missed something:

Q1. What happens to the beta method if we approach the S-T region from the outside?
Q2. What happens to the beta method when imag(base) < 0? In particular, at the minimum real part of the Shell-Thron region, and for -1 ≤ base < 0.
Q3. I did not find a beta-method super-logarithm in the paper.
Q4. Is the beta method not sufficiently holomorphic to induce the super-root?


The last one is purely a question of entertainment:
Q5. What is the behavior of \( \int \beta (z)\,dz \) and \( \int \text{tet}_{\beta} (z)\,dz \)?

Oh, I know a lot of people hate solving indefinite integral problems; I'm just kidding. It's just that no version of tetration's indefinite integral looks like it would be expressible through the Mellin transform or the Fox H-function.
#3
(03/21/2022, 07:40 PM)Ember Edison Wrote: I've been playing "Elden Ring" for the past few days, so I haven't been on the forums to check the latest news. Anyway, fuck the war, fuck COVID, and peace to all of you.

And then, yes: congratulations on finally finishing your thesis!

I'm sharing the conclusions of my cursory reading here, and I hope I haven't missed key information.

1. The weak Fatou set is the complement of the weak Julia set.
2. The beta method is holomorphic on the weak Fatou set, and not on the weak Julia set.
3. The beta method works well on the Shell-Thron region (or at least its interior).
3.1. It can induce a unique Kneser tetration.
3.2. And there the weak Julia set is measure zero.
4. Every periodic solution to the tetration equation whose period is not that of the regular iteration must have singularities in the right half plane beyond the singularities of the regular iteration.

And then I have some questions in case I missed something:

Q1. What happens to the beta method if we approach the S-T region from the outside?
Q2. What happens to the beta method when imag(base) < 0? In particular, at the minimum real part of the Shell-Thron region, and for -1 ≤ base < 0.
Q3. I did not find a beta-method super-logarithm in the paper.
Q4. Is the beta method not sufficiently holomorphic to induce the super-root?


The last one is purely a question of entertainment:
Q5. What is the behavior of \( \int \beta (z)\,dz \) and \( \int \text{tet}_{\beta} (z)\,dz \)?

Oh, I know a lot of people hate solving indefinite integral problems; I'm just kidding. It's just that no version of tetration's indefinite integral looks like it would be expressible through the Mellin transform or the Fox H-function.

Hey, Ember!

You've essentially got it. The \(\beta\) method is holomorphic on the weak Fatou set, which is like the usual Fatou set but defined for the \(\beta\) function: essentially, the weak Julia set is where the orbits \(\beta(s+n)\) escape to infinity, and the weak Fatou set is where they don't. On the weak Julia set you can still recover arbitrary precision (away from the singularities), but the result is nowhere analytic: all the derivatives still converge pointwise, yet the power series diverges. I haven't proven we can induce Kneser's tetration, but all numerical evidence points to yes. It's the same idea I've been describing of taking the limit \(\lim_{\lambda \to 0} \lim_{n\to\infty} \beta_\lambda(s+n)\). For regular iteration on the Shell-Thron region (by regular iteration, I mean Schröder/Écalle iteration), you can get it by limiting the period to the period of the regular iteration. You've gotten the main central points.
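To make the weak Fatou/weak Julia dichotomy concrete, here is a throwaway numerical sketch (my own ad-hoc Python, not the thesis code; the escape threshold, depths, and function names are all my arbitrary choices). It approximates \(\beta_{\lambda,\mu}\) by iterating the functional equation \(\beta(t+1) = e^{\mu\beta(t)}/(1+e^{-\lambda t})\) forward from far in the left half plane, where \(\beta\) essentially vanishes, and then tests whether the orbit \(\beta(s+n)\) escapes:

```python
import cmath

def beta(s, lam=1.0, mu=1.0, depth=120):
    # Run beta(t+1) = exp(mu*beta(t)) / (1 + exp(-lam*t)) forward,
    # starting from t = s - depth where beta is essentially 0.
    t, b = s - depth, 0.0
    for _ in range(depth):
        b = cmath.exp(mu * b) / (1 + cmath.exp(-lam * t))
        t += 1
    return b

def orbit_escapes(s, lam=1.0, mu=1.0, steps=8, bound=1e6):
    # Crude weak Fatou/Julia test: does the orbit beta(s+n) blow up?
    for n in range(steps):
        try:
            if abs(beta(s + n, lam, mu)) > bound:
                return True   # looks like the weak Julia set
        except OverflowError:
            return True       # exp overflowed: certainly escaping
    return False              # looks like the weak Fatou set
```

For \(\mu = 1\) (base \(e\)) real points escape almost immediately, while for \(\mu = \log(2)/2\) (base \(\sqrt{2}\), inside Shell-Thron) the orbit settles onto the fixed point \(2\).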




I'd add, though, that the thesis focuses a lot on the asymptotic angle, providing three different asymptotic theorems for \(\beta\), each stronger than the last.

For Q1: The boundary of the Shell-Thron region behaves very similarly to its interior, but it forces us to give up a couple of the properties that hold on the interior. For example, you can still construct tetration for \(\mu = 1/e, b = e^{1/e}\). The beauty here is that the multiplier of the fixed point \(\omega = e\) is \(\mu \omega = 1\), and the beta method is holomorphic when \(\Re(\lambda) > -\log|\mu \omega|\), which means \(\Re(\lambda)\) must be greater than zero... So clearly this condition implies tetration at base \(b = e^{1/e}\) can be made for all such periods.




To talk about the bare bones, namely where there is a weak Fatou set and where there isn't, is really all you have to do to uncover where these tetrations are holomorphic.

So take \(\Re(\lambda) > 0\) and take \(\beta_\lambda\); then the weak Fatou set is everywhere that \(\lim_{n\to\infty} \beta_\lambda(s+n) \to e\). Now you have to analyse the linearized form of \(\tau\), which is:

$$
\begin{align}
\tau^1(s) &= -e^{1-\lambda s}\\
\tau^{n+1}(s) &= \frac{e \tau^{n}(s+1)}{\beta(s+1)} - e^{1-\lambda s}\\
&= -\sum_{j=0}^n \frac{e^{j+1 - \lambda(s+j)}}{\prod_{c=1}^j \beta(s+c)}
\end{align}
$$




This series converges everywhere that \(\Re(\lambda) > 0\), so tetration works everywhere on the weak Fatou set (excluding branch cuts or logarithmic singularities, but again these are measure zero in \(\mathbb{R}^2\)). A similar story holds for all values on the border of the Shell-Thron region. The trouble is, it can be a little more chaotic. In this case I very much doubt that the weak Julia set is measure zero; it may be larger, and probably looks more like the weak Julia sets outside of the Shell-Thron region, which are much more substantial. For \(\mu = 1, b = e\), for example, it's all of \(\mathbb{C}\). This is because we have a neutral fixed point on the boundary; for that reason I didn't talk too much about the boundary, and lumped it in with the general case.
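For what it's worth, the closed form above is easy to check numerically. Here is a quick sketch (my own ad-hoc code with the boundary values \(\mu = 1/e\), \(\lambda = 1\) hard-coded; not the thesis program): it builds \(\beta\) from its functional equation and sums the terms \(e^{j+1-\lambda(s+j)}/\prod_{c=1}^{j}\beta(s+c)\); the recursion \(\tau^{n+1}(s) = e\,\tau^n(s+1)/\beta(s+1) - e^{1-\lambda s}\) then checks out to roughly machine precision, and the partial sums visibly converge:

```python
import cmath

MU, LAM = 1 / cmath.e, 1.0   # boundary case b = e^(1/e), fixed point omega = e

def beta(s, depth=200):
    # beta via beta(t+1) = exp(MU*beta(t)) / (1 + exp(-LAM*t)), from the far left
    t, b = s - depth, 0.0
    for _ in range(depth):
        b = cmath.exp(MU * b) / (1 + cmath.exp(-LAM * t))
        t += 1
    return b

def tau(s, n):
    # tau^n(s) = -sum_{j=0}^{n-1} e^{j+1-LAM*(s+j)} / prod_{c=1}^{j} beta(s+c)
    total, prod = 0.0, 1.0
    for j in range(n):
        if j > 0:
            prod *= beta(s + j)
        total -= cmath.exp(j + 1 - LAM * (s + j)) / prod
    return total
```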



Q2:



So in the lower half of the Shell-Thron region, everything behaves by conjugacy with the upper half. This means that under \(\mu \mapsto \mu^*\) we have \(\beta_{\lambda,\mu^*}(s) = \beta_{\lambda^*,\mu}(s^*)^*\), and similarly for the tetration. It's very well behaved. I also spent a good amount of time double-checking that everything worked the same for \(e^{-e} < b < 1\), and whether it still produced a viable Shell-Thron tetration... it did. For \(0 < b < e^{-e}\), things get very wacky. If you use the function in these cases, it's important to use a large number of polynomial terms/iterations. And when you are at about \(1E-24\) or something ridiculous, use init_OFF, which applies a linear substitution so that the program can still initialize the polynomial. Everything works the same, but the weak Julia set starts to dominate and the weak Fatou set shrinks to an almost trivial area.
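The conjugation symmetry is one of the easiest claims here to sanity-check numerically. A quick sketch (again my own ad-hoc code, with the test values \(\lambda, \mu, s\) picked arbitrarily, not from the thesis):

```python
import cmath

def beta(s, lam, mu, depth=150):
    # beta_{lam,mu} via beta(t+1) = exp(mu*beta(t)) / (1 + exp(-lam*t))
    t, b = s - depth, 0.0
    for _ in range(depth):
        b = cmath.exp(mu * b) / (1 + cmath.exp(-lam * t))
        t += 1
    return b

# conjugation symmetry: beta_{lam, mu*}(s) == beta_{lam*, mu}(s*)*
lam, mu, s = 1 + 0.25j, 0.3 + 0.2j, 1.5 + 0.5j
lhs = beta(s, lam, mu.conjugate())
rhs = beta(s.conjugate(), lam.conjugate(), mu).conjugate()
```

Since conjugation commutes with every arithmetic operation in the recursion, `lhs` and `rhs` agree to machine precision.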



Q3:



Yes, I still have a mental block about the super-logarithm. It is perfectly possible to create one; the trouble is that I have nothing new to say about it. I don't have a way of constructing it from scratch other than a straight polynomial inversion. For that reason I didn't bother including anything about it, as it would ultimately just be inverting the Taylor series of the super-exponential...



Q4:



As to the super-root: again, I haven't given it much thought. I don't have anything clever to say about it at the moment, so I left it out...



Q5:



As I don't know what the indefinite integrals are, I thought I'd share a fact about infinite compositions for generating the higher derivatives. For example, I'll do \(\beta'\).



$$
\begin{align}
\beta'(s+1) &= \frac{d}{ds} \frac{e^{\mu \beta(s)}}{1+e^{-\lambda s}}\\
&= \mu\beta'(s)\beta(s+1) + e^{\mu \beta(s)}\frac{\lambda e^{\lambda s}}{(1+e^{\lambda s})^2}\\
&= \mu\beta'(s)\beta(s+1) + \frac{\lambda \beta(s+1)}{1+e^{\lambda s}}
\end{align}
$$



Therefore,



$$
\beta'(s) = \Omega_{j=1}^\infty \beta(s+1-j)\left(\mu z+\frac{\lambda}{1+e^{\lambda (s-j)}}\right)\bullet z
$$



This is because it is the unique function satisfying the above functional equation together with \(\lim_{s \to -\infty} \beta'(s) = 0\). Similarly, we can write a difference equation for \(\beta^{(n)}(s)\) in terms of \(\beta,\beta',\ldots,\beta^{(n-1)}\), whose solution must again be a linear infinite composition (only one linear function in \(z\) appears in the composition). This means \(\beta\) satisfies a delay differential equation, and so integrating \(\beta\) falls under integrating a solution of a delay differential equation. You can actually brute-force the integral \(\int\beta\) from here, but I really don't want to do it. If you're that interested though... I can begrudgingly write it up.
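As a numerical sanity check of the \(\beta'\) equation (my own sketch, with \(\lambda = \mu = 1\) chosen arbitrarily): solve the difference equation for \(\beta'\) forward alongside \(\beta\), using \(\beta'(-\infty) = 0\), and compare against a central finite difference of \(\beta\):

```python
import cmath

LAM = MU = 1.0   # base e, picked arbitrarily for the check

def beta_and_prime(s, depth=150):
    # Iterate beta(t+1) = exp(MU*beta(t)) / (1 + exp(-LAM*t)) together with
    # beta'(t+1) = MU*beta'(t)*beta(t+1) + LAM*beta(t+1)/(1 + exp(LAM*t)),
    # starting far to the left where both vanish.
    t, b, bp = s - depth, 0.0, 0.0
    for _ in range(depth):
        b_next = cmath.exp(MU * b) / (1 + cmath.exp(-LAM * t))
        bp = MU * bp * b_next + LAM * b_next / (1 + cmath.exp(LAM * t))
        b = b_next
        t += 1
    return b, bp

# compare beta'(0.5) against a central difference of beta
b, bp = beta_and_prime(0.5)
h = 1e-6
fd = (beta_and_prime(0.5 + h)[0] - beta_and_prime(0.5 - h)[0]) / (2 * h)
```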





Lol! I hope you're having fun playing Elden Ring. Can't imagine what it's like in Europe right now. I agree: fuck COVID and this nonsense war. Ffs! I'm not a big Elden Ring guy, but I've been playing Metroid Dread for 5 months straight; fucking love this game!



Sincere Regards, James





Point 1: I thought I'd describe how you can recover Kneser. If you take \(\lambda = 0.001\) and \(\mu = 1, b = e\), and you graph near where it's zero, you get this:



[image attachment]



All of the singularity problems start to cluster closer and closer to the real line. If I were to graph higher up here, you'd see a nice sheet of green with zero singularities or cuts. As you shrink the multiplier this becomes more and more drastic. Essentially, the real line becomes an eventual border between the upper fixed point and the lower fixed point, and the orbits of their neighborhoods. To get Kneser you have to do a bit of mapping after this, but it is essentially this argument.



To get regular iteration, here's a side-by-side for \(b = \sqrt{2}\): the beta tetration with \(\lambda = 0.01 -\log\log(2)\) next to the regular tetration:

[image attachment]

[image attachment]



Point 2: Here's a graph of \(\lambda = 1\) and \(\mu = 1/e, b = e^{1/e}\); you can see we still have a large area of holomorphy. It looks pretty good, to be honest.



[image attachment]



And here's a graph testing the functional equation using just the Taylor series:



[image attachment]



It's certainly analytic on the weak Fatou set.
#4
1. OK, I think I should be more concerned about what the beta method is doing in this area:

[image attachment]

2. If a base happens to be in the S-T region, then:

Code:
u = exp(2*Pi*I/period);  \\ multiplier of the fixed point
t = exp(u);              \\ fixed point: log(t) = u
base = exp(u/t);         \\ base = t^(1/t), which has fixed point t

I can't understand why you wrote Is_Shell_Thron() in such a complicated way...
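For the record, here is why that parametrization works, transcribed into a quick Python check (a sketch, not part of either program; the period 5 below is arbitrary): with a multiplier \(u = e^{2\pi i/\text{period}}\) on the unit circle, \(t = e^u\) is a fixed point with \(\log t = u\), and \(\text{base} = e^{u/t} = t^{1/t}\), so \(\text{base}^t = t\):

```python
import cmath

def st_base(period):
    # multiplier u on the unit circle, fixed point t = e^u, base b = t^(1/t)
    u = cmath.exp(2j * cmath.pi / period)
    t = cmath.exp(u)
    return cmath.exp(u / t), t, u

# b has fixed point t (b**t == t) with neutral multiplier t*log(b) == u
b, t, u = st_base(5)
```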

3. I think the most surprising fact about "recovering Kneser" is that the beta method works fine when the base is very close to 1, or even equal to 1. Fatou.gp is very poor at this problem (and for \( e^{-e} < b < 1 \)).

4. I'm just worried that there are dynamics of the inverse function that we don't understand yet; it would be best to perform a check if the numerical approximation is easy.

By contrast, we have never really understood the dynamics and numerical computation of tetration's indefinite integral, but \( \int\beta \) doesn't look so bad.
#5
I was just playing Metroid Dread, because you made playing Elden Ring sound so perfect.

I agree entirely with every point you've made. But each of my results is very unconventional. I started a graphing run about 4 hours ago, and now I have the graph of \(\mu = -e\), with \(\lambda = 1\):

[image attachment]



But I really want to add: the program is not as faithful as the thesis. You have to massage the code sometimes, and the only way to massage the code is if you are doing the math, and the math is in the 100 or so pages of the thesis. The code is all good and all, but it's coded exactly in the manner of the THIRD asymptotic theorem. Nonetheless, this is a tetration holomorphic on \(\mathbb{C}\) up to a set of measure zero in \(\mathbb{R}^2\).

I wish I could explain it better Ember, but I can't. You just have to read the thesis in more detail.


PLUS I am running graphing protocols for the area you just suggested. Stay tuned for updates.

Quote:2. If a base happens to be in the S-T region, then:

Code:
u = exp(2*Pi*I/period);
t = exp(u);
base = exp(u/t);

I can't understand why you wrote Is_Shell_Thron() in such a complicated way...

Honestly... I've never seen that before. That definitely saves time, jesus christ. But I still need a way of getting the fixed point or the period, and Is_Shell_Thron is a quick, lazy way to get that. Given \(\mu = \log(2)/2\), I still need a way to find that the period is \(2\pi i/\log\log(2)\)...

I guess I could write a protocol where, if \(\mu\) is in the interior of the Jordan curve \(e^{i\phi-e^{i\phi}}\), it's in Shell-Thron; but then I'd have to run a Lambert W protocol to get the fixed point. Honestly, that sounds like more work. I write everything using recursion, rofl, and I like it that way because so much extra stuff gets involved otherwise.
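If it helps, getting the fixed point and the period from \(\mu\) alone doesn't strictly need Lambert W; a few Newton steps on \(e^{\mu z} - z\) do the job. A sketch (the function names are mine, and the seed z0 has to be chosen near the attracting fixed point):

```python
import cmath

def fixed_point(mu, z0=1.0, steps=60):
    # Newton's method on f(z) = exp(mu*z) - z; z0 must sit near the
    # attracting fixed point (for mu = log(2)/2 that fixed point is 2)
    z = z0
    for _ in range(steps):
        w = cmath.exp(mu * z)
        z = z - (w - z) / (mu * w - 1)
    return z

def period(mu, z0=1.0):
    # period of the regular iteration: 2*pi*i / log(multiplier)
    omega = fixed_point(mu, z0)
    return 2j * cmath.pi / cmath.log(mu * omega)
```

With \(\mu = \log(2)/2\), this recovers the fixed point \(2\) and the period \(2\pi i/\log\log(2)\).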
#6
(03/23/2022, 08:37 AM)JmsNxn Wrote: Honestly... I've never seen that before. That definitely saves time, jesus christ. But I still need a way of getting the fixed point or the period, and Is_Shell_Thron is a quick, lazy way to get that. Given \(\mu = \log(2)/2\), I still need a way to find that the period is \(2\pi i/\log\log(2)\)...

I guess I could write a protocol where, if \(\mu\) is in the interior of the Jordan curve \(e^{i\phi-e^{i\phi}}\), it's in Shell-Thron; but then I'd have to run a Lambert W protocol to get the fixed point. Honestly, that sounds like more work. I write everything using recursion, rofl, and I like it that way because so much extra stuff gets involved otherwise.

For solving numerically, you can try Pari-GP's solve(), but I find it awkward: it demands too much of the initial values (it wants a bracketing interval with a sign change).

The 12 extreme values I solved for in the S-T region are hardcoded inside the gp file.
.gp   STRBase.gp (Size: 25.72 KB / Downloads: 35)
I don't have the energy to fight with Pari-GP's solve(), and Wolfram/Maple's numerical solvers are much more comfortable.



The Lambert W function is available as lambertw().
#7
Hey, so I did run some protocols on the cusp of the Shell-Thron region. It does get a little wacky, but on the main strip it works very well. You should note here that the glitchy areas are actually a plethora of branch cuts and mixing problems between the lower/upper half planes. I will definitely have to figure out how to code this better; it looks a little off. Still, the branch cuts are measure zero in \(\mathbb{R}^2\), but definitely something is going on here that I'll have to examine.

Here is \(b = \exp(-0.1-e+0.1i)\) for \(\lambda = 1\):

[image attachment]

Here is \(b = \exp(0.3-e+i)\) for \(\lambda = 1\):

[image attachment]

Here is \(b = \exp(0.1-e+0.5i)\) for \(\lambda = 1\):

[image attachment]

The trouble is probably forcing a real period on a tetration that naturally has a complex period. I'll run some more tests and see if I can modify some of the parameters to work around these TV-static pictures. These graphs were also made with very low precision, so it could just be a precision error.

Regards, James


EDIT:

So the TV static in the above pictures is definitely low-precision error. I'm rerunning the graphs with higher precision/higher series precision, and the TV static is disappearing entirely, resolving into actual detail.

My suggestion would then be: when dealing with the left cusp of the Shell-Thron region, make sure you use enough precision/series precision.


EDIT2:

So it took wayyyy longer to run, but here is \(b = \exp(0.1-e+0.5i)\) with \(\lambda = 1\). I used about 350 iterations/series precision and 50 digits of precision. We get something way nicer. It's much slower to compute, so I made a smaller graph, but it works far better:

[image attachment]


So all the TV static can be fixed with higher precision/series precision. But the low-precision runs won't work in the cusp of the area you asked me to test; you have to run at least mid-level precision there. Be prepared for really long wait times though. This is definitely a case of "it works, but it's slow as fuck".

Regards, James



All the above graphs are done over \(0 \le \Re(s) \le 6\) and \(|\Im(s)| \le 3\); recalling that we have an exact period of \(2\pi i \approx 6.28i\), we've basically graphed the entire behaviour of the function.
#8
I suggest setting init_OFF's g(|u|) = log(abs(u)); fine-tuning this function to other values (like log(z)^n) makes little sense, since the adjustable range of n is very small.

init_OFF(1, k)    log(u)^n
10.0^±24          n > 0.5
10.0^±25          n > 0.5
10.0^±26          n > 0.9
10.0^±10^2        0.97 ≤ n ≤ 1.4

log(z) can work under:
Code:
/* 256GB, x86-64 Pari-GP Stack limit */
default(parisize, a1=274877906944); init_OFF(1, 10.0^10^9)

There is also a small problem: Is_Julia() does not work in scenarios where init(1, 10.0^-10) is used, let alone scenarios where init_OFF is needed.
#9
(04/18/2022, 05:55 PM)Ember Edison Wrote: I suggest setting init_OFF's g(|u|) = log(abs(u)); fine-tuning this function to other values (like log(z)^n) makes little sense, since the adjustable range of n is very small.

init_OFF(1, k)    log(u)^n
10.0^±24          n > 0.5
10.0^±25          n > 0.5
10.0^±26          n > 0.9
10.0^±10^2        0.97 ≤ n ≤ 1.4

log(z) can work under:
Code:
/* 256GB, x86-64 Pari-GP Stack limit */
default(parisize, a1=274877906944); init_OFF(1, 10.0^10^9)

There is also a small problem: Is_Julia() does not work in scenarios where init(1, 10.0^-10) is used, let alone scenarios where init_OFF is needed

I'll look into it; I have a list of things to program and fix in the next iteration of this program. I thought I had already set it to log(abs(u)), but I must've forgotten to change it--I know you pointed this out before.

Is_Julia is pretty wonky for extreme values; it's difficult to implement better. But I have already added some changes which helped with quite a few inaccuracies. I'm still fiddling with the program to make it more streamlined.

I've also somewhat deprecated and retired the init_OFF protocol, because I figured out how to do an automatic flag so that all you need is init. It's still in a beta stage though; I haven't had enough time to work on the program.
#10
(04/19/2022, 01:17 AM)JmsNxn Wrote: Is_Julia is pretty wonky for extreme values; it's difficult to implement better. But I have already added some changes which helped with quite a few inaccuracies. I'm still fiddling with the program to make it more streamlined.

Oh, and I don't think we need to improve Is_Julia()'s extreme values much. At 10.0^±10^n I expect to see white screens as output soon.

Is_Julia() is interesting in itself. I have been studying init(1, log(1 + r * (-1)^(n/30))) for almost a month, and Is_Julia() has the slowest area growth when 15 < n < 27.

