A first hi-res look at tetration \(\lambda = 1\) and \(b = e\)
#11
(10/26/2021, 07:03 PM)Ember Edison Wrote: 1. So which is more "analytic", the beta method or Kneser's method? I can't relate Sheldon's discovery to the complex plane.

2. And do you have any idea how to get the beta method super-logarithm and super-root?

3. I was very surprised by the robustness of the beta method, and I'm still narrowing down the base to see whether the beta method crashes completely first, or Pari-GP crashes first. Does the beta method have a limit as base \(\to 0\)? Can the singularity at \(0\) be removed?

Well, Kneser is definitely more analytic for \(b=1\); unless we take \(\lambda \to 0\), all we get is that \(\text{tet}_{\lambda,1}(s)\) converges well enough. I'm not sure at the moment if the limit will be as analytic; and my heuristics seem to imply that IF it converges and is analytic, it'll probably actually be Kneser! But I'm not sure at the moment. When you make \(\lambda = 0.0001\) or smaller, we either overflow or branch out (though the branching is restricted to the real line and the essential singularities). The branching seems to isolate itself to the real line; everywhere else we look fairly holomorphic, and in a neighborhood of \(L\) or \(L^*\), our familiar fixed points. To such an extent, you can say \(\lim_{|s|\to\infty} \lim_{\lambda \to 0} \text{tet}_{\lambda ,1}(s) = L\) for \(\pi/2 < \arg(s) < \pi\)--which would imply it's Kneser per Paulsen & Cowgill.

But for the moment, if you take,

$$
F(s) = \text{tet}_{1,1}(s),\,\,\text{the}\,2 \pi i\text{-periodic tetration base}\,e
$$

Then Sheldon has demonstrated rather clearly that it's \(\mathcal{C}^\infty\) on \(\mathbb{R}\), with a collection of branching points everywhere it starts to blow up. So it leaves me with the original statement I had in my paper, which is that,

$$
F(s)\,\,\text{is holomorphic on}\,\mathbb{C}\setminus\mathcal{B}\\
$$

Where,

$$
\int_{\mathcal{B}} \,dA = 0\\
$$

Or that its points of discontinuity/branches/singularities have measure zero in \(\mathbb{R}^2\) under the standard Lebesgue area measure. I'm confident this is the best you can get, and I'm still confident in this result; I just assumed it'd be analytic on \(\mathbb{R}\). You can show this result because the logarithmic singularities/branches cannot be dense in \(\mathbb{C}\): since \(F(s) = \beta(s) + \tau(s)\) and you can remove the singularity at \(\Re(s) = \infty\) so that \(\tau(\infty) = 0\), the function must converge in a neighborhood of \(\Re(s) = \infty\).

I had thought that the pullback would be better than it is--because graphs of small iterations look very, very good. For larger iterations, I thought my code was failing and the pretty pictures were correct; but the truth is, when you make a very accurate graph, you can see the branching problems quite clearly--as per the graph above.

Nonetheless, that undercuts the final theorem, which at best is worded less than desirably and is "almost right". I'm going to rework my paper a good amount; probably start nearly from scratch.

2.

The best I can give you is to invert the infinite composition and apply the inverse limit. So, let's do this real quick:

$$
\begin{align}
\beta(s) &= \Omega_{j=1}^\infty \frac{e^{bz}}{e^{\lambda(j-s)}+1}\,\bullet z\\
&= {\displaystyle \frac{e^{\displaystyle  \frac{be^{\displaystyle \frac{be^{\displaystyle ...}}{e^{\lambda(3-s)}+1}}}{e^{\lambda(2-s)}+1}}}{e^{\lambda(1-s)}+1}}
\end{align}
$$

Make the change of variables \(w = e^{\lambda s}\) to get \(g(w) = g(e^{\lambda s}) = \beta(s)\), which is holomorphic on the disk \(|w| < e^{\lambda}\) and fixes zero. Invert this to get \(g^{-1}\); since \(g^{-1}(\beta(s)) = e^{\lambda s}\), we recover \(\beta^{-1}(w) = \frac{1}{\lambda}\log g^{-1}(w)\).

Now, none of these operations are too computationally exhausting. From here we run the limit:

$$
\text{slog}(s+s_0) = \lim_{n\to\infty} \beta^{-1}(\exp^{\circ n}(s)) - n\\
$$

This should converge to the appropriate limit on \(\mathbb{R}\); I'm not sure about the complex plane. There may be some trouble depending on how "holomorphic" the beta method is for each \(b\) and \(\lambda\).
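Here's a rough Pari/GP sketch of this inverse limit, just to make it concrete. Everything here is an assumption-laden toy: base \(e\) (\(b=1\)), \(\lambda = 1\), real \(s\), a guessed truncation \(N\), and tiny \(n\); the names beta, betainv, slogbeta are mine, not canon.

Code:
\\ beta via the truncated infinite composition, composed from the inside out:
beta(s, lam = 1, b = 1, N = 60) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(b*z)/(exp(lam*(j - s)) + 1));
  z;
}
\\ invert beta on a real bracket; beta is increasing there, so solve() can
\\ bisect (beta overflows past s ~ 6, so keep the bracket modest):
betainv(y, lam = 1) = solve(t = -3, 5, beta(t, lam) - y);
\\ the inverse limit, up to the constant s_0 in the display above;
\\ n must stay small or exp^n(s) escapes the bracket:
slogbeta(s, lam = 1, n = 2) = {
  my(y = s);
  for(i = 1, n, y = exp(y));
  betainv(y, lam) - n;
}

As a sanity check, slogbeta(exp(0.5)) - slogbeta(0.5) should sit near \(1\), since the construction bakes in \(\text{slog}(e^s) = \text{slog}(s) + 1\) up to truncation error.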

3.

So to get the gist of the limit it's helpful to remember what the beta method is.

$$
\beta_{\lambda ,b}(s) = \Omega_{j=1}^\infty \dfrac{e^{bz}}{e^{\lambda(j-s)}+1}\,\bullet z\\
$$

And this thing converges everywhere that the sum,

$$
\sum_{j=1}^\infty \dfrac{e^{bz}}{e^{\lambda(j-s)}+1}\\
$$

converges normally on compact sets. Now, this just means it converges on compact subsets in the supremum norm of each element of the sum. This instantly tells us \(\Re \lambda > 0\), as this is the dominant term of the series and gives it its geometric convergence. The value \(s\) just can't hit a singularity, which removes a countable set of points. And in this sum, \(b\) really doesn't do anything... So with this in mind, what happens when \(b=0\)? Well, the expression looks like this:

$$
\Omega_{j=1}^\infty \dfrac{1}{e^{\lambda(j-s)}+1}\,\bullet z
$$

And since there is no \(z\) term, we get a "null" composition--no composition is actually being made. So this means the only term involved is the first one, and:

$$
\beta_{\lambda,0}(s) = \frac{1}{e^{\lambda(1-s)}+1}\\
$$

And, additionally, it will be holomorphic in a neighborhood of \(b=0\), because the above sum converges normally on compact sets.
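Here's a quick numerical check of this collapse, as a hedged Pari/GP sketch (the sample values \(\lambda = 1\), \(s = 2\) and the truncation \(N\) are my choices; beta and firstfactor are hypothetical names):

Code:
\\ as b -> 0, the truncated composition should collapse to its first factor:
beta(s, lam = 1, b = 1, N = 60) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(b*z)/(exp(lam*(j - s)) + 1));
  z;
}
firstfactor(s, lam = 1) = 1/(exp(lam*(1 - s)) + 1);
\\ the difference should shrink roughly linearly in b:
for(k = 1, 6, my(b = 10.0^(-k)); print(b, "   ", beta(2, 1, b) - firstfactor(2)));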

The only REAL trouble point is:

$$
\lambda = 0\\
$$

So now, if you want to talk about the trouble of \(^z 0\), remember that you are actually taking \(b \to -\infty\), which is definitely an anomalous point. And in this case,

$$
\lim_{\Re(b) \to - \infty} \beta_{\lambda,b}(s) = 0\,\,\text{...sort of, this happens at least for}\,\lambda,s\in\mathbb{R}^+\\
$$

The key, again, is where:

$$
\lim_{\Re(b)\to-\infty} \sum_{j=1}^\infty \dfrac{e^{bz}}{e^{\lambda(j-s)}+1} = 0\\
$$

holds. Filtering out what happens when \(z \approx 0\): obviously the final result depends on whether \(z\) is negative or positive. And it can only get more complicated...

So at this point, it's safe to say this will be really wacky as you limit the appropriate variables... But I don't really see any step of the method crashing unless we hit a branching point, or \(s\) hits a singularity, or we take \(\lambda \to 0\) or \(|b|\to\infty\).
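If it helps, here's a tiny Pari/GP probe of that limit along real samples; purely a trend check under assumed values (\(\lambda = 1\), \(s = 2\)), not a proof of anything:

Code:
\\ probe beta as Re(b) -> -infinity, for lambda, s real and positive:
beta(s, lam = 1, b = 1, N = 60) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(b*z)/(exp(lam*(j - s)) + 1));
  z;
}
\\ -10.0^k parses as -(10^k); the values should trend toward 0:
for(k = 0, 4, my(b = -10.0^k); print(b, "   ", beta(2, 1, b)));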
#12
@sheldonison So can we rebuild Kneser's tetration using the beta method? In any case, the numerical approximation algorithm of the beta method seems much better.


@JmsNxn You missed the super-root. The robustness of the beta method seems well suited to generating the super-root.
#13
(10/28/2021, 07:18 PM)Ember Edison Wrote: @sheldonison So can we rebuild Kneser's tetration using the beta method? In any case, the numerical approximation algorithm of the beta method seems much better.


@JmsNxn You missed the super-root. The robustness of the beta method seems well suited to generating the super-root.
Hey Ember,

Beta itself is a well-behaved analytic function, but the resulting tetration function may not be. The critical question for the beta tetration function is when it is analytic. See this thread and subsequent posts concerning beta tetration for base "e", which show that it is nowhere analytic and that its Taylor series does not converge. Then in a sense the tetration base "e" function is only defined at the real axis, and you can't "rebuild Kneser's tetration" from the beta method.

James and I are still actively investigating base sqrt(2) now.
- Sheldon
#14
(10/28/2021, 07:18 PM)Ember Edison Wrote: @sheldonison So can we rebuild Kneser's tetration using the beta method? In any case, the numerical approximation algorithm of the beta method seems much better.


@JmsNxn You missed the super-root. The robustness of the beta method seems well suited to generating the super-root.

Absolutely no idea how to generate the super root. I'd have to think on that...

And, Sheldon and I disagree at a certain point. He claims that \(\text{tet}_{\lambda}(z)\) is nowhere analytic on \(\mathbb{R}\), and I agree. I do not agree that it's nowhere holomorphic, which he believes is probably the case. And secondly, the Kneser tetration would only come about as \(\lambda \to 0\), and that's hell to calculate--pretty much impossible. So I have to create a better, more rigorous, well-thought-out argument, because numerical calculations are impossible. But, as I said, as \(\lambda \to 0\) we get faster and faster convergence towards \(L,L^*\), and I think it has a chance at being holomorphic. If it is, it's Kneser per Paulsen & Cowgill.

We see this in the Shell-Thron region too, where as \(\lambda \to 0\) we go faster and faster to the attractive fixed point of \(\log\). For base \(\sqrt{2}\), each iteration shoots us to \(4\) incredibly fast the smaller \(\lambda\) is, and then we have to sort of push it forward by renormalizing each step.

I think this will happen similarly for base \(e\) as \(\lambda \to 0\).

In such a sense, we're looking at:

$$
\begin{align}
F_n(s) &= \log^{\circ n} \beta_{\lambda_n}(s+n + p_n)\\
F_n(0) &= 1\\
\lambda_n &= \mathcal{O}(n^{-\delta})\\
\end{align}
$$

And we look to show that \(F_n \to \text{tet}\), where at worst \(p_n = \mathcal{O}(n^{1-\delta})\) for \(1 \ge \delta > 0\). But it's still up in the air at the moment. And Sheldon has some great counter numerical evidence!
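To make this concrete, here's a rough Pari/GP sketch of the \(F_n\) family, under loudly-stated assumptions: base \(e\), the schedule \(\lambda_n = n^{-\delta}\) with a guessed \(\delta = 1/2\), and tiny \(n\) only--\(\beta(s+n)\) is an exponential tower, so direct evaluation overflows for \(n \ge 5\) or so (this is the "hell to calculate" part; larger \(n\) would need something like the \(\tau\) recursion instead). The names are hypothetical scaffolding:

Code:
\\ beta via the truncated infinite composition (base e):
beta(s, lam, N = 80) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(z)/(exp(lam*(j - s)) + 1));
  z;
}
\\ F_n(s) = log^n beta_{lambda_n}(s + n + p), with lambda_n = n^(-delta):
Fn(s, n, p = 0, delta = 0.5) = {
  my(lam = n^(-delta), z = beta(s + n + p, lam));
  for(i = 1, n, z = log(z));
  z;
}
\\ scan p to eyeball the normalization p_n where F_n(0) = 1; keep n small:
forstep(p = -0.5, 0.5, 0.25, print(p, "   ", Fn(0, 3, p)));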
#15
I've been making some different graphs of \(\beta\) and I got a good one to share.

Here is \(\beta_{1,1}(s)\) for \(-5 \le \Re(s) \le 10\) and \(-7.5 \le \Im(s) \le 7.5\):

   

And here's \(\beta_{1+i,1+i}(s)\); it looks super cool. The overflows, again, are mapped to zero. We see a really cool fractal pattern in this one. The domain of \(s\) is the same:

   

This is \(e^{1+i} = b\) with multiplier \(\lambda = 1+i\). And here's \(\lambda =1\) and \(b = e^{1/2}\):


   

All of these will go towards the asymptotic thesis of the beta function: which is that the beta method admits at least an asymptotic expansion at each point, as opposed to a Taylor series. This is compatible with everything I have been saying, and additionally compatible with Sheldon's work. The paper will focus entirely on ASYMPTOTIC behaviour--which looks like tetration, but if you try to make it tetration, expect a good amount of errors.
#16
(11/05/2021, 05:25 AM)JmsNxn Wrote: I've been making some different graphs of \(\beta\) and I got a good one to share.

...
All of these will go towards the asymptotic thesis of the beta function: which is that the beta method admits at least an asymptotic expansion at each point, as opposed to a Taylor series. This is compatible with everything I have been saying, and additionally compatible with Sheldon's work. The paper will focus entirely on ASYMPTOTIC behaviour--which looks like tetration, but if you try to make it tetration, expect a good amount of errors.

very nice. I spent a bit more time on b=sqrt(2), and I believe it can be proven to converge analytically so long as the imaginary period of lambda is less than the imaginary period at the attracting fixed point = 2. So lambda=1 would be analytic, but lambda=0.3 would not converge, since
\(\frac{2\pi i}{0.3}>\frac{2\pi i}{\ln(\ln(2))}\)
- Sheldon
#17
(11/08/2021, 10:52 PM)sheldonison Wrote:
(11/05/2021, 05:25 AM)JmsNxn Wrote: I've been making some different graphs of \(\beta\) and I got a good one to share.

...
All of these will go towards the asymptotic thesis of the beta function: which is that the beta method admits at least an asymptotic expansion at each point, as opposed to a Taylor series. This is compatible with everything I have been saying, and additionally compatible with Sheldon's work. The paper will focus entirely on ASYMPTOTIC behaviour--which looks like tetration, but if you try to make it tetration, expect a good amount of errors.

very nice. I spent a bit more time on b=sqrt(2), and I believe it can be proven to converge analytically so long as the imaginary period of lambda is less than the imaginary period at the attracting fixed point = 2. So lambda=1 would be analytic, but lambda=0.3 would not converge, since
\(\frac{2\pi i}{0.3}>\frac{2\pi i}{\ln(\ln(2))}\)

I've been noticing something similar. Have you tried introducing constants \(p_n = \mathcal{O}(n^{1-\delta})\) for \(\lambda = 0.3\) (I'm working on trying to find a good estimate for \(\delta\) but it's \(>0\))? I believe the standard iteration:

$$
\log^{\circ n} \beta_{0.3}(s+n) \to 4\,\,\text{for}\,\,0 < |\Im(s)| < 2 \pi/0.3\\
$$

converges rather quickly; and on the real line we get all the branch cut nonsense of tetration for \(s \in (-\infty,-2)\); but I've been seeing some nice results if you use:

$$
\log^{\circ n} \beta_{0.3}(s+n+p_n)\\
$$

Where we assume that:

$$
\log^{\circ n} \beta_{0.3}(n+p_n) = 1\\
$$

I haven't been able to do this efficiently yet, but for the small number of cases I've managed, this seems to encourage better convergence. The way I think about it is that:

$$
\beta_{0.3}(s+1) = \exp_{\sqrt{2}}(\beta_{0.3}(s))/(1+e^{-0.3s})\\
$$

doesn't behave well enough like \(\exp_{\sqrt{2}}^{\circ n}(s)\): it moves too close to \(4\), and the \(\log\)'s just push us too rapidly to \(4\)--so we need a kind of function \(p_n\) which compensates and pushes us closer to \(2\), so that the \(\log\)'s don't take over and send us to \(4\) too fast. No less, we need to accelerate the convergence of \(\beta\) by some sequence of constants \(p_n = \mathcal{O}(n^{1-\delta})\).

Where for cases like \(b = e\) and \(\lambda = 1\) we get \(\delta \ge 1\), but for some cases \(0 < \delta < 1\), in which case we have to overcompensate to get the result.
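For what it's worth, here's the kind of Pari/GP scaffolding I'd use to hunt for these \(p_n\) numerically--a sketch under stated assumptions (\(\mu = \log(2)/2\) for \(b = \sqrt{2}\), \(\lambda = 0.3\), guessed truncation and scan window), with hypothetical names:

Code:
mu = log(2)/2;  \\ b = sqrt(2), i.e. exp_b(z) = e^(mu*z)
beta(s, lam = 0.3, N = 120) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(mu*z)/(exp(lam*(j - s)) + 1));
  z;
}
\\ the shifted pullback log^n beta(s + n + p), logs taken base e^mu:
Fn(s, n, p = 0, lam = 0.3) = {
  my(z = beta(s + n + p, lam));
  for(i = 1, n, z = log(z)/mu);
  z;
}
\\ scan p and look for Fn(0, n, p) crossing 1 -- that would be p_n; values
\\ drifting complex here is the branch instability discussed above, not a bug:
forstep(p = -0.5, 0.5, 0.1, print(p, "   ", Fn(0, 8, p)));

This is exactly the part I haven't made efficient: the scan window is a guess, and the pullback falls off the real basin easily.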
#18
(11/08/2021, 10:52 PM)sheldonison Wrote:
(11/05/2021, 05:25 AM)JmsNxn Wrote: I've been making some different graphs of \(\beta\) and I got a good one to share.

...
All of these will go towards the asymptotic thesis of the beta function: which is that the beta method admits at least an asymptotic expansion at each point, as opposed to a Taylor series. This is compatible with everything I have been saying, and additionally compatible with Sheldon's work. The paper will focus entirely on ASYMPTOTIC behaviour--which looks like tetration, but if you try to make it tetration, expect a good amount of errors.

very nice. I spent a bit more time on b=sqrt(2), and I believe it can be proven to converge analytically so long as the imaginary period of lambda is less than the imaginary period at the attracting fixed point = 2. So lambda=1 would be analytic, but lambda=0.3 would not converge, since
\(\frac{2\pi i}{0.3}>\frac{2\pi i}{\ln(\ln(2))}\)

Success!!!

I've been able to prove this unequivocally. I'll write a quick sketch.

To begin, define:

$$
\beta_{\lambda,\mu}(s) = \Omega_{j=1}^\infty \frac{e^{\mu z}}{1+e^{\lambda (j-s)}}\,\bullet z\\
$$

And for simplicity, we'll stick to \(\mu = \log(2)/2\) and let \(\lambda\) vary. We want to show that for \(\Re \lambda \le -\log \log 2\) the iterated \(\log\)'s will no longer converge. To start, take our sequence of approximations:

$$
\tau^{n+1}(s) = \log(1+\frac{\tau^n(s+1)}{\beta(s+1)}) - \log(1+e^{-\lambda s})\\
$$

Where the \(\log\)'s are base \(e^{\mu}\). I can show the following result if requested; but each:

$$
\tau^{n}(s+k) = \mathcal{O}(e^{-\lambda k})\\
$$

And consequently, by looking at the above equation and rearranging, for \(n \ge 2\) we must have:

$$
\frac{\tau^n(s+k)}{\mu\beta(s+k)} = \mathcal{O}(e^{-\lambda k})\\
$$

And again, by the above asymptotic,

$$
\frac{1}{\mu\beta(s+k)} = \mathcal{O}(1)\\
$$

And now, let's call this bound \(A_\mu\). For the case of \(\mu = \log(2)/2\), we get the bound \(A_\mu = 1/\log(2)\). Now, since all of these things decay exponentially, we can linearize the whole discussion: everywhere you have a \(\log(1+x)\), replace it with \(\frac{x}{\mu}\), which are asymptotic equivalents as \(x \to 0\). From here we can make a new \(\tau\), call it \(\widetilde{\tau}\), which effectively just removes the \(\log\)'s with this asymptotic.

To reconcile:

$$
\limsup_{k\to\infty}\left|\frac{1}{\mu \beta(s+k+1)}\right| = A_\mu\\
$$

$$
\widetilde{\tau}^{n+1}(s) = \frac{\widetilde{\tau}^n(s+1)}{\mu \beta(s+1)} - \frac{e^{-\lambda s}}{\mu}\\
$$

Which has the closed form expression:

$$
\widetilde{\tau}^{n+1}(s) = -\sum_{j=0}^n \frac{e^{-\lambda(s+j)}}{\mu^{j+1} \prod_{c=1}^j \beta(s+c)}\\
$$

Now, necessarily:

$$
\left|\frac{e^{-\lambda(s+j)}}{\mu^{j+1} \prod_{c=1}^j \beta(s+c)}\right| \le e^{-\Re(\lambda s) + (\log A_\mu - \Re\lambda)j}\\
$$

Where we've let \(\Re(s)\) be arbitrarily large for the bound to take effect. And voila, this series only converges if \(\Re\lambda > \log A_\mu\); and for \(\mu = \log(2)/2\), the value \(\log A_\mu = -\log \log (2)\)!
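Here's a quick Pari/GP sanity check of that threshold; it just transcribes the \(\tau\) recursion above (assumptions: \(\mu = \log(2)/2\), a real sample point \(s = 3\), guessed truncation depths; the helper names are mine):

Code:
mu = log(2)/2;
beta(s, lam, N = 120) = {
  my(z = 0.);
  forstep(j = N, 1, -1, z = exp(mu*z)/(exp(lam*(j - s)) + 1));
  z;
}
\\ unroll tau^n(s) from tau^0 = 0; the logs are base e^mu, hence the /mu:
tau(s, lam, n) = {
  my(t = 0.);
  forstep(k = n, 1, -1,
    t = log(1 + t/beta(s + k, lam))/mu - log(1 + exp(-lam*(s + k - 1)))/mu);
  t;
}
\\ lambda = 1 > -log(log(2)) ~ 0.3665 should stabilize as n grows;
\\ lambda = 0.3 < 0.3665 should wander and eventually blow up or go complex:
forstep(n = 10, 50, 10, print(n, "   ", tau(3, 1.0, n), "   ", tau(3, 0.3, n)));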


You will see something very similar everywhere in the Shell-Thron region, where essentially \(A_\mu = \frac{1}{\mu \omega}\), with \(\omega\) the fixed point. This will cause the \(\tau\) sequence to be very irregular as soon as \(\Re \lambda \le -\Re \log \mu \omega\). But \(\lambda = -\log \mu \omega\) is precisely when we achieve the period of the standard Schroeder iteration!

Makes so much sense now!

This is just a quick sketch at the moment; but for the \(\sqrt{2}\) case \(\beta\) has really nice behaviour, so it's a proof for this case. For other cases you have to be more careful when talking about domains, especially when you make this change to a linear approximation.

Regards, James

Now, to solve this for \(\Re \lambda < -\log \log (2)\), we have to introduce convergents \(p_n\) to make this sum converge; I'm still not certain how to do this effectively as of yet. I won't go into too much detail, but I now understand why base \(e\) with \(\lambda = 1\) results in nowhere-analytic behaviour on the real line. This is harder to prove, but is god damned fascinating to think about.
#19
I thought I'd add another photo dump post. These are all inverse Abel functions for \(e^{\mu z}\) with period \(2\pi i / \lambda\). These are as hi-res as I could make them, and also as convenient.

base \(\mu = 1\); multiplier \(\lambda = 0.25\):

   

base \(\mu = 0.3+i\); multiplier \(\lambda = 1\):

   

base \(\mu = 1+i\); multiplier \(\lambda = 1+i\):

   

And similarly: base\(\mu = 1+i\); multiplier \(\lambda=1\):

   

I believe I understand where and when Sheldon's gauntlet of zeroes appears; and when the zeroes appear they cause branch cuts, evident in pretty much all of these photos.
#20
In your picture of \( \mu = \lambda = 1+i \), the four slashes inside the image look like the function has crashed.

I also recently tested that at base \(  e^{10^{24}} \), or init(1,1E24,1000), the beta function crashes.

init(1,1E-24,1000) will also crash.


I also created images of a circle scanned in the complex plane, with radii of 1, 1E16, and 1E-16. The images of the real axis still need some time.

[Image: uc?export=view&id=1wXZ9C4Y-pgzLjUoCkxss-A4xxkPpi6Gh]
[Image: uc?export=view&id=1Bx0v4id8M7cCfGfVYZWHkNVqzhR2OmSq]
[Image: uc?export=view&id=1x0ne8oi2-nEDlYaP7_XZBaqzxxGuIxkO]


 

This is a Shell-Thron region base.
[Image: uc?export=view&id=1lDc9VVs_fSX4KPtHKvkzCZJiFWxbRFS2]



real axis:


[Image: uc?export=view&id=1OAC9gxzcblz39xwJOxdKZjDTYr8srlEj]



