Arguments for the beta method not being Kneser's method
#41
(10/07/2021, 03:18 AM)JmsNxn Wrote: If this is a nowhere analytic tetration; it's probably the weirdest fucking nowhere analytic function! Lmao!!! Definitely one for the history books! If I can grab an asymptotic series at every point... and it's not analytic? That makes this even more interesting tbh! Definitely doesn't help dethrone Kneser or anything, but it's certainly really really fucking cool!

I would like to see a computationally efficient biholomorphic tetration in my lifetime, but I don't seem to have gotten close to this goal in the past few years.
#42
(10/07/2021, 05:20 AM)JmsNxn Wrote: Pari has already proved to be rather unreliable with most of my calculations; no less because we need values of the form 1E450000 or so to get accurate readouts, Taylor-series-wise, and pari overflows. This is why Kneser is so god damned good: it displays normality conditions in the upper and lower half planes. Beta requires us to get closer and closer to infinity to get a better readout.

But we don't have many computer algebra systems left to choose from. I don't think Julia or Python can do a better job on large numbers.
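For what it's worth, the overflow ceiling is easy to demonstrate in gp (the specific values below are illustrative, not from anyone's actual beta code):

Code:
\\ pari stores the exponents of huge reals compactly,
\\ so 1E450000 itself is fine:
x = 1E450000;
print(log(x));    \\ about 1.036 E6 -- no problem
\\ ...but one more exponentiation is already out of range:
\\ exp(x) aborts with an exponent overflow error.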
#43
(10/07/2021, 05:20 AM)JmsNxn Wrote: All of the numbers you've posted evaluate to small, but non-zero, values, no matter the depth of iteration I invoke. There also seem to be no branch cuts after your singularities...

AHHH I see much more clearly. I think you are running into a fallacy of the infinite though.

(To begin, that should be , though I'm sure it's a typo on your part.)

... My diagnosis of the singularities: loss of accuracy in the sample points of beta, and furthermore, straight-up artifacts.

James,
I'm trying to understand your concerns.  It is true that I was focused exclusively on the zeros of \(f(z)\).  At each of the points I listed, beta(z) and f(z) are both well defined, analytic, and relatively easy to compute with pari.gp, and at each of these points f(z)=0, which leads to a singularity in \(\ln f(z)\) and seems to be a problem...  Unlike tet(z), which has no zeros in the complex plane, f(z) does have zeros; in fact, infinitely many of them.

Am I correct that one of your suggestions would be to instead look at the following function in the neighborhood of the zeros?


Then it would seem the value of z shifts a little.  The limit would be at the nearby point where \(f(z_m+m)\) reaches the corresponding tower value (for m=4, \(f(z_4+4)=e^{e^e}\), as in the code below), at which point no further numeric convergence is possible.
Code:
z0 is the value for m=0:           z0=5.31361674343693018580658 + 0.803861889686272103890852*I; f(z0)=0; beta(z0+1)=1
z4 is the limiting value for m=4:  z4=5.32119139366544998965263 + 0.816482374289017956146532*I; f(z4+4)=e^^3
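For concreteness, here is a minimal gp sketch of the mechanism, using a small stand-in value for f near one of its zeros (this is not the actual beta/f code):

Code:
\\ how a zero of f turns into a log singularity under iterated logs;
\\ fz is a stand-in for a value of f very near one of its zeros
\p 38
fz = 1e-30;
l1 = log(fz);      \\ ~ -69.08: blowing up as fz -> 0
l2 = log(l1);      \\ ~ 4.235 + Pi*I: the branch cut enters
print(l1); print(l2);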
- Sheldon
#44
(10/07/2021, 04:12 PM)sheldonison Wrote: ....

Let's consider the recursion, where

t(s+n) is close to 1 for small n and goes to 1 for large n.

Then

f(s+n+2) = exp( t(s+n+1) * f(s+n+1) )

f is never zero, so log f is never log(0).

So let's investigate ln ln f.

If ln ln f = log(0), then ln f = 0, so f must be exactly 1:

f(s+n+2) = exp( t(s+n+1) * f(s+n+1) ) = 1

ln f(s+n+2) = t(s+n+1) * f(s+n+1)

Since t is close to 1 and f(s+n+1) is never zero, t(s+n+1) * f(s+n+1) is never close to 0! (Rather, for f(s+n+2) = 1 it must be close to 2 k pi i with |k| at least 1.)

So ln ln( f(s+n+2) ) is never log(0).

By induction, f = 1, e^e, ... all do not give rise to log(0).

So log singularities are not "expected".

Essential singularities are also not much expected.

So it seems we get analyticity close to the positive real line (t(s) is close to 1 there).
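A quick numeric sanity check of this induction step, with a toy multiplier standing in for t (t_ and the seed value are illustrative assumptions, not the thread's beta):

Code:
\\ iterate f(s+n+1) = exp( t(s+n) * f(s+n) ) with t ~ 1
t_(s) = 1 + 1e-6 * exp(-s);          \\ toy t, close to 1
f1 = 1.5;                            \\ arbitrary nonzero seed
for(n = 1, 3,                        \\ n = 4 already overflows pari
    f1 = exp(t_(n) * f1);
    print(n, ": log f = ", log(f1))  \\ equals t*f_prev: never near 0
);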

regards

tommy1729
#45
(10/09/2021, 12:27 PM)tommy1729 Wrote: ....

Remark:


The idea that, for all points s in a set A, there are probably nearby points s* such that s* equals "whatever", is a flaw.

Why?

Because we prove it for an arbitrary point s in the set A; therefore it holds for any point s in the set A.

In other words, it holds for ALL s in the set A.

Since nearby points s* also belong to this ALL s in the set, proving it for an arbitrary point s in the set A is sufficient.

Of course, at the boundary of the set A we can consider points s* that lie outside of the set A.

But the conjecture applies only to the set A, so that is not an issue.

It does imply, though, that even at infinitesimal closeness the boundary of the set A is problematic; hence the argument works very well for OPEN sets A.

In other words, it works perfectly for an open set A bounded by a Jordan curve; restated, for a simply connected domain A.

The set A is, of course, where t(s) is very close to 1.



regards

tommy1729
#46
(10/07/2021, 04:12 PM)sheldonison Wrote: James, I'm trying to understand your concerns.

I understand, Sheldon, but I feel you've mistaken coding for mathematics.

Quote:It is true that I was focused exclusively on the zeros of \(f(z)\).  At each of the points I listed, beta(z) and f(z) are both well defined, analytic, and relatively easy to compute with pari.gp, and at each of these points f(z)=0, which leads to a singularity in \(\ln f(z)\) and seems to be a problem...  Unlike tet(z), which has no zeros in the complex plane, f(z) does have zeros; in fact, infinitely many of them.

This works perfectly, Sheldon, and I agree with you entirely right now. It is your next statement that loses track of the recursive process.

Quote:Am I correct that one of your suggestions would be to instead look at the following function in the neighborhood of the zeros?

Yes.

By doing this, we already move your singularities to the left by 10, in my code.


What you've written as the central object is:



Which has singularities. Remember, each time you iterate this equation, it shifts us to the left of the right side of this equation.

What we want is to iterate the relation:




Rather flatly, Sheldon: any singularity gets pushed to the left as we continue the iterations. Because:



If the former holds, this doesn't imply the latter. Actually, it implies a mess of singularities. But performing the proper iterations, we don't get these singularities, because we push forward consistently.


So yes, every iterate has singularities, for all n, but they move to the left as we continue. Luckily, this happens very, very fast; so at about n = 10 we're good for the region in question. And luckily, again, this produces about 100-digit precision that fast.



I've taken more time to look at your questions about the Taylor series converging, and I'm not much better off. But I will say this: if I code everything with \p 1000 and \ps 300, the Taylor series get significantly more accurate (but still less than desired). The trouble, it seems, is that the Taylor series converge very slowly.
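For reference, \p and \ps are the interactive gp shortcuts; inside a script the same settings can be written as:

Code:
\\ precision settings used for the tests described above
default(realprecision, 1000);    \\ \p 1000: 1000 significant digits
default(seriesprecision, 300);   \\ \ps 300: 300 Taylor terms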

This can be explained rather simply by your test too. If,



Where (I'm using this for the periodic tetration) the terms are real valued.

It has a radius of convergence, yes; but these terms oscillate between negative and positive a tad chaotically. If we write,



We might not get enough cancellation in the oscillating Taylor series. Actually, we have much better luck running,



Which will be much more accurate (about 1E-11 for \p 100 and \ps 35, versus 1E-5 for the first test). The trouble seems to be that once we add an oscillation (a negative argument), the Taylor series converge very, very slowly. I suspect this is happening because of the essential singularities along the lines in question. We diverge brutally there, not just singularity-wise, but along the entire line. This is the worst kind of singularity. Even approaching these lines we can expect the beginning of an extreme divergence.
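To see why oscillation hurts, here is a toy gp example unrelated to beta itself: summing the alternating series for exp(-x) term by term at large x loses everything to cancellation.

Code:
\\ alternating series at low precision: terms reach ~1e16 while the
\\ true value is ~4e-18, so every displayed digit cancels away
\p 19
x = 40.0;
s = sum(k = 0, 200, (-x)^k / k!);
print(s, "   vs true ", exp(-40));   \\ garbage vs 4.248...E-18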

So, in essence, pulling back is less accurate than pushing forward. Again, this is perfectly possible. It is not a contradiction, or a falsehood--it's just a god damned annoying inconvenience when calculating this.



My second point: if we take the expansion point near the fixed point, the Taylor series work absolutely fantastically. We need to take about n=115 iterations at this point, rather than n=10, but the Taylor series converge to your desire. You can make them 1E-100 with your initial test, like it's nothing. This is because we're near the fixed point (the fixed point of \(\exp\) with minimal positive imaginary part).
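For reference, that fixed point is easy to compute with a few Newton steps on exp(L) - L; a minimal gp sketch:

Code:
\\ fixed point of exp with minimal positive imaginary part
L = 0.3 + 1.3*I;                       \\ rough starting guess
for(i = 1, 30, L -= (exp(L) - L) / (exp(L) - 1));
print(L);   \\ 0.3181315052... + 1.3372357014...*I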

I've only been half successful, though, at applying the pullback to get us back accurately. We lose Taylor series data pretty fast. So this will only work for about \ps 20; and as I said before, this is not enough to suss out a picture of the tetration.

But, as the math will tell you: if we are holomorphic on the initial domain, we are certainly holomorphic in the whole strip as we push forward. But we aren't guaranteed equally well-behaved Taylor series--especially in Pari.



To conclude: I respect your arguments, but I'm still not convinced. My code is rather awful, and I'll be the first to admit it. But to code this better is really, really hard. We don't get a nice Schröder function Taylor series that's well behaved, and a nice theta mapping as a well-behaved Fourier series. We have to use recursion (at least at the moment), and it's volatile. Very fucking volatile. To get accurate readouts we need very large values of beta, and this diverges as fast as tetration, which means we are opening ourselves up to insane inaccuracies.
#47
(10/09/2021, 12:27 PM)tommy1729 Wrote: ....

Thank you, Tommy. This is a great alternative angle. It continues to remind me there are many ways of ensuring the beta method is holomorphic. I'm still set on these being two very different ways of looking at the problem: the beta way or the Gaussian way, the additive versus the multiplicative, even though both are technically the same through a variable change.

Edit: To Sheldon, the difference between my method and Tommy's method is the choice of asymptotic multiplier; but both are related by a change of variables--hence equivalent.
#48
Hi James,
Just an update.  I've been playing with beta optimizations giving about three orders of magnitude of speedup, and what speeds up beta speeds up everything else.  This lets me graph the resultant tetration function pretty easily, show where it misbehaves, and easily generate accurate values for large numbers of derivatives.

I also wanted to go back to my original approximation for where the zeros of f(z) are, and more rigorously justify the approximation that f(z) has a zero nearby, and also that the limiting function has a zero nearby.  I made some progress making these two approximations a little more rigorous.

I also made progress in handling the most "correct" logarithmic branch, which makes possible graphs of the resulting Tet(z) function near Tet(0).  This has a "pseudo" radius of convergence of 0.489, based on zero #6 of f(z) being the closest zero of f(z) causing misbehavior in Tet(z).  That zero is at z = 5.11154320063377 + 0.324420060794418*I, and tet is centered at ~beta(1.7448).  There are also an infinite number of other singularities closer to the origin for larger values of z.  At radius 0.5, the logarithmic singularities cause the sawtooth behavior in the graph.
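As an aside, polishing an approximate zero like the one above to full precision is routine; a minimal gp sketch (f_ is a placeholder for f(z), and the derivative is taken numerically):

Code:
\\ Newton iteration to polish an approximate zero of f_
newton_zero(f_, z) = {
    my(h = 1e-20, dz);
    for(i = 1, 12,
        dz = f_(z) / ((f_(z + h) - f_(z - h)) / (2*h));
        z -= dz
    );
    z
};
\\ usage: newton_zero(f_, 5.11154320063377 + 0.324420060794418*I)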
- Sheldon
#49
I could not sleep because I felt I did not clarify my previous posts enough.

The point is that I showed the first two logarithms do not pose a problem.

However, the third and fourth logs might be problematic.

With the first two logs I was able to "magically" remove the logs, but with ln ln ln ln f(s+4) this will be pretty hard.

So the 3rd and 4th logs might give rise to singularities.

However, this does not immediately imply that the limit has a singularity--or does it?
Let's investigate.

Assume

r(s) = ln^[n] f(s+n) = ln(0)

Then r2(s) = ln^[n+1] f(s+n+1) = ln^[n]( f(s+n) t(s+n) ).
For finite n this probably does not equal ln^[n] f(s+n), so it does not follow automatically ...



So those two things (the 3rd and 4th logs, and r(s) versus r2(s)) kept me awake.

The situation is still unclear, and I have not even considered the "problem" of branches and periodic points, etc.

Also: although ln(0) for a given n might not be an issue, for a larger value of n, say m, we might still run into log(0), so it does not completely resolve things either!

 
---

So, to think better about the branches, I need to formalize.

And that formalization is just a proposal/conjecture, because it might not be the best one (i.e., analytic or equivalent).

f(s+1) = exp( f(s) t(s) )

then 

ln f(s+1) = f(s) t(s) = f(s + h(s))

and

exp( f(s + h(s)) ) = f(s+1)

and 

f( s + h(s) + 1 ) = exp( f(s + h(s)) * t(s + h(s)) )

therefore

f( s + h(s) + 1)^( t(s+h(s))^{-1} ) = f(s+1).

So the 2 fundamental equations for h(s) are :

f(s) t(s) = f(s + h(s))

and 

f( s + h(s) + 1)^( t(s+h(s))^{-1} ) = f(s+1)

In a way we got rid of exp and ln here.

How many solutions h(s) do we get?

I mentioned h(s) before, but then (I believe) I only used the first equation. That had many solutions.

How close h(s) is to zero, what branches it implies, etc., are all closely related, of course.

Once we understand h(s), we continue:

ln ln f(s+2) = ln ( f(s+1) t(s+1) ) = ln( f(s+1 + h(s+1)) ) = f(s+h(s+1))  t(s+h(s+1)) = f( s + h(s+1) + h(s + h(s+1)) ).

And it is clear we can continue "removing" logs with those h's.

The speed of growth of h is, of course, very important for the convergence and analyticity of ln^[n] f(s+n)!
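To make the first equation concrete, here is a toy gp sketch that solves f(s) t(s) = f(s + h(s)) for h near 0; f_ and t_ are illustrative stand-ins, not the actual beta-method functions, and the derivative is numerical:

Code:
f_(s) = exp(exp(s));                 \\ toy doubly-exponential f
t_(s) = 1 + 1e-6 * exp(-s);          \\ toy multiplier close to 1
solve_h(s) = {
    my(d = 1e-10, hh = 0.0, g);
    g = (h) -> f_(s + h) - f_(s) * t_(s);   \\ root of g is h(s)
    for(i = 1, 20,
        hh -= g(hh) / ((g(hh + d) - g(hh - d)) / (2*d))
    );
    hh
};
print(solve_h(1.0));                 \\ tiny positive h, since t ~ 1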

regards 

tommy1729
#50
After finishing my Zoom call with Sheldon, I've realized a couple of things.

Unless you can show the elimination of singularities, this is a nowhere analytic solution on \(\mathbb{R}\). It is still holomorphic almost everywhere on \(\mathbb{C}\), though. So all is not lost yet. But the sawtooth effect (which I myself have noticed) is a real thing (not a glitch, like I thought); so unless the sawtooth disappears at n = 10000000 or whatever, we succumb to nowhere analyticity on \(\mathbb{R}\). The trouble is, on the real line, our iterations cap at about n = 4; so there is zero way to confirm this numerically unless you had a supercomputer.

We should remember, though (and you too, Tommy), that this is the periodic tetration. The actual tetration we care about is created by letting the period tend to infinity. And the preliminary tests, as I've always done them, show little to no sawtooth effect in the final beta-tetration--which is when we let the multiplier \(\lambda \to 0\). There is where we lose the sawtooth data. I am still on the fence about nowhere analyticity on \(\mathbb{R}\) for this case; but the upper half plane should be fine, just as the strip is fine in the periodic case. And I can show this with much better numbers, now that Sheldon optimized my code.

To those interested in the main difference between my code and Sheldon's: it's kind of silly, really. The way I wrote the initialization file was with the multiplier as a free variable, so you could run any multiplier and everything would work. Sheldon chose to fix the multiplier to a constant--and upon doing this, everything was streamlined. I never expected that sole move to speed everything up so much.

I'm in the process of creating much better code, which hybridizes both methods. Unfortunately, you can't graph across the multiplier as you could before. The multiplier is no longer a free variable: it is fixed to whatever constant you choose to set it to. Similarly, I've added a protocol for any base. And again, we have to fix the base and we have to fix the multiplier, but we still get Sheldon-level speeds. I've added rudimentary normalization protocols, but they are still slow running.

But now, finding a function:




is a hell of a lot easier.

So in that sense, you can initialize beta in a couple of seconds; the trouble comes when initializing a normalization constant so that \(\text{Sexp}(0) = 1\); this takes much longer, especially for bad bases. And there is no way to graph across \(b\) or \(\lambda\): \(b\) and \(\lambda\) are fixed values. This is still just as fast as Sheldon's code, as his was a specialization for one base and multiplier with a simple initialization. Mine works similarly but for arbitrary bases and multipliers. And it will work much better--I cleaned up a couple of Sheldon's protocols.

An important note: I always consider the "base" of the exponential \(b\) as in \(\exp(bz)\), as opposed to \(b^z\)... this fixes so many errors. I apologize if this is confusing. We are taking the log of what we usually call the base; but we have a free imaginary argument.
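Concretely, the convention is just this (a trivial gp sketch; the names are placeholders, not from the released code):

Code:
\\ the "base" enters as exp(b*z) with b = log(base), not as base^z
base = 2;
b = log(base);                \\ b may be complex for a complex base
expb(z) = exp(b * z);         \\ agrees with base^z on the principal branch
print(expb(10), "  vs  ", 2^10);   \\ both ~ 1024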

I'll release this code soon; I'm just working on making an efficient graphing protocol which works with the error-catching protocols Ember Edison described.

Tommy: remember that all is not lost just because the \(2 \pi i\)-periodic tetration is not analytic on \(\mathbb{R}\). This is possible, per my construction. The trouble would be when we take the limit \(\lambda \to 0\). If this is not analytic, then we have a problem. And the more I think about it, the reason the limit is analytic on \(\mathbb{R}\) is because it's Kneser. I'm seeing more and more evidence of this. This would solidify that the only analytic tetration which maps the reals to the reals bijectively is Kneser.

Regards, James

