Revisiting my accelerated slog solution using Abel matrix inversion
#11
(01/18/2019, 06:35 AM)jaydfox Wrote: The good news is, the acceleration can be taken a step further, by removing the algebraic singularity that remains after removing the logarithmic singularity.  A few years ago, I determined that my accelerated solution appears to be converging on an algebraic singularity in terms of \( (z-L)^{2\pi i / L} \), or rather a power series in such.  (I typically call the fixed point a in my code, but I'm using L here, since that's what you mentioned in your post.)

I can't remember now if the exponent for the algebraic term is 2*pi*i/L, or if it's the reciprocal, L/(2*pi*i).  I apologize if I had the fraction upside down.  I don't have my notes in front of me, so I'm doing this from memory.  When I get around to posting more details on the algebraic singularity, I'll be sure to clarify the exponent and provide a proof.
~ Jay Daniel Fox
#12
(01/17/2019, 06:44 PM)sheldonison Wrote: The main region matches Jay's slog series to better than 76 decimal digits inside most of the circle!  The results are so good that for all intents and purposes, the two functions are the same, except near the two singularities as can be seen from the image.  Even near the singularity but before it turns black, at z=i, the two match each other to within 5E-74.  The plot turns black when consistency is below ~70 decimal digits and consistency is better than 5E-64 everywhere in the circle of radius 1.1.  As before, I am using the first 692 Taylor series coefficients of Jay's slog, where a_692~=2E-111.

Hi Sheldon, I had noticed that the accuracy you mentioned (76 digits) was really close to 256 bits.  I perhaps read your post too quickly, and I assumed this limitation in accuracy was in your solution.  But it finally dawned on me that this limitation was in my solution, or rather in the coefficients I posted.

If you were at all interested in doing this analysis again, with higher precision, I am attaching the first 800 coefficients of my 4096-term solution, to 1024 bits of precision.  (Based on my previous research, my matrix solver loses about 11 bits of precision for every 8 terms I add.  I started with 7168 bits, and lost about 4096*11/8 bits, leaving about 1536 bits, though this is only an estimate.  I'm reasonably confident that the first 1300-1400 bits are precise, so these 1024-bit terms should have full precision.)


Attached Files
.txt   slog_coeffs_4096_acc_800terms_1024bits.txt (Size: 245.88 KB / Downloads: 694)
~ Jay Daniel Fox
#13
(01/18/2019, 06:35 AM)jaydfox Wrote: Wow, thanks for the analysis!  I still haven't attempted to get into the details of your slog solution and how you calculate the Kneser mapping, because I'm still trying to understand my solution.  I mean, I understand it at a high level, but I keep seeing patterns and trying to tie them together.

But I think I have an answer to your question above, and unfortunately, it's not a very exciting answer.  Brace yourself:

The first term in my Taylor series is inaccurate.

.... The algebraic singularity, with its complex periodicity, is what allows the Kneser solution to have non-trivial branches (where the value in one branch is not merely a constant difference from the value of the same point in a different branch).

Let's keep the focus on Jay's slog for now. I have some pretty pictures along with some equations, and later I'll make a new post consolidating other posts and explaining how Sheldon's fatou.gp works. This thread is helpful because it clearly shows that very good solutions, self-consistent to a very high degree, can actually be \( \theta \) mappings of Kneser's solution, which is really cool to me and very exciting; so I disagree with your statement, "it's not a very exciting answer." Someday when I write a published paper, I will use this thread in my explanation of Sheldon's slog and what is required to match Kneser's slog.

I like the term "algebraic singularity"; it is also really cool, but complicated. The period is \( \frac{2\pi i}{L} \), and when I investigated it, the algebraic singularity's magnitude tracked period^n. That's from memory, and I don't remember the equations; I look forward to your proof.
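For anyone who wants the numbers, here is a minimal pari-gp sketch (illustrative; the Newton iteration and starting value are my assumptions, not code from fatou.gp) that computes the fixed point L = exp(L) and the period:

Code:
/* Newton iteration for the principal repelling fixed point L = exp(L);
   the starting value 0.3+1.3*I is assumed close enough to converge to
   the upper fixed point */
L = 0.3 + 1.3*I;
for(k = 1, 50, L -= (exp(L) - L)/(exp(L) - 1));
print(L);            /* ~ 0.3181315 + 1.3372357*I */
print(2*Pi*I/L);     /* the period, ~ 4.44695 + 1.05794*I */

Since exp(L) = L and |imag(L)| < Pi, we also have log(L) = L, so 2*Pi*I/L is the same as 2*Pi*I/log(multiplier), the usual period of the regular superfunction at the fixed point.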

So, now we have Jay's slog, and we can convert it to a very good approximation of Kneser's slog as follows:

\( \text{KneserSlog}(z)=\alpha(z)+\theta_s(\alpha(z))\approx\text{JaySlog}(z)+\theta_R(\text{JaySlog}(z)) \)

Here \( \theta_R \) is the theta mapping at the real axis, with complex conjugate pairs required to keep JaySlog real valued at the real axis, and \( \theta_s \) is the unique theta mapping of the Schroder/Abel function which generates Kneser's slog.

The rest of this is my speculative musings ... If \( \theta_R \) has n complex conjugate pair terms, then can we use those n unknown terms to force the n terms y^-1, y^-2, ... y^-n to be zero in the \( \theta_s \) mapping? Hmmm ... perhaps this is a fixer-upper equation? Is it a linear equation or something more complicated? A matrix? Can we use, for example, the 60 sample points using \( \alpha^{-1}(z_0..z_0+1) \) from my previous thread, in terms of the n unknown terms in \( \theta_R \), to write a matrix equation?
- Sheldon
#14
jaydfox Wrote:the accuracy you mentioned (76 digits)... I assumed this limitation in accuracy was in your solution.   But it finally dawned on me that this limitation was in ... the coefficients I posted.

Correct.  I regenerated my slog, and optimized the sample points to get as much accuracy as possible in the \( \theta_R \) mapping to compare Kneser slog and Jay's slog.

There is the limitation that your 800th term is 10^-126, so we need to keep the sample radius as small as possible. Here, the \( \theta_R \) sample points are taken at slog(z) with |z|<0.706, which is the radius of the red circle. Sampling on the green dots, sexp(0.431i-1.405 ... 0.431i-0.405), we are limited to 35 complex conjugate pairs in the Fourier series before the Fourier series becomes random noise.
[image: the red circle of radius 0.706 with the green dotted line of Fourier sample points]

With this Fourier series sampling, it is theoretically impossible to do better than 247 decimal digits, since 10^-126*0.706^800 ~= 10^-247. And in fact, for a large chunk of the red circle of radius 0.706, I'm seeing accuracies of about 244 decimal digits with these 35 complex conjugate pair terms in the \( \theta_R \) mapping. All points within the red circle are accurate to 1.15E-213 with this theta mapping. All points within a unit circle are accurate to within 1.5E-126, limited by the 800 terms in Jay's slog series. It is instructive to compare the error plot below with the red circle above and the Fourier series sample points within it. Here is the error plot equation; the precision is limited by the number of terms in Jay's slog and the number of terms in the \( \theta_R \) mapping:
\( \frac{10^{-240}}{\text{JaySlog}(z)-\text{KneserSlog}(z)-\theta_R(\text{KneserSlog}(z))} \)
[image: error plot for the 800 term series with the 35 term theta mapping]

The Kneser slog used was accurate to 276 decimal digits, which is the most accurate computation I've ever done.  A Fourier transform just on the real axis could have 322 decimal digits of consistency, but then all we would have is a line, instead of an enclosed region in the complex plane.
- Sheldon
#15
This is kind of a cool error plot where I calculated Jay's slog to 120 terms and computed an 8 term theta mapping:
\( \theta_{Rk}\mid\;\text{JaySlog}(z)\approx\text{KneserSlog}(z)+\theta_{Rk}(\text{KneserSlog}(z)) \)
\( \text{errplot}=\frac{10^{-29}}{\text{JaySlog}(z)-\text{KneserSlog}(z)-\theta_{Rk}(\text{KneserSlog}(z))} \)

Jay's slog Taylor series is centered at zero! The region of "good convergence" extends all the way out to nearly real(2)! The plot goes from real(-1) to real(2). On the sides, the error is dominated by approximately the 100th term of the JaySlog series, and in the vertical direction it is limited by both Jay's slog and the corresponding 8 term theta mapping. For the most part, these two functions are consistent to 10^-32 or so. Without the theta mapping, the error is much larger, around 10^-17 or so.
[image: error plot from real(-1) to real(2), 120 term JaySlog with the 8 term theta mapping]

If I optimize the theta mapping closer to the real axis with the same 120 term JaySlog solution, it's possible to do even better, with consistencies better than 10^-64, but over a smaller region. This error plot uses a 22 term theta mapping, with 10^-62 in the numerator of the error plot.
[image: error plot for the 22 term theta mapping optimized closer to the real axis]

I think the next step is to figure out how to compute the reverse theta mapping \( \theta_{Rj} \) starting from only the Schroder complex valued Abel function and without starting with Kneser's slog, and then using \( \theta_{Rj} \) to approximate Kneser's slog from Jay's slog.  More later...
\( f(z)=z+\theta_{Rk}(z);\;\;\;f^{-1}(z)=z+\theta_{Rj}(z) \)
\( \text{KneserSlog}(z)\approx\text{JaySlog}(z)+\theta_{Rj}(\text{JaySlog}(z)) \)
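Note that since the theta terms are small, the two mappings are nearly negatives of each other: composing the two definitions gives
\( f^{-1}(f(z))=z\;\Rightarrow\;\theta_{Rk}(z)+\theta_{Rj}\left(z+\theta_{Rk}(z)\right)=0 \)
so \( \theta_{Rj}(z)\approx-\theta_{Rk}(z) \), to first order in the theta coefficients.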
- Sheldon
#16
(01/23/2019, 09:44 PM)sheldonison Wrote: Jay's slog Taylor series is centered at zero! The region of "good convergence" extends all the way out to nearly real(2)! The plot goes from real(-1) to real(2). On the sides, the error is dominated by approximately the 100th term of the JaySlog series, and in the vertical direction it is limited by both Jay's slog and the corresponding 8 term theta mapping. For the most part, these two functions are consistent to 10^-32 or so. Without the theta mapping, the error is much larger, around 10^-17 or so.

I remember Andrew Robbins noticed something similar with his (unaccelerated) slog.  It appears to be centered at 0, with a radius of convergence (which I determined was limited by the logarithmic singularities), but the series is also convergent in a second approximately circular region centered at 1, and also limited in radius by the singularities.  To me, the region looks like a peanut shell.

This seemed odd to me at first, because this seems to run counter to the concept of a radius of convergence.  But the key to understanding this (and understanding how the solution converges as the matrix size is increased) is to recognize that the inverse Abel matrix solution (accelerated or not) is a polynomial.  It is not an infinite Taylor series.  It's not even a finite truncation of an infinite Taylor series.  It is a polynomial.  It will eventually converge on a Taylor series (hopefully), but any particular solution is a polynomial.

For example, for my 4096-term solution, I estimate that the first 600-700 terms are accurate to within 1% of the true value.  So what are the remaining 3300 terms doing?

They encode the information (in higher order derivatives) necessary so that you can recenter the polynomial from 0 to 1, and still have 600-700 terms accurate to within 1%.  And you might be asking yourself, why does it have the information encoded to have 600-700 accurate terms for polynomials centered at 0 and at 1?

Because that's how the Abel matrix was defined!!!

By definition, the solution to an N-term finite truncation of the Abel matrix will have the first N derivatives equal at x=0 and at exp(x) = 1.  That's also why the internal consistency is so high, even though the accuracy is rather poor.  The 4096-term solution has internal consistency with an error term that is O(x^4097).  The difference between Andrew's slog and my slog is that my acceleration technique exponentially reduces the error constant in the big-Oh error term.
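To see this behavior in miniature, here is a small unaccelerated sketch in pari-gp (a toy reconstruction from the functional equation, not Andrew's actual code and not my accelerated solver), writing slog(x) = -1 + a_1 x + ... + a_N x^N and matching powers of x in slog(exp(x)) = slog(x) + 1:

Code:
/* row j matches the coefficient of x^(j-1) in slog(exp(x)) = slog(x) + 1:
   sum over k of a_k*(k^(j-1)/(j-1)! - [j-1==k]) = [j==1]                 */
N = 32;
M = matrix(N, N, j, k, k^(j-1)/(j-1)! - (j-1==k));
b = vectorv(N, j, (j==1));
a = matsolve(M, b);                     /* the polynomial coefficients    */
jslogpoly(x) = -1 + sum(k = 1, N, a[k]*x^k);
print(jslogpoly(1));                    /* exactly 0: slog(1)=0 built in  */
print(jslogpoly(0.5));                  /* a rough slog(0.5) at this size */

The solution really is just a polynomial, and the x^0 row forces the sum of the a_k to be 1, so the normalization slog(1) = 0 holds exactly for every truncation size.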
~ Jay Daniel Fox
#17
jaydfox Wrote:I remember Andrew Robbins noticed something similar with his (unaccelerated) slog.  It appears to be centered at 0, with a radius of convergence (which I determined was limited by the logarithmic singularities), but the series is also convergent in a second approximately circular region centered at 1, and also limited in radius by the singularities.  To me, the region looks like a peanut shell.

The peanut shell region is really cool. Normally, one views the function as centered at slog(z=0), like your truncated 800 term series. But if we used all 4096 terms to full accuracy, then we would have two peanut shaped regions of very good convergence, one centered around zero and the other centered around one, with a hollow peanut shell of even better convergence connecting the two! I conjecture that the best behaved part of Jay's slog and Andrew's slog may very well be the hollow peanut shell between the two regions of convergence, between zero and one, instead of around either individual peanut. More on a real example of this in a later post. I want to talk about theta \( (\theta) \) mappings.

In my last post, I started using a naming convention to distinguish the two different real valued 1-cyclic theta mappings at the real axis, which behave similarly.
\( f(z)=z+\theta_{Rk}(z);\;\;\;f^{-1}(z)=z+\theta_{Rj}(z) \)
\( \text{KneserSlog}(z)\approx\text{JaySlog}(z)+\theta_{Rj}(\text{JaySlog}(z)) \)
\( \text{JaySlog}(z)\approx\text{KneserSlog}(z)+\theta_{Rk}(\text{KneserSlog}(z)) \)

In my error plots, I used \( \theta_{Rk}(z) \) to approximate Jay's slog when starting from Kneser's slog, and then plotted the error term. What we really want next is a way to approximate the inverse function involving \( \theta_{Rj} \), so as to use Jay's slog to get a superb approximation of Kneser's slog, and we want to do so without starting with Sheldon's solution for Kneser's slog. To do that, we start with this equation, which involves a third 1-cyclic theta mapping that generates Kneser's slog from the complex valued Abel/Schroder function. What this equation says is that if you start with the complex valued Abel function and add a 1-cyclic theta mapping to it, then you get Kneser's real valued slog within a suitable region of convergence; Kneser's slog is then extended to the entire complex plane via Schwarz reflection.

\( \text{KneserSlog}(z)=\alpha(z)+\theta_s(\alpha(z)) \)
\( \text{KneserSlog}(\alpha^{-1}(z))=z+\theta_s(z) \)
\( \theta_s(z)=\text{KneserSlog}(\alpha^{-1}(z))-z;\;\; \) plugging in the complex valued superfunction

\( \theta_s(z)\approx\text{JaySlog}(\alpha^{-1}(z))+\theta_{Rj}\left(\text{JaySlog}(\alpha^{-1}(z))\right)-z;\;\; \) approximation for Kneser's slog from Jay's slog; from above

The general forms of \( \theta_{Rj} \) and \( \theta_s \) are:
\( \theta_{Rj}(z)=\sum_{n=1}^{\infty}a_n\exp(2n\pi i z) + \overline{a_n}\exp(-2n\pi i z) - 2\Re(a_n) \)
\( \theta_{s}(z)=\sum_{n=0}^{\infty}b_n\exp(2n\pi iz)+\sum_{n=1}^{\infty}c_n\exp(-2n\pi iz)\mid\;c_n=0 \)
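In code, a truncated \( \theta_{Rj} \) is a one-liner; here is a minimal pari-gp sketch, where av is a hypothetical vector holding the coefficients a_1...a_n:

Code:
/* evaluate the truncated 1-cyclic mapping from the general form above */
thtRj(z, av) = sum(n = 1, #av,
    av[n]*exp(2*n*Pi*I*z) + conj(av[n])*exp(-2*n*Pi*I*z) - 2*real(av[n]));

The -2*real(a_n) constants make each term vanish at integer z, and at real z the sum is manifestly real, as required.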
Note that for Kneser's slog, all of the "c_n" terms in \( \theta_s \) are zero, which is not the case when approximating \( \theta_s \) from Jay's slog. So we need to calculate these undesirable c_n error terms for Jay's slog, and then force all of the c_n terms to zero by using \( \theta_{Rj} \). Since Jay's slog is an approximation, we'll probably only force the first few terms to zero. For example, for a 120 term computation of JaySlog, we might force the first 10 terms c_1...c_10 to zero, which should give accuracy >50 decimal digits at the real axis; more in a later post.

We start with the simplest approximation, where \( \theta_{Rj} \) has one unknown term and we approximate \( a_1\mid\;c_1=0 \):
\( \theta_{Rj}(z) = a_1 \exp(2\pi i z) + \overline{a_1} \exp(-2\pi i z) - 2\Re(a_1) \)
Unfortunately, we actually need two equations in two unknowns; the equations for the upper and lower halves of the complex plane are complex conjugates of each other, as follows:
\( c_1 \) in terms of \( a_1;\;\overline{a_1};\;\;\; \)  upper half of complex plane; \( \theta_s \)
\( \overline{c_1} \) in terms of \( \overline{a_1};\;a_1;\;\;\;\; \) lower half of complex plane; \( \overline{\theta_s} \)

Equivalently, one could have two equations in two unknowns, one equation for the real part of a_1, and a second equation for the imaginary part of a_1, and then we use those two equations to force the real and imaginary parts of c_1 to zero.
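In matrix form, writing \( c_1^{(0)} \) for the value of \( c_1 \) computed with \( \theta_{Rj}=0 \) (the names \( c_1^{(0)} \) and \( m_{jk} \) here are just for illustration), linearity means the two-equation version is a 2x2 real system:
\( \begin{pmatrix}m_{11}&m_{12}\\m_{21}&m_{22}\end{pmatrix}\begin{pmatrix}\Re(a_1)\\\Im(a_1)\end{pmatrix}=-\begin{pmatrix}\Re(c_1^{(0)})\\\Im(c_1^{(0)})\end{pmatrix} \)
where the \( m_{jk} \) come from the Fourier samples; solving it gives the \( a_1 \) that forces \( c_1=0 \).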

So let's sample some points of the complex valued superfunction \( \alpha^{-1} \) along the dotted line from the previous posts, so as to get an approximation for c_1. At each sample point, we have an equation in terms of Jay's slog and the two unknowns, \( a_1 \) and \( \overline{a_1} \). Now we combine these equations to get an approximation for \( \Re(c_1);\;\Im(c_1) \), which we desire to have the value of zero.

We could repeat the process, with four unknowns \( a_1;a_2;\overline{a_1};\overline{a_2}; \), and solve for \( c_1;c_2;\overline{c_1};\overline{c_2}=0 \)
with six unknowns \( a_1;a_2;a_3;\overline{a_1};\overline{a_2};\overline{a_3}; \), and solve for \( c_1;c_2;c_3;\overline{c_1};\overline{c_2};\overline{c_3}=0 \)

So, in general, to solve for n terms in the approximation for the real valued 1-cyclic mapping, we need to solve a 2n x 2n matrix for the real and imaginary parts of \( a_1; a_2; ... a_n \). Then it is just a matter of programming such a complicated indirect Fourier series, followed by solving the 2n x 2n matrix, which is somewhat complicated due to the real and imaginary parts of each coefficient, and lots of other details; but it should simply be a programming problem. Then, in theory, the result is a much more accurate version of Jay's slog.
\( \text{JaySlog}(z)+\theta_{Rj}(\text{JaySlog}(z)) \)
More later...
- Sheldon
#18
I added a critical clarification on how to calculate \( \theta_s(z) \) in my previous post.

\( \theta_s(z)=\text{KneserSlog}(\alpha^{-1}(z))-z;\;\; \) plugging in the complex valued superfunction
\( \theta_s(z)\approx\text{JaySlog}(\alpha^{-1}(z))+\theta_{Rj}\left(\text{JaySlog}(\alpha^{-1}(z))\right)-z;\;\; \) approximation for Kneser's slog from Jay's slog; from above

So this gives us an equation for \( \theta_s \) in terms of Jay's slog and Jay's slog's \( \theta_{Rj} \). We sample the superfunction \( \alpha^{-1}(z) \) at a set of equally spaced points, and then use those sample points to take the Fourier transform of \( \theta_s \). Then we drive the undesired terms of the \( \theta_s \) Fourier transform to zero, in terms of the \( \theta_{Rj} \) unknown terms, by solving a matrix of simultaneous equations.
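As a sketch of the sampling step (illustrative, not my actual code internals; it assumes routines superf and jslog as in the jpost.gp example later in this thread, the thtRj evaluator with coefficient vector av from the previous post, and a left endpoint z0 for one period of samples):

Code:
/* sample theta_s on one period, then read off the coefficient c_m of
   exp(-2*m*Pi*I*z) with a discrete Fourier transform                  */
M  = 80;                                  /* number of sample points   */
zs = vector(M, k, z0 + (k - 1)/M);        /* equally spaced, period 1  */
ts = vector(M, k,
    my(w = jslog(superf(zs[k])));         /* JaySlog of superfunction  */
    w + thtRj(w, av) - zs[k]);            /* theta_s sample            */
c(m) = sum(k = 1, M, ts[k]*exp(2*m*Pi*I*zs[k]))/M;  /* ~c_m, up to aliasing */

Each c(m) is linear in the entries of av, which is what makes the matrix formulation above work.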

The programming is complicated, because we need the complex valued Schroder function and its inverse, plus lots of other details. I will post the pari-gp code once I finish cleaning it up a little more, along with some results. Anyway, remarkably, this all works and generates superbly accurate results for Kneser's slog in terms of Jay's slog and a 1-cyclic real valued Fourier series, which is calculated via a matrix of simultaneous equations in terms of the Schroder function. The results are generated without starting with Kneser's slog, but match Kneser's slog much better than one would have naively predicted, since the \( \alpha^{-1}(z) \) samples can be chosen to take maximum advantage of the peculiar peanut shaped convergence region of Jay's slog. More later...
- Sheldon
#19
(01/27/2019, 12:42 AM)sheldonison Wrote: Note that for Kneser's slog, all of the "c_n" terms in \( \theta_s \) are zero, which is not the case when approximating \( \theta_s \) from Jay's slog. So we need to calculate these undesirable c_n error terms for Jay's slog, and then force all of the c_n terms to zero by using \( \theta_{Rj} \). Since Jay's slog is an approximation, we'll probably only force the first few terms to zero. For example, for a 120 term computation of JaySlog, we might force the first 10 terms c_1...c_10 to zero, which should give accuracy >50 decimal digits at the real axis; more in a later post.

We start with the simplest approximation, where \( \theta_{Rj} \) has one unknown term and we approximate \( a_1\mid\;c_1=0 \):
\( \theta_{Rj}(z) = a_1 \exp(2\pi i z) + \overline{a_1} \exp(-2\pi i z) - 2\Re(a_1) \)
Unfortunately, we actually need two equations in two unknowns; the equations for the upper and lower halves of the complex plane are complex conjugates of each other, as follows:
\( c_1 \) in terms of \( a_1;\;\overline{a_1};\;\;\; \)  upper half of complex plane; \( \theta_s \)
\( \overline{c_1} \) in terms of \( \overline{a_1};\;a_1;\;\;\;\; \) lower half of complex plane; \( \overline{\theta_s} \)

Equivalently, one could have two equations in two unknowns, one equation for the real part of a_1, and a second equation for the imaginary part of a_1, and then we use those two equations to force the real and imaginary parts of c_1 to zero.

I've been trying to figure out on my own how to eliminate those coefficients for terms with negative exponents, and my progress has been frustratingly slow. I was re-reading your post more carefully, and I see you have already found a way. I will definitely take the time to understand this, because it seems to be key to solving the Riemann mapping numerically. If you already have a sexp function with unlimited precision and accuracy, the DFT (Cauchy integral) should return 0 for all the negative exponents. If you use a low-accuracy approximation of sexp, even one with very high precision internal consistency (using the reversion of my jslog as just one example; there are others), then you get non-zero terms with negative exponents when you calculate the DFT. In theory, a proper 1-cyclic mapping (what you call a theta function) would allow you to compensate for the negative exponents and effectively cancel them out. I'm still rebuilding my code library, so I'm not at a point where I can test this, but I am very curious to see your next results.
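(For reference, the way I understand the vanishing: for an exact solution, \( \theta_s \) is holomorphic in the upper half plane and approaches a constant as \( \Im(z)\to+\infty \), so the coefficient integral
\( c_n=\int_{z_0}^{z_0+1}\theta_s(z)\,e^{2n\pi iz}\,dz \)
can be shifted upward by periodicity, where \( |e^{2n\pi iz}|=e^{-2n\pi\,\Im(z)}\to 0 \) for \( n\ge 1 \), forcing every \( c_n \) to be zero.)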

If you're curious what I'm working on, I have a few mini projects going. I'm recoding my LU decomposition library, so that I can solve arbitrary system sizes up to several thousand terms. I'll be able to save the LU decomposition to disk, so I can do forward and back substitution of any system size up to the size of the full system. For example, I could do another 4096x4096 system (which would take days), then I could decide that I want the 3072-term solution. I could solve that in about 10 minutes, give or take.

I'm also working on code to generate the coefficients of the Schroeder and inverse Schroeder functions, in terms of ln(b) and a given fixed point c, for arbitrary base b and fixed point c. I'm not sure it works for attracting or parabolic fixed points, but it should work for repelling fixed points. I'm trying to make the code faster, as the calculation time seems to be increasing at about the 8th power of n, for n terms. For example, it takes about 16 seconds to calculate 20 terms, but over 6 minutes to calculate 30 terms. Once the terms are calculated, the Schroeder and inverse Schroeder functions for an arbitrary base b and fixed point c can be calculated in mere seconds. The solutions will be exact, to machine precision, unlike the DFT method I was using previously, which required performing hundreds (or thousands) of iterated logarithms of hundreds (or thousands) of sample points, with considerable excess precision during the calculations in order to get a desired precision.
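To illustrate the kind of recurrence involved, here is a naive formal-series sketch (not my actual library code; it re-substitutes the full series at every step, which is exactly the sort of cost I'm trying to optimize away; it assumes w is an unused formal variable):

Code:
/* formal Schroder series for f(z) = b^z near a repelling fixed point c
   (b^c = c), with multiplier lam = log(b)*c: solve sigma(g(w)) = lam*sigma(w)
   term by term, for the conjugated map g(w) = b^(w+c) - c = c*(b^w - 1) */
schroder(b, c, N) = {
  my(lam = log(b)*c, g, s, t);
  g = sum(j = 1, N, c*log(b)^j/j!*w^j) + O(w^(N+1));
  s = w + O(w^(N+1));
  for(k = 2, N,
    t = polcoeff(subst(s, w, g) - lam*s, k);  /* residual with a_k = 0 */
    s -= t/(lam^k - lam)*w^k);                /* pick a_k to cancel it */
  s;
}

The inverse Schroeder series then comes from serreverse(schroder(b, c, N)), and the repelling case is the safe one, since |lam| > 1 keeps every lam^k - lam away from zero.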

I'm also playing around with my FFT library, trying to figure out how to cancel out negative terms. I had made some progress in understanding the problem, but hadn't yet made progress in canceling the terms. I'm going to go through your post one more time, and see if I can figure it out.
~ Jay Daniel Fox
#20
Sheldon Wrote:Anyway, remarkably this all works and generates superbly accurate results ...

Here is my pari-gp code, which is hopefully semi-understandable with the help of my other posts in this thread. It doesn't have my Kneser slog(z) approximation; instead it generates the two 1-cyclic theta mappings, \( \theta_{Sj}(z);\;\;\theta_{Rj}(z) \). With this example, I started with a 120 term jtaylor series giving jslog(z) accurate to about 16.5 decimal digits, and then generated a pair of theta mappings, so that the resulting jslogtht is accurate to ~57 decimal digits over the entire complex plane!!!

.gp   jpost.gp (Size: 391.12 KB / Downloads: 677)
Code:
\r jpost.gp
/* helpjpost(); other stuff in jpost.gp after you try the example below      */
\p 77                    /* default value; set higher if desired             */
...
/***** here is an example you can try, with results accurate to 10^-57   *****/
computejslog(120);       /* compute jtaylor with 77 decimal digits           */
jslog(z); jslog(0.5);    /* jay's slog accurate to about 16.5 decimal digits */
/* calcthts computes fourier series of jslog(complex-superfunction)          */
z0rgt=0.9455+0.4824*I;   /* default value for right edge of fourier series   */
/* first, compute fourier series of jslog(isuperf(z)) w/out thtr mapping     */
/* calcthts(thtr,n,z0rgt); thtJr=0; 80 samples, z0rgt printout abs(thts)     */
/* calcthtr(termstocancel,n,z0rgt); termstocancel based on printout above    */
/* calcthtr also calls and computes thts; enables slogthtr and slogthts      */
calcthts(0,80,z0rgt);    /* test run! 80 samples, to estimate terms of thtr  */
calcthtr(28,80,z0rgt);   /* calculates thtr with 28 terms and 80 samples     */
slogthtr(0.5)            /* jslog(z)+thtr(jslog(z)); >59 decimal digits      */
slogthtr(0.25+0.1*I)     /* accurate to >59 decimal digits                   */
slogthtr(z0rgt)          /* accurate to >56 decimal digits                   */
slogthts(0.7*I)          /* isuperf(z)+thts(isuperf(z)); >58 decimal digits  */
jslogtht(z);       /* if imag(isuperf(z))>Imag(isuperf(z0rgt)) use slogthts  */
jslogtht(z);       /* if imag(isuperf(z))<Imag(isuperf(z0rgt)) use slogthtr  */

The code in jpost.gp was mostly borrowed from my fatou.gp code, available online at math.eretrandre. I stripped out anything I didn't need, including the slog (Kneser slog approximation), but kept things like the formal Schroder function and its inverse (called by initSchroder), the fixed point routine (called CalcFixed), and the complex valued Abel function and its inverse (isuperf(z) and superf(z)), \( \alpha(z);\;\alpha^{-1}(z) \). The computejslog(i) routine creates jtaylor, which is used by the jslog(z) routine; both came from ... JayDFox a long time ago. I had incorporated computejslog into fatou.gp in earlier, much less complete experiments on the 1-cyclic mapping between jslog and Kneser's slog.

This example samples the complex plane along the green dotted line and, by reflection, its complex conjugate in the lower half of the complex plane. I have chosen z0rgt so that these samples optimize the accuracy over the unusual peanut shaped convergence region of jslog(z) that previous posts have discussed. Here, z0rgt is the rightmost point of the set of points represented by the green dotted line. These are equally spaced samples of the complex valued superfunction \( \alpha^{-1}(\alpha(\text{z0rgt})-1)...\text{z0rgt} \).

\( \theta_{Sj}(z)\approx\text{jslog}(\alpha^{-1}(z))+\theta_{Rj}\left(\text{jslog}(\alpha^{-1}(z))\right)-z;\;\; \) equation for thts sample approximation points using thtr
[image: the green dotted line of superfunction sample points in the complex plane]

The region of convergence matches the peanut shaped convergence of the 
\( \text{slogthtr(z)}=\text{jslog}(z)+\theta_{Rj}(\text{jslog}(z)) \)
Here is a plot from -1 to +2 real comparing with Kneser's slog: \( \frac{10^{-57}}{\text{kslog}(z)-\text{slogthtr}(z)} \)
[image: error plot of slogthtr versus Kneser's slog, -1 to +2 real]

This is the region of convergence of slogthts, which uses the \( \theta_{Sj} \) (thts) theta mapping we just calculated. It converges best where slogthtr(z) doesn't converge as well, in the upper part of the complex plane.
\( \text{slogthts(z)}=\alpha(z)+\theta_{Sj}(\alpha(z)); \) 

Here is a plot from -1 to +2 real, imaginary part from +1.3 down to 0, of \( \frac{10^{-57}}{\text{kslog}(z)-\text{slogthts}(z)} \)
[image: error plot of slogthts versus Kneser's slog, -1 to +2 real, 0 to +1.3 imaginary]

These two functions, slogthtr near the real axis and slogthts in the upper part of the complex plane, can be stitched together. I call this function jslogtht(z), which also extends slogthtr near the real axis by using the approximations slogthtr(z)~=slogthtr(exp(z))-1 and slogthtr(z)~=slogthtr(log(z))+1. Then, by using the Schwarz reflection over the real axis, we have extended the original jslog, accurate to 16.5 decimal digits, to a very accurate slog, accurate to 8E-57 over the entire complex plane!!! Moreover, this function is mostly accurate to better than 57 decimal digits everywhere in the complex plane, except near the z0rgt stitching. In the plot, you can see the theta mappings of both approximations where the two functions are stitched together. My program calls this function "jslogtht". Here is a plot from -1 to +2 real, +1.3imag to -1.3imag, of
\( \frac{10^{-57}}{\text{kslog}(z)-\text{jslogtht}(z)} \)
[image: error plot of the stitched jslogtht versus Kneser's slog, -1 to +2 real, -1.3 to +1.3 imaginary]
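The stitching logic looks roughly like this (an illustrative sketch with made-up cutoffs, not the actual jslogtht code in jpost.gp):

Code:
/* stitch: slogthts high in the complex plane, slogthtr near the real
   axis, walking z into range with the slog functional equation; the
   1.2 and -0.2 cutoffs are illustrative guesses                       */
myslog(z) = {
  if(imag(z) < 0, return(conj(myslog(conj(z)))));  /* Schwarz reflection */
  if(imag(isuperf(z)) > imag(isuperf(z0rgt)), return(slogthts(z)));
  if(real(z) > 1.2,  return(myslog(log(z)) + 1));  /* slog(z)=slog(log(z))+1 */
  if(real(z) < -0.2, return(myslog(exp(z)) - 1));  /* slog(z)=slog(exp(z))-1 */
  slogthtr(z);
}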

I also did some higher precision calculations, with up to computejslog(360), which does require more memory. The routine spends slightly more time on the theta mapping than on the original computejslog routine, but they both take approximately the same amount of time. With this mapping, the results are accurate to within 1E-153 over the entire complex plane for jslogtht(z).
Code:
\p 250
computejslog(360);        /* 6.1 seconds */
calcthtr(92,190,z0rgt);   /* 6.3 seconds */
jslogtht(0.5)             /* accurate to within 2E-157 */
jslogtht(z0rgt)           /* 1E-153 at the z0rgt stitching; 2E-155 or better everywhere else in the complex plane */

In general, the precision of the resulting slog is limited by the error term:

\( \text{jslog}(\text{z0rgt})-\text{jslog}(\ln(\text{z0rgt}))-1 \)
For computejslog(120), this limiting value is  4E-57; for computejslog(360) the limiting value is 4E-154.  For the 800 term 450 decimal digit series Jay posted, I use a different z0rgt that is on the same line as above but optimized around slog(0) instead of using the peanut shaped region.  Accuracy is limited to 2E-244, and my program generated such a thtr result too, which was also verified.
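A quick way to observe this limit with the routines above (illustrative):

Code:
/* the functional equation residue that bounds the final accuracy */
print(abs(jslog(z0rgt) - jslog(log(z0rgt)) - 1));  /* ~4E-57 for computejslog(120) */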
- Sheldon