Revisiting my accelerated slog solution using Abel matrix inversion
#1
Happy New Year!

A few weeks ago, I started getting interested in revisiting my accelerated slog solution.  The basics of my method are discussed in this thread:
Improving convergence of Andrew's slog

A note on that thread.  The inverse Abel matrix approach was not discovered by me.  I don't know for sure who first used and published the method, but I'm reasonably certain that my first introduction to the method was through Andrew Robbins, so I give him the credit.  As far as I know, however, I was the first to try to accelerate convergence with the method described in the linked thread.  Indeed, I don't think I've seen it described anywhere else.  If someone knows of another person using this method, please let me know!

Anyway, I bring this all up because I am once again playing with this solution.  I've modified my matrix solver to use Gaussian elimination to perform an LU decomposition.  Due to memory constraints, I don't actually save the L matrix, and I've modified the code to swap rows of the U matrix to disk.  This has allowed me to solve larger and larger systems without RAM becoming a bottleneck.  The downside is a 20%-40% reduction in speed.  (I'm fortunate to have an SSD.  The performance might not be as tolerable on a traditional hard drive.)
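
To illustrate the idea, here's a rough Python/numpy sketch (not my actual SAGE solver, which runs at thousands of bits of precision; the doubles here only show the structure).  The L multipliers are applied to the right-hand side as soon as they are computed, so L is never stored, and each finished row of U is spilled to disk:

Code:
import numpy as np

def solve_ooc(A, b, u_file="u_rows.npy"):
    """Solve A*x = b in place (A and b are destroyed).  No pivoting,
    which assumes the matrix is well enough conditioned; the precision
    loss is what the high-precision arithmetic has to absorb."""
    n = len(b)
    with open(u_file, "wb") as f:
        for i in range(n):
            for r in range(i + 1, n):
                m = A[r, i] / A[i, i]     # multiplier = an entry of L
                A[r, i:] -= m * A[i, i:]  # eliminate below the pivot
                b[r] -= m * b[i]          # forward substitution now, so
                                          # L never needs to be stored
            np.save(f, A[i, i:])          # row i of U is final: spill it
    x = np.zeros_like(b)
    with open(u_file, "rb") as f:         # a real solver would read the
        rows = [np.load(f) for _ in range(n)]  # rows back in reverse
    for i in reversed(range(n)):          # back-substitution from disk
        x[i] = (b[i] - rows[i][1:] @ x[i + 1:]) / rows[i][0]
    return x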

I've managed to solve a 4096x4096 system with 7168 bits of floating point precision.  It took about six days.  Based on previous estimates of the convergence and system precision loss, the final result should have over 1500 bits of precision, which is far more than needed (accuracy is probably on the order of 80-100 bits for practical applications).

I will post the SAGE code soon.  The forum isn't letting me attach it, so I might need to change the file name/format.  At any rate, it needs to be cleaned up a bit.

However, I will attach the first 1,000 coefficients of my accelerated 4096x4096 matrix solution.  Why only a thousand?  Well, two reasons.  First of all, based on some meta-analysis I've done (which I'll post in a few weeks, when I have the time to write it up properly), I believe only the first 600-700 terms are accurate anyway.  For example, if you wanted to perform a series reversion to get the sexp function, you would get pretty much "noise" after about the 600th term.

The second reason is that the forum is limiting me to a 200kB file size for attachments.  The full 4096 terms came to well over the limit, so I had to truncate the file anyway.  As long as I was truncating it, I figured I'd limit it to something reasonable.

Please note, these coefficients are for the power series of the slog residue (i.e., the slog function minus the logarithms at the primary fixed points).  I know, "residue" is already a well-defined mathematical concept, and I still don't have a better name for this.  It's what I called it in my original posts on this subject over 11 years ago, so there's no sense changing the name flippantly; I will work on finding a better one.  Anyway, calculate the Taylor series for the logarithms at the primary fixed points, and add it to this series.  For "practical" calculations, just use the logarithms themselves, and add this function.  Maybe I'll devote a forum post to a demonstration later.
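
In other words, something like this sketch (double precision only, for illustration; the attached coefficients deserve much higher precision, e.g. mpmath or SAGE reals):

Code:
import cmath

# Primary fixed point of exp, i.e. exp(L) = L.
L = 0.318131505204764 + 1.337235701430689j

def slog(z, residue_coeffs):
    """residue_coeffs[k] = coefficient of z^k from the attached file."""
    s = sum(c * z**k for k, c in enumerate(residue_coeffs))
    s += cmath.log(z - L) / L                          # singularity at L
    s += cmath.log(z - L.conjugate()) / L.conjugate()  # ...and at L*
    return s

# For real z inside the radius of convergence, the two log terms are
# conjugates, so the result is real; under the usual normalization,
# slog(0) ~= -1, slog(1) ~= 0, slog(e) ~= 1.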


Attached Files
.txt   slog_coeffs_4096_acc_1000terms.txt (Size: 81.91 KB / Downloads: 710)
~ Jay Daniel Fox
#2
Hi Jay - what a surprise to see you here again.  Happy New Year to you too!

Just briefly, for the moment:

Regarding the question of a name for the slog-without-log: how about something similar to the "complete", "upper incomplete", and "lower incomplete" gamma function?  Reading such a term would immediately give me the right mental association.

If you need space to upload a larger file (of tetration data or something closely related), I could reserve an area on my webspace, with a subdirectory under your name.

Gottfried
Gottfried Helms, Kassel
#3
Hey Jay, welcome back!

FYI:
The forum allows different file types (recognized by suffix) with different size limits.
I added a new file type ".sage" for attachments, with a 1 MB size limit.
I increased the file size limit for .txt files from 200 kB to 1 MB.
#4
I'll attempt to post the SAGE code.  It still needs some cleanup, and I'm actually planning to redo the OOC (out-of-core) solver.  Please note, for smaller system sizes (i.e., systems which easily fit in RAM), the "SpaceSaver2" algorithm is probably the fastest.  (The full name of the method in the library is "Solve_System_Gauss_SpaceSaver2".)  If you want to try the library and have questions, I'll try my best to answer.

Please note, I'm using this code in the Windows executable version of SAGE.  I still have an old copy of the SAGE appliance that runs on Linux in a VMWare session, but I can't get it working with the latest version of the VMWare Workstation Player.

I'll try to convert this code to pari/gp at some point.  This will allow me to compare the performance and precision, and it will also ensure that I have access to the code in a second language, just in case SAGE development were to stop in the future.  If I can get access to Mathematica, I will also try to port it to that language.

My next project is to rewrite the OOC solver so that it scales a little better, and so I can save the L matrix.  Currently I only save the U matrix, which is enough for back-substitution; I need the L matrix to do forward substitution.  For my current methods, I do the forward substitution during the LU decomposition phase, which is why I can get away with tossing the L matrix.  However, my next phase of research will require me to test lots of vectors, so I need the ability to perform forward substitution on demand, and hence the L matrix.
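
For anyone following along, the reason the L matrix matters is standard: once A = LU is factored, each new right-hand side b costs only two triangular solves, L*y = b and then U*x = y, rather than a fresh decomposition.  A generic dense sketch:

Code:
import numpy as np

def forward_sub(L, b):          # solve L*y = b, L lower triangular
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):             # solve U*x = y, U upper triangular
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# With both factors saved, testing many vectors is cheap:
#   x = back_sub(U, forward_sub(L, b))   # O(n^2) per vector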


Attached Files
.sage   slog_accel_lib_e_v6.sage (Size: 12.54 KB / Downloads: 679)
.sage   test_ooc.sage (Size: 603 bytes / Downloads: 689)
.sage   test_bsub.sage (Size: 493 bytes / Downloads: 710)
~ Jay Daniel Fox
#5
Also, here is a copy of the full 4096-term accelerated solution, at 256 bits of precision.  As I noted before, the terms start to become inaccurate after about 600-700 terms, so the remaining terms are mostly there for completeness, and to validate the solution.


Attached Files
.txt   slog_coeffs_4096_acc.txt (Size: 337.4 KB / Downloads: 654)
~ Jay Daniel Fox
#6
(01/07/2019, 01:40 AM)jaydfox Wrote: Also, here is a copy of the full 4096-term accelerated solution, at 256 bits of precision.  As I noted before, the terms start to become inaccurate after about 600-700 terms, so the remaining terms are mostly there for completeness, and to validate the solution.

Hey Jay, welcome back.  I've always been fascinated by your accelerated matrix solution, and I've always wondered how stable the solution is.  I use the same accelerator to cancel out a lot of the slog singularity at L and thereby speed up computation.  However, other than the speedup, I use a completely different slog algorithm in my fatou.gp program.  I also included an implementation of your accelerated matrix solution in my program, although it is limited by pari-gp's memory limitations.  I've typically generated your matrix solution for a 100x100 matrix, which gives only a 16-17 decimal digit solution; it's been a while since I experimented with it.

When I used your accelerated solution to improve my fatou.gp Abel function convergence, I centered my solution between the two fixed points, so the results aren't immediately comparable.  To help compare the results, I re-sampled my solution, centered at x=0, in an attempt to match your accelerated solution.

Here are the first 40 Taylor series terms of the result, printed to 78 decimal digits of precision, centered at 0.  I used fatou.gp to generate a solution which should match Kneser's Riemann mapping solution to about 111 decimal digits.  Then I resampled, using 1024 equally spaced points on a unit circle centered at zero, so the resample should have all of the precision of the 111 decimal digit solution (see the sketch after the coefficient listing below).  Term x^692 was 10^-111, so I could post 692 terms for comparison if anyone is interested.  The results match your post at the beginning of this thread to about 25-26 decimal digits, which is interesting and way too good to be due to chance, but the differences will require more investigation... edit: perhaps something like this explains what's going on, where theta is a 1-cyclic function.
\( \text{JaySlogSeries}(z)\approx\text{KneserSlog}(z)+\theta(\text{KneserSlog}(z))\;\;\;??? \)
Such a theta mapping might explain why Jay's computation is internally consistent to 450 decimal digits, of which Jay posted ~78 decimal digits here on the forum.  Perhaps one could calculate such a theta mapping to help explain what's going on; more work is required.  Monday evening I'll post some updates on how to bring Jay's slog series into fatou.gp ...
Code:
{jaykneserslog=
+x^ 1* -0.0291847169160739298766307382174481294886258010492417221318557874529766446280271
+x^ 2*  0.00110090814344297056968002550433916758416294801484763606535323215947742312998891
+x^ 3*  0.000543879513462731464670816527227773747790273366131704879288264439117105813938343
+x^ 4* -0.000203213036542027025247546663475145270021938728061876580382634944327259507613070
+x^ 5*  3.22280652811476939965291611214222619698404045553233990135743756086579853787466 E-6
+x^ 6*  1.84668813053769065513146911482491187292420670063380010345839003346342341354811 E-5
+x^ 7* -2.55239021146659093615783219124309045163219508036433185642511568187908692740179 E-6
+x^ 8* -2.17351637826570837729804963941797089491977146333679699165249014512560057413400 E-6
+x^ 9*  4.33349226539374888910868041733179237880862838402420299817266589574554488350356 E-7
+x^10*  3.59008961404512998761419264561270511400560807428712691206276402209143971317407 E-7
+x^11* -6.43927348279012903879559091201250729509557812545956649627961806138194783640006 E-8
+x^12* -7.58187804421757561572052356032477063125979811760713129502174141007384371312194 E-8
+x^13*  7.30188108425733435064094181207366720162958780594310571075880451665411307408281 E-9
+x^14*  1.83129209024872042204766971560789608994275974103846108186858008329147982370143 E-8
+x^15*  4.01473257932598032346226007262089210881377546684696208058213238500627295459535 E-10
+x^16* -4.67885672554722345567795420931402756716458148476094002591681241111939059855612 E-9
+x^17* -7.82733541235021758641760171001880700472394347501908856378580767748054180877594 E-10
+x^18*  1.19380072264340389022833263428990612278791993921818383464545223232835517887038 E-9
+x^19*  4.22665522239028379306756941809347307667096404974011552334418245250598871959460 E-10
+x^20* -2.85665608816080013429919548538029002477867622159882429937211279129282931350602 E-10
+x^21* -1.81574577166171723785466296557378162670229177144708098538943655575248286391841 E-10
+x^22*  5.66239713718898883906312180713639755874555979636208960430143220041561095439608 E-11
+x^23*  6.96852050843926611038308842481707217583284225194934498017797384941774673015590 E-11
+x^24* -4.91620352304397459943356547431415494679778546116207934625863282799255486528870 E-12
+x^25* -2.44941717030514599798779423273779319972876052437616659420443888252926158412673 E-11
+x^26* -3.42123870391631441053377789422079448618314584584920132740140702413291174523968 E-12
+x^27*  7.81601611754706608693289451350701814359130060609483928174907844754635967785938 E-12
+x^28*  2.91220724036209763215394808304886606041356251273619264186884692847430324022217 E-12
+x^29* -2.16293011912428340286672961802220042913349279225855648441601827627055552941372 E-12
+x^30* -1.54770233704316926732425106606959328377206243330542890682121857904957483929056 E-12
+x^31*  4.42938340560344887035834315164496119841728421490728778076978332679969734227954 E-13
+x^32*  6.79973546377122236399487539221594812310631216618695038174924709985643998954290 E-13
+x^33* -6.90332802128681628839824907472046040540394965670779559600694012846196711125779 E-15
+x^34* -2.60582710142232072109007134613071747204873818687668425522691018054763266928324 E-13
+x^35* -6.01823146772145101019064836071571136694904735629484078612466275998052975606794 E-14
+x^36*  8.64299131187677287397417683498580770292146371585971227254498127557791367936589 E-14
+x^37*  4.48362256100150041731885432251130944047255099679585467364604096135948842001883 E-14
+x^38* -2.29484574790082762339099709778899822955675262384458506531727251403906417766592 E-14
+x^39* -2.35822617237345894785316721974155376877119178761358093987952559243891279892858 E-14
+x^40*  3.22266790018566996903291120532225424437815251483935826863343561533048881414813 E-15}
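
Roughly, the resampling step above looks like this (a double-precision sketch for illustration; the actual computation was done in pari/gp at ~111 decimal digits).  Sampling on a circle turns the Cauchy integral for the Taylor coefficients into a discrete Fourier transform:

Code:
import numpy as np

def taylor_from_circle(f, n_samples=1024, radius=1.0):
    """a_k = (1/(2*pi*i)) * integral of f(z)/z^(k+1) dz; on equally
    spaced samples of the circle this reduces to a DFT.  Coefficient
    a_k is aliased by a_(k+N), so N limits the usable terms."""
    z = radius * np.exp(2j * np.pi * np.arange(n_samples) / n_samples)
    samples = np.array([f(w) for w in z])
    a = np.fft.fft(samples) / n_samples   # approximates a_k * radius^k
    return a / radius ** np.arange(n_samples)
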
- Sheldon
#7
(01/07/2019, 05:14 AM)sheldonison Wrote: The results match your post at the beginning of this thread to about 25-26 decimal digits, which is interesting and way too good to be due to chance, but the differences will require more investigation... edit: perhaps something like this explains what's going on, where theta is a 1-cyclic function.
\( \text{JaySlogSeries}(z)\approx\text{KneserSlog}(z)+\theta(\text{KneserSlog}(z))\;\;\;??? \)
Such a theta mapping might explain why Jay's computation is internally consistent to 450 decimal digits, of which Jay posted ~78 decimal digits here on the forum.  Perhaps one could calculate such a theta mapping to ...
The \( \text{KneserSlog}(z)+\theta(\text{KneserSlog}(z)) \) equation looks like a very good approximation of Jay's slog, but the amplitude of the primary terms, at O(10^-27), seems larger than one would have expected.  The \( \theta \) = jaytheta Fourier series below is accurate to 10^-76 near the real axis, and to within 2*10^-69 inside a unit circle around z=0.  It has 17 terms, from exp(-8*Pi*I*z) to exp(8*Pi*I*z), where the terms come in complex conjugate pairs.

Are there any other experiments we could try to figure out what's going on in Jay's slog matrix, and why it is coming up with a nearly 1-cyclic mapping of Kneser's slog?

The other thing is that I probably need to explain how my fatou.gp slog uses the Schroeder function along with a different 1-cyclic \( \theta(z) \) mapping to guarantee convergence with Kneser's slog, but I don't want to go off topic.
Code:
{ jaytheta(z) =
( 3.5728276456649148477566116177388023019007268196783349159893701815277141067225 E-95  
 -1.3649661115074318758359394184069459157348183079287255024123555076990915418413 E-95*I)*exp(z*2*Pi*I*-8) +
(-1.5440810246150882539482350614654446227922901045111993369886391972973287458105 E-86  
 -5.5669023559216323925345216438025695019598405830531829949210261661331938210694 E-87*I)*exp(z*2*Pi*I*-7) +
( 3.7086748176968958397009105825753164147304702764047775074802376855971837334383 E-78  
 +1.1451582025980524236616989591346313817219047309050558224544165806126816293184 E-77*I)*exp(z*2*Pi*I*-6) +
( 1.1210614871772847624343387516545807440942656652432663519744124767146307448107 E-68  
 -1.1470591130474478937181194872389544976665521110710100672287841946582901951881 E-68*I)*exp(z*2*Pi*I*-5) +
(-3.8539202891193833420560824550695624902356998189778776872572042272613662988617 E-59  
 -2.1138152531049279432069204778324824829587238753274203674250488900370678846025 E-59*I)*exp(z*2*Pi*I*-4) +
(-1.4211380450705848101841053395855311988186232917200164673339875435393048716064 E-49  
 +2.6322911718861857981740665561538780260969980190539409008516880732957221621375 E-49*I)*exp(z*2*Pi*I*-3) +
( 4.6199048930995315941719898370835132530766708685634714050488164704159381114052 E-39  
 +5.2167577270913630361688519902263191672290188822400996799706708147241652355711 E-39*I)*exp(z*2*Pi*I*-2) +
( 1.0696559455881259713655169257010732622925108009677598645744429747235112837897 E-27  
 +1.9109391105341231522214949744300521979800047513277681914525934159682392769201 E-28*I)*exp(z*2*Pi*I*-1) +
 -2.1393118911854917525169486869813993088913918409366654477997123124873055208004 E-27  +
( 1.0696559455881259713655169257010732622925108009677598647891077428436746514155 E-27  
 -1.9109391105341231522214949744300521979800047513277681928987942986883883045070 E-28*I)*exp(z*2*Pi*I) +
( 4.6199048930995315941719898370835132530766708700408536630674514450527650792040 E-39  
 -5.2167577270913630361688519902263191672290188809445724819576941574423230216014 E-39*I)*exp(z*2*Pi*I*2) +
(-1.4211380450705848101841053395855311988566716857215934631581381619210748914954 E-49  
 -2.6322911718861857981740665561538780261149441871658727620111922865086449876717 E-49*I)*exp(z*2*Pi*I*3) +
(-3.8539202891193833420560824550691649153570853618694845312416706112027037826147 E-59  
 +2.1138152531049279432069204778267325506190281853467293590631743118540833037669 E-59*I)*exp(z*2*Pi*I*4) +
( 1.1210614871772847624343805343719207388438541669933487597084797186218859963216 E-68  
 +1.1470591130474478937181088034863740700773115622928860057550724684519494199295 E-68*I)*exp(z*2*Pi*I*5) +
( 3.7086748176968966527644112254786062183031494760821947404652325628506661872837 E-78  
 -1.1451582025980529478042915261001026578076002314767359370661049975563219165253 E-77*I)*exp(z*2*Pi*I*6) +
(-1.5440810162317820257286076540777670544911171119974240869791763102058717224083 E-86  
 +5.5669023741799632403812586867225091698768319497226142393990713274070953349022 E-87*I)*exp(z*2*Pi*I*7) +
( 3.4996277197133582104849492242489945468307342754067132866754747304481619780121 E-95  
 +1.3265185330275725639281119595111317055761552257026709292842321829759362977176 E-95*I)*exp(z*2*Pi*I*8);
}
- Sheldon
#8
(01/08/2019, 06:14 PM)sheldonison Wrote: In Jay's slog matrix, if one replaces Jay's speedup with the exact version via Kneser, but only for terms not in Jay's slog matrix computation, then the only solution left for Jay's matrix to find is Kneser's slog, right?  Of course, this assumes infinite precision.  Wouldn't 1500 bits approximate infinite precision?  I think some more experiments would be required here.  As I recall, Jay's 100 term matrix needs about 67 digits of computation precision to get 16 or 17 decimal digits of result accuracy.  I didn't experiment with larger matrices.  But let's say I used fatou.gp to feed in a speedup for a 900 term series for x^101..x^1000, accurate to 150 decimal digits or so.  Then how many digits matching Kneser could I get out of Jay's slog matrix for a 100x100 matrix?  And how much computation precision would I need?

Hi Sheldon,

I have to admit, I haven't looked at your solution in years, and even then, I didn't fully understand it.  I don't want to bias my work towards yours; I would rather let my work evolve on its own and see where it takes me.  Please don't take that the wrong way, it's nothing personal, but I will probably wait a while before I make another attempt at understanding your solution.  Part of it is also the quest for knowledge, and knowledge like this feels more "earned" when it is derived through hard work and perseverance.

I did something similar when I was working on my Kneser solution (which probably used a very different method of solving the Riemann mapping than you use, because convergence was painfully slow for my method, and your method seems to converge very quickly).  I didn't want to influence my work to make my Kneser solution match my accelerated matrix solution for the Abel function, so I specifically used very naive approximations of the sexp function to initialize my data.  I've lost my Kneser code, but I lost it once before and recreated it, so I'm sure I can do it again.

Regarding the very high internal precision and relatively low accuracy of my solution: I don't think it's necessarily the result of a theta function perturbing the results.  I think it's simply an artifact of the slow convergence.  If you solve the (unaccelerated) Abel matrix for a simple iteration like 2x+1 (whose exact Abel function is log_2(x+1)), you get a similar situation: high internal precision, but low accuracy (see the sketch below).  In the example of 2x+1, the solution appears to be converging on the exact solution.  My hope (as yet unproven) is that the Abel matrix solution for the slog (whether accelerated or not) likewise converges on the "exact" solution, i.e., the Kneser solution.
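
Here's a quick illustrative sketch of that 2x+1 experiment, using exact rational arithmetic so that internal precision is effectively infinite and whatever error remains is pure truncation error.  Collecting powers of x in \( \alpha(2x+1)-\alpha(x)=1 \) with \( \alpha(x)=\sum_{k\ge1}a_k x^k \) gives the (unaccelerated) Abel matrix:

Code:
from fractions import Fraction
from math import comb, log

n = 32  # truncation size of the Abel matrix

# Row j: coefficient of x^j in sum_k a_k*((2x+1)^k - x^k) = 1, j = 0..n-1.
# (2x+1)^k contributes comb(k,j)*2^j to x^j; x^k contributes delta_(j,k).
M = [[Fraction(comb(k, j) * 2**j - (1 if j == k else 0))
      for k in range(1, n + 1)] for j in range(n)]
b = [Fraction(1)] + [Fraction(0)] * (n - 1)

# Gaussian elimination over the rationals: exact, so any error left in
# the solution is purely an artifact of truncating the system.
for i in range(n):
    p = next(r for r in range(i, n) if M[r][i] != 0)
    M[i], M[p], b[i], b[p] = M[p], M[i], b[p], b[i]
    for r in range(i + 1, n):
        f = M[r][i] / M[i][i]
        M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
        b[r] -= f * b[i]
a = [Fraction(0)] * n
for i in reversed(range(n)):
    a[i] = (b[i] - sum(M[i][c] * a[c] for c in range(i + 1, n))) / M[i][i]

# Compare with the exact Abel function log_2(x+1), whose k-th Taylor
# coefficient is (-1)^(k+1)/(k*ln 2): 1.4427, -0.7213, 0.4809, ...
for k in (1, 2, 3):
    print(k, float(a[k - 1]), (-1) ** (k + 1) / (k * log(2)))
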
~ Jay Daniel Fox
#9
This is a second post analyzing Jay's slog vs Kneser.  \( \text{JaySlog}(z)=\text{JayTaylor}(z)+\frac{\ln(z-L)}{L}+\frac{\ln(z-L^*)}{L^*};\;\;\;L\approx0.3181+1.337i \), where L is the fixed point.  By judiciously choosing how to take the 1-cyclic Fourier series of JaySlog(SexpKneser(z))-z, I was able to get two additional accurate terms over the earlier 1-cyclic Fourier series, for eight complex conjugate pairs of terms plus the constant (see previous post: updated jaytheta function).  Without the theta Fourier series, Jay's slog matches Kneser to about 27 decimal digits at the real axis.  With the theta mapping, the KneserSlog+theta(KneserSlog) approximation matches Jay's slog series to >76 decimal digits over most of a circle of radius 1.1, except for a bite taken out of the circle near both L and L*.  The image is a color contour plot of \( \frac{10^{-72}}{\text{JaySlog}(z)-\text{KneserSlog}(z)-\theta(\text{KneserSlog}(z))} \).  The main region matches Jay's slog series to better than 76 decimal digits inside most of the circle!  The results are so good that, for all intents and purposes, the two functions are the same, except near the two singularities, as can be seen from the image.  Even near the singularity, just before the plot turns black at z=i, the two match each other to within 5E-74.  The plot turns black where consistency drops below ~70 decimal digits; consistency is better than 5E-64 everywhere in the circle of radius 1.1.  As before, I am using the first 692 Taylor series coefficients of Jay's slog, where a_692 ~= 2E-111.
[attached image: color contour plot of the difference between Jay's slog and KneserSlog + theta(KneserSlog)]

So, is there any way to know that Jay's slog doesn't match Kneser's slog?  How would one know if Jay's slog is correct and Sheldon's program gave the incorrect slog?  It turns out we can also compare any slog valid over enough of the complex plane to the inverse Schroeder function, where the Schroeder function is turned into a complex valued Abel function \( \alpha(z) \) for base-e exponentiation.  Its inverse \( \alpha^{-1}(z) \) is a superfunction for base e, which is also complex valued.  In the plot below, we can see the inverse Schroeder function extending out from the two complex valued fixed points.  Superimposed is a circle of radius abs(L), which is the maximum possible radius of convergence for the slog, since both Kneser's slog and Jay's slog have a singularity at the fixed point.  The smaller circle has a radius of 1.1, within which Jay's slog was consistent to ~76 decimal digits with the theta mapping discussed earlier, except near the singularity.  The green dots represent a sequence of sample points where we take the 1-cyclic Fourier series of \( \text{JaySlog}(\alpha^{-1}(z))-z \).  Here, \( \alpha^{-1}(z) \) is a complex valued superfunction for base e, developed at the fixed point L.  This superfunction is the inverse of the complex valued Abel function \( \alpha(z)=\frac{\ln(S(z))}{L} \), where S(z) is the Schroeder function.
[attached image: the inverse Schroeder function extending from the fixed points, with the circles of radius abs(L) and 1.1 and the green sample points]


If we subtract out the constant term, the resulting 1-cyclic function has an amplitude of about 0.0009 and looks visually like a simple one-term 1-cyclic exponential, but of course it is much more complicated than that.  The details of the 1-cyclic function are critical to understanding the behavior of the slog in the complex plane.  So let us analyze the 1-cyclic theta Schroeder function mapping for both Jay's slog and Sheldon's slog, where Sheldon's slog is also 692 terms printed with 77 decimal digits of precision, centered at 0, in the same format as Jay's slog.
[attached image]

For simplicity I am using the substitution \( y=\exp(2\pi i (z-z_0)) \), where the 1-cyclic Fourier series was taken on 60 equally spaced samples between \( \alpha^{-1}(z_0)\;...\;\alpha^{-1}(z_0+1) \).  My \( z_0 \) definition is somewhat arbitrary; any number whose sexp is in the well behaved range will do.  \( z_0=\alpha(\text{sexp}(-1.2+\frac{-\ln(100)}{2\pi i}))\approx\alpha(\text{sexp}(-1.2+0.733i))\approx\alpha(-0.0161+0.75768i) \)

For brevity, I'm only printing the terms accurate to 25 decimal digits, even though I calculated them to much higher precision.  Remember that \( y=\exp(2\pi i (z-z_0)) \).  The terms in y, y^2, ..., y^n decay to zero in the limit as \( \Im(z)\to\infty \).  These are very similar to the well behaved terms we expect in Kneser's slog.  But the terms in y^-1, y^-2, ..., y^-n grow in amplitude as \( \Im(z)\to\infty \), and these are the terms that tell us that Jay's slog is not Kneser's slog.  The error term is dominated by the y^-1 term, whose magnitude is approximately 10^-25.  This term improves to 10^-27 near the real axis, but misbehaves as we get closer to the singularity at L.

Code:
{JaySchroderTheta=
        0.4884150884437966033074146 + 0.9223068629260569360028236*I
+y^ 1* ( 0.0008527801667562909114972082 + 0.0003125556838491044418014920*I)
+y^ 2* (-9.708080486410223227916803 E-7 + 9.710323531077750244488204 E-7*I)
+y^ 3* (-5.878024574251115606103497 E-10 - 1.978287506966856063528219 E-9*I)
+y^ 4* ( 3.001780705849975137758605 E-12 + 7.045220893442265392671639 E-13*I)
+y^ 5* (-3.208477845886844079339461 E-15 + 3.530961767417970849545504 E-15*I)
+y^ 6* (-2.624381171326872906024063 E-18 - 7.335020072903456702338085 E-18*I)
+y^ 7* ( 1.321074202132256455724085 E-20 + 1.903016287529692366359941 E-21*I)
+y^ 8* (-1.402639507282101778108773 E-23 + 1.895166893801443575271719 E-23*I)
+y^ 9* (-1.772669741397395815497444 E-26 - 3.843052544312580188258504 E-26*I)
+y^10* ( 7.645694514460890241495210 E-29 + 5.552243517074882110745036 E-30*I)
+y^11* (-7.631015450702930275457768 E-32 + 1.172015128721544589139540 E-31*I)
+y^12* (-1.209607734584374483299469 E-34 - 2.267112332210102955224326 E-34*I)
+y^13* ( 4.753532085143246163488097 E-37 + 5.924245742645747236000230 E-39*I)
+y^14* (-4.356128499530166755058963 E-40 + 7.703675719233272983112047 E-40*I)
+y^15* (-8.714832983894441953540382 E-43 - 1.408639493017801276231409 E-42*I)
+y^16* ( 3.110276412626715226311838 E-45 - 1.613579774192629849189697 E-46*I)
+y^17* (-2.559407501923749769404915 E-48 + 5.292507245138397232431039 E-48*I)
+y^18* (-6.452498657290520087576675 E-51 - 9.075146965735464393804413 E-51*I)
+y^19* ( 2.101441013456621272587330 E-53 - 2.444372952775288835443245 E-54*I)
+y^20* (-1.519200972127019065550623 E-56 + 3.733376757028580035512240 E-56*I)
+y^21* (-4.853204347358686770736136 E-59 - 5.975957490470546490388492 E-59*I)
+y^22* ( 1.449959209297115275635694 E-61 - 2.636213849095307318093880 E-62*I)
+y^23* (-8.957079055517393516146350 E-65 + 2.681409149566454402820860 E-64*I)
+y^24* (-3.688379555089395707123036 E-67 - 3.989745448215004096507666 E-67*I)
+y^25* ( 1.015123824848710522724926 E-69 - 2.526500725632739298899299 E-70*I)
+y^26* (-5.152996566028510960117872 E-73 + 1.950331086549317835189971 E-72*I)
+y^27* (-2.822342478896528913521354 E-75 - 2.685431212030569314974869 E-75*I)
+y^28* ( 7.178220565561244820058272 E-78 - 2.275652974604442218826640 E-78*I)
+y^29* (-4.049403841842832213780212 E-81 + 2.266443088077442119730075 E-80*I)
+ ...
+y^-10*(-2.498280149110708915077648 E-81 + 1.652447658636857959861455 E-80*I)
+y^-9* (-2.756326912393882620142116 E-81 + 1.817695787154244726071448 E-80*I)
+y^-8* ( 3.516799343332071483243507 E-79 + 1.129899714092047963753501 E-79*I)
+y^-7* (-1.570613227969063404295141 E-72 + 3.924015122582248643571331 E-73*I)
+y^-6* ( 9.849786695584270675395796 E-66 - 6.670060862127669849687576 E-66*I)
+y^-5* (-1.140067387869215638195915 E-58 + 1.105707324092836584333242 E-58*I)
+y^-4* ( 3.239403384690151114449161 E-51 - 2.919774220191775456469721 E-51*I)
+y^-3* (-2.701367092076952153394282 E-43 + 1.243406536013408682927275 E-43*I)
+y^-2* ( 6.760727404214144941914633 E-35 + 1.571472569348109887119765 E-35*I)
+y^-1* (-1.427564270910666936171602 E-26 - 1.075015698770442630959924 E-25*I)

For comparison, here is Sheldon's version of Kneser's slog Fourier series, which behaves as expected.  In doing numerical calculations, I truncate and ignore all of the y^-n terms for Kneser's slog as numerical noise (their values are < 1E-80).  As the slog approaches the fixed point, \( \Im(z)\to\infty \) and \( y=\exp(2\pi i (z-z_0))\to 0 \), so the Fourier series goes to a constant and Kneser's slog approaches the Schroeder function slog.  This means that if we have a Kneser slog accurate to ~77 decimal digits in the red circle of radius 1.1 (as posted), then we can extend it everywhere in the complex plane: when \( \Im(\alpha(z))>\Im(z_0) \), then \( \text{slog}(z)=\alpha(z)+\theta(\alpha(z)) \) will similarly be accurate to >76 decimal digits.

Why does Jay's slog choose the theta mapping that it does, rather than, for example, Kneser's slog solution?  If we had a 4200 term matrix, the results would have been equally self-consistent.  Would they have matched Jay's result with a 4000 term matrix to 75 decimal digits?  I imagine the theta mapping in Jay's slog gradually becomes better and better behaved as the number of terms used in the matrix calculation approaches infinity, but that's about all we can guess.
Code:
{KneserTht=
        0.4884150884437966033074146 + 0.9223068629260569360028236*I
+y^ 1* ( 0.0008527801667562909114972082 + 0.0003125556838491044418014920*I)
+y^ 2* (-9.708080486410223227916804 E-7 + 9.710323531077750244488204 E-7*I)
+y^ 3* (-5.878024574251115606103495 E-10 - 1.978287506966856063528219 E-9*I)
+y^ 4* ( 3.001780705849975137758605 E-12 + 7.045220893442265392671649 E-13*I)
+y^ 5* (-3.208477845886844079339465 E-15 + 3.530961767417970849545504 E-15*I)
+y^ 6* (-2.624381171326872906024058 E-18 - 7.335020072903456702338094 E-18*I)
+y^ 7* ( 1.321074202132256455724086 E-20 + 1.903016287529692366359967 E-21*I)
+y^ 8* (-1.402639507282101778108782 E-23 + 1.895166893801443575271721 E-23*I)
+y^ 9* (-1.772669741397395815497438 E-26 - 3.843052544312580188258528 E-26*I)
+y^10* ( 7.645694514460890241495263 E-29 + 5.552243517074882110745464 E-30*I)
+y^11* (-7.631015450702930275457929 E-32 + 1.172015128721544589139547 E-31*I)
+y^12* (-1.209607734584374483299469 E-34 - 2.267112332210102955224372 E-34*I)
+y^13* ( 4.753532085143246163488202 E-37 + 5.924245742645747236005303 E-39*I)
+y^14* (-4.356128499530166755059191 E-40 + 7.703675719233272983112227 E-40*I)
+y^15* (-8.714832983894441953540533 E-43 - 1.408639493017801276231479 E-42*I)
+y^16* ( 3.110276412626715226312007 E-45 - 1.613579774192629849189277 E-46*I)
+y^17* (-2.559407501923749769405186 E-48 + 5.292507245138397232431359 E-48*I)
+y^18* (-6.452498657290520087577073 E-51 - 9.075146965735464393805332 E-51*I)
+y^19* ( 2.101441013456621272587567 E-53 - 2.444372952775288835443119 E-54*I)
+y^20* (-1.519200972127019065550898 E-56 + 3.733376757028580035512723 E-56*I)
+y^21* (-4.853204347358686770736722 E-59 - 5.975957490470546490390507 E-59*I)
+y^22* ( 1.449959209297115275649282 E-61 - 2.636213849095307319005724 E-62*I)
+y^23* (-8.957079055517393386050392 E-65 + 2.681409149566454313657751 E-64*I)
+y^24* (-3.688379555089382930300230 E-67 - 3.989745448215091595910955 E-67*I)
+y^25* ( 1.015123824849969112322006 E-69 - 2.526500725718867043243022 E-70*I)
+y^26* (-5.152996553594352134719516 E-73 + 1.950331078046532757613616 E-72*I)
+y^27* (-2.822341246957428036475944 E-75 - 2.685439630379945122000459 E-75*I)
+y^28* ( 7.179444561035288527697040 E-78 - 2.284011229705630628906773 E-78*I)
+y^29* (-2.829930076922271673723029 E-81 + 1.434277383231040198217646 E-80*I)
+y^30* (-3.588756110041497691714038 E-83 + 1.032843579771145257285948 E-83*I)
+...
+y^-4* (-6.954413041155221807894541 E-83 + 1.315319285416430611784253 E-82*I)
+y^-3* (-9.285463472073180073827514 E-83 + 1.723366120220390227761641 E-82*I)
+y^-2* (-1.398954602589024216439116 E-82 + 2.505664252483493625419186 E-82*I)
+y^-1* (-2.804523765697162374597977 E-82 + 4.582257135192752691376739 E-82*I)}
- Sheldon
#10
(01/17/2019, 06:44 PM)sheldonison Wrote: I have no idea why or how Jay's slog chooses a different, less well behaved theta mapping.  I imagine the theta mapping in Jay's slog gradually becomes better and better behaved as the number of terms used in the matrix calculation approaches infinity.

Wow, thanks for the analysis!  I still haven't attempted to get into the details of your slog solution and how you calculate the Kneser mapping, because I'm still trying to understand my solution.  I mean, I understand it at a high level, but I keep seeing patterns and trying to tie them together.

But I think I have an answer to your question above, and unfortunately, it's not a very exciting answer.  Brace yourself:

The first term in my Taylor series is inaccurate.

I know, it's not very exciting.  But here's the thing: my accelerated slog solution is internally consistent to hundreds of decimal places, yet the first term of the Taylor series has an error on the order of \( O\left(2^{-7\log_2 n}\right)=O(n^{-7}) \) for a matrix of size nxn, based on my analysis of matrix solutions from 2x2 up to 1024x1024.  Indeed, see the attached graph, which shows the log_2 of the matrix size on the x-axis and the log_2 of the error on the y-axis.

[attached image: log_2 of the first-term error versus log_2 of the matrix size]

Red and blue indicate positive or negative error (I made that image a couple of weeks ago, so I don't remember now which color is which).  As you can see, the error oscillates; the black line is an estimate of the maximum error, with a slope very close to -7, hitting 72 bits at log_2(1024) = 10.  Presumably, then, the first term in my 4096 solution has an error of about 2^(-86).

At any rate, let's do the thought experiment.  My solution is internally consistent to hundreds of decimal places near z=0 and z=1, but the slope at z=0 and z=1 is off by about 10^-27, and hence the slope at sexp(0) and sexp(1) will be off by a similar amount.  This means that the sexp (i.e., the reversion of my slog series) will be off by a (smooth) 1-cyclic function that is equal to 0 at the integers, but whose slope has a magnitude of about 10^-27.  A simple sine function would do the trick (see the example below).
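
For instance (an illustrative choice, not the actual error term), \( \theta(z)=\frac{\epsilon}{2\pi}\sin(2\pi z) \) with \( \epsilon\approx10^{-27} \) vanishes at every integer yet has slope \( \theta'(n)=\epsilon \) there.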

But the second and third and fourth terms are also inaccurate, so of course there will be additional harmonics.

I know it's a very boring reason why there's a theta mapping, but there it is.

For me, the theta mapping isn't a big deal.  It doesn't mean that this solution is "wrong".  It just means that convergence is too slow.  To get 70+ decimal digits of accuracy, i.e., 256+ bits of accuracy, I would need to solve a matrix with billions of rows and columns.  Based on my estimate of 7 bits per doubling of the matrix size (doubling the rows, so 4x the entries), I would need 2^(256/7) ~= 100 billion rows and columns.  Convergence is just too slow to get the kind of precision that you're using in your solution, even with the acceleration technique I came up with.
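
Spelling out that arithmetic: \( 2^{256/7}=2^{36.6}\approx1.0\times10^{11} \), i.e., roughly 100 billion rows and columns.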

The good news is, the acceleration can be taken a step further, by removing the algebraic singularity that remains after removing the logarithmic singularity.  A few years ago, I determined that my accelerated solution appears to be converging on an algebraic singularity in terms of \( (z-L)^{2\pi i / L} \), or rather a power series in such.  (I typically call the fixed point a in my code, but I'm using L here, since that's what you mentioned in your post.)

This makes sense when you look at the Kneser mapping, which involves a mapping from the unit disk to the Kneser region, which is itself an algebraic conformal mapping of the Schroeder mapping.  In other words, when you take the logarithm of the Schroeder mapping, then multiply by 2*pi*i/L, and then exponentiate to get the Kneser region, you are effectively mapping (z-L) to (z-L)^(2 pi i / L), so you get a power series in that algebraic term.  The algebraic singularity, with its complex periodicity, is what allows the Kneser solution to have non-trivial branches (where the value in one branch is not merely a constant difference from the value of the same point in a different branch).  When I was researching this a few years ago, I had moderate success in calculating the first algebraic term (maybe 4-8 digits of accuracy), and then using it to accelerate my solution even more.  However, I've lost all my work from that time period (the hard disk failed), so I'm having to build it up from scratch again.  I hope to post something in the coming weeks or months, as time permits.
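
For reference, the fixed point and that algebraic exponent are easy to compute; here's a quick double-precision sketch using Newton's method on \( e^z-z=0 \):

Code:
import cmath

z = 0.3 + 1.3j                 # starting guess near L ~= 0.3181 + 1.3372i
for _ in range(50):            # Newton's method on exp(z) - z = 0
    z -= (cmath.exp(z) - z) / (cmath.exp(z) - 1)
L = z
print("L        =", L)                  # 0.3181315... + 1.3372357...i
print("2*pi*i/L =", 2j * cmath.pi / L)  # ~= 4.447 + 1.058i
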
~ Jay Daniel Fox

