Revisiting my accelerated slog solution using Abel matrix inversion
#21
(01/30/2019, 05:27 PM)jaydfox Wrote: I've been trying to figure out on my own how to eliminate those coefficients for terms with negative exponents, and my progress has been frustratingly slow.  I was re-reading your post more carefully, and I see you already have found a way.  I will definitely take the time to understand this, because it seems to be key to solving the Riemann mapping numerically.  ...

I'm also playing around with my FFT library, trying to figure out how to cancel out negative terms.  I had made some progress in understanding the problem, but hadn't yet made progress in canceling the terms.  I'm going to go through your post one more time, and see if I can figure it out.

It's been a while, so I wanted to post an update on my progress.

It's actually rather embarrassing.  I had been struggling to find a way to calculate the required Fourier series ("theta" function as Sheldon calls it), so that I could eliminate the terms with the negative exponents in the Riemann mapping of my slog.  (I actually struggled with this a few years ago as well, before I lost all my code.)

I had the ah-ha! moment a few days ago.  And this is the embarrassing part: it's the same problem I originally ran into when trying to calculate a Taylor series for the super-exponential function (sexp).  That problem is non-linear, so larger systems become harder and harder to solve numerically, even with an iterative solver.  The simple fix, as Andrew Robbins demonstrated, was to solve for the inverse function instead: the slog is a linear problem, meaning it can be solved with linear algebra, i.e., matrix math.

I have been trying to solve the problem of directly calculating the coefficients of the Riemann mapping, which maps the unit disk to the Kneser region bounded by the transform of the real line.  This is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

The solution is to solve the inverse operation, mapping the regular slog to the jslog.  In this situation, the theta function is applied to the output of the jslog, which is linear.  Hence, eliminating the coefficients from this mapping is a problem suited to matrix math.  When Sheldon was describing the solution, I was still thinking in the wrong space, which is why I wasn't able to understand.  I was still working in the Kneser space, i.e., exp(2*pi*i*jslog(z)).  I should have been working with jslog(z)-rslog(z), or some similar form.
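In symbols (my own paraphrase of the idea, writing the theta mapping as a 1-cyclic Fourier series with unknown coefficients c_n):

\[ \mathrm{rslog}(z) \;=\; \mathrm{jslog}(z) \;+\; \sum_n c_n\, e^{2\pi i n\, \mathrm{jslog}(z)} \]

Since jslog is already known, every sample point z contributes an equation that is linear in the c_n, so the coefficients drop out of an ordinary matrix solve.  Going the other direction, sexp(z) = jsexp(z + theta(z)) buries the unknown coefficients inside the composition, which is what makes that formulation non-linear.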

That was my first ah-ha moment.  But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms.  Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point.  But for a non-real base, there will be two different regular slogs to consider, at the upper and lower fixed points.  I would need to map the jslog for both of those rslogs, and then cancel out the negative terms in both of them simultaneously.  I have to admit that I was extremely amazed and excited when that clicked.  I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution.  It didn't seem to make sense to me before.  Now I understand!
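To spell out the conjugation remark (again my own formulation): for a real base the two fixed points form a conjugate pair L, L*, and the corresponding regular slogs satisfy

\[ \mathrm{rslog}_{L^*}(z) \;=\; \overline{\mathrm{rslog}_{L}(\bar z)}, \]

so cancelling the negative terms against rslog_L in the upper half-plane automatically cancels them against rslog_{L*} in the lower half-plane; the conjugation comes for free.  For a non-real base the two fixed points are unrelated, which is why both cancellation conditions have to be imposed simultaneously in one combined system.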

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own.  I'm still developing my code, and I haven't run analysis on the rate of convergence yet.  I'm curious to compare my results to Sheldon's, to see if we're using the same algorithms or not.  I'll post results in the next few days.  But I did want to share my excitement that I've made significant progress.
~ Jay Daniel Fox
#22
Quote:... the Riemann mapping ... is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

...The solution is to solve the inverse operation, mapping the regular slog to the jslog.  In this situation, the theta function is applied to the output of the jslog, which is linear.  Hence ... suited to matrix math...

That was my first ah-ha moment.  But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms.  Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point... I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution...  

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own...  

Just some quick clarifications on "Sheldon's" codes.   There are many of them.  
  • kneser.gp and tetcomplex.gp actually do compute the 1-cyclic non-linear theta/Riemann mapping iteratively.  Some of the internal code is really ugly though.  I don't use it anymore ...
  • fatou.gp computes Kneser's slog, which takes advantage of Jay's accelerated technique but does not use Jay's slog or Andrew's slog.  I think Jay would call my complex-valued Abel function the rslog.  fatou.gp is a completely different iterative algorithm.  It works for a very large range of real and complex bases (using both fixed points' Schroder functions), has a default precision of \p 38, which gives ~35-decimal-digit accurate results, and initializes for base "e" in 0.35 seconds.  For \p 77, the time increases to ~5 seconds.  Because this is a linear problem, I eventually also got a matrix implementation of fatou.gp working; but the iterative technique has the advantage that the code can figure out the required parameters as it iterates.
  • jpost.gp in post#20, which was developed entirely during the lifetime of this thread, using the Schroder and fixed point functions from fatou.gp.  jpost.gp computes the slog by cancelling the negative terms in the mapping with a second matrix.  I have only calculated it for base "e" and not for any other real-valued or complex bases.
As far as I can tell, the jpost.gp algorithm is probably an order of magnitude faster than fatou.gp for computing very high precision results.  The only downside is that matrices need a lot more memory, so I haven't computed anything more accurate than 313 decimal digits so far, which needed 256 MB of pari-gp stack space.  This compute-bound vs memory-bound matrix stuff is new to me... Jay's post involved a "4096x4096 system with 7168 bits of floating point precision", implemented in Sage.  How large a stack can you have in pari-gp?
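(For reference, gp's stack can be raised well past 256 MB at runtime; a minimal sketch, assuming a reasonably recent gp:)

Code:
\\ sizes are in bytes
default(parisize, 2^30);       \\ set the working stack to 1 GB
default(parisizemax, 2^32);    \\ allow transient growth up to 4 GB when a computation needs it
\\ or, equivalently, reallocate the stack immediately:
allocatemem(2^30);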

So far, for my jpost.gp program, I started by focusing on extensions of jslog that require minimal extra computation time beyond initializing jslog.  But what if you relax that constraint and make the matrix mappings arbitrarily large?  Then I observe that the "stitching" error in post#20 starts to fade away.  For a fixed jslog, with an arbitrarily large number of terms in the theta mapping, do the slogthtr and slogthts functions in some sense converge towards each other?  edit: question.  In general, the precision of the resulting jslogtht is limited by the error-term discontinuity in the jslog, where the theta mappings are computed.  Can we approximate this as the number of terms in jtaylor gets arbitrarily large?  z0rgt is the somewhat arbitrary probe point I've been using: z0rgt=0.9455+0.4824i;  errterm ~= jslog(z0rgt)-jslog(ln(z0rgt))-1
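(To make that error probe concrete, a minimal sketch; here jslog stands for whichever truncated slog approximation is loaded, so the function name is a placeholder:)

Code:
\\ An exact slog satisfies slog(z) = slog(log(z)) + 1, so the deviation below
\\ measures the "stitching" discontinuity of the truncated jslog.
z0rgt   = 0.9455 + 0.4824*I;                     \\ the somewhat arbitrary probe point from above
errterm = jslog(z0rgt) - jslog(log(z0rgt)) - 1;
print("stitching error at z0rgt: ", abs(errterm));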
- Sheldon
#23
I've just come across this thread and took some time for it - time which I couldn't take when the thread was young & fresh in 2019.

I'd never understood what "a Taylor series at 2 fixpoints" means - from Sheldon's post I now get that it simply meant subtracting from the power series of slog() the real part of the series of (log(x-L)/L + log(x-L*)/L*).  That's all...
So, ok, now I've got it, could reproduce the related power series, and could then look at the coefficients of the residual power series for some recognizable characteristics.
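(In other words, since L and L* are complex conjugates, the two-fixpoint series is just twice a real part,

\[ \frac{\log(x-L)}{L} + \frac{\log(x-L^*)}{L^*} \;=\; 2\,\Re\!\left(\frac{\log(x-L)}{L}\right), \]

a power series with real coefficients - and, if I read the indexing right, exactly the first term of the sums used below.)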

After this I asked myself: if I reduce the power series of the slog() by the real part for the *primary* fixpoint, what happens if I reduce it by the equivalent series for the *second* fixpoint as well, and the *third*, and, of course, by the sum over *all* fixpoints?

We have this nice Pari/GP LambertW() function by Mike3, which also accepts the branch index as a parameter, and I produced the residual of the slog() power series after removing the sum of the real parts of all (well: the first N=2048 ...) power series 2*ln(x-L_k)/L_k, accumulated into the power series SU_N(x) by use of the first N fixpoints.
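As a quick sanity check (a sketch using Mike3's LambertW() as just described), the branch-indexed values really are fixpoints of exp:

Code:
\\ if w*exp(w) = -1 then exp(-w) = -w, so L_K := exp(-LambertW(-1,k)) satisfies exp(L_K) = L_K
{ for (k = 0, 4,
    L_K = exp(-LambertW(-1,k));
    print(k, ":  L_K = ", L_K, "   |exp(L_K)-L_K| = ", abs(exp(L_K)-L_K))); }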

I think it results in a nice residual series res_N(x)=slog(x)-SU_N(x); see my first 60 coefficients below.

-  Note that the limits of the coefficients of SU_N(x) seem to approximate simple rational numbers (but I couldn't detect their construction rule so far).

-  The last column in the table is a logarithm-like rescaling of the res(x) coefficients by means of the asinh() function, attempting to mark a constant (here, for example, 1.64) which should be the radius of convergence, as long as the resulting rescaled coefficients stay bounded within a certain small interval.
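(Reading the code at the bottom, the exact expression for the last column is

\[ A4_k \;=\; \operatorname{asinh}\!\left(\frac{1/2}{1.64^{\,k}\cdot \mathrm{res}_k}\right) \;\approx\; -\operatorname{sign}(\mathrm{res}_k)\,\ln\!\big(1.64^{\,k}\,|\mathrm{res}_k|\big), \]

so as long as the A4 column stays in a bounded band, |res_k| decays roughly like 1.64^(-k).)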

Code:
  k    A1 = slog(x)              A2 = SU_N(x)              A3 = res(x) = A1 - A2      A4 = asinh(0.5/(1.64^k*A3))   \\ see code below
 ----------------------------------------------------------------------------------------------------------------
  0     0                        -6.01362838850             6.01362838850              0.0830489795521
  1     0.915946056500            0.999975272452           -0.0840292159527           -2.00036627431
  2     0.249354598672            0.249999999999           -0.000645401327096         -6.35624875167
  3    -0.110464759796           -0.111111111111            0.000646351314663          5.86008677396
  4    -0.0939362550999          -0.0937500000000          -0.000186255099859         -6.60961016732
  5     0.0100032332932           0.0100000000000           0.00000323329323156       10.1685281561
  6     0.0358979215945           0.0358796296296           0.0000182919649135         7.94087134595
  7     0.00657340109961          0.00657596371882         -0.00000256261921579       -9.41160700518
  8    -0.0123068595182          -0.0123046875000          -0.00000217201818439       -9.08228386197
  9    -0.00638980256916         -0.00639023613561          0.000000433566449037      10.1989545924
 10     0.00327358982282          0.00327323082011          0.000000359002711437       9.89297347993
 11     0.00376920295283          0.00376926734881         -0.0000000643959845224    -11.1165558979
 12    -0.000280217019537        -0.000280141200642        -0.0000000758188951953    -10.4585633972
 13    -0.00177510655720         -0.00177511385911          0.00000000730191846398    12.3040775762
 14    -0.000427969957525        -0.000427988270449         0.0000000183129248088     10.8899113673
 15     0.000679723261244         0.000679722859771         0.000000000401472977297   14.2154372616
 16     0.000412792618166         0.000412797297023        -0.00000000467885679836   -11.2650721615
 17    -0.000186597783775        -0.000186597001042        -0.000000000782733541762  -12.5583926709
 18    -0.000253549198417        -0.000253550392217         0.00000000119380072367    11.6415913810
 19     0.00000747432922309       0.00000747390655756       0.000000000422665522288   12.1852113823
 20     0.000123166907930         0.000123167193596        -0.000000000285665608851  -12.0822743523
 21     0.0000359226636883        0.0000359228452628       -1.81574577162E-10        -12.0407335746
 22    -0.0000477147691069       -0.0000477148257309        5.66239713926E-11         12.7112713772
 23    -0.0000327288948796       -0.0000327289645648        6.96852050893E-11         12.0090195241
 24     0.0000125870328510        0.0000125870377672       -4.91620353562E-12        -14.1657747183
 25     0.0000200057062797        0.0000200057307739       -2.44941717121E-11        -12.0651798701
 26     0.000000328421886987      0.000000328425308225     -3.42123869912E-12        -13.5389161500
 27    -0.00000969753198878      -0.00000969753980480       7.81601612587E-12         12.2180476085
 28    -0.00000331044768235      -0.00000331045059455       2.91220724068E-12         12.7106150489
 29     0.00000370224855088       0.00000370225071381      -2.16293012462E-12        -12.5133662611
 30     0.00000284048701230       0.00000284048856000      -1.54770233960E-12        -12.3533623913
 31    -0.000000907373128721     -0.000000907373571659      4.42938343139E-13         13.1097623179
 32    -0.00000170542583695      -0.00000170542651692       6.79973549150E-13         12.1864427571
 33    -0.000000105766930521     -0.000000105766923618     -6.90332855510E-15        -16.2817967197
 34     0.000000814959687357      0.000000814959947939     -2.60582712173E-13        -12.1561838486
 35     0.000000319378912383      0.000000319378972565     -6.01823151704E-14        -13.1270293894
 36    -0.000000302803952745     -0.000000302804039175      8.64299142248E-14         12.2703778435
 37    -0.000000259113609213     -0.000000259113654049      4.48362263787E-14         12.4319990098
 38     0.0000000668680950706     0.0000000668681180191    -2.29484578713E-14        -12.6070684661
 39     0.000000152047985378      0.000000152048008960     -2.35822623706E-14        -12.0851281284
 40     0.0000000163010167610     0.0000000163010135384     3.22266788681E-15         13.5807171687
 41    -0.0000000711928410564    -0.0000000711928513879     1.03314351218E-14         11.9210392786
 42    -0.0000000316203136559    -0.0000000316203150032     1.34721633503E-15         13.4634937482
 43     0.0000000255268411676     0.0000000255268450390    -3.87137271321E-15        -11.9132288456
 44     0.0000000243757334112     0.0000000243757349738    -1.56262822101E-15        -12.3257725934
 45    -0.00000000488385551816   -0.00000000488385670824    1.19008703699E-15         12.1034190675
 46    -0.0000000139492357905    -0.0000000139492367539     9.63479126480E-16         11.8199537261
 47    -0.00000000212263049456   -0.00000000212263025320   -2.41354209894E-16        -12.7095427025
 48     0.00000000637533532526    0.00000000637533579420   -4.68945196339E-16        -11.5506261564
 49     0.00000000318068006715    0.00000000318068009120   -2.40456264779E-17        -14.0264626951
 50    -0.00000000218960125797   -0.00000000218960145023    1.92260232881E-16         11.4528697483
 51    -0.00000000234030729101   -0.00000000234030735336    6.23544593291E-17         12.0841881498
 52     0.000000000341361331440   0.000000000341361396652  -6.52121328220E-17        -11.5446815607
 53     0.00000000130469052711    0.00000000130469057172   -4.46064141569E-17        -11.4297531928
 54     0.000000000257242243110   0.000000000257242227261   1.58491746740E-17         11.9698171869
 55    -0.000000000580366971924  -0.000000000580366995593   2.36690532344E-17         11.0740699490
 56    -0.000000000323237411750  -0.000000000323237411424  -3.26330860817E-19        -14.8633856329
 57     1.89449577736E-10         1.89449588140E-10        -1.04038597499E-17        -10.9066690225
 58     2.27877205677E-10         2.27877208438E-10        -2.76018307444E-18        -11.7388526386
 59    -2.14327446696E-11        -2.14327484604E-11         3.79079075152E-18         10.9268787670

update: I've just found that the power series of SU_N(x) for N \to \infty seems to converge to that of (a slight shift of) H(x)=log(1-x*exp(-x))/x.
The first coefficient in SU_N(), however, possibly diverges to infinity or converges to some finite value with increasing N, and the observation of the second coefficient is somewhat inconclusive; however, I'd guess that it converges to 1.
It should be mentioned that I've found this relationship via lookup in the OEIS of the integer approximations (achieved by some standard trial & error rescalings of the SU_N() coefficients by factorials and reciprocals), finding http://oeis.org/A009306 (the rescaling step is sketched below).
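(The factorial rescaling can be done in one step with gp's serlaplace, which multiplies the coefficient of x^k by k!; a minimal sketch of the lookup step - the resulting integers, up to signs, are what the OEIS search keyed on:)

Code:
\\ turn the rational coefficients of log(1-x*exp(-x)) into integers for an OEIS search
print( Vec( serlaplace( log(1 - x*exp(-x) + O(x^13)) ) ) );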
So instead of computing finite sums SU_N(x), we can simply use the coefficients h_k of H(x), rescaled as h_k*k/(k-1) (for k>1).
H(x) is here the power series of the sum of all 2*real(log(x-L_k)/L_k) for k=0 to infinity, and perhaps a better/more perfect candidate for the reduction of the slog() power series.

update 2: I've just posted a question focused on the coincidence of SU_N(x) and H(x) at stackexchange: https://math.stackexchange.com/questions...oefficient

update 3: a much better/more coherent/more focused presentation of this is now in the already-mentioned stackexchange post: https://math.stackexchange.com/questions...oefficient
end-update

I don't have much more to say at the moment, but it is perhaps an additional, and perhaps even fruitful, observation.

- - -
Here is the Pari/GP code:
Code:
\ps 60   \\ powerseries expansion up to 60 terms
N = 2048
SU_N = O(x^60);
for (k=0, N,
     L_K = exp(-LambertW(-1,k));      \\ branch-enabled version of LambertW() by Mike3
     SU_N += 2*real(log(x-L_K)/L_K);
    );

\\ --------------------------------------------------------------------------

A1 = VE(JF700,60)                \\ first 60 coefficients of Jay's 700x700-solution
A2 = polcoeffs(SU_N,60)~         \\ first 60 coefficients of the accumulated fixpoints-related series
A3 = A1 - A2                     \\ the residual
A4 = asinh(0.5*VR(dV(1.64,60)*A3))   \\ dV(1.64,60)*A3 rescales A3's coefficients by consecutive powers of 1.64,
                                     \\ VR takes the entrywise reciprocal of its argument vector,
                                     \\ asinh(0.5*...) provides an approximate but signed logarithmic scaling
                                     \\ for values >10 or >100

\\ 'update
dim  = 60
A2_a = polcoeffs(log(1-x*exp(-x))/x, dim)~
A2   = vectorv(dim, r, A2_a[r]*if(r==1, 1, r/(r-1)))
A3   = vectorv(dim, r, A1[r]-A2[r])   \\ the better residual
\\ end update'
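VE(), dV(), VR() and polcoeffs() are small helper routines not shown here; judging purely from how they are used above, minimal stand-ins might look like this (not the originals, just equivalents):

Code:
polcoeffs(f, n) = vector(n, k, polcoef(f, k-1));    \\ first n series coefficients, as a row
VE(V, n) = vectorv(n, k, V[k]);                     \\ first n entries of V, as a column
dV(c, n) = matdiagonal(vector(n, k, c^(k-1)));      \\ diag(c^0, c^1, ..., c^(n-1))
VR(V)    = apply(t -> 1/t, V);                      \\ entrywise reciprocal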

The slog-coefficients were taken from Jay's 700x700 matrix solution posted in this thread.
Gottfried Helms, Kassel

