Revisiting my accelerated slog solution using Abel matrix inversion
#22
Quote:... the Riemann mapping ... is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

...The solution is to solve the inverse operation, mapping the regular slog to the jslog.  In this situation, the theta function is applied to the output of the jslog, which is linear.  Hence ... suited to matrix math...

That was my first ah-ha moment.  But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms.  Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point... I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution...  

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own...  
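To make the quoted distinction concrete (my notation; the formulas are a sketch of the idea, not Jay's exact formulation): the forward Riemann-mapping approach puts the unknown 1-cyclic theta inside the known jsexp,
\[ \mathrm{sexp}(z) = \mathrm{jsexp}\bigl(z + \theta(z)\bigr), \]
which is non-linear in the coefficients of \(\theta\).  The inverse formulation applies \(\theta\) to the output of the known jslog,
\[ \mathrm{slog}(z) = \mathrm{jslog}(z) + \theta\bigl(\mathrm{jslog}(z)\bigr), \qquad \theta(w) = \sum_n a_n e^{2\pi i n w}, \]
so the unknown coefficients \(a_n\) enter linearly, and the problem reduces to a single matrix system.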
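On the conjugation step: for a real base the two primary fixed points are complex conjugates, so by a standard Schwarz reflection argument the lower fixed point's rslog comes for free from the upper one,
\[ \mathrm{rslog}_{\mathrm{lower}}(z) = \overline{\mathrm{rslog}_{\mathrm{upper}}(\bar z)}, \]
which, if I'm reading the step correctly, is why conjugating the negative terms trivially yields the lower fixed point's solution (the upper/lower labeling here is mine).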

Just some quick clarifications on "Sheldon's" codes.   There are many of them.  
  • kneser.gp and tetcomplex.gp actually do compute the 1-cyclic non-linear theta/Riemann mapping iteratively.  Some of the internal code is really ugly though.  I don't use them anymore ...
  • fatou.gp computes Kneser's slog, which takes advantage of Jay's accelerated technique, but it does not use Jay's slog or Andrew's slog.  I think Jay would call my complex valued Abel function the rslog.  fatou.gp is a completely different iterative algorithm.  fatou.gp works for a very large range of real and complex bases (using both fixed points' Schröder functions; a sketch of this step follows the list), and has a default precision of \p 38, which gives ~35 decimal digit accurate results, and initializes for base "e" in 0.35 seconds.  For \p 77, the time increases to ~5 seconds.  Because this is a linear problem, I eventually also got a matrix implementation of fatou.gp working; but the iterative technique has the advantage that the code can figure out the required parameters as it iterates.
  • jpost.gp in post#20 was developed entirely during the lifetime of this thread, using the Schröder and fixed point functions from fatou.gp.  jpost.gp computes Kneser's slog by cancelling the negative terms in the mapping using a second matrix.  I have only calculated it for base "e", and not for any other real valued bases or complex bases.
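For context, the Schröder-to-Abel step mentioned in the fatou.gp item works as follows (my summary in generic notation, not code from fatou.gp).  For base \(b\), a fixed point \(L\) satisfies \(b^L = L\), with multiplier \(\lambda = L\,\ln(b)\); the Schröder function \(\Phi\) linearizes iteration, and taking logarithms turns that into the Abel equation:
\[ \Phi(b^z) = \lambda\,\Phi(z) \quad\Longrightarrow\quad \mathrm{rslog}(z) = \frac{\ln\Phi(z)}{\ln\lambda}, \qquad \mathrm{rslog}(b^z) = \mathrm{rslog}(z) + 1. \]
This is the complex valued Abel function referred to above; for a real base there is one such rslog at each of the two conjugate fixed points.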
As far as I can tell, the jpost.gp algorithm is probably an order of magnitude faster than fatou.gp for computing very high precision results.  The only downside is that matrices need a lot more memory, so I haven't computed anything more accurate than 313 decimal digits so far, which needed 256MB of pari-gp stack space.  This compute bound vs memory bound matrix stuff is new to me...  Jay's post involved a "4096x4096 system with 7168 bits of floating point precision" implemented in Sage.  How large a stack can you have in pari-gp?
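For reference, the gp stack is resizable; here is a minimal sketch (allocatemem and the parisize default are standard gp commands, but the sizes shown are just examples):

    \\ grow the pari-gp stack at runtime; the argument is in bytes
    allocatemem(256*10^6);    \\ ~256MB: the size used for the 313 digit run above
    default(parisize, 10^9);  \\ alternatively, set the default stack to ~1GB
    \p 77                     \\ working precision, in decimal digits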

So far, for my jpost.gp program, I started by focusing on extensions of the jslog that require a minimal amount of extra computation time beyond initializing the jslog.  But what if you relax that constraint, and make the theta matrix mappings arbitrarily large?  Then I observe that the "stitching" error in post#20 starts to fade away.  For a fixed jslog, with an arbitrarily large number of terms in the theta mapping, do the slogthtr and slogthts functions in some sense converge towards each other?  edit: question.  In general, the precision of the resulting jslogtht is limited by the error term discontinuity in the jslog, where the theta mappings are computed.  Can we approximate this error as the number of terms in jtaylor gets arbitrarily large?  z0rgt is the somewhat arbitrary sample point I've been using: z0rgt=0.9455+0.4824i; errterm ~= jslog(z0rgt) - jslog(ln(z0rgt)) - 1.
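As a sketch, that error term can be measured in gp as follows (this assumes the jslog(z) function from jpost.gp is already loaded; z0rgt is the sample point above):

    \\ an exact Abel function satisfies slog(z) = slog(ln(z)) + 1, so errterm
    \\ measures the Abel equation defect of jslog at the sample point z0rgt
    z0rgt = 0.9455 + 0.4824*I;
    errterm = jslog(z0rgt) - jslog(log(z0rgt)) - 1;
    print(abs(errterm));  \\ magnitude of the stitching/discontinuity error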
- Sheldon