Revisting my accelerated slog solution using Abel matrix inversion
sheldonison (Long Time Fellow, Posts: 641, Threads: 22, Joined: Oct 2008)
02/09/2019, 02:25 PM (This post was last modified: 02/10/2019, 12:18 AM by sheldonison.)

Quote:... the Riemann mapping ... is a non-linear problem, because the theta function has to be applied to the input of the jsexp function. ...The solution is to solve the inverse operation, mapping the regular slog to the jslog. In this situation, the theta function is applied to the output of the jslog, which is linear. Hence ... suited to matrix math... That was my first ah-ha moment. But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms. Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point... I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution... I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own...

Just some quick clarifications on "Sheldon's" codes; there are many of them.

kneser.gp and tetcomplex.gp actually do compute the 1-cyclic non-linear theta/Riemann mapping iteratively. Some of the internal code is really ugly, though, and I don't use it anymore.

fatou.gp computes Kneser's slog, $\text{slogk}(z)=\alpha(z)+\theta_{S}\left(\alpha(z)\right)$, which takes advantage of Jay's accelerated technique but does not use Jay's slog or Andrew's slog. I think Jay would call my complex-valued Abel function $\alpha(z)$ the rslog. fatou.gp is a completely different iterative algorithm. It works for a very large range of real and complex bases (using the Schröder functions of both fixed points), and has a default precision of \p 38, which gives ~35-decimal-digit accurate results and initializes for base "e" in 0.35 seconds.
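The key point in the quote, that a theta mapping applied to the *output* of a known function is linear in theta's unknown coefficients, can be illustrated with a toy sketch. Everything below (the cosine basis, the sample points, the coefficient values) is hypothetical and stands in for the real jslog/rslog machinery; the point is only that the unknowns enter the matching conditions linearly, so a direct linear solve replaces any iteration.

```python
import math

# Toy illustration: a 1-periodic correction theta(u) with unknown Fourier
# cosine coefficients.  Because theta is applied to a KNOWN quantity u,
# each matching condition is linear in the coefficients, so a plain
# linear-system solve recovers them.  (Hypothetical basis and data.)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for an n x n system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

true_coeffs = [0.3, -0.1, 0.05]        # stand-in cosine coefficients of theta

def theta(u, coeffs):
    return sum(c * math.cos(2 * math.pi * (k + 1) * u) for k, c in enumerate(coeffs))

samples = [0.11, 0.37, 0.62]           # one sample point per unknown coefficient
A = [[math.cos(2 * math.pi * (k + 1) * u) for k in range(3)] for u in samples]
b = [theta(u, true_coeffs) for u in samples]
fit = solve(A, b)                      # recovers true_coeffs up to rounding
```

With more sample points than coefficients this becomes a least-squares fit instead of an exact solve, but it stays linear either way, which is what makes the problem "suited to matrix math."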
For \p 77, the time increases to ~5 seconds. Because this is a linear problem, I eventually also got a matrix implementation of fatou.gp working, but the iterative technique has the advantage that the code can figure out the required parameters as it iterates.

jpost.gp, from post #20, was developed entirely during the lifetime of this thread, using the Schröder and fixed-point functions from fatou.gp. jpost.gp computes $\text{kslog}(z)\approx\text{jslog}(z)+\theta_{R}\left(\text{jslog}(z)\right)\approx\alpha(z)+\theta_{S}\left(\alpha(z)\right)$ by cancelling the negative terms in the $\theta_S$ mapping using a second matrix. I have only calculated $\theta_{R}$ for base "e", and not for any other real-valued or complex bases.

As far as I can tell, the jpost.gp algorithm is probably an order of magnitude faster than fatou.gp for computing very high precision results. The only downside is that matrices need a lot more memory, so the most I have computed so far is 313 accurate decimal digits, which needed 256 MB of pari-gp stack space. This compute-bound vs. memory-bound matrix trade-off is new to me... Jay's post involved a "4096x4096 system with 7168 bits of floating point precision", implemented in Sage. How large a stack can you have in pari-gp?

So far, in my jpost.gp program I have focused on extensions of the jslog that require minimal extra computation time beyond initializing the jslog itself. But what if you relax that constraint and make the $\theta_{Rj};\;\theta_{Sj}$ matrix mappings arbitrarily large? Then I observe that the "stitching" error from post #20 starts to fade away. For a fixed jslog, with an arbitrarily large number of terms in $\theta_{Rj};\;\theta_{Sj}$, do the slogthtr and slogthts functions in some sense converge towards each other?

edit: another question. In general, the precision of the resulting jslogtht is limited by the error-term discontinuity in the jslog, where the theta mappings are computed.
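A rough sense of why the matrix approach is memory bound: a dense n x n system at p bits of precision already costs on the order of n²·p/8 bytes before any LU workspace. The per-entry overhead below is an assumption (real multiprecision formats such as PARI's t_REAL add header and limb bookkeeping), so this is only a back-of-envelope sketch, not a statement about what Sage or pari-gp actually allocate.

```python
# Back-of-envelope memory for a dense n x n multiprecision matrix.
# overhead_bytes is a guessed per-entry header cost, not a measured one.

def matrix_bytes(n, precision_bits, overhead_bytes=16):
    entry = precision_bits // 8 + overhead_bytes   # mantissa bytes + header guess
    return n * n * entry

jays_system = matrix_bytes(4096, 7168)   # the quoted 4096x4096, 7168-bit system
print(jays_system / 2**30, "GiB")        # on the order of 14 GiB for the matrix alone
```

By the same estimate, a few hundred decimal digits (roughly 1000-2000 bits) over a much smaller matrix lands in the hundreds-of-megabytes range, consistent with the 256 MB stack mentioned above.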
Can we approximate this error as the number of terms in jtaylor gets arbitrarily large? z0rgt is the somewhat arbitrary test point I've been using: z0rgt=0.9455+0.4824i; $\text{errterm}\approx\text{jslog}(\text{z0rgt})-\text{jslog}(\ln(\text{z0rgt}))-1$.

- Sheldon
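That errterm expression measures how well an approximate slog satisfies the Abel functional equation $\text{slog}(z)=\text{slog}(\ln z)+1$. As a minimal sketch of what the check does, here is the crude piecewise-linear slog (not the jslog, and base e only), for which the residual is zero by construction, since the extension is defined by iterating the Abel equation itself:

```python
import math

def slog_linear(x):
    """Crude base-e slog: slog(x) = x - 1 on [0, 1], extended everywhere
    else by iterating the Abel equation slog(exp(x)) = slog(x) + 1.
    (Standard textbook construction, not jslog or kslog.)"""
    n = 0
    while x > 1.0:          # pull x down into [0, 1] with logarithms
        x = math.log(x)
        n += 1
    while x < 0.0:          # push x up into [0, 1] with exponentials
        x = math.exp(x)
        n -= 1
    return (x - 1.0) + n

def abel_residual(slog, x):
    # errterm-style check: zero exactly when the Abel equation holds at x
    return slog(x) - slog(math.log(x)) - 1.0
```

For a real slog approximation like jslog the residual is tiny but nonzero, and sampling it at a point such as z0rgt is exactly the kind of accuracy probe described above.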


