Revisiting my accelerated slog solution using Abel matrix inversion
#21
(01/30/2019, 05:27 PM)jaydfox Wrote: I've been trying to figure out on my own how to eliminate those coefficients for terms with negative exponents, and my progress has been frustratingly slow.  I was re-reading your post more carefully, and I see you already have found a way.  I will definitely take the time to understand this, because it seems to be key to solving the Riemann mapping numerically.  ...

I'm also playing around with my FFT library, trying to figure out how to cancel out negative terms.  I had made some progress in understanding the problem, but hadn't yet made progress in canceling the terms.  I'm going to go through your post one more time, and see if I can figure it out.

It's been a while, so I wanted to post an update on my progress.

It's actually rather embarrassing.  I had been struggling to find a way to calculate the required Fourier series ("theta" function as Sheldon calls it), so that I could eliminate the terms with the negative exponents in the Riemann mapping of my slog.  (I actually struggled with this a few years ago as well, before I lost all my code.)

I had the ah-ha! moment a few days ago.  And this is the embarrassing part, because it's the same problem I originally ran into when trying to calculate a Taylor series for the super-exponential function (sexp).  The problem is non-linear, so the system becomes more and more difficult to solve numerically as its size increases, even with an iterative solver.  The simple solution to that problem was to solve for the inverse of the sexp function, as Andrew Robbins demonstrated.  Solving for the slog is a linear problem, meaning it can be solved with linear algebra, i.e., matrix math.
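As an aside, here is a minimal GP sketch of the linear system behind Andrew's slog for base e (my own reconstruction, not Andrew's actual code): truncate slog(z) = a_0 + a_1*z + ... + a_n*z^n, and the Abel equation slog(exp(z)) = slog(z) + 1 gives one linear equation per power of z.

\\ minimal sketch; row j encodes the coefficient of z^(j-1), column k the unknown a_k
n = 32;
M = matrix(n, n, j, k, 1.0*k^(j-1)/(j-1)! - (k==j-1));
rhs = vectorv(n, j, (j==1));           \\ only the z^0 equation has RHS 1
a = matsolve(M, 1.0*rhs);              \\ slog coefficients a_1..a_n
slogapprox(z) = -1 + sum(k=1, n, a[k]*z^k);  \\ a_0 = -1, so slog(1) = 0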

I have been trying to solve the problem of directly calculating the coefficients of the Riemann mapping, which maps the unit disk to the Kneser region bounded by the transform of the real line.  This is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

The solution is to solve the inverse operation, mapping the regular slog to the jslog.  In this situation, the theta function is applied to the output of the jslog, which is linear.  Hence, eliminating the coefficients from this mapping is a problem suited to matrix math.  When Sheldon was describing the solution, I was still thinking in the wrong space, which is why I wasn't able to understand.  I was still working in the Kneser space, i.e., exp(2*pi*i*jslog(z)).  I should have been working with jslog(z)-rslog(z), or some similar form.
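To spell out the distinction in formulas (my notation, as a hedged restatement of the two setups): the direct Riemann-mapping form puts the unknown theta coefficients inside jsexp,
\[ \operatorname{sexp}(z) \;=\; \operatorname{jsexp}\bigl(z + \theta(z)\bigr), \qquad \theta(z) = \sum_n c_n\, e^{2\pi i n z}, \]
which is non-linear in the \(c_n\); whereas the inverse form puts them outside,
\[ \operatorname{jslog}(z) - \operatorname{rslog}(z) \;=\; \theta\bigl(\operatorname{jslog}(z)\bigr) \;=\; \sum_n c_n\, e^{2\pi i n\, \operatorname{jslog}(z)}, \]
which is linear in the \(c_n\), since jslog is already known.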

That was my first ah-ha moment.  But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms.  Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point.  But for a non-real base, there will be two different regular slogs to consider, at the upper and lower fixed points.  I would need to map the jslog for both of those rslogs, and then cancel out the negative terms in both of them simultaneously.  I have to admit that I was extremely amazed and excited when that clicked.  I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution.  It didn't seem to make sense to me before.  Now I understand!
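(For the record, the conjugation fact at work here is standard Schwarz reflection, stated in my own notation: for a real base the two fixed points are a conjugate pair \(L, \bar L\), and
\[ \operatorname{rslog}_{\bar L}(z) \;=\; \overline{\operatorname{rslog}_{L}(\bar z)}, \]
so cancelling the negative terms against one rslog handles the other automatically on the real axis; for a non-real base the two fixed points are unrelated, which is why both mappings must be cancelled simultaneously.)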

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own.  I'm still developing my code, and I haven't run analysis on the rate of convergence yet.  I'm curious to compare my results to Sheldon's, to see if we're using the same algorithms or not.  I'll post results in the next few days.  But I did want to share my excitement that I've made significant progress.
~ Jay Daniel Fox
#22
Quote:... the Riemann mapping ... is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

...The solution is to solve the inverse operation, mapping the regular slog to the jslog.  In this situation, the theta function is applied to the output of the jslog, which is linear.  Hence ... suited to matrix math...

That was my first ah-ha moment.  But the biggest surprise for me was when I fully understood the effect of solving for the complex conjugate of the negative terms.  Given a real base like base e, the conjugation step is a trivial solution to the rslog of the lower fixed point... I've been wondering for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution...  

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own...  

Just some quick clarifications on "Sheldon's" codes.   There are many of them.  
  • kneser.gp and tetcomplex.gp actually do compute the 1-cyclic non-linear theta/Riemann mapping iteratively.  Some of the internal code is really ugly, though.  I don't use them anymore ...
  • fatou.gp computes Kneser's slog, which takes advantage of Jay's accelerated technique, but it does not use Jay's slog or Andrew's slog.  I think Jay would call my complex-valued Abel function the rslog.  fatou.gp is a completely different iterative algorithm.  fatou.gp works for a very large range of real and complex bases (using both fixed points' Schröder functions), and has a default precision of \p 38, which gives results accurate to ~35 decimal digits, and initializes for base "e" in 0.35 seconds.  For \p 77, the time increases to ~5 seconds.  Because this is a linear problem, I eventually also got a matrix implementation of fatou.gp working; but the iterative technique has the advantage that the code can figure out the required parameters as it iterates.
  • jpost.gp, in post #20, was developed entirely during the lifetime of this thread, using the Schröder and fixed-point functions from fatou.gp.  jpost.gp computes the theta mapping by cancelling the negative terms in the mapping using a second matrix (a generic sketch of this kind of linear solve follows below).  I have only calculated it for base "e" and not for any other real valued bases or complex bases.
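A generic sketch of the kind of "second matrix" linear solve involved (this is a pattern only, not the actual jpost.gp algorithm; someDiscrepancy is a hypothetical stand-in for whatever sampled 1-cyclic data is being fit):

\\ fit theta(w) = sum(n=1..N) c_n*exp(2*Pi*I*n*w) to sampled 1-cyclic data
\\ by collocation; cancelling targeted terms then reduces to one matsolve.
N = 32;
w = vector(N, j, (j-1)/N);                   \\ equally spaced sample points
d = vector(N, j, someDiscrepancy(w[j]));     \\ hypothetical sampled values
B = matrix(N, N, j, n, exp(2*Pi*I*n*w[j]));  \\ the "second matrix"
c = matsolve(B, d~);                         \\ Fourier coefficients c_1..c_N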
As far as I can tell, the jpost.gp algorithm is probably an order of magnitude faster than fatou.gp for computing very high precision results.  The only downside is that matrices need a lot more memory, so I haven't computed anything more accurate than 313 decimal digits so far, which needed 256 MB of pari-gp stack space.  This compute-bound vs. memory-bound matrix stuff is new to me... Jay's post involved a "4096x4096 system with 7168 bits of floating point precision" implemented in sage.  How large a stack can you have in pari-gp?
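To partially answer my own stack question (these are standard pari-gp facilities, not anything specific to these scripts; the stack is limited only by available RAM):

allocatemem(2^30)        \\ grow the pari stack to 1 GiB at runtime
default(parisize, 2^30)  \\ or set the default stack size directly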

So far, for my jpost.gp program, I started by focusing on extensions of jslog that require only a minimal amount of extra computation time beyond initializing jslog itself.  But what if you relax that constraint, and make the theta-mapping matrices arbitrarily large?  Then I observe that the "stitching" error in post #20 starts to fade away.  For a fixed jslog, with an arbitrarily large number of terms in theta, do the slogthtr and slogthts functions in some sense converge towards each other?  Edit: a further question.  In general, the precision of the resulting jslogtht is limited by the discontinuity in the jslog's error term where the theta mappings are computed.  Can we approximate this error term as the number of terms in jtaylor gets arbitrarily large?  z0rgt is the somewhat arbitrary sample point I've been using: z0rgt = 0.9455+0.4824i; errterm ~= jslog(z0rgt) - jslog(ln(z0rgt)) - 1.
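That error-term check, written out as a runnable GP fragment (assuming jslog from jpost.gp has been initialized; GP's natural log is log, where I wrote ln above):

\\ the Abel equation jslog(exp(z)) = jslog(z) + 1 makes this ~0 for an exact slog
z0rgt = 0.9455 + 0.4824*I;
errterm = jslog(z0rgt) - jslog(log(z0rgt)) - 1;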
- Sheldon