02/08/2019, 11:42 PM

(01/30/2019, 05:27 PM)jaydfox Wrote: I've been trying to figure out on my own how to eliminate those coefficients for terms with negative exponents, and my progress has been frustratingly slow. I was re-reading your post more carefully, and I see you already have found a way. I will definitely take the time to understand this, because it seems to be key to solving the Riemann mapping numerically. ...

I'm also playing around with my FFT library, trying to figure out how to cancel out negative terms. I had made some progress in understanding the problem, but hadn't yet made progress in canceling the terms. I'm going to go through your post one more time, and see if I can figure it out.

It's been a while, so I wanted to post an update on my progress.

It's actually rather embarrassing. I had been struggling to find a way to calculate the required Fourier series ("theta" function as Sheldon calls it), so that I could eliminate the terms with the negative exponents in the Riemann mapping of my slog. (I actually struggled with this a few years ago as well, before I lost all my code.)

I had the ah-ha! moment a few days ago. And this is the embarrassing part, because it's the same problem I originally ran into when trying to calculate a Taylor series for the super-exponential function (sexp). That problem is non-linear, so the system becomes harder and harder to solve numerically as its size increases, even with an iterative solver. The simple fix was to solve for the inverse of the sexp function instead, as Andrew Robbins demonstrated: solving for the slog is a linear problem, meaning it can be solved with linear algebra, i.e., matrix math.
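For anyone following along, here's a minimal sketch of what I mean by "linear" (my own illustration of the idea behind Andrew Robbins' approach, not his actual code). Matching powers of z in slog(e^z) = slog(z) + 1 gives equations that are linear in the unknown Taylor coefficients, so a truncated version is just one matrix solve:

```python
import numpy as np
from math import factorial

def robbins_slog_coeffs(N=20):
    """Solve the truncated linear system for the Taylor coefficients
    c_1..c_N of slog(z) about 0, base e, normalized so slog(0) = -1.
    From slog(e^z) = slog(z) + 1, matching the coefficient of z^n gives
        sum_k c_k * k^n / n!  =  c_n     for n >= 1,
    and the constant term gives sum_k c_k = 1."""
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, :] = 1.0                # n = 0 row: sum_k c_k = 1
    b[0] = 1.0
    for n in range(1, N):
        for k in range(1, N + 1):
            A[n, k - 1] = k**n / factorial(n)
        A[n, n - 1] -= 1.0       # move the c_n term to the left-hand side
    return np.linalg.solve(A, b)

def slog(z, c):
    """Evaluate the truncated slog series; c[k] holds c_{k+1}."""
    return -1 + sum(ck * z**(k + 1) for k, ck in enumerate(c))

c = robbins_slog_coeffs(30)
```

Re-solving at larger N gives successively better approximations; if I've set this up right, c[0] should approach Robbins' published c_1 of roughly 0.9159 as N grows.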

I have been trying to solve the problem of directly calculating the coefficients of the Riemann mapping, which maps the unit disk to the Kneser region bounded by the transform of the real line. This is a non-linear problem, because the theta function has to be applied to the input of the jsexp function.

The solution is to solve the inverse operation, mapping the regular slog to the jslog. In this formulation, the theta function is applied to the output of the jslog, where it enters linearly. Hence, eliminating the unwanted coefficients from this mapping is a problem suited to matrix math. When Sheldon was describing the solution, I was still thinking in the wrong space, which is why I wasn't able to understand. I was still working in the Kneser space, i.e., exp(2*pi*i*jslog(z)), when I should have been working with jslog(z)-rslog(z), or some similar form.
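To make the "it's linear" point concrete, here's a toy sketch (my own illustration, definitely not Sheldon's actual algorithm, and the sample curve is made up): if a 1-periodic correction theta(w) = sum a_n e^(2 pi i n w) is added on the output side, the unknown coefficients a_n appear linearly in the matching conditions, so recovering them is a single least-squares solve:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # number of theta coefficients kept
a_true = rng.normal(size=N) + 1j * rng.normal(size=N)

# Sample points on a horizontal line in the upper half-plane, standing in
# for values of jslog(z) - rslog(z) along some curve (purely illustrative).
w = np.linspace(0, 1, 50, endpoint=False) + 0.1j
basis = np.exp(2j * np.pi * np.outer(w, np.arange(1, N + 1)))  # e^{2 pi i n w}
F = basis @ a_true                       # the "known" samples to be matched

# Because theta is added to the output (not composed into the input), the
# a_n enter linearly, and one matrix solve recovers them:
a_fit, *_ = np.linalg.lstsq(basis, F, rcond=None)
```

Contrast this with applying theta to the input: the unknowns then sit inside a composition, the matching conditions become non-linear in the a_n, and you're back to iterative solvers. That's exactly the trap I kept falling into.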

That was my first ah-ha moment. But the biggest surprise came when I fully understood the effect of solving for the complex conjugate of the negative terms. For a real base like base e, the conjugation step is trivial: the rslog at the lower fixed point is just the complex conjugate of the rslog at the upper fixed point. But for a non-real base, there are two genuinely different regular slogs to consider, at the upper and lower fixed points. I would need to map the jslog for both of those rslogs, and then cancel out the negative terms in both of them simultaneously. I have to admit I was amazed and excited when that clicked. I've wondered for years how Sheldon managed to combine the upper and lower rslog functions to derive his solution; it never made sense to me before. Now I understand!

I could have "cheated" and looked at Sheldon's code, but it was far more fulfilling (and frustrating) to work it out on my own. I'm still developing my code, and I haven't run analysis on the rate of convergence yet. I'm curious to compare my results to Sheldon's, to see if we're using the same algorithms or not. I'll post results in the next few days. But I did want to share my excitement that I've made significant progress.

~ Jay Daniel Fox