08/12/2010, 04:43 AM
(This post was last modified: 08/12/2010, 03:38 PM by sheldonison.)

(08/11/2010, 06:11 PM)sheldonison Wrote: ... the sexp(z) is about 100-200x more accurate than the RiemannCircle function from which it is generated.

Well, actually, I have some theories, which were going through my head before I got caught up in implementing and debugging the pari-gp code. The high-frequency terms in the RiemannCircle function only affect points closest to the real axis, due to exponential decay as imag(z) increases.

I don't really know why this is so.

In fact, exponential decay means that all terms a_n with n>=1 should improve in this loop:

Riemann(z) -> sexp(z) -> Riemann(z).

The real error term in a_0 gets cancelled out by the recentering. The imaginary error term in a_0 gets cancelled out by the Schwarz reflection.

The real error term in a_1 is not a factor for points near the imag(z)=1 part of the circle (e^-2Pi is ~= 0.002), so the error term gets factored down by about 1/10th if you do the average over 100 points. The imaginary error term probably cancels in the Schwarz reflection, so that's a factor of 1/20th. The error term improvement for the higher-frequency coefficients gets exponentially better, with each term's error contribution improving twice as much as the term before it.
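For what it's worth, those exponential-decay factors are easy to tabulate. A minimal Python sketch (the coefficient names a_n follow the series above; everything else is just illustration):

```python
import math

# The a_n term of a 1-cyclic series behaves like exp(2*Pi*i*n*z), so on the
# imag(z)=1 part of the circle its magnitude is scaled by exp(-2*Pi*n).
# For n=1 this is the ~0.002 factor mentioned above.
for n in range(1, 5):
    print(f"a_{n} scaled by exp(-2*Pi*{n}) = {math.exp(-2 * math.pi * n):.3e}")
```

So each higher-frequency term's contribution near imag(z)=1 is smaller than the previous one by another factor of e^-2Pi.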

So there is some hope that a mathematical argument for convergence can be made, with some effort taken to describe how the coefficients of the RiemannCircle function converge from one iteration to the next. Also, I could graph the convergence of the various terms in the Taylor series for the RiemannCircle function, and verify that most of the error is in the lowest-frequency term, unless the computation limits cause chaotic results (which they sometimes do, as noted in my earlier post). I should recreate and analyze the chaotic case for base 10 sexp(z) at extended precision with the sexp(z) series at 150 terms; I also found a setting that causes chaos for base e.

update: it occurs to me that this is a general principle. Let's say g(z) is a function with real values at the real axis, and f(z) is an approximation over a unit length, such that f(0)=g(0) and f(1)=g(1). Then there is some 1-cyclic Fourier series function theta(z) defining f(z) over the entire real axis.

Now, theta(z) is probably not even an analytic function. But we could wrap the real-valued theta(z) around the unit circle, where it would have a Laurent series, and throw out all of the a_n*z^-n terms in that Laurent series. The resulting theta_2(z) is still 1-cyclic, but it is complex-valued at the real axis.
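To make the "wrap theta(z) around the unit circle and throw out the a_n*z^-n terms" step concrete, here is a hypothetical numpy sketch. The sample theta is made up; only the drop-the-negative-half operation mirrors the text, with discrete Fourier coefficients standing in for the Laurent coefficients:

```python
import numpy as np

# A made-up real-valued 1-cyclic theta(z), sampled over one period.
N = 64
z = np.arange(N) / N
theta = 0.3 * np.cos(2 * np.pi * z) + 0.1 * np.sin(4 * np.pi * z)

# Wrapped around the unit circle, the discrete Fourier coefficients
# play the role of the Laurent coefficients a_n.
c = np.fft.fft(theta) / N

# Throw out the a_n * z^-n terms (the negative-frequency half).
c2 = c.copy()
c2[N // 2 + 1:] = 0

# theta_2(z): still 1-cyclic, but now complex-valued at the real axis.
theta2 = np.fft.ifft(c2) * N
print("max |imag(theta_2)| on the real axis:", np.max(np.abs(theta2.imag)))
```

The printed maximum is nonzero, confirming that discarding the z^-n half of the series destroys realness on the real axis, exactly as described above.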

You might take this new f_2 function and use it to generate another function, f_3: generate the Taylor series of f_2(z) over a half circle, using the complex conjugate for the other half of the circle where imag(z)<0. At this point, you should see the connection between the sequence of functions f(z), f_2(z), f_3(z), and the iterative algorithm for the sexp(z) function I have described, where the sequence f, f_2, f_3, ... converges to the desired g(z) function.
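A toy model of that alternation can be sketched in Python. This is not the actual pari-gp sexp algorithm (which also re-imposes the sexp functional equation on each pass); it only shows the two operations from the text, dropping the a_n*z^-n terms and Schwarz-reflecting, shrinking the non-constant coefficients geometrically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
# Arbitrary complex Fourier/Laurent coefficients, standing in for the
# error terms of an approximation f(z).
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def drop_negative(c):
    # Keep only a_n with n >= 0, i.e. discard the a_n * z^-n terms.
    out = c.copy()
    out[N // 2:] = 0
    return out

def schwarz(c):
    # Impose f(conj(z)) = conj(f(z)) on the coefficients:
    # c_n <- (c_n + conj(c_{-n})) / 2.
    idx = (-np.arange(N)) % N
    return (c + np.conj(c[idx])) / 2

errs = []
for k in range(6):
    c = schwarz(drop_negative(c))
    errs.append(np.linalg.norm(c[1:]))  # everything except the constant term
    print(k, errs[-1])
```

In this linear toy, each pass halves the non-constant part (the Schwarz reflection's factor of 1/2 from above), so the printed errors shrink geometrically; in the real algorithm the exponential decay of the high-frequency terms improves on that further.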

- Sheldon