01/06/2019, 12:54 AM

Happy New Year!

A few weeks ago, I started getting interested in revisiting my accelerated slog solution. The basics of my method are discussed in this thread:

Improving convergence of Andrew's slog

A note on that thread. The inverse Abel matrix approach was not discovered by me. I don't know for sure who first used and published the method, but I'm reasonably certain that my first introduction to the method was through Andrew Robbins, so I give him the credit. As far as I know, however, I was the first to try to accelerate convergence with the method described in the linked thread. Indeed, I don't think I've seen it described anywhere else. If someone knows of another person using this method, please let me know!

Anyway, I bring this all up because I am once again playing with this solution. I've modified my matrix solver to use Gaussian elimination to perform an LU decomposition. Due to memory constraints, I don't actually save the L matrix, and I've modified the code to swap rows of the U matrix out to disk. This has allowed me to solve larger and larger systems without RAM becoming a bottleneck. The downside is a 20%-40% reduction in speed. (I'm fortunate to have an SSD; the performance might not be as tolerable on a traditional hard drive.)
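For anyone curious, the out-of-core idea can be sketched in a few lines. This is a hypothetical toy in Python/NumPy, not my actual SAGE code (which works at thousands of bits of precision, not machine doubles): the L multipliers are applied on the fly and thrown away, and each finished row of U is spilled to disk, so RAM only needs to hold the rows that still have eliminations pending.

```python
# Toy sketch of the out-of-core LU idea (hypothetical illustration,
# not the real arbitrary-precision SAGE solver).
import os
import tempfile
import numpy as np

def solve_spilling_u(A, b):
    """Solve A x = b by Gaussian elimination without pivoting.

    The L multipliers are used once and never stored; each finished
    row of U is written to disk and read back only at back-substitution.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    tmpdir = tempfile.mkdtemp()
    row_files = []
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]      # entry of L: applied, then dropped
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
        path = os.path.join(tmpdir, f"u_row_{k}.npy")
        np.save(path, A[k, k:])        # row k of U is final: spill it
        row_files.append(path)
    # Back-substitution, reading the U rows back in reverse order.
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        row = np.load(row_files[k])
        x[k] = (b[k] - row[1:] @ x[k + 1:]) / row[0]
        os.remove(row_files[k])
    return x
```

In a real out-of-core solver the matrix itself would also be too big for RAM; here it stays in memory just to keep the sketch short.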

I've managed to solve a 4096x4096 system with 7168 bits of floating-point precision. It took about six days. Based on previous estimates of the convergence rate and the precision lost in solving the system, the final result should have over 1500 bits of precision, which is far more than needed (for practical applications, 80-100 bits of accuracy is probably all that's required).

I will post the SAGE code soon. The forum isn't letting me attach it, so I might need to change the file name or format. At any rate, it needs to be cleaned up a bit.

However, I will attach the first 1,000 coefficients of my accelerated 4096x4096 matrix solution. Why only a thousand? Two reasons. First, based on some meta-analysis I've done (which I'll post in a few weeks, when I have time to write it up properly), I believe only the first 600-700 terms are accurate anyway. For example, if you wanted to perform a series reversion to get the sexp function, you would get essentially "noise" after about the 600th term.
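For anyone who wants to try the reversion themselves, here is a minimal sketch of term-by-term power series reversion in Python. This is the generic textbook approach, not my accelerated code: given f with f(0) = 0 and f'(0) != 0, each new coefficient of the inverse g falls out of matching one more power of x in f(g(x)) = x.

```python
def mul_trunc(p, q, N):
    """Product of polynomials p and q (coefficient lists), truncated to degree N."""
    r = [0.0] * (N + 1)
    for i, pi in enumerate(p[:N + 1]):
        if pi:
            for j, qj in enumerate(q[:N + 1 - i]):
                r[i + j] += pi * qj
    return r

def compose_trunc(f, g, N):
    """f(g(x)) mod x^(N+1) by Horner's rule; requires g[0] == 0."""
    r = [0.0] * (N + 1)
    for c in reversed(f[:N + 1]):
        r = mul_trunc(r, g, N)
        r[0] += c
    return r

def revert(f, N):
    """Coefficients of g with f(g(x)) = x + O(x^(N+1)); needs f[0] == 0, f[1] != 0."""
    g = [0.0] * (N + 1)
    g[1] = 1.0 / f[1]
    for n in range(2, N + 1):
        # With g[n] still zero, the x^n coefficient of f(g) is off by
        # exactly f[1] * g[n], so g[n] is determined directly.
        c = compose_trunc(f, g, n)
        g[n] = -c[n] / f[1]
    return g

# Example: f(x) = x + x^2 has inverse x - x^2 + 2x^3 - 5x^4 + ...
# (signed Catalan numbers).
```

In floating point this naive O(N^3) scheme also shows the "noise" effect nicely: rounding errors compound as n grows, which is the same flavor of degradation I'm describing for the high-order sexp terms.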

The second reason is that the forum limits attachments to 200 kB. The full 4096 terms were well over the limit, so I had to truncate the file anyway. As long as I was truncating it, I figured I'd limit it to something reasonable.

Please note, these coefficients are for the power series of the slog "residue" (i.e., the slog function minus the logarithms at the primary fixed points). I know "residue" is already a well-defined mathematical concept, but I don't have a better name for it yet. It's what I called it in my original posts on this subject over 11 years ago, so there's no sense changing the name flippantly; I will work on finding a better one. Anyway, to recover the slog, calculate the Taylor series for the logarithms at the primary fixed points and add it to this series. For "practical" calculations, just evaluate the logarithms directly and add this function. I may devote a later forum post to a demonstration.
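To make that recipe concrete, here is a rough Python sketch of how I'd expect the coefficients to be used, with placeholder coefficients. The fixed-point value is the standard primary fixed point of exp(z); the exact form and normalization of the logarithmic terms, log(z - L)/log(L) at each fixed point (the multiplier at a fixed point L of exp is exp(L) = L), is my assumption for illustration, so check it against the original construction before relying on it.

```python
# Sketch (hypothetical placeholder coefficients) of combining the residue
# series with logarithmic terms at the primary fixed points of exp.
# The normalization log(z - L)/log(L) is an assumption for illustration.
import cmath

# Primary fixed point of exp(z); the other primary fixed point is its conjugate.
L = 0.3181315052047641 + 1.3372357014306895j

def residue(z, coeffs):
    """Evaluate the residue power series at z by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

def slog_approx(z, coeffs):
    """slog(z) ~ residue(z) plus the log terms at the two primary fixed points."""
    log_part = (cmath.log(z - L) / cmath.log(L)
                + cmath.log(z - L.conjugate()) / cmath.log(L.conjugate()))
    return residue(z, coeffs) + log_part
```

For real z the two log terms are complex conjugates, so their sum is real, which is a quick sanity check that the pairing of fixed points is doing its job.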

~ Jay Daniel Fox