Improving convergence of Andrew's slog
I've already touched on this, but I did want to describe somewhat briefly how I've accelerated convergence of Andrew's slog, with a more detailed description to follow when I have the time.

First of all, let's make the assumption that the solution to the infinite system exists, and furthermore, that the resulting power series has a non-zero radius of convergence. Although the heuristic evidence is fairly strong, we don't know enough to prove this formally. However, as with the Riemann hypothesis, we can still draw solid conclusions which we know are true IF the hypothesis is true.

Anyway, based on these assumptions, let's try to figure out how the smaller systems converge. First, let's start with the infinite system. We'll call the matrix M, and the column vector [1, 0, 0, ...] we'll call B. The coefficients A of the slog are then the solution of the equation MA = B.
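To make that concrete, here's a minimal sketch in Python of how the truncated system can be built and solved. This is purely illustrative: it uses double precision, whereas the real computation needs far more precision, and it assumes the standard setup for Andrew's slog, where row n encodes the coefficient of \( x^n \) in \( \mathrm{slog}(e^x) - \mathrm{slog}(x) = 1 \):

```python
import numpy as np
from math import factorial

def abel_matrix(rows, cols):
    """Truncated Abel matrix for Andrew's slog, base e.

    Row n (n = 0..rows-1), column k (k = 1..cols): the coefficient of
    the unknown a_k in the x^n coefficient of slog(e^x) - slog(x),
    which works out to k^n/n! - delta(n, k).
    """
    M = np.zeros((rows, cols))
    for n in range(rows):
        for k in range(1, cols + 1):
            M[n, k - 1] = k**n / factorial(n)
            if n == k:
                M[n, k - 1] -= 1.0
    return M

n = 100
M = abel_matrix(n, n)          # the naive n x n truncation
B = np.zeros(n); B[0] = 1.0    # the column vector [1, 0, 0, ...]
A = np.linalg.solve(M, B)      # approximate slog coefficients a_1..a_n
```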

Now, below a certain row, say, row 100, let's replace the rows with the corresponding rows of an infinite identity matrix. So rows 1 through 100 will stay the same, but rows 101 to infinity are going to be changed. To keep the system consistent, for each replaced row we also replace the zero in the corresponding entry of the column vector B with the actual coefficient of A we're trying to solve for. This requires knowing the coefficients of A, obviously.

Now it should be clear how Andrew's slog converges on the true solution. We're solving the infinite system, using the modified form I described above, but instead of filling in those entries of B with the true values, we're filling them with 0's. This introduces errors into the first 100 (or n) coefficients to compensate.
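Schematically, the modified infinite system looks like this, where \( e_i \) denotes row i of the identity matrix (my notation, just to illustrate):

\( \begin{pmatrix} M_{1\ldots100} \\ e_{101} \\ e_{102} \\ \vdots \end{pmatrix} A \;=\; \begin{pmatrix} B_{1\ldots100} \\ a_{101} \\ a_{102} \\ \vdots \end{pmatrix} \)

With the true coefficients \( a_i \) on the right-hand side, the exact solution A satisfies this system; with 0's there instead, the error has to show up in \( a_1 \) through \( a_{100} \).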

In order to improve convergence, what I've done is take the coefficients for the two known singularities at 0.31813... + 1.3372357i and its conjugate, and put them in the B vector, rather than 0's. These coefficients aren't the true values from A, but they're much, much closer to the true values. This reduces the errors in the final solution by several orders of magnitude.
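For reference, here's a sketch of where such approximate coefficients can come from. The fixed point L of \( e^z \) satisfies \( e^L = L \), hence \( \log L = L \), and the slog has a logarithmic singularity there. Assuming the singular part behaves like \( \log(1 - z/L)/L \) plus its conjugate (this normalization is an assumption for illustration, not necessarily the exact series used), the coefficients have a simple closed form, continuing the Python sketch above:

```python
L = 0.318131505204764 + 1.337235701430689j  # fixed point of e^z: e^L = L

def singular_coeffs(nmax):
    """Approximate slog coefficients a_1..a_nmax from the two logarithmic
    singularities at L and conj(L).  The singular part is assumed to be
    log(1 - z/L)/L + log(1 - z/conj(L))/conj(L), whose z^n coefficient
    works out to -(2/n) * Re(L^-(n+1)).
    """
    return np.array([(-2.0 / n) * (L**(-(n + 1))).real
                     for n in range(1, nmax + 1)])
```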

Notice that we don't actually calculate the solution to the full system. We end up with, for example, a 100 x infinity system: 100 rows, infinitely many columns. In practice, we truncate this down to perhaps 100x2000. And we don't even have to solve the full system: we can precalculate the product of the 100x1900 block (columns 101 through 2000) with the corresponding approximate coefficients, and subtract it from B.

In other words, assume we're given K as a 100x2000 matrix, A as a column vector with 2000 rows, and B as a column vector with 100 rows. Then, using the known values of K, and the approximations for rows 101 through 2000 of A, we solve the new system:

\( K_{1\ldots100}\, A_{1\ldots100} \;=\; B \,-\, K_{101\ldots2000}\, A_{101\ldots2000} \)

Thus, we only solve a 100x100 system, which effectively solves the 2000x2000 system with very, very good approximations of coefficients 101 through 2000. And note that the right hand side can be precalculated in chunks, reducing memory requirements and making it feasible to solve large systems. For example:

\( B' \;=\; B - K_{101\ldots500}\, A_{101\ldots500} - K_{501\ldots1000}\, A_{501\ldots1000} - K_{1001\ldots1500}\, A_{1001\ldots1500} - K_{1501\ldots2000}\, A_{1501\ldots2000} \)

\( A_{1\ldots100} \;=\; K_{1\ldots100}^{-1}\, B' \)
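Continuing the illustrative Python sketch from above, the whole accelerated procedure looks something like this (the chunk size is arbitrary, and again this is double precision only; in practice it needs high-precision arithmetic):

```python
def accelerated_solve(K, A_tail, B, chunk=500):
    """Fold the tail columns of K into the right-hand side chunk by
    chunk, then solve only the leading square system.

    K      : n x width matrix (the first n rows of the Abel matrix)
    A_tail : approximate coefficients a_{n+1}..a_width
    B      : right-hand side of length n
    """
    n, width = K.shape
    Bp = B.copy()
    for start in range(n, width, chunk):       # B' = B - K_tail * A_tail
        stop = min(start + chunk, width)
        Bp -= K[:, start:stop] @ A_tail[start - n:stop - n]
    return np.linalg.solve(K[:, :n], Bp)       # A_{1..n} = K_{1..n}^{-1} B'

K = abel_matrix(100, 2000)
A_tail = singular_coeffs(2000)[100:]   # approximations for a_101..a_2000
A_head = accelerated_solve(K, A_tail, B)
```

The only full solve is the 100x100 one; everything else is matrix-vector products, which is what makes much larger systems feasible.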

Extending further, I can solve a 700x700 system, using a precomputed vector which gives me the solution to a 700x10,000 system, which itself is the solution to a 10,000x10,000 system with approximations for terms 701 to 10,000. As with the original system, the errors stack up about halfway through, so a 700x700 system is only really accurate out to about 300-400 terms. I should be careful about how I say this, because all 700 terms are at least as accurate as the approximating series I would get if I used only the coefficients of the two singularities.

The residue is the part that is inaccurate, and by the 300th term, its coefficients seem to be about ten orders of magnitude smaller than those of the singularity series I've removed. And by the 300th coefficient, I'm already dealing with extremely small coefficients as it is. So this 700x700 system is probably accurate to a hefty number of decimal places, at least 40, if not 60 or more, for complex inputs about a unit distance from the origin or less.
~ Jay Daniel Fox