12/13/2007, 02:05 AM

I've taken the time to analyze the rate of convergence of various solutions of truncations of the matrix method (Andrew's solution, essentially the Abel matrix solution).

I started with the accelerated version, mainly because I'm confident that it converges on the same solution as the natural solution, but fast enough to be useful for numerical analysis.

For all the graphs that follow, I found the accelerated solutions of the following matrix sizes: 16, 24, 32, 48, 64, 88, 128, 184, 256, 360, 512. Note that these are half-powers of 2, from 2^4 to 2^9, with the non-integer powers (e.g., 2^6.5 ≈ 90.51) rounded to a multiple of 8.
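In case anyone wants to reproduce the list of sizes, here's a one-liner sketch (my own reconstruction, not part of the original computation) that generates half-powers of 2 from 2^4 to 2^9, rounding each to the nearest multiple of 8:

```python
# Half-powers of 2 from 2^4 to 2^9 (k/2 runs over 4.0, 4.5, ..., 9.0),
# each rounded to the nearest multiple of 8.
sizes = [round(2 ** (k / 2) / 8) * 8 for k in range(8, 19)]
print(sizes)  # [16, 24, 32, 48, 64, 88, 128, 184, 256, 360, 512]
```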

So, first things first, here's a graph of the accumulated absolute error in the coefficients. Note that this will greatly exceed the maximum error for the slog of any point on the unit disc, so this isn't a measure of error per se, but it's a good indicator, because the maximum error should be within one or two orders of magnitude of the accumulated absolute error.

A few things to mention. The scale on the y-axis is in bits. The red lines are integer powers of 2, and the blue lines are the rounded half-powers. The red line at the top is for the 16x16 solution, and the red line at the bottom is for the 512x512 solution.

Essentially, each time you double the system size (double the rows and the columns, so yes, four times as many matrix elements), you get an additional 6.8 bits of precision, give or take.
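To put that scaling in concrete terms, here's a small sketch that extrapolates the precision gain, assuming the ~6.8 bits per doubling rate holds (the rate and the baseline are read off the graphs, not derived):

```python
import math

# Assumed empirical rate: ~6.8 extra bits of precision each time the
# number of rows/columns of the accelerated system is doubled.
def predicted_gain_bits(n, n0=16, bits_per_doubling=6.8):
    """Extra bits of precision expected going from an n0 x n0
    system to an n x n system."""
    return bits_per_doubling * math.log2(n / n0)

# Going from 16x16 to 512x512 is five doublings, so roughly 34 extra bits.
print(predicted_gain_bits(512))
```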

I find it interesting, because 6.8 is very close to the periodicity of the terms of the singularities I use for the accelerated solution. (Actually, the periodicity is half that, about 3.36, but if you count in doublings of the number of matrix elements rather than doublings of the rows, the relationship holds.)

On top of that, the precision lost when solving the accelerated system is about 1.375 bits per row or column, and 1.375 is very close to 1.374557, the distance of the primary singularities from the origin. Coincidence? I'd like to know!
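For reference, the 1.374557 figure is the modulus of the primary fixed point of e^z, which is where the slog's primary singularities sit. A quick sketch to recover it: iterating z → log z (principal branch) converges to the upper fixed point, since that fixed point is attracting for the logarithm.

```python
import cmath

# The primary singularities of the slog sit at the fixed points of
# e^z.  Iterating z -> log(z) converges to the upper fixed point.
z = 1j
for _ in range(200):
    z = cmath.log(z)

print(abs(z))  # about 1.374557
```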

Anyway, to make it easier to see that the relationship is almost linear (1 bit of rows for 6.8 bits of precision), here's a chart with logarithmic scales on both axes. So, 4 bits of rows means 16 rows, and 9 bits means 512 rows.

~ Jay Daniel Fox