Convergence of matrix solution for base e
I've taken the time to analyze the rate of convergence of various truncations of the matrix method (Andrew's solution, essentially the Abel matrix solution).

I started with the accelerated version, mainly because I'm confident that it converges on the same solution as the natural solution, but fast enough to be useful for numerical analysis.

For all the graphs that follow, I computed the accelerated solutions for the following matrix sizes: 16, 24, 32, 48, 64, 88, 128, 184, 256, 360, 512. Note that these are half-powers of 2, from 2^4 to 2^9, with the non-integer powers (e.g., 2^6.5 ≈ 90.51) rounded to the nearest multiple of 8.
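That list of sizes can be reproduced with a one-liner; the rounding rule (nearest multiple of 8) is my reading of the list above, not something stated explicitly:

```python
# Matrix sizes as half-powers of 2, from 2^4 (k=8) to 2^9 (k=18),
# with non-integer powers rounded to the nearest multiple of 8.
sizes = [8 * round(2 ** (k / 2) / 8) for k in range(8, 19)]
print(sizes)  # [16, 24, 32, 48, 64, 88, 128, 184, 256, 360, 512]
```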

So, first things first, here's a graph of the accumulated absolute error in the coefficients. Note that this will greatly exceed the maximum error of the slog at any point on the unit disc, so it isn't a measure of error per se; but it's a good indicator, because the maximum error should be within one or two orders of magnitude of the accumulated absolute error.
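For concreteness, here's a minimal sketch of what I mean by "accumulated absolute error": sum the absolute coefficient differences against a reference solution, and express the total in bits. The coefficient values below are made up purely for illustration; the real ones come from solving the truncated Abel matrix system.

```python
import math

def accumulated_abs_error_bits(coeffs, ref_coeffs):
    # Sum |a_k - b_k| over the shared coefficients; report the total
    # as bits of precision (negative log base 2).
    total = sum(abs(a - b) for a, b in zip(coeffs, ref_coeffs))
    return -math.log2(total) if total > 0 else math.inf

# Illustrative, made-up coefficient vectors (not real slog data):
approx = [0.91, 0.249, -0.11]
reference = [0.9159, 0.2494, -0.1105]
print(accumulated_abs_error_bits(approx, reference))  # about 7.2 bits
```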

[Graph: accumulated absolute error (in bits) of the coefficients, one curve per matrix size]

A few things to mention. The scale on the y-axis is in bits. The red lines are integer powers of 2, and the blue lines are the rounded half-powers. The red line at the top is for the 16x16 solution, and the red line at the bottom is for the 512x512 solution.

Essentially, each time you double the system size (double the rows and the columns, so yes, four times as many matrix elements), you get an additional 6.8 bits of precision, give or take.
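To illustrate the arithmetic behind that claim, here's a sketch that computes bits gained per doubling from a table of accumulated errors. The error figures are hypothetical stand-ins, chosen only to show the calculation; the actual measured data is what gives roughly 6.8 bits per doubling.

```python
import math

# Hypothetical accumulated-error figures (NOT the measured data),
# one per matrix size, chosen to land near 6.8 bits per doubling.
errors = {16: 2.0e-2, 32: 1.8e-4, 64: 1.6e-6, 128: 1.4e-8}
sizes = sorted(errors)
gains = [math.log2(errors[a] / errors[b]) for a, b in zip(sizes, sizes[1:])]
for (a, b), g in zip(zip(sizes, sizes[1:]), gains):
    print(f"{a} -> {b}: {g:.2f} bits gained")
```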

I find this interesting, because 6.8 is very close to the periodicity of the terms of the singularities I use for the accelerated solution. (Actually, the periodicity is about half that, roughly 3.36; but the relationship holds if you count doublings of the number of matrix elements rather than of the rows, since doubling the rows quadruples the elements.)

On top of that, the precision lost when solving the accelerated system is about 1.375 bits per row or column, and 1.375 is very close to 1.374557, the distance of the primary singularities from the origin. Coincidence? I'd like to know!

Anyway, to make it easier to see that the relationship is almost linear (1 bit of rows for 6.8 bits of precision), here's a chart with logarithmic scales on both axes. So, 4 bits of rows means 16 rows, and 9 bits means 512 rows.
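The slope on that log-log chart can be checked with an ordinary least-squares fit of precision (in bits) against bits of rows. Again, the error figures below are hypothetical stand-ins for the measured data:

```python
import math

# Hypothetical accumulated errors per matrix size (not the real data).
errors = {16: 2.0e-2, 64: 1.6e-6, 256: 1.3e-10}
xs = [math.log2(n) for n in sorted(errors)]           # bits of rows
ys = [-math.log2(errors[n]) for n in sorted(errors)]  # bits of precision
# Least-squares slope of precision vs. rows (both axes in bits).
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"about {slope:.2f} bits of precision per bit of rows")
```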

[Graph: the same data on logarithmic axes, precision (bits) vs. rows (bits)]
~ Jay Daniel Fox


Messages In This Thread
Convergence of matrix solution for base e - by jaydfox - 12/13/2007, 02:05 AM
