10/22/2007, 12:59 AM
Quick update: it appears as though the relative error is fairly predictable, at least in the range of system sizes I'm working with. For a 900x900 system with 5120 bits of precision, the relative precision was about 3880 bits, give or take, which leaves about 1240 bits of garbage. The ratio is nearly the same in both cases: 1100/800 and 1240/900 are both about 11/8, which, interestingly enough, is roughly the distance of the primary fixed points from the origin.
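(As a quick sanity check of that coincidence, something like the following mpmath snippet compares the observed garbage ratio against 11/8 and the modulus of the primary fixed point. This is just a throwaway sketch, not part of the library.)

```python
# Sanity check of the 11/8 coincidence.  The primary fixed points of e^z
# (solutions of e^z = z) are z = -W(-1) and its conjugate, W being the
# Lambert W function; their distance from the origin is about 1.3745.
from mpmath import mp, lambertw, fabs, mpf

mp.dps = 30
fixed_point = -lambertw(-1)       # ~ 0.318131505 - 1.337235701i
print(fabs(fixed_point))          # ~ 1.37456  (modulus of the fixed point)
print(mpf(11) / 8)                # 1.375
print(mpf(1240) / 900)            # ~ 1.3778  (garbage bits / system size)
```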
For a 1000x1000 system, then, I'd predict about 1375 bits of garbage in my resulting coefficients, such that I'd need about 3380 bits for my solver routine to get about 2000 bits of precision per term (at the radius of convergence). For 1000 terms, I'd need an extra 10 bits or so (roughly log2 of 1000) to handle the accumulation of errors, which is trivial at this stage, so we'll just call it 3400 bits that I need.
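In other words, the budget is just target bits + (11/8)·N + log2(terms). A few lines of throwaway Python capture the arithmetic (the function name here is only for illustration):

```python
# Back-of-the-envelope precision budget, using the ratio observed above:
# garbage grows like (11/8)*N with the system size N, and summing `terms`
# coefficients costs roughly log2(terms) additional bits.
from math import ceil, log2

def working_precision(n, target_bits, terms, garbage_ratio=11 / 8):
    garbage = ceil(garbage_ratio * n)     # bits lost to the solver itself
    accumulation = ceil(log2(terms))      # bits lost to error accumulation
    return target_bits + garbage + accumulation

print(working_precision(1000, 2000, 1000))   # 3385, i.e. "call it 3400"
```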
Memory-wise, however, I think I can get 4608 bits, or 9x512. I can run another system at 4352 bits (256 bits less) for comparison purposes, to make sure I know "precisely" how much relative precision I have. The 4352-bit system should hopefully get roughly 2950 bits of relative precision, even after subtracting out 10 bits for accumulation errors. And of course, I'd use the 4608-bit solution in practice, for about 3200 bits.
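The comparison itself is nothing fancy: for each coefficient, count how many leading bits the 4608-bit and 4352-bit runs agree on. A rough sketch of that check (the variable names are just placeholders):

```python
# Estimate relative precision by comparing the same coefficient computed at
# two different working precisions; the leading bits they share are the
# bits that can be trusted.
from mpmath import mp, fabs, log

def matching_bits(a, b):
    """Approximate number of leading bits on which a and b agree."""
    if a == b:
        return mp.prec                    # agreement up to working precision
    return int(-log(fabs(a - b) / fabs(a), 2))

# e.g. min(matching_bits(c48[k], c43[k]) for k in range(len(c48))),
# where c48 and c43 are the coefficient lists from the two runs
```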
I could probably take this process as far as a 1200x1200 system, depending on how much relative precision I'm willing to give up. However, I think I'm going to call an end to this "arms race" after I calculate the 1000x1000 systems. It's time for me to stop calculating, and time to start analyzing and filling in the holes in my knowledge. I also need to provide an updated version of this library, especially the graphing code. My original Runge-Kutta graphing code is slow and difficult to modify; it's also a few hundred lines of spaghetti code that logarithmicizes each interval manually. That needs to be automated in a loop. It also needs code to calculate a denser grid around zero, to expose more detail near minus infinity when we logarithmicize.
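What I have in mind for the grid is roughly the following (a sketch only, with made-up parameter names; the Runge-Kutta code itself is separate): loop over the intervals instead of hand-coding each one, and append a geometrically spaced set of points shrinking toward zero, so the logarithm still shows detail as it heads toward minus infinity.

```python
# Sketch of the sampling grid: uniform points on each interval, produced in
# a loop rather than by hand, plus a geometric sequence of points shrinking
# toward zero so that log(f(x)) is well resolved near minus infinity.
from mpmath import mpf, linspace

def graph_grid(intervals, pts_per_interval=100, zero_pts=50, shrink=mpf('0.5')):
    grid = []
    for a, b in intervals:                    # automate the per-interval work
        grid.extend(linspace(a, b, pts_per_interval))
    x = mpf(1)
    for _ in range(zero_pts):                 # denser grid approaching zero
        x *= shrink
        grid.append(x)
    return sorted(set(grid))

# e.g. graph_grid([(mpf(0), mpf(1)), (mpf(1), mpf(2))])
```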
~ Jay Daniel Fox