Initial testing with the unaccelerated matrix solver shows that the series has a root test converging on 0.5, indicating (as expected) a singularity at z=2.

As expected, the terms of the series appear to be roughly converging on the power series of , which is what we'd naively predict for the immediate neighborhood of the fixed point, and it would also fit the rslog as defined by Henryk.

I started with a 50x50 matrix, just as a "sanity test", then moved up to a 500x500 system. Precision was very good: a solution computed with 1024 bits of precision agreed to about 1020 bits with a solution computed at 1200 bits, indicating that the matrix solver avoided any significant loss of precision.
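The post doesn't say which arithmetic package was used; as a sketch of the kind of consistency check described, here is the same idea with mpmath's lu_solve on a small, deliberately ill-conditioned Hilbert system (a stand-in for the actual 500x500 system, which isn't reproduced here):

```python
from mpmath import mp, matrix, lu_solve, fabs, log

def solve_at(prec_bits, n=20):
    """Solve an n x n Hilbert system H x = 1 at the given working precision."""
    mp.prec = prec_bits
    H = matrix(n, n)
    for i in range(n):
        for j in range(n):
            H[i, j] = mp.mpf(1) / (i + j + 1)   # Hilbert matrix entry
    b = matrix([mp.mpf(1)] * n)
    return lu_solve(H, b)

def bits_of_agreement(x, y):
    err = fabs(x - y) / fabs(y)
    return float(-log(err, 2)) if err > 0 else mp.prec

# Solve at two precisions and count matching bits, entry by entry.
x_lo = solve_at(1024)
x_hi = solve_at(1200)
print(min(bits_of_agreement(x_lo[i], x_hi[i]) for i in range(20)))
# close to 1024, minus the bits lost to the matrix's conditioning
```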

Next I will try the accelerated version, on the assumption that the singularity at z=2 does in fact converge on the aforementioned logarithm.

Here's the tricky part, however. The location of the logarithm is easy to verify, based on the root test, but the base of the logarithm is harder to verify. For example, and are easy to tell apart, because by the 200th term the difference in magnitude of the coefficients is about 1.005^200, or about a factor of e (2.72). Going further into the series only makes it easier to tell the difference, despite the increased computational requirements.
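To put that divergence in numbers, take log(2 - z) and log(2.01 - z) as a hypothetical pair of candidates whose singularity locations differ by a factor of 1.005 (the actual series from the post are omitted above):

```python
# The nth coefficient of log(r - z) has magnitude 1/(n * r**n), so for
# singularities at radii 2 and 2.01 the coefficient magnitudes diverge
# geometrically, like (2.01/2)**n = 1.005**n.

def coeff_mag(n, r):
    return 1.0 / (n * r ** n)

ratio = coeff_mag(200, 2.0) / coeff_mag(200, 2.01)
print(ratio)   # about 2.71, i.e. roughly a factor of e by the 200th term
```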

However, and differ by a constant factor of 1.005, making discrimination between the two far more difficult. Going further into the series does not help in discriminating, beyond reducing the relative size of the terms of the "residue" left after removing the singularity.
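By contrast, changing only the base of the logarithm rescales every coefficient by the same constant, so the ratio of coefficients carries no information about n (again with a hypothetical pair of bases, since the post's formulas are omitted):

```python
from math import log

# log_base(2 - z) = log(2 - z) / log(base), so changing the base scales
# every coefficient by the same constant factor -- no growth with n.

def coeff(n, base):
    return (1.0 / (n * 2.0 ** n)) / log(base)

a, b = 2.0, 2.01          # hypothetical pair of nearby bases
r10 = coeff(10, a) / coeff(10, b)
r200 = coeff(200, a) / coeff(200, b)
print(r10, r200)          # identical: the ratio is flat in n
```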

I bring this up because the difference between the rslogs of the smaller and larger real fixed points for base sqrt(2) is very small. Therefore, we could probably use the base of the upper fixed point (1/ln(4)) and still get a decent amount of acceleration in convergence, and we would need careful analysis to show that the base for the lower fixed point (ln(2)) is indeed the correct one, assuming that either is correct.
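For concreteness (assuming the fixed points meant here are those of x -> sqrt(2)^x, namely 2 and 4), the two candidate bases are numerically close, which is what makes the discrimination delicate:

```python
from math import sqrt, log

# Real fixed points of f(x) = sqrt(2)**x are x = 2 and x = 4.
f = lambda x: sqrt(2) ** x
print(f(2), f(4))                   # both map to themselves (up to rounding)

lower = log(2)        # base associated with the lower fixed point
upper = 1 / log(4)    # base associated with the upper fixed point
print(lower, upper, upper / lower)  # ~0.693 vs ~0.721, a ratio of only ~1.04
```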


~ Jay Daniel Fox