Hmm, I'm not sure what the best word for it is, because in my mind, I'm shifting the center of a polynomial (in some math libraries, "shift" is the name of the function that performs this). But in matrix terminology, "shift" has a completely different meaning.

To give a very basic example, we can recenter

f(x) = a_0 + a_1 x + a_2 x^2 + ...

to

g(x) = f(x+1) = a_0 + a_1 (x+1) + a_2 (x+1)^2 + ...

After this "recentering", we have a new function g(x) which is the same as f(x+1), which means g(0) = f(1). Therefore, our new function g is the same as the old function f, except that it's "centered" at 1, not at 0. (This can get confusing, because the original center is now at -1. The way I keep it straight is that we normally center the logarithm at x=1, which puts the original center at -1. In this case, it's centered at 1, not -1.)

If we want to find f(1), we simply find g(0).
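
A quick sanity check of that relation, using a throwaway polynomial of my own choosing:

```python
def f(x):
    # an arbitrary example function
    return x**2 + 2*x

def g(x):
    # the recentered function: g(x) = f(x + 1)
    return f(x + 1)

print(g(0), f(1))  # both equal 3
```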

As far as actually calculating the coefficients at the new center goes, I'm just using a Pascal matrix to perform the recentering. It's equivalent to the Bell matrix of f(x)=x+1, which is after all the substitution I'm trying to perform.
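
Here's a minimal sketch of that Pascal-matrix recentering (the function names and the toy polynomial are mine). The upper-triangular Pascal matrix P with P[j][k] = C(k,j) is exactly the Bell matrix of f(x)=x+1: column k holds the coefficients of (x+1)^k, so applying P to the coefficient vector of f yields the coefficients of g(x) = f(x+1).

```python
from math import comb
import numpy as np

def pascal_matrix(n):
    # Upper-triangular Pascal matrix: P[j][k] = C(k, j).
    # This is the Bell matrix of x + 1: column k holds the
    # coefficients of (x + 1)^k.
    return np.array([[comb(k, j) for k in range(n)] for j in range(n)],
                    dtype=float)

def recenter(a):
    # Coefficients of g(x) = f(x + 1), from the coefficients a of f.
    return pascal_matrix(len(a)) @ a

# toy example: f(x) = 1 + 2x + 3x^2, so f(x+1) = 6 + 8x + 3x^2
a = np.array([1.0, 2.0, 3.0])
print(recenter(a))  # [6. 8. 3.]
```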

I recentered the Abel matrix for exponentiation (Andrew's matrix) to x+1 and solved, and I got the same result as solving the original system and then shifting the power series. This already had me worried, because of what I planned next.
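
For reference, here is my reconstruction of the kind of truncated Abel system being solved (the function name, the truncation size, and the row/column layout are my own choices, not necessarily Andrew's). The Abel equation slog(e^x) - slog(x) = 1, with slog(x) = c_1 x + c_2 x^2 + ..., becomes a linear system once you note that the coefficient of x^j in (e^x)^k = e^(kx) is k^j/j!:

```python
import numpy as np
from math import factorial

def slog_coeffs(n):
    # Truncated Abel system for the slog at base e:
    #   slog(e^x) - slog(x) = 1,  slog(x) = c_1 x + c_2 x^2 + ...
    # Row j matches the coefficient of x^j:
    #   sum_k c_k * k^j/j!  -  c_j  =  (1 if j == 0 else 0)
    M = np.zeros((n, n))
    for j in range(n):
        for k in range(1, n + 1):
            M[j, k - 1] = k ** j / factorial(j)
        if j >= 1:
            M[j, j - 1] -= 1.0
    rhs = np.zeros(n)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)

c = slog_coeffs(20)

def slog(t):
    return sum(ck * t ** (k + 1) for k, ck in enumerate(c))

# the Abel relation should hold to good accuracy near the center
print(slog(np.exp(0.1)) - slog(0.1))  # ~ 1
```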

Next I recentered the Abel matrix to x+3, well outside the radius of convergence. I had hoped that the solution would converge properly, being merely centered at x=3. However, I got very large coefficients, which grew as I increased the matrix size. Simply put, it was acting like I was trying to recenter the power series from the original solution (as in, the solution at the origin), which, due to the radius of convergence, gives me bogus coefficients.
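
The blowup is easy to reproduce with a toy function of my choosing (log(1+x), radius of convergence 1, rather than the Abel function itself). Shifting the center by t generalizes the Pascal matrix to b_j = sum_{k>=j} C(k,j) t^(k-j) a_k; inside the radius the truncated sums settle down, outside they grow without bound as the truncation size increases:

```python
from math import comb, log

def log1p_coeffs(n):
    # Taylor coefficients of log(1 + x) at 0: a_0 = 0, a_k = (-1)^(k+1)/k.
    # The radius of convergence is 1.
    return [0.0] + [(-1) ** (k + 1) / k for k in range(1, n + 1)]

def shifted_coeff(a, t, j):
    # Coefficient j of f(x + t), via the generalized Pascal matrix:
    #   b_j = sum_{k >= j} C(k, j) t^(k-j) a_k, truncated at len(a).
    return sum(comb(k, j) * t ** (k - j) * a[k] for k in range(j, len(a)))

# inside the radius (t = 0.5): the constant term converges to log(1.5)
print(shifted_coeff(log1p_coeffs(40), 0.5, 0), log(1.5))

# outside the radius (t = 3): the "constant term" explodes as we keep
# more terms, much like the recentered Abel solution did
for n in (10, 20, 40):
    print(n, shifted_coeff(log1p_coeffs(n), 3.0, 0))
```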

I'm not totally dissatisfied with the result. It does help prevent a problem I had been worried about, which is what would happen if I recentered the system on the far side of the singularity: which branch would it "choose"? The simple answer is that I can't recenter it outside the original radius of convergence, thus preventing the problem.


~ Jay Daniel Fox