Computing the Abel function at a given center
#1
The solution to the Abel equation A, such that A(e^z)=A(z)+1, can be calculated by solving \( (E-I)\vec{f}=\vec{1} \). Here, E is the Carleman or Bell matrix for the exponential function. I'm still unsure which name applies, but to be clear, it is the matrix whose columns are the coefficient vectors of the powers of the power series for e^z. The matrix I is the identity matrix, and \( \vec{1} \) is the column vector (1, 0, 0, ...). Solving for \( \vec{f} \), we get the power series for the slog.
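For concreteness, here is a minimal numerical sketch of that system in Python. The matrix convention (E[n,k] = k^n/n!, the z^n coefficient of (e^z)^k) and the handling of the free constant term are my reading of the setup, a sketch rather than a definitive implementation:

```python
import numpy as np
from math import factorial

def slog_series(N):
    """Solve the truncated Abel system (E - I) f = (1, 0, 0, ...)^T.

    E[n, k] = k**n / n! is the z^n coefficient of (e^z)^k, so E maps the
    coefficient vector of A(z) to that of A(e^z).  Column k = 0 of E - I
    is zero, so the constant term stays free (fixed later by a
    normalization); we solve rows 0..N-1 for the coefficients a_1..a_N.
    """
    M = np.array([[k**n / factorial(n) - (1.0 if n == k else 0.0)
                   for k in range(1, N + 1)]
                  for n in range(N)])
    rhs = np.zeros(N)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)

print(slog_series(12)[:4])  # approximate low-order slog coefficients
```

The coefficients of any finite truncation are only approximations of the infinite system's solution, which matters for everything that follows.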

Now, to shift to a different center, we could use basic shifting operations, e.g., multiplying the power series' coefficient vector by a suitable Pascal matrix. However, this comes at a cost when dealing with a truncation of the power series: because of the radius of convergence, recentering reduces the number of terms in the series that can be considered accurate.
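To make the precision loss concrete, here is a small sketch of the shift itself (the helper name is mine; the generalized Pascal-matrix product is written out as explicit sums):

```python
from math import comb

def recenter(a, c):
    """Shift the truncated series f(x) = sum a[m] x^m to g(x) = f(x + c).

    This is multiplication by a generalized Pascal matrix:
    b[n] = sum_{m >= n} a[m] * C(m, n) * c**(m - n).
    The sums for the high-order b[n] are cut off by the truncation,
    so fewer leading coefficients of g remain trustworthy.
    """
    N = len(a)
    return [sum(a[m] * comb(m, n) * c**(m - n) for m in range(n, N))
            for n in range(N)]

# example: recenter log(1 + x) by c = 0.5; the true result is log(1.5 + x)
a = [0.0] + [(-1.0)**(m + 1) / m for m in range(1, 60)]
b = recenter(a, 0.5)
print(b[0], b[1])  # compare with log(1.5) = 0.4054..., 1/1.5 = 0.6666...
```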

This bothers me somewhat, because we want to develop the slog at 0, but I'd also like to develop a sexp at 0. The sexp at 0 is the reversion of the slog at 1, and the slog at 0 is the sexp at -1. So no matter what, a recentering will be needed if we solve for the slog at 0.

This got me wondering whether we could shift the center before solving and not lose precision. In other words, if we shift the system before solving, do we get the same accuracy, term for term, that we would get by solving at the origin, or do we get the same reduced precision we would expect from solving at the origin first and then recentering?

I hope to answer this question through experimentation, but if anyone knows the answer off the top of their heads, please share.

I initially feared that recentering before solving wouldn't help, but further reflection makes me hopeful that I will get more precision, because I can shift a larger version of the matrix (for more accuracy) and then truncate it to the size I'm solving for.

If so, I could solve the slog at z=1, then revert the series to get the sexp at 0, with more valid terms than I previously could have gotten. I'm actually hoping to get several hundred terms of the "residue" (what remains after subtracting the logarithms at the primary fixed points) to within, say, 0.1%, sufficient to try to solve a small Abel system for the pentalog, just to get a rough idea of what it looks like.


However, going further, I would like to be able to recenter an arbitrary Abel equation at an arbitrary location, as long as it's not a singularity. One motivation for this is to recenter the \( \text{slog}_{\sqrt{2}} \) at z=3.5, or z=3. I suspect that if I recenter it at z=3.5, it will follow the regular slog for the fixed point at z=4, because the radius of convergence (by the root test) will be dictated by that nearer singularity, and hence the contribution from the singularity at z=2 should get washed out.

If this happens, I'd be delighted. If not, I'd still be delighted if the solution still somehow managed to "choose" the lower fixed point.

If it does choose the closer singularity, then I wonder what will happen at z=3. The center would be equidistant to both singularities, and both singularities would be of the logarithmic variety with only slightly different bases, such that neither series would be able to dwarf the other. What would happen then?

Anyway, these are questions I intend to explore this week.
~ Jay Daniel Fox
#2
Yes, that is really interesting. However, I don't know whether you will reach the precision necessary to compare whether a shift before solving (smaller than the distance to the next fixed point) even yields the same natural Abel function as a shift after taking the natural Abel function. I don't even have a conjecture about it, but I would be really delighted if they yielded the same result.

However, for a shift greater than the distance to the next fixed point, I would be quite sure that shifting before and shifting after give different results.

Edit: The last statement is nearly nonsense, because if the next fixed point is a singularity, we cannot shift beyond it afterwards.
#3
I finally got back to this, and it appears that shifting the center prior to solving the system produces the same results as shifting the center after solving the system. This means we're limited by the radius of convergence at the origin, which is to say, the system explicitly solves at the origin, not where we recenter it.
~ Jay Daniel Fox
#4
jaydfox Wrote:I finally got back to this, and it appears that shifting the center prior to solving the system produces the same results as shifting the center after solving the system. This means we're limited by the radius of convergence at the origin, which is to say, the system explicitly solves at the origin, not where we recenter it.

Hmm, I never understood this "shifting"; maybe I'm completely wrong with the following.
In my matrix method it turned out that I can use a triangular operator (with some advantages for numerical computation) if I rewrite

V(x)~ * Bs = V(s^x)~

into

V(x/t - 1)~ * Qs = V(s^x / t - 1)~
or
V(x - t)~ * Rs = V(s^x - t)~

where t is the fixed point of s^x, and Qs and Rs are triangular.
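If I read the notation correctly (V(x)~ = (1, x, x^2, ...), and the operator's entry (n,k) is the x^n coefficient of the k-th output power), a small sympy sketch with the illustrative choice s = sqrt(2) and its fixed point t = 2 shows the conjugated operator coming out triangular:

```python
import sympy as sp

x = sp.symbols('x')
b, t = sp.sqrt(2), 2  # illustrative base s = sqrt(2); t = 2 satisfies b**t == 2

def operator(f, N):
    """Truncated matrix M with V(x)~ * M = V(f(x))~:
    M[n, k] is the x^n coefficient of f(x)**k."""
    return sp.Matrix(N, N, lambda n, k:
                     sp.series(f**k, x, 0, N).removeO().coeff(x, n))

Bs = operator(b**x, 4)            # dense: V(x)~ * Bs = V(s^x)~
Rs = operator(b**(x + t) - t, 4)  # conjugated to the fixed point
print(Rs)  # triangular, since b**(0 + t) - t = 0
```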

I don't then have general powers as easily (though possibly manageable), but I have better access, for instance, to the inverse of the function, or for more general s.

Is this equivalent to this "shifting"?

Gottfried
Gottfried Helms, Kassel
#5
Hmm, I'm not sure what the best word for it is, because in my mind, I'm shifting the center of a polynomial (in some math libraries, "shift" is the name of the function that performs this). But in matrix terminology, "shift" has a completely different meaning.

To give a very basic example, we can recenter

\( f(x) = x^2 + 3x - 1 \)

to

\(
\begin{eqnarray}
g(x) & = & f(x+1) \\
& = & (x+1)^2 +3(x+1) - 1 \\
& = & x^2 + 5x + 3
\end{eqnarray}
\)

After this "recentering", we have a new function g(x) which is the same as f(x+1), which means g(0) = f(1). Therefore, our new function g is the same as the old function f, except that it's "centered" at 1, not at 0. (This can get confusing, because the original center is now at -1. The way I keep it straight is that we normally center the logarithm at x=1, which puts the original center at -1. In this case, it's centered at 1, not -1.)

If we want to find f(1), we simply find g(0).

As far as actually calculating the new center, I'm just using a Pascal matrix to perform the recentering. It's equivalent to the Bell matrix of f(x) = x+1, which is, after all, what I'm trying to do.
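A quick numerical check of this example, with the Pascal matrix in the orientation that acts on coefficient vectors (constant term first); the orientation is my assumption:

```python
import numpy as np
from math import comb

# Pascal matrix = Bell matrix of x + 1: entry (n, k) is the x^n
# coefficient of (x + 1)^k, i.e. C(k, n)
N = 3
P = np.array([[comb(k, n) for k in range(N)] for n in range(N)])

f = np.array([-1, 3, 1])  # f(x) = x^2 + 3x - 1, coefficients low to high
print(P @ f)              # [3 5 1], i.e. g(x) = x^2 + 5x + 3
```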

I recentered the Abel matrix for exponentiation (Andrew's matrix) to x+1 and solved, and I got the same result as solving the original system and then shifting the power series. This already had me worried, because of what I planned next.

Next I recentered the Abel matrix to x+3, well outside the radius of convergence. I had hoped that the solution would converge properly, being merely centered at x=3. However, I got very large coefficients, which grew as I increased the matrix size. Simply put, it was acting like I was trying to recenter the power series from the original solution (as in, the solution at the origin), which, due to the radius of convergence, gives me bogus coefficients.

I'm not totally dissatisfied with the result. It does help prevent a problem I had been worried about, which is what would happen if I recentered the system on the far side of the singularity: which branch would it "choose"? The simple answer is that I can't recenter it outside the original radius of convergence, thus preventing the problem.
~ Jay Daniel Fox
#6
jaydfox Wrote:To give a very basic example, we can recenter

\( f(x) = x^2 + 3x - 1 \)

to

\(
\begin{eqnarray}
g(x) & = & f(x+1) \\
& = & (x+1)^2 +3(x+1) - 1 \\
& = & x^2 + 5x + 3
\end{eqnarray}
\)

(...)

If we want to find f(1), we simply find g(0).
(...)
Ah, I see, it's as I expected. Then this is just the shift of
V(x)~ * Bs = V(y)~
to
V(x')~ * Qs = V(y')~
with x' = x/t - 1, which I also perform by multiplication with the Pascal matrix. Good to know.
It's the same approach I have described occasionally for getting "exact terms" when the entries in the columns of the Bs-matrix are badly conditioned.
It is just

V(x)~ * Bs = V(y)~

where

Bs = dV(1/t) * P^-1~ * Q * P~ * dV(t)

and then using associativity

V(x)~ * dV(1/t) * P^-1~ = V(x/t-1)~ = V(x')~

and then

V(x')~ * Q * P~ * dV(t) = V(y)~
V(x')~ * Q = V(y)~ * dV(1/t) * P^-1~
V(x')~ * Q = V(y/t-1)~ = V(y')~

where Q is triangular: for height = 1 it contains the Stirling numbers of the 2nd kind, and for height = -1 (the inverse) the Stirling numbers of the 1st kind, factorially scaled, such that for base b = exp(1) it performs

V(x)~ * Q = V(exp(x)-1)~

or general base b

V(x)~ * Q = V(b^x-1)~

and its inverse (Stirling numbers of the 1st kind)

V(x)~ * Q^-1 = V(log(1+x)/log(b))~
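Under the same reading of the notation, these identities can be checked exactly in a few lines of sympy (the indexing is mine):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

N = 6
# factorially scaled Stirling-2 matrix: Q[n, k] = k! * S2(n, k) / n!,
# the x^n coefficient of (e^x - 1)^k, so that V(x)~ * Q = V(e^x - 1)~
Q = sp.Matrix(N, N, lambda n, k:
              sp.factorial(k) * stirling(n, k, kind=2) / sp.factorial(n))

# triangular with unit diagonal, so the inverse is exact; Q^-1 carries the
# factorially scaled Stirling numbers of the 1st kind and performs
# V(x)~ * Q^-1 = V(log(1+x))~
print(Q.inv().col(1).T)  # [0, 1, -1/2, 1/3, -1/4, 1/5]
```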


So essentially we do the same thing here, well...

Gottfried
Gottfried Helms, Kassel
#7
jaydfox Wrote:I finally got back to this, and it appears that shifting the center prior to solving the system produces the same results as shifting the center after solving the system. This means we're limited by the radius of convergence at the origin, which is to say, the system explicitly solves at the origin, not where we recenter it.

I did some additional testing with the Bell matrix for exponentiation, because the "answer" is well-known. I started with a 26x26 Bell matrix for \( e^x-1 \), and after inverting, I got a power series for \( \ln(x+1) \) that was accurate to machine precision. This is to be expected, since this would represent a simple reversion of a power series.

However, I then calculated the Bell matrix for \( e^x-2 \), which is the inverse of \( \ln(x+2) \). I got bogus coefficients that grew larger as I increased matrix size, just as I had observed with the Abel matrix for exponentiation.

Just to make sure that the issue wasn't that the function I was inverting didn't have a fixed point at 0, I tried the Bell matrix for \( e^{x+\ln(2)}-2 \), which is the inverse of \( \ln(x+2)-\ln(2) \). Once again, I got the bogus coefficients that grew with matrix size.

However, I noticed that if I took the inverse of a submatrix, say the upper-left 15x15 matrix, I got approximately correct answers, though not to machine precision.

It occurred to me then that infinite summations are involved in computing the recentered matrix system, and I was only getting partial sums. Had I been paying more attention, I would have seen this. At any rate, it gives me hope that I can recenter the system if I proceed carefully. Exactly how to accomplish this will require quite a bit of experimentation, however, so I'll probably wait until early next year to pursue this line of study again.
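Here is a sketch to reproduce that experiment, assuming the convention B[n,k] = x^n coefficient of (e^x - c)^k and expanding by the binomial theorem; this is plain double precision, so the exact numbers may differ from the original runs:

```python
import numpy as np
from math import comb, factorial

def bell(N, c):
    """B[n, k] = x^n coefficient of (e^x - c)^k, via
    (e^x - c)^k = sum_j C(k, j) * (-c)^(k-j) * e^(j*x)."""
    return np.array([[sum(comb(k, j) * (-c)**(k - j) * j**n
                          for j in range(k + 1)) / factorial(n)
                      for k in range(N)] for n in range(N)])

# c = 1: triangular, stable inverse -> series of log(1 + x)
# c = 2: dense truncation; watch the low-order entries drift as N grows
for N in (10, 16, 22):
    col = np.linalg.inv(bell(N, 2.0))[:, 1]  # would be log(2 + x) if stable
    print(N, col[:3])
```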
~ Jay Daniel Fox
#8
jaydfox Wrote:I did some additional testing with the Bell matrix for exponentiation, because the "answer" is well-known. (...) However, I then calculated the Bell matrix for \( e^x-2 \), which is the inverse of \( \ln(x+2) \). I got bogus coefficients that grew larger as I increased matrix size, just as I had observed with the Abel matrix for exponentiation. (...)

Wow. That's exactly what I'm doing with my matrix notation; however, I didn't consider your special cases.
Say U is the Bell matrix for e^x-1, which in my version is the factorially scaled Stirling matrix.

So V(x)~ * U = V(e^x-1) ~

Since the Pascal matrix P applies the addition of +1 to the argument of a power-series vector, I have

V(x)~ * U*P~ = V(e^x)~

where U*P~ is the square Bell matrix (of e^x).

V(x)~ -> V(log(1+x))~
is then performed by U^-1, which contains the Stirling numbers of the 1st kind:
V(x)~ * U^-1 = V(log(1+x))~
This inverse can be computed exactly, and since it is triangular, so can its powers.
To accomplish
V(x)~ -> V(e^x-2)~
we have to use
V(x)~ * (U * P^-1~) = V(e^x-2)~
and (U * P^-1~) is no longer triangular; thus computing its powers entails evaluating series (possibly divergent ones) for each entry.
Again the inverse performs V(x)~ -> V(log(2+x))~, so we would have
V(x)~ * (U * P^-1~)^-1 = V(log(2+x))~
But the computed entries of the inverse do not converge to final values as the size increases, at least not by the ordinary numerical inversion procedure.
The behavior of the coefficients can be better managed, however, if one inverts (U * P^-1~) via its factors, as (P~ * U^-1).
The computation of this matrix then involves divergent summation of series, though of alternating sign, so we may accelerate convergence by Euler summation (while having to apply a different order of Euler summation for each entry!). Anyway, this way we might obtain a reasonable approximation to the true inverse.

Numerically it would be better to use the associativity

(V(x)~ * P~) * U^-1 = V(x+1)~ * U^-1 = V(log(2+x))~

and one can then deal numerically with the truncated series of powers of (x+1) and the exact coefficients of the ordinary Stirling matrix of the first kind, which makes the Euler summation much more convenient and the approximation more controllable.
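A crude sketch of this route, using one fixed-depth Euler mean in place of the per-entry orders described above, so the accuracy of the higher coefficients is limited:

```python
import numpy as np
from math import comb

def euler_mean(terms):
    """(E,1)-style summation of an alternating series: average adjacent
    partial sums repeatedly, down to a single value."""
    s = np.cumsum(terms)
    while len(s) > 1:
        s = (s[:-1] + s[1:]) / 2.0
    return s[0]

# coefficient b_n of log(2 + x) via the associativity route
# V(x+1)~ * U^-1:  b_n = sum_m C(m, n) * (-1)^(m+1) / m,
# a divergent alternating series for n >= 1
M = 48
for n in range(4):
    terms = np.array([comb(m, n) * (-1.0)**(m + 1) / m for m in range(1, M)])
    print(n, euler_mean(terms))
# exact values: log 2, 1/2, -1/8, 1/24
```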

Gottfried
Gottfried Helms, Kassel
#9
jaydfox Wrote:Once again, I got the bogus coefficients that grew with matrix size.

Well, first of all, the Abel function of the decremented exponential \( e^x-1 \) is undefined at 0. This is because 0 is a fixed point of \( e^x-1 \): since 0 is a fixed point, \( A(f(0)) = A(0) + 1 \), so \( A(0) = A(0) + 1 \), giving 0 = 1, which is unsolvable. You can't find a solution at the origin, so maybe that's why it wasn't working. Another possibility is that, since you were so close to zero, the fixed point acted like a singularity, limiting the radius of convergence. Also, did you try this method for \( \ln(x+2)-\ln(2) \)?

Andrew Robbins
#10
Hmm, I think we crossed wires there. I was doing the Abel matrix for \( e^x \), centered at x=0, which works fine.

I then recentered the Abel matrix of \( e^x \) to x=1, which is not a fixed point and which is within the radius of convergence. It was at x=3 that things broke down, as this is well outside the radius of convergence.

For comparison's sake, I then inverted the Bell matrix of \( e^x-1 \), because the inverse has a logarithmic singularity, like the solution to the Abel matrix. I then tried recentering to x=2 and inverting. This is on the radius of convergence, which would imply potential divergence, which is what I was observing.

However, we should be able to solve at points other than x=1, or so I thought. It took a little tinkering to realize that the problem was that I was both pre- and post-multiplying a lower-triangular matrix by upper-triangular matrices, which "mixed up" the results enough that the resulting matrix is only an approximation of the infinite system. Because it's an approximation, the coefficients in the right-most columns of a finite truncation are wildly inaccurate, causing the entire inversion to fail. However, taking a further truncation of the finite truncation dampens the effect of the inaccuracies, which allowed me to see convergence.


Getting back to the Abel matrix for \( e^x \), the simple solution to the divergence problem is either to find a way to calculate the coefficients of the infinite matrix without resorting to approximations, or to start with a system far larger than the one I intend to solve and move the center in small steps, truncating along the way. Either way, steps smaller than the radius of convergence must be used; a toy sketch of the stepping approach follows.
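Here is that sketch on a model function with a known singularity, log(1+x), walking the center out to x = 2 (twice the original radius of convergence) in steps of 0.5; the function name and the step/truncation parameters are mine:

```python
from math import comb, log

def recenter_trunc(a, c, keep):
    """Shift sum a[m] x^m to center +c, keeping only `keep` coefficients."""
    return [sum(a[m] * comb(m, n) * c**(m - n) for m in range(n, len(a)))
            for n in range(keep)]

# series of log(1 + x) at 0; each step of 0.5 stays inside the current
# radius of convergence, and the deteriorating tail is truncated away
a = [0.0] + [(-1.0)**(m + 1) / m for m in range(1, 200)]
for _ in range(4):
    a = recenter_trunc(a, 0.5, len(a) - 40)
print(a[0], log(3.0))  # a now approximates the series of log(3 + x)
```

A single shift by the full distance would instead evaluate the truncated polynomial outside its region of validity and give garbage, which is the failure described above.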
~ Jay Daniel Fox

