Computing Andrew's slog solution
#11
Andrew, I got you a present:

[attached image: graph of the scaled odd derivatives]

That's the first 60 odd derivatives of your slog, base e. They're scaled, of course.

To create this graph, I first solved the 400x400 matrix. Then I created a RealField with 5120 bits of precision (I probably only needed about 3000 bits). Then, for each x point I wanted to graph, I calculated samples spaced 1/32768 apart. (Next time I'll space them closer together, since I had the precision available.) Actually, I picked y points and then found x with slog, rather than trying to solve the inverse. Because my x points weren't spaced evenly, I used divided differences to approximate the derivatives, and rather than multiplying by k factorial, I multiplied by 2.46^k. That explains the scaling.
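The divided-difference trick above can be sketched as follows. This is not Jay's actual code: exp(x) stands in for the slog samples, the sample spacing is arbitrary, and the 2.46 scaling constant is taken from the post.

```python
# Sketch: approximating (scaled) k-th derivatives from unevenly spaced
# samples via Newton divided differences. f(x) = exp(x) is a stand-in
# for the actual slog samples described in the post.
import math

def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]], the top row of the
    divided-difference table."""
    n = len(xs)
    table = list(ys)
    coeffs = [table[0]]
    for level in range(1, n):
        for i in range(n - level):
            table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
        coeffs.append(table[0])
    return coeffs

xs = [0.0, 0.01, 0.025, 0.04, 0.06]   # unevenly spaced sample points
ys = [math.exp(x) for x in xs]
dd = divided_differences(xs, ys)

# The k-th derivative near x0 is roughly k! * f[x0..xk]; multiplying by
# c**k instead of k! keeps high-order values plottable, as in the post.
c = 2.46
scaled = [(c ** k) * dd[k] for k in range(len(dd))]

deriv4 = math.factorial(4) * dd[4]    # should be close to exp(0) = 1
```

The point of the scaling is purely graphical: k! grows so fast that the unscaled derivatives would dwarf one another on a single plot.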

Anyway, just a few derivatives later, I lost concavity. By the 67th odd derivative, I hit the x-axis, and it went haywire from there. I'm not sure if the derivatives started to go haywire because my sample points are spaced too far apart, or if it's because I only used a 400x400 matrix.
~ Jay Daniel Fox
#12
While not a proof, I think this makes it fairly likely that all the odd derivatives are positive, i.e., that the function is convex in that sense. I haven't tested yet, but I'm willing to bet they're all log-convex as well. I haven't tried to formally prove it, but I'm pretty sure these facts define a uniqueness constraint: only one solution can have all its odd derivatives positive like this and still pass through the correct y-values at integer x.
~ Jay Daniel Fox
#13
By the way, here's a comparison of the root-test for the 150-, 250-, and 400-term solutions:

[attached image: root test for the 150-, 250-, and 400-term solutions]

The first thing to notice is that there does appear to be a radius of convergence: the root test seems to be asymptotic. The second thing to notice is that as I increased the number of terms, the root-test values slowly climbed, so the asymptote appears to sit higher than what we already see at 400 terms. This is easier to see in a detailed view:

[attached image: detailed view of the root test]

But what's really weird is that, naively, the radius of convergence would appear to be at most 1/0.71 or so, about 1.4. However, the function behaves very well for real values up to about 2.4. If you watch the partial sums of the series, they oscillate wildly, dozens of orders of magnitude too large in absolute value, and yet by the final term they settle on the correct value. Try it for yourself: pop the last coefficient off the power series and check the radius of "good behavior" against that of the original series.
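The root test and the "pop the last coefficient" experiment can be tried on any truncated series. As a stand-in (the actual slog coefficients would come from solving Andrew's matrix system), here is the idea with the series for log(1+x), whose radius of convergence is known to be 1:

```python
# Sketch: root test and last-coefficient experiment on a stand-in series.
# coeffs[k] is the coefficient of x**k in the 60-term truncation of log(1+x).
coeffs = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 61)]

def root_test(a):
    # |a_k|**(1/k); its limsup is the reciprocal of the radius of convergence
    return [abs(a[k]) ** (1.0 / k) for k in range(1, len(a)) if a[k] != 0]

def partial_sums(a, x):
    s, acc = 0.0, []
    for k, ak in enumerate(a):
        s += ak * x ** k
        acc.append(s)
    return acc

rt = root_test(coeffs)            # values creep up toward 1 (radius = 1)

# Compare the full truncation with the last coefficient popped off:
full = partial_sums(coeffs, 0.5)[-1]
popped = partial_sums(coeffs[:-1], 0.5)[-1]
gap = abs(full - popped)          # = |a_60| * 0.5**60, negligible inside
                                  # the radius, huge outside it
```

Inside the radius the gap between the last two partial sums is tiny; outside it, the final term dominates, which is the behavior the post describes.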

I'm honestly blown away by this behavior. The series would seem to converge well outside the naive radius of convergence.

My pet theory is that there's a radius of convergence around both 0 and 1, so we can go as low as -1.4 and as high as 2.4. The numbers seem to bear this out: -1.45 and 2.45 are both a few orders of magnitude too large, while -1.4 and 2.4 are both well-behaved (though starting to show significant errors).

However, I'm wondering what happens if we solve even larger systems. What happens at 500, 600, 1000, 2000, 10000 terms? A million terms? Obviously there are practical limits, but can we answer these questions theoretically?
~ Jay Daniel Fox
#14
jaydfox Wrote:But what's really weird is that, naively, the radius of convergence would appear to be at most 1/0.71 or so, about 1.4. However, the function behaves very well for real values up to about 2.4. If you view the partial sums of the series, they begin to oscillate wildly, dozens of orders of magnitude too large in absolute value. And yet, by the final term of the sequence, they settle on the correct value. Try it for yourself. Pop the last coefficient off the power series and check the radius of "good behavior", versus the original series.

I'm honestly blown away by this behavior. The series would seem to converge well outside the naive radius of convergence.

My pet theory is that the radius of convergence is around both 0 and 1, so we can go as low as -1.4 and as high as 2.4. The numbers seem to bear this out. -1.45 and 2.45 are both a few orders of magnitude too large, and -1.4 and 2.4 are both well-behaved (though starting to show significant errors).
Actually, this makes sense. The system of equations we solved was anchored at two locations, not one as with a traditional power series: it was set up to give exact values at x=0 and x=1, not only for the y value but for the first k derivatives as well. We could do a polynomial shift to center the power series at x=1, and we'd probably observe a similar root test. (Note to self: test this.) Therefore, there could be a radius around both points. I'm not sure whether that would imply a figure-8 shape or an ellipse with two foci; worth testing. (The 150-term graph on page 18 of Andrew's paper seems to suggest a figure-8 type of shape, though the image didn't compress well, so I can't make out any labels for reference.)
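The "polynomial shift" mentioned above, recentering a truncated power series from 0 to 1, can be done with the standard in-place Taylor-shift (repeated synthetic division). A minimal sketch, on a toy polynomial rather than the actual slog series:

```python
# Sketch: Taylor shift of a truncated power series via synthetic division.
def taylor_shift(coeffs, h):
    """Given p(x) = sum a_k * x**k, return the coefficients of p(x + h),
    i.e., the same polynomial recentered at -h."""
    a = list(coeffs)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 2, i - 1, -1):
            a[j] += h * a[j + 1]
    return a

# Example: p(x) = 1 + 2x + 3x^2 recentered at x = 1:
# p(x + 1) = 6 + 8x + 3x^2
shifted = taylor_shift([1, 2, 3], 1)   # [6, 8, 3]
```

Applying this to the slog coefficients and rerunning the root test would be the experiment the note-to-self describes; shifting by -1 recovers the original coefficients exactly, which makes a handy sanity check.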

However, I'm basing all this off observations made from the 400-term series. Perhaps it's just a coincidence. More testing needed...
~ Jay Daniel Fox
#15
The more I think about it, the more I realize that the radius should act like a normal radius (i.e., be a single circle). The fact that the series seems to converge well outside the radius is an artifact of the truncation of the series. In other words, the partial sums act divergently, but when we reach the last term, the series "magically" converges over an extended range around z=1.

However, the infinite series (if it exists, which I agree with Andrew seems likely) has no last term, so this won't happen, at least not in the traditional sense. Therefore, to approximate the true radius of convergence, we should look at the behavior of the penultimate partial sum of any particular truncation.
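One way to read the penultimate-partial-sum idea: compare the last two partial sums of a truncation, and treat the region where they disagree wildly as outside the reliable radius, since there the final term is doing all the work. A sketch with a stand-in series (log(1+x), radius 1) rather than the slog coefficients:

```python
# Sketch: where the final term dominates, apparent convergence of the
# truncation is an artifact. coeffs = 40-term truncation of log(1+x).
coeffs = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 41)]

def truncated_eval(a, x):
    return sum(ak * x ** k for k, ak in enumerate(a))

def last_term_gap(x):
    """|final partial sum - penultimate partial sum| = |a_N| * x**N."""
    return abs(truncated_eval(coeffs, x) - truncated_eval(coeffs[:-1], x))

inside = last_term_gap(0.5)    # tiny: truncation is trustworthy here
outside = last_term_gap(1.5)   # huge: the last term dominates the sum
```

For this stand-in the gap blows up right around |x| = 1, the true radius, which is the behavior the post proposes using as a diagnostic.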
~ Jay Daniel Fox
#16
jaydfox Wrote:The fact that the series seems to converge well outside the radius is an artifact of the truncation of the series. In other words, the partial sums act divergently, but when we reach the last term, the series "magically" converges over an extended range around z=1.

Yes, I also thought that this was the explanation for the phenomenon.

Quote:Therefore, we should look at the behavior of the penultimate partial sum of any particular truncation to approximate the true radius of convergence.
I don't get what you mean; care to explain?

PS: Anyway, nice artwork, your pictures :) Though Andrew doesn't seem to have much time at the moment to appreciate them...
#17
Thank you Jay :) I like the present.

Andrew Robbins