Cheta with base-change: preliminary results
#1
As has been discussed elsewhere, my change of base formula allows one to find fractional iterations of exponentiation for real bases greater than eta, provided one sticks to the reals.

Well, in order to find complex values, I decided there are two equally valid approaches. One is to create a very small (as in spacing) one-dimensional grid of points, with which one can approximate derivatives and hence create a power series expansion at a given real point. From there, one can then find complex iterations.
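A minimal Sage sketch of this first approach, with toy sizes and a stand-in f for the function actually being sampled (the real calculation uses far more points, tighter spacing, and much higher precision):

Code:
# Toy sketch: sample a tiny grid, fit the interpolating polynomial,
# and read the Taylor coefficients off it after recentering at x0.
# f is a placeholder; swap in the function actually being sampled.
R = RealField(512)
P = PolynomialRing(R, 'x')
f = lambda t: exp(t)
h = R(2)**(-20)                          # grid spacing
N = 8                                    # N+1 points -> terms 0..N
x0 = R(1)
xs = [x0 + (k - N//2)*h for k in range(N + 1)]
p = P.lagrange_polynomial(zip(xs, [f(x) for x in xs]))
q = p(x0 + P.gen())                      # recentre at x0
taylor = [q[k] for k in range(N + 1)]    # Taylor coefficients at x0

With the power series in hand, complex iterations come from evaluating it at complex z inside its radius of convergence.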

The other approach is to create a rather large grid (interval of unit length), and then use an appropriate slog function to get back another unit interval with a slightly perturbed grid. Fourier analysis will then give directly the cyclic "wobble" function. Combining the wobble function with the inverse of the aforementioned slog, one can get values for complex iterations.
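A rough sketch of the second approach, in plain Python, with a made-up 1-periodic perturbation standing in for the actual slog-perturbed grid:

Code:
# Toy sketch: recover the cyclic "wobble" from the perturbed grid by a DFT.
# ss is a stand-in here; in practice it would come from applying the slog.
import math, cmath
N = 64
ts = [float(k)/N for k in range(N)]                 # uniform unit-interval grid
ss = [t + 0.01*math.sin(2*math.pi*t) for t in ts]   # toy perturbed grid
w = [ss[k] - ts[k] for k in range(N)]               # samples of the wobble
# Fourier coefficients c_n, with w(t) ~ sum_n c_n * exp(2*pi*i*n*t)
c = [sum(w[k]*cmath.exp(-2j*math.pi*n*k/N) for k in range(N))/N
     for n in range(N)]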

Since I'm not sure how much accuracy the second method needs, and my best numerical solution to Andrew's slog is accurate to perhaps 25 decimal digits on the real line, I decided to try the first approach: create a tiny grid and extract a power series from it.

My initial results are rather bizarre. I'm slowly building up until system memory becomes a limiting factor. So far, I've gotten the first 100 terms or so (128, but not sure about the last few). A root test seems to indicate that the radius of convergence is at most 0.5, perhaps even smaller.
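The root test here is just the usual estimate 1/|a_k|^(1/k) -> r; a quick way to eyeball it (toy coefficients with known radius 0.5 shown below; swap in the computed ones):

Code:
# Root test: 1/|a_k|^(1/k) should settle near the radius of convergence r.
a = [2.0**k for k in range(129)]    # toy coefficients with r = 0.5
est = [abs(a[k])**(-1.0/k) for k in range(1, len(a)) if a[k] != 0]
print est[-5:]                      # tail hovering near 0.5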

This is frustrating, because this means I can't even perform an integer iteration! I might be able to perform a half iteration in both directions and then an integer iteration from the lower to see if it gets me to the upper. But that's not terribly helpful.

The function is continuous on the real line, so I'm led to assume that there is a singularity somewhere just off the real line. But until I have a lot more terms, I probably won't be able to effectively track it down. Also, the root test was still slowly worsening before I ran out of terms, so the radius might even be less than 0.5!

Anyway, just a heads up. I'll post more details when I can confirm the accuracy of my results.

For the curious, I first found my change of base constant for x=1. I found that log(log(log(exp(exp(exp(1)))+1)+1)+1) is about 1.33030160653615. I then iteratively calculated log(x+1), performing 500 iterations. Using this value (about 0.00400318938047073), I then generated a 150-term interpolation function (using an FMA matrix based on Andrew's scripts). Using the interpolation function, I created a 129-point grid with spacing 2^(-108).
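Those two numbers can be reproduced directly in Sage (a quick check, not the actual script):

Code:
# Reproducing the quoted values at high precision.
R = RealField(1024)
c = log(log(log(exp(exp(exp(R(1)))) + 1) + 1) + 1)
print c    # ~1.33030160653615
x = c
for k in range(500):
    x = log(x + 1)
print x    # ~0.00400318938047073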

For each point in the grid, I then iteratively applied exp(x)-1, for 500 iterations. I then calculated log(log(log(exp(exp(exp(x)-1)-1)-1))), which gets me back to 1.0 for the middle point of the grid.
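Continuing the snippet above, the round trip checks out numerically (given enough working precision):

Code:
# Iterate back up, then undo the base-change wrapper.
for k in range(500):
    x = exp(x) - 1       # returns x to ~1.33030...
print log(log(log(exp(exp(exp(x) - 1) - 1) - 1)))    # ~1.0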

I then used the 129 points to calculate the first 128 derivatives. The last few derivatives have large errors (order of h^2, h^4, etc., where h is the grid spacing). I'm running a second calculation with a smaller grid spacing and more iterations, just to rule out numerical inaccuracies.

Assuming the accuracy is good, I'll try to get more terms, perhaps twice as many if I can manage it. I have to use tens of thousands of bits of precision, depending on the grid size and spacing. My rule of thumb is that I need enough bits for the grid size times the grid spacing in bits (128 * 108 bits in the previous example) plus more for desired precision (an extra 20% or more, to be safe).
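As a worked example of that rule of thumb:

Code:
# Rule-of-thumb working precision for the run described above.
grid_terms   = 128
spacing_bits = 108
base_bits    = grid_terms * spacing_bits   # 13824 bits
work_bits    = int(base_bits * 1.2)        # 16588 bits, with a 20% margin
print work_bits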
~ Jay Daniel Fox
#2
(08/11/2009, 05:16 PM)jaydfox Wrote: Well, in order to find complex values, I decided there are two equally valid approaches. One is to create a very small (as in spacing) one-dimensional grid of points, with which one can approximate derivatives and hence create a power series expansion at a given real point. From there, one can then find complex iterations.

So the way you explained to Sheldon - counting how often the exp winds around 0 and applying the corresponding branch logarithm - does not turn out to be useful?
#3
(08/11/2009, 06:45 PM)bo198214 Wrote:
So the way you explained to Sheldon - counting how often the exp winds around 0 and applying the corresponding branch logarithm - does not turn out to be useful?
Well, useful in a theoretical sense, I'm sure, but in practice, the number of windings quickly becomes super-exponential! (Gee, I didn't see that coming... ;) ) So from a practical standpoint, keeping track of windings probably is not very useful. But I can certainly give it a try and see what happens!

Er, after I exhaust the current approach I'm following... I'm so far only using about 200 MB of RAM, and I have about 1.2 GB to work with (new machine, not as much RAM), so I think I can get at least 256 terms, more if I can get away with using a larger grid spacing without sacrificing too much accuracy.
~ Jay Daniel Fox
#4
(08/11/2009, 05:16 PM)jaydfox Wrote: I then used the 129 points to calculate the first 128 derivatives. The last few derivatives have large errors (order of h^2, h^4, etc., where h is the grid spacing). I'm running a second calculation with a smaller grid spacing and more iterations, just to rule out numerical inaccuracies.
I ran the second test, and it looks like the first run was pretty accurate. For the second test, I used 1000 iterations instead of 500, to minimize any imprecision in the interpolation polynomial (I'm sure even 200 iterations would suffice, but I'm being conservative here).

I used a grid spacing of 2^(-128), or 20 bits smaller; since the leading error term is of order h^2, that buys about 40 bits more accuracy for the 128th derivative, and considerably better accuracy for lower order derivatives.

When I compare the first results with the second, the 128th terms of the Taylor series differ by about 10^-20, which is pretty incredible, considering that the 128th term is about 8.92033858123040e38. Thus, the accuracy was about one part in 10^59 for the 2^(-108) spacing, and thus considerably better for the 2^(-128) spacing.

Accuracy increases for the lower-order terms, so that most of the terms had about 300 decimal digits of accuracy when I was using the 2^(-108) spacing.

Unfortunately, this means my observation about the apparent radius of convergence was correct. It wasn't a fluke of the calculations. I'll try to identify a potential singularity, but assuming I can't (only 128 terms, after all), I'll try to get more terms.

I'll attach the coefficients shortly, if anyone is curious to see them.
~ Jay Daniel Fox
#5
(08/11/2009, 05:16 PM)jaydfox Wrote: For the curious, I first found my change of base constant for x=1. I found that log(log(log(exp(exp(exp(1)))+1)+1)+1) is about 1.33030160653615. I then iteratively calculated log(x+1), performing 500 iterations. Using this value (about 0.00400318938047073), I then generated a 150-term interpolation function (using an FMA matrix based on Andrew's scripts). Using the interpolation function, I created a 129-point grid with spacing 2^(-108).

If you have the interpolating polynomial, can't you compute the derivatives directly from it (instead of computing them numerically, as you seem to be doing at the moment)?
#6
(08/11/2009, 08:11 PM)bo198214 Wrote: If you have the interpolating polynomial, can't you compute the derivatives directly from it (instead of computing them numerically, as you seem to be doing at the moment)?
Sadly, no. If you will recall, terms calculated with the FMA will initially seem to converge, but then diverge to infinity. However, the terms can be calculated by iterative means, converging in a well-defined manner, so the sums exist. I suppose this might be a form of Ramanujan summation, but I'm not savvy enough to properly analyze this.

I suppose I could find the exact coefficients, but I must do so numerically, and this involves performing the same set of mathematical acrobatics: calculate a grid and numerically extract the derivatives. I can only get a few hundred, and I would need thousands to be practical, which unfortunately would require a supercomputer. So I settle for interpolating when z is rather small (about 0.004 after 500 iterations), then iterate back up to where I need it to be. This is, of course, time-consuming, but at least it's practical.
~ Jay Daniel Fox
#7
(08/11/2009, 08:45 PM)jaydfox Wrote: If you will recall, terms calculated with the FMA will initially seem to converge,

If I knew what FMA means, perhaps I would recall ;)
#8
(08/11/2009, 08:03 PM)jaydfox Wrote: Unfortunately, this means my observation about the apparent radius of convergence was correct. It wasn't a fluke of the calculations. I'll try to identify a potential singularity, but assuming I can't (only 128 terms, after all), I'll try to get more terms.
It seems that 128 terms is enough to at least get a rough idea of where the singularities are (a conjugate pair). Here are graphs I computed, using z with a fixed magnitude and varying the argument from -pi to pi (one full circle of fixed radius in the complex plane).

Note that I added each new graph to the previous, so you can see that it's fairly stable around most of the circle.

First up is sexp(z), using the first 128 terms, where the magnitude of z is 0.455, and the argument is varied through -pi to pi:
[attached graph]

Next up is the same graph, plus the graph of sexp(z), with mag(z) equal to 0.460:
[attached graph]

Next I add the graph with mag(z) equal to 0.465. Note that it starts to get a little "bumpier":
[attached graph]

With this next one, the bumpiness is much more obvious:
[attached graph]

Finally, the bumpiness turns into near chaos:
[attached graph]

It just gets worse after that, and all over the place, so it's not insightful to go much further than 0.475. With more terms to the power series, hopefully the picture will be a little clearer.
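For anyone wanting to reproduce these, a sketch using the 128-term polynomial attached in the next post:

Code:
# Trace the truncated series around a circle |z| = r, as in the graphs above.
import math
sexp = load('jay_sexp_pol_128terms_1024bits.sobj')
APC = ComplexField(1024)
def circle(r, n=512):
    return [sexp(APC(r*math.cos(2*math.pi*k/n), r*math.sin(2*math.pi*k/n)))
            for k in range(n)]
vals = circle(0.455)    # repeat for 0.460, 0.465, ... to watch the bumps grow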
~ Jay Daniel Fox
#9
I'm attaching SAGE variables, for those who use SAGE. The zip file contains two sobj files, which are SAGE objects. I'm also attaching a text file with the coefficients, one per line, for those who don't have SAGE. (Note: I'm using SAGE 4.0.1, a slightly outdated version.)

First, the file jay_sexp_vec_128terms_1024bits.sobj contains the coefficients in a vector(RealField(1024), 129). These are the coefficients of the Taylor series, not the derivatives directly (i.e., each derivative has already been divided by its factorial).

The file jay_sexp_pol_128terms_1024bits.sobj is a polynomial over the ring PolynomialRing(ComplexField(1024), 'z'), with 129 terms (128 not including the constant). To use it, simply load it into a variable, e.g., sexp, then use it like a function:

Code:
sexp = load('jay_sexp_pol_128terms_1024bits.sobj')   # the polynomial object
APC = ComplexField(1024)                             # matching precision
print sexp(APC(0.2, 0.1))                            # evaluate at z = 0.2 + 0.1*i

To convert the vector into a polynomial, you could do the following (there's probably an easier way, but the documentation is a bit sparse at times):

Code:
svec = load('jay_sexp_vec_128terms_1024bits.sobj')         # coefficient vector
APC = ComplexField(1024)
APCPol = PolynomialRing(APC, 'z')
z = APCPol('z')
sexp = sum(svec[kk] * z**kk for kk in xrange(len(svec)))   # rebuild the polynomial

The text file jay_sexp_coeffs_128terms_256bits.txt contains the coefficients in a text format, one per line. I truncated it to 256 bits, so that the decimal expansions would be a reasonable length.


Attached Files
.zip   jay_sexp_128terms_1024bits.zip (37.71 KB)
.txt   jay_sexp_coeffs_128terms_256bits.txt (10.31 KB)
~ Jay Daniel Fox
#10
(08/11/2009, 09:12 PM)jaydfox Wrote: .....
First up is sexp(z), using the first 128 terms, where the magnitude of z is 0.455, and the argument is varied through -pi to pi:
Is this the sexp_e(z) base-change formula from sexp_eta (cheta), or sexp(z) from Andrew's matrix? Or sexp(z) base eta?

Earlier in this post you wrote "As has been discussed elsewhere, my change of base formula allows one to find fractional iterations of exponentiation for real bases greater than eta, provided one sticks to the reals. Well, in order to find complex values...."

That seems to indicate this is sexp_e(z) computed from sexp_eta via base change. Assuming that this is sexp_e(z) in the complex plane, how large a value of "k" are you using for the iterated exponent/logarithms?

Oh, I get it: you're using the well-defined base change function at the real axis only, for sexp_e(z). And then you're using those real values to generate the Taylor series? And graphing that Taylor series in the complex plane?
- Shel