(09/23/2014, 11:01 PM)jaydfox Wrote: By the way, you mentioned that there is a large range of convergence. When I first analyzed the Taylor series, it seemed to have a radius of convergence of about 2.5-2.6, give or take. Keep in mind we're working in units of sqrt(log(log(z))+1), which corresponds to a radius of about 6.0-6.5 in units of log(log(z))+1. That number is really close to 2*pi, so I was initially tempted to assume that it had something to do with a singularity in a nearby branch.

However, when I looked at where the values of the function "explode", it was near large negative values of log(log(z))+1, which corresponds to bases just above 1, e.g., 1.0001.

Well, when I thought about it, this actually makes sense. For bases very close to 1, there will be a fixed point very close to the base, and another fixed point (on the other side of e) that is very large. By very large, we could be talking a thousand, a million, a billion, etc. This is what is constricting the radius. The radius of convergence should be infinite (I think), but the coefficients in the Taylor series decay so slowly that it behaves like a singularity when you approach base 1.
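
Just to make the fixed-point picture concrete, here's a quick sketch (my own illustration, not part of the original analysis) that finds both real fixed points of x -> b^x for a base just above 1. The lower fixed point is attracting under b^x itself, and the upper one is attracting under the inverse map log_b(x), so plain iteration suffices:

```python
import math

def fixed_points(b, iters=200):
    """Approximate the two real fixed points of x -> b**x for 1 < b < e**(1/e).

    The lower fixed point is attracting under x -> b**x; the upper one is
    attracting under the inverse map x -> log(x)/log(b).
    (Illustrative sketch only.)
    """
    lnb = math.log(b)
    lo = 1.0
    for _ in range(iters):
        lo = math.exp(lnb * lo)        # iterate x -> b**x
    hi = 10.0
    for _ in range(iters):
        hi = math.log(hi) / lnb        # iterate x -> log_b(x)
    return lo, hi

lo, hi = fixed_points(1.0001)
# lo sits just above 1; hi is on the order of 10^5 for this base
```

For b = 1.0001 the upper fixed point is already over a hundred thousand, which shows just how fast it runs away as the base approaches 1.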

I've only calculated the first 500 or so coefficients of the Taylor series for analysis, but I've found a method to roughly estimate the rate at which the coefficients decay, and it's very slow. In fact, the "effective radius" seems to climb up to about 2.6 before actually coming back down to 2.5 or less, and then eventually, slowly, climbing back up.
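
For anyone who wants to reproduce that kind of estimate: by "effective radius" I just mean the per-term root-test value r_n = |a_n|^(-1/n). A minimal sketch (the helper name is my own, and the sample series is just a sanity check, not the actual slog coefficients):

```python
def root_test_radius(coeffs):
    """Per-term root-test estimates r_n = |a_n|**(-1/n) for Taylor
    coefficients a_1..a_N.  (Hypothetical helper for illustration.)"""
    return [abs(a) ** (-1.0 / n) if a != 0 else float('inf')
            for n, a in enumerate(coeffs, start=1)]

# sanity check on 1/(1 - x/2) = sum (x/2)**n, whose radius of convergence is 2
est = root_test_radius([0.5 ** n for n in range(1, 50)])
```

Watching how the r_n values drift as n grows is exactly how the slow "climb up, dip down, climb back up" behavior shows itself.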

Okay, so I've calculated the Taylor series out to 1600 terms. It took almost two full days. The bulk of the time (over 99%) was spent on the series reversion. I'm using SAGE/python to do the series reversion, but I checked the source code, and it's using the PARI library to do reversion. So the SAGE code and the PARI code should take just about as much time, for a given level of precision. And as I mentioned before, the minimum precision during the reversion step is dictated by the number of terms in the series.
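
For reference, the reversion itself is just Lagrange inversion: if f(x) = x + a_2 x^2 + ..., the compositional inverse g has g_n = (1/n) [x^(n-1)] (x/f(x))^n. Here's a self-contained exact-arithmetic sketch of that formula (my own toy code; PARI's serreverse does the same job, but far faster and at working precision):

```python
from fractions import Fraction

def ps_mul(a, b, N):
    # truncated product of power series given as coefficient lists
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def ps_inv(a, N):
    # multiplicative inverse of a power series with a[0] != 0
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for n in range(1, N):
        s = sum(a[k] * b[n - k] for k in range(1, n + 1) if k < len(a))
        b[n] = -s / a[0]
    return b

def reversion(f, N):
    """Compositional inverse of f(x) = x + a2*x^2 + ... via Lagrange
    inversion: g_n = (1/n) * [x^(n-1)] (x/f(x))^n."""
    h = ps_inv(f[1:], N)                           # h = x/f(x)
    g = [Fraction(0)] * (N + 1)
    hp = [Fraction(1)] + [Fraction(0)] * (N - 1)   # h^0
    for n in range(1, N + 1):
        hp = ps_mul(hp, h, N)                      # now h^n
        g[n] = hp[n - 1] / n
    return g

# f(x) = x - x^2 reverts to the Catalan series x + x^2 + 2x^3 + 5x^4 + ...
g = reversion([Fraction(0), Fraction(1), Fraction(-1)], 6)
```

This naive version is O(N^3) in the number of terms (on top of the bignum cost), which is why reversion dominates the runtime at 1600 terms.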

Anyway, I should have trusted my gut about the 2*pi thing. The root test is still limited to about 2.51, or really close to sqrt(2*pi). I did an analysis of where the function "blows up", and found something interesting. In terms of absolute value, the function blows up in the direction of theta=pi/2, e.g., sqrt(2*pi)*I = sqrt(-2*pi). However, there is no obvious singularity in that direction.

Instead of looking at where the absolute value is the largest, I looked for anything strange, and found it at odd multiples of pi/4. So at theta=pi*(2*k+1)/4, e.g., sqrt(2*pi)*(sqrt(1/2)+sqrt(1/2)*I), there is a singularity. It's not an "infinite" singularity. Instead, it turns out to be the location of base eta, in other branches. Once I realized this, it was like, duh! Why didn't I think of that! You see, the value is well-defined at base eta, unlike, say, base 0 or 1. But it's an algebraic singularity, because there are pairs of fixed points that merge as we approach that base.

Remember, we're working with sqrt(ln(ln(base))+1). Well, consider base eta. The first logarithm is 1/e. The second iterated logarithm is -1. Add 2*pi*I (to move to the next branch) and then add 1, and you're left with 2*pi*I, so the working variable is sqrt(2*pi*I). We could just as easily have added -2*pi*I, and the sqrt itself has two branches, so this gives us all four possibilities: sqrt(2*pi)*sqrt(1/2)*(+/-1 +/- I) = sqrt(pi)*(+/-1 +/- I)
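
That arithmetic is easy to verify numerically. A quick check (my own snippet) that base eta really lands at radius sqrt(2*pi), angle pi/4 in the working variable:

```python
import cmath
import math

eta = math.e ** (1 / math.e)            # base eta = e^(1/e)
w = cmath.log(cmath.log(eta)) + 1       # principal branch: ln(ln(eta)) + 1 = 0
s = cmath.sqrt(w + 2j * math.pi)        # shift by +2*pi*I, then take the sqrt
# s sits at radius sqrt(2*pi) ~ 2.5066 and angle pi/4, as claimed
```

And |s| = sqrt(2*pi) ~ 2.5066 is exactly the radius the root test kept running into.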

This means that, without analytically continuing the function, we will be limited to a radius of convergence of about 2.5. For reference, sqrt(2*pi) is about 2.506628...

Now, I haven't put much thought yet into how to analytically extend the function. However, just to get something to work with, we could calculate a bunch of fixed points for a list of bases, and then use that information to come up with an approximate Taylor series. This boils down to either polynomial interpolation or least squares regression. You can get both at the same time if you use a discrete Fourier transform, or DFT (use a list of equally spaced points on the circumference of a circle). I believe this is what Sheldon refers to as a Cauchy integral. Being simultaneously a polynomial interpolation and a least squares fit is what makes it so useful.

Another benefit of a DFT (compared to, say, Lagrange or Newton interpolation) is that it uses considerably less memory, and it's quite a bit faster if you use an FFT (Fast Fourier Transform). I'll see what I can put together over the next few days.
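
To show what I mean by the DFT-as-Cauchy-integral idea, here's a minimal sketch (the helper name and the exp(z) sanity check are mine, not the actual slog computation): sample the function at N equally spaced points on a circle of radius r and DFT the samples to recover approximate Taylor coefficients.

```python
import cmath
import math

def taylor_via_dft(f, N, r):
    """Approximate Taylor coefficients a_0..a_(N-1) of f by sampling N
    equally spaced points on the circle |z| = r and applying a DFT:
        a_k ~ (1/(N * r**k)) * sum_j f(r*e^(2*pi*i*j/N)) * e^(-2*pi*i*j*k/N)
    This is the discretized Cauchy integral.  (Illustrative sketch; an FFT
    would replace the inner sum for speed.)"""
    samples = [f(r * cmath.exp(2j * math.pi * j / N)) for j in range(N)]
    coeffs = []
    for k in range(N):
        s = sum(samples[j] * cmath.exp(-2j * math.pi * j * k / N)
                for j in range(N))
        coeffs.append(s / (N * r ** k))
    return coeffs

# sanity check against exp(z), whose k-th coefficient is 1/k!
a = taylor_via_dft(cmath.exp, 32, 1.0)
```

The same machinery, fed with fixed-point data for bases sampled on a circle in the sqrt(ln(ln(base))+1) variable, would give the approximate Taylor series I'm after.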

~ Jay Daniel Fox