09/23/2014, 11:01 PM

(02/25/2012, 06:35 AM)sheldonison Wrote: (...) I needed a way to figure out both primary fixed points for a generic complex tetration base, and then to figure out if the base was inside or outside the Shell-Thron region, which tells whether or not one of the fixed points is attracting. The formula to generate the base from the fixed point is pretty well known, but what about the function's inverse, if you don't have access to the Lambert W function? What if you want to get the fixed point from the base, using something other than brute force? For this Taylor series formula, for a base b, z is the input to the Taylor series.

z = sqrt(log(log(b)) + 1). In this formula, z=0 corresponds to base eta = e^(1/e). Then the two primary fixed points are calculated by putting both the positive and negative square roots into the Taylor series, which is accurate to >32 decimal digits for 1.05<=b<=25 or so, and gives the equation for the two fixed points.

Since I don't have access to the Lambert W function (it is not a part of pari-gp), I think this Taylor series is very helpful. This Taylor series equation was inspired by the observation that you need to loop around eta = e^(1/e) twice to return to your starting point, where the fixed point is slowly modified as you loop around eta. The log(log(b)) part of the equation was to maximize the radius of convergence, since there is a singularity in the fixed point at b=1. The log(log(b)) moves the singularity at b=1 out to infinity, which helps this Taylor series converge over a very large range of bases, probably for any real base > 1. I haven't had time to figure out the total range of convergence yet, but it is surprisingly large, and includes negative bases. For any base b with abs(b) < 4000, except in the immediate vicinity of b=1 and b=0, the Taylor series gives double-precision-accurate results for the fixed point.

- Sheldon
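The relationships Sheldon describes are easy to check numerically. Here is a quick illustration (my own sketch, not Sheldon's code) of the base-from-fixed-point formula and of the fact that z = sqrt(log(log(b)) + 1) vanishes exactly at base eta:

```python
import math

def base_from_fp(fp):
    """The well-known direction: recover the base from a fixed point."""
    return fp ** (1.0 / fp)

# Base sqrt(2) has the two primary (real) fixed points 2 and 4:
b = base_from_fp(2.0)
assert abs(b - base_from_fp(4.0)) < 1e-15     # 2^(1/2) == 4^(1/4)
assert abs(b ** 2 - 2.0) < 1e-12 and abs(b ** 4 - 4.0) < 1e-12

# z = sqrt(log(log(b)) + 1) is zero at base eta = e^(1/e), where the
# two fixed points merge at fp = e:
eta = math.e ** (1.0 / math.e)
assert abs(math.log(math.log(eta)) + 1.0) < 1e-14
```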

Hey Sheldon, I see we found the same Taylor series, though I couldn't tell what algorithm you used to derive it. I have an iterative fixed point finder with quadratic convergence, which I posted perhaps 6-7 years ago. I didn't have a good method of "seeding" the iterative solver, so about 4-6 months ago I decided to come up with one. My initial plan was just to do a polynomial regression to give me decent estimates, but as I was playing around with it, I noticed the same thing you did, that near base eta, the fixed points follow a sort of sqrt-ish behavior. I derived a method to calculate the Taylor series to arbitrary precision, which I initially coded in SAGE/python.
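Jay's solver itself isn't shown in this post, but the idea of a quadratically convergent fixed-point finder can be sketched with a plain Newton iteration (my illustration; the seed determines which fixed point you land on):

```python
import cmath

def fixed_point(b, seed, tol=1e-14, max_iter=60):
    """Newton's method for a fixed point of x -> b**x, i.e. a root of
    f(z) = z*log(b) - log(z).  Convergence is quadratic near a root;
    the seed selects which fixed point is found."""
    lnb = cmath.log(b)
    z = complex(seed)
    for _ in range(max_iter):
        step = (z * lnb - cmath.log(z)) / (lnb - 1.0 / z)
        z -= step
        if abs(step) < tol * abs(z):
            return z
    raise ArithmeticError("Newton iteration did not converge")
```

For base sqrt(2), real seeds on either side of e find the two real fixed points 2 and 4; for a base above eta, a complex seed finds one member of the primary conjugate pair.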

Then, after doing all that work, I found this thread.

Anyway, after that, I was working on making improvements to your kneser.gp / tetcomplex.gp code (which I'll get around to posting some day), so I ported the code to PARI/gp. In the process, I checked your coefficients, and the accuracy degrades pretty quickly. By the last few coefficients, they're not accurate to more than a few decimal places.

I've been meaning to post it here for a few months, but I got busy with other projects (most recently, the binary partition sequence). MorgothV8's recent thread on fixed points prompted me to finally post my fixed point code. If you're interested, the code is posted in that thread:

http://math.eretrandre.org/tetrationforu...60#pid7460

By the way, you mentioned that there is a large range of convergence. When I first analyzed the Taylor series, it seemed to have a radius of convergence of about 2.5-2.6, give or take. Keep in mind we're working in units of sqrt(log(log(b))+1), so that corresponds to a radius of about 6.0-6.5 in units of log(log(b))+1. That number is really close to 2*pi, so I was initially tempted to assume that it had something to do with a singularity in a nearby branch.

However, when I looked at where the values of the function "explode", it was near large negative values of log(log(b))+1, which corresponds to bases just above 1, e.g., 1.0001.

Well, when I thought about it, this actually makes sense. For bases very close to 1, there will be a fixed point very close to the base, and another fixed point (on the other side of e) that is very large. By very large, we could be talking a thousand, a million, a billion, etc. This is what is constricting the radius. The radius of convergence should be infinite (I think), but the coefficients in the Taylor series decay so slowly that it behaves like a singularity when you approach base 1.

I've only calculated the first 500 or so coefficients of the Taylor series for analysis, but I've found a method to roughly estimate the rate at which the coefficients decay, and it's very slow. In fact, the "effective radius" seems to climb up to about 2.6 before actually coming back down to 2.5 or less, and then eventually, slowly, climbing back up.

Consider that for a base just slightly larger than 1, the upper fixed point will be very large.

For example, for a base with an upper fixed point of 2^12 (4096), the value of abs(sqrt(ln(ln(b))+1)) is about 2.28. For an upper fixed point of 2^20 (1048576), the value is about 3.20. Think about how many terms it would take, at a radius of 3.20, to be able to encode a value of about a million. A hundred thousand at least, I would think. So even with a hundred thousand terms, the function will behave as if it has a radius of convergence of 3.5 or less. At least, that's how I perceive it, intuitively. My complex analysis isn't sharp enough to do a formal analysis.
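Those two numbers are easy to reproduce. For a base b = fp^(1/fp), the quantity ln(ln(b)) + 1 simplifies to ln(ln(fp)) - ln(fp) + 1, so (a small check, using my own helper name):

```python
import math

def z_magnitude(fp):
    """|sqrt(ln(ln(b)) + 1)| for the base b = fp^(1/fp) with upper fixed
    point fp; for that b, ln(ln(b)) + 1 equals ln(ln(fp)) - ln(fp) + 1."""
    u = math.log(math.log(fp)) - math.log(fp) + 1.0
    return math.sqrt(abs(u))

print(z_magnitude(2.0 ** 12))   # ~2.28
print(z_magnitude(2.0 ** 20))   # ~3.20
```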

Anyway, the good thing is, on the top side, a radius of convergence of 2.5 implies that we can find fixed points for bases up to about exp(exp(2.5^2-1)), which is about 10^80 (rounding conservatively). That's provided we use enough terms in the Taylor series. If we conservatively stay within a radius of 2, that still gives us a range up to exp(exp(2^2-1)) = e^e^3 = 5.3 x 10^8, down to about 1.00676.
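Inverting u = log(log(b)) + 1 gives the base range directly: a radius r in units of sqrt(u) covers u in [-r^2, r^2], hence bases from exp(exp(-r^2 - 1)) up to exp(exp(r^2 - 1)). A quick check of the figures above (my own helper name):

```python
import math

def base_range(r):
    """Range of real bases within radius r in units of sqrt(log(log(b)) + 1):
    u = log(log(b)) + 1 spans [-r*r, r*r], so b spans
    [exp(exp(-r*r - 1)), exp(exp(r*r - 1))]."""
    return math.exp(math.exp(-r * r - 1.0)), math.exp(math.exp(r * r - 1.0))

lo, hi = base_range(2.0)
print(lo)   # ~1.00676
print(hi)   # ~5.3e8  (= e^e^3)
print(math.log10(base_range(2.5)[1]))   # ~82.8, the "about 10^80" upper end
```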

I haven't tried calculating anything that large yet, so I don't know how many terms are needed. I've played around with the small numbers, and they don't seem to blow up until about 1.001 or lower.

As for my algorithm, I thought I had a pretty TeX-ified description of it somewhere, but I can't find it. I at least had the presence of mind to put comments in my SAGE/python code (4-6 months ago), so I'll post those until I have time to TeX-ify it. What follows is just the comments; I started modifying the (SAGE/python) code separately from the comments, so I still need to merge the code and comments back together.

Code:

# The base is developed as a function of the fixed point (fp).
# base = B(fp) = fp^(1/fp)
# ln(ln(base)) + 1 = ln(ln(fp^(1/fp))) + 1
#                  = ln(ln(fp)) - ln(fp) + 1
# Next we calculate the derivatives at fp=e to develop a power
# series in fp. Given fp=e, then ln(ln(fp)) - ln(fp) + 1 = 0.
# The first derivative is: 1/(fp*ln(fp)) - 1/fp = 1/e - 1/e = 0.
# Therefore, the first non-zero term is the second order term.
# Since the first non-zero term is the second order term, we will
# take the square root of the power series.
# sqrt(ln(ln(base)) + 1) = sqrt(ln(ln(fp)) - ln(fp) + 1)
# Reversion of the right-hand series gives us the fixed point as a
# function of the base. However, note that the left-hand side was
# sqrt(ln(ln(base)) + 1), so the input to the reverted series is
# sqrt(ln(ln(base)) + 1) rather than the base itself.
# Finally, recall that we developed the original power series at fp=e,
# so add e back to the final answer.

One final thought. I did an analysis, and I found that precision decays at about 3 bits per term in the Taylor series. Therefore, to calculate 200 terms to 150 bits of precision, you need to start with about 150+3*200, or 750 bits, plus a little more just to be safe. This turns out to be easy in PARI/gp, because precision is measured in decimal digits. Each decimal digit of precision is about 3.3 bits. So I just added an extra digit of precision for each term.
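That precision bookkeeping is simple to write down. A small sketch, using the 3-bits-per-term figure from the paragraph above (working_digits is my own helper name):

```python
import math

BITS_PER_TERM = 3                 # observed loss per coefficient (from the post)
BITS_PER_DIGIT = math.log2(10)    # one decimal digit is ~3.32 bits

def working_digits(target_bits, terms):
    """Decimal digits of working precision needed so that `terms`
    coefficients still carry target_bits of precision at the end."""
    need = target_bits + BITS_PER_TERM * terms   # e.g. 150 + 3*200 = 750 bits
    return math.ceil(need / BITS_PER_DIGIT)

print(working_digits(150, 200))   # 226 digits, i.e. a bit over 750 bits
```

Adding one decimal digit per term, as described above, over-provisions slightly (3.32 bits per term against the ~3 bits lost), which supplies the "little more just to be safe".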

~ Jay Daniel Fox