There is another thread that seems to focus on pentation (Jay's discussion), but I thought I'd talk about infinite pentation: pentation with an infinite hyper-5-exponent (the pentational analog of a tetrational "height"), or equivalently the limit of iterated tetrational functions.

Since we know quite a bit about tetration, I think it's time we tackled understanding this function. But since tetration is still not an exact science (more of an art), it would be nice if we only had to consider one tetration (as opposed to an infinite number of them), so we can write infinite pentation as the solution x to the relation

\( {}^{x}a = x \)

just as you can with infinite tetration (\( a^x = x \)). At this point, it is obvious that we can solve for a using the super-root, which gives

\( a = \sqrt[x]{x}_s \)

which means x-superroot-x, \( \sqrt[x]{x}_s \), is the inverse function of infinite pentation. Infinite pentation grows so quickly that it would probably resist direct interpolation, so I will only consider x-superroot-x in the remainder of this post. We know the analytic properties of the second super-root \( \sqrt[2]{x}_s = e^{W(\ln x)} \) because of the Lambert W function, and we also know how to calculate all integer super-roots numerically, by solving for a to a given accuracy/precision. So these points are exact in the sense that we can find 500 digits in a few seconds. I have attached several hundred of these points below.
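As a rough illustration of the numerical side (not the 500-digit computation mentioned above, which would need arbitrary-precision arithmetic), here is a minimal double-precision sketch that computes the n-th super-root by bisection; the helper names `tetrate` and `super_root` are my own for this example:

```python
import math

def tetrate(a, n):
    """Power tower of height n: a^a^...^a with n copies of a."""
    result = a
    for _ in range(n - 1):
        try:
            result = a ** result
        except OverflowError:
            return math.inf  # tower blew past float range; treat as +infinity
    return result

def super_root(x, n, tol=1e-12):
    """n-th super-root of x: the a > 1 with tetrate(a, n) == x, by bisection.

    Works because tetrate(a, n) is strictly increasing in a for a > 1.
    """
    lo, hi = 1.0, max(2.0, x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tetrate(mid, n) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity checks: 2^2 = 4 and 2^(2^2) = 16
print(super_root(4.0, 2))   # close to 2.0
print(super_root(16.0, 3))  # close to 2.0
```

For the 500-digit points, one would swap the floats here for an arbitrary-precision type (e.g. mpmath's mpf) and tighten the tolerance accordingly.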

If we interpolate these points, one would expect the interpolation to diverge for infinite pentation and for x-superroot-x, but the interpolating polynomials through these integer points do seem to converge, however strange that may be. It is strange that these polynomials converge, because there must be singularities at 0 and -1: since \( {}^{0}a = 1 \) for all a, \( \sqrt[0]{1}_s \) is indeterminate and \( \sqrt[0]{z}_s \) is undefined for all other z (and likewise at -1, since \( {}^{-1}a = 0 \) for all a). So even though a power series about x=1 would have a radius of convergence of at most 1, adding points outside that radius of convergence to the interpolating function seems to improve the approximation.
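For concreteness, this is the kind of Lagrange interpolation I mean; the sample points below are an arbitrary polynomial stand-in for the attached {x, a} data, just to show the mechanics:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Stand-in data: the parabola y = x^2 + 1 sampled at integer x,
# playing the role of the {x, a} super-root points.
pts = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]
print(lagrange(pts, 1.0))  # reproduces the node value 2.0
print(lagrange(pts, 3.0))  # extrapolates to 10.0
```

Adding more nodes to `pts` regenerates the whole polynomial, which is exactly the "interpolating polynomial of these integer points" whose convergence is at issue.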

I have attached a graph below. The dotted line indicates that it seems that

\( \lim_{x\to\infty} \sqrt[x]{x}_s = e^{1/e} \)

which I seem to remember reading somewhere... and the maximum is not at e (as it is with x-root-x), but at the point marked on the graph. This has also been mentioned before, in nuninho1980's comment, so I tend to think it can't be a mistake. Nothing in this method requires either regular or natural iteration, since I used only Lagrange interpolation on the known integer points. The points in the attachment are all of the form {x, a}, where all of the x are positive integers. So the interpolation gathered with this method only applies for (x > 0.5) or so, since the closer x gets to zero, the more the singularities mess things up. It seems both regular and natural iteration would be useful in filling out the region (-2 < x < -1), because this is where the common real bases (of tetrational functions) usually have a real fixed point.
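The dotted-line limit can be spot-checked numerically: if \( \sqrt[x]{x}_s \to e^{1/e} \), then bisection values at increasing integer heights should creep down toward \( e^{1/e} \approx 1.44467 \) from above. A self-contained double-precision sketch (helper names are my own, not standard):

```python
import math

def tetrate(a, n):
    """Power tower of height n: a^a^...^a with n copies of a."""
    result = a
    for _ in range(n - 1):
        try:
            result = a ** result
        except OverflowError:
            return math.inf  # past float range; treat as +infinity
    return result

def super_root(x, n, tol=1e-12):
    """n-th super-root of x via bisection (tetrate is increasing in a > 1)."""
    lo, hi = 1.0, max(2.0, x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tetrate(mid, n) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

eta = math.exp(1 / math.e)  # e^(1/e), about 1.44467
for n in (5, 10, 20):
    # n-th super-root of n: these values decrease toward eta from above
    print(n, super_root(float(n), n))
```

This is only a consistency check at small heights, of course, not a proof of the limit.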

Andrew Robbins
