# Tetration Forum

Full Version: base holomorphic tetration
Pages: 1 2 3 4
Actually, I think that the regular tetration at the lower fixed point is not continuable to bases $>e^{1/e}$.

Actually, I think the Shell-Thron region is the domain of holomorphy: every point on the boundary is singular. This became clear to me some days ago:

1. The regular iteration depends on the derivative of $\exp_b$ at the fixed point.
2. If $a$ is the fixed point, then the derivative is $\log(a)$.
3. The fixed point is given by $a = \exp(-W(-\log(b)))$, or $\log(a)=-W(-\log(b))$.
4. The problematic value of the derivative at the fixed point is $|\log(a)|=1$. In this case $\exp_b^{\circ t}$ is not analytic at $a$. I guess this implies that $b\mapsto\exp_b^{\circ t}(1)$ is not analytic at $b$.
5. $|\log(a)|=1$ holds exactly at those $b$ in the image of the unit circle under the inverse of $b\mapsto -W(-\log(b))$, i.e. under $z\mapsto\exp(z \exp(-z))$, which is just the boundary of the Shell-Thron region.
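Point 5 can be checked numerically. The sketch below (my illustration, not from the thread; the parametrization $z\mapsto\exp(z e^{-z})$ is from point 5 above) picks a point $z$ on the unit circle, forms the boundary base $b = \exp(z e^{-z})$, and verifies that $a = e^z$ really is a fixed point of $x\mapsto b^x$ with multiplier $\log(a) = z$ of modulus exactly 1:

```python
import cmath

def str_boundary_point(theta):
    """Point b on the Shell-Thron boundary: the image of z = e^{i*theta}
    (unit circle) under z -> exp(z * exp(-z)). The corresponding fixed
    point of exp_b is a = e^z, so the multiplier log(a) = z satisfies
    |log(a)| = 1 exactly."""
    z = cmath.exp(1j * theta)
    b = cmath.exp(z * cmath.exp(-z))
    a = cmath.exp(z)
    return b, a

# sanity check: a is a fixed point of x -> b^x, i.e. b^a = a,
# and the multiplier log(a) lies on the unit circle
b, a = str_boundary_point(0.7)
assert abs(b ** a - a) < 1e-12
assert abs(abs(cmath.log(a)) - 1) < 1e-12
```

For $|z| < 1$ the same map lands inside the STR and $|\log(a)| = |z| < 1$, i.e. the fixed point is attracting.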
Interesting that it would be a natural boundary (it reminds me of the situation with the Jacobi theta functions in the nome $q$: the unit circle, I think, is the natural boundary, delimited by a dense set of singularities). This would suggest that the STR and the region outside are two totally different domains, and there may not be a "true" tetrational that solves them both; or, if there is, it would not agree with the regular iteration. Which makes me wonder. There's this graph:

http://math.eretrandre.org/tetrationforu...926#pid926

which shows the possible region of convergence of Robbins' tetration, overlapping the STR a little. Could one test there, calculating it out really far, to see whether it does indeed disagree with the value of the regular iteration, as would be expected if this hypothesis were correct?

Then again, the hypothesis could also be incorrect. I'm a little dubious about point #4: how does $\exp^{t}_b(w)$ being non-analytic at $w = a$ imply that it is non-analytic in $b$ when $w = 1$? Can you do a graph of the derivative $\frac{\partial}{\partial b} \exp^{t}_b(1)$ for the regular iteration, in the same way that you did the graph of the regular iteration you posted here earlier?
I also noticed something in that paper where you describe the $g$-coefficients for the regular iteration. You have the iterating formula

$g_n = \frac{1}{{f_1}^n - f_1} \left(f_n {g_1}^n - g_1 f_n + \sum_{m=2}^{n-1} \left(f_m (g^m)_n - g_m (f^m)_n\right)\right)$.

where $g$ is the regular iterate ("reg").
But at n = 2, we have a sum from "2 to 1". How do you do that? Is it equal to the sum from 2 to 2, minus the value of the summand evaluated at 2 (like how the sum from 2 to 3 is that from 2 to 4 minus the value of the summand at 4)? Wouldn't that just be 0? If I assume so, then I get

$g_2 = \frac{\log(a)^2 \left(\log(a)^t\right)^2 - \log(a)^t \log(a)^2}{\log(a)^2 - \log(a)}$

which disagrees with what you gave in your earlier post here.

bo198214: closed one tex tag to make the post readable
(11/12/2009, 07:08 PM)mike3 Wrote: [ -> ]Then again, the hypothesis could also be incorrect. I'm a little dubious about point #4: how does $\exp^{t}_b(w)$ being non-analytic at w = a imply it non-analytic in *b* when w = *1*?

Yeah, indeed; perhaps I can produce some stronger arguments later.
For illustration: I was considering the limit formula. Only if $b$ is inside the STR is the fixed point attracting, and hence reachable from 1 by repeated iteration.
If the base crosses the boundary, one has to apply the limit formula for the inverse, and on the boundary the functions obtained by approximating the fixed point from inside and from outside are different (not continuations of each other).

Quote: Can you do a graph of the derivative $\frac{\partial}{\partial b} \exp^{t}_b(1)$ for the regular iteration, in the same way that you did the graph of the regular iteration you posted here earlier?

On the real axis you will not see anything special about the derivative I guess.

(11/12/2009, 07:37 PM)mike3 Wrote: [ -> ]I also noticed something in that paper where you describe the $g$-coefficients for the regular iteration. You have the iterating formula

$g_n = \frac{1}{{f_1}^n - f_1} \left(f_n {g_1}^n - g_1 f_n + \sum_{m=2}^{n-1} \left(f_m (g^m)_n - g_m (f^m)_n\right)\right)$.

where $g$ is the regular iterate ("reg").
But at n = 2, we have a sum from "2 to 1". How do you do that? Is it equal to the sum from 2 to 2, minus the value of the summand evaluated at 2 (like how the sum from 2 to 3 is that from 2 to 4 minus the value of the summand at 4)? Wouldn't that just be 0? If I assume so, then I get

Yes, it is 0, in the classical sense: an empty sum is 0.

Quote:$g_2 = \frac{\log(a)^2 \left(\log(a)^t\right)^2 - \log(a)^t \log(a)^2}{\log(a)^2 - \log(a)}$

which disagrees with what you gave in your earlier post here.

No, it agrees. $f_2 = \log(a)/2$.
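The recursion with the empty-sum-is-zero convention can be sanity-checked numerically. A sketch in Python (my illustration, not the thread's Pari/GP code), taking $f(x) = \ln(a)(e^x-1)$ with the multiplier $\lambda = \ln(a) = 0.5$ as an example value, and representing truncated power series as coefficient lists:

```python
from math import factorial

N = 8            # truncation order
lam = 0.5        # example multiplier f_1 = log(a), 0 < lam < 1 (b inside STR)

# f(x) = lam*(e^x - 1) as a coefficient list f[0..N], f[0] = 0
f = [0.0] + [lam / factorial(n) for n in range(1, N + 1)]

def cauchy(a, b):
    """Product of two power series with zero constant term, truncated at N."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

def regular_iterate(f, t):
    """Coefficients of the regular t-th iterate g = f^{o t} via the recursion
    g_n = (f_n g_1^n - g_1 f_n + sum_{m=2}^{n-1}(f_m (g^m)_n - g_m (f^m)_n))
          / (f_1^n - f_1),
    where the sum for n = 2 is empty, hence 0."""
    g = [0.0] * (N + 1)
    g[1] = f[1] ** t
    for n in range(2, N + 1):
        fp, gp = f[:], g[:]               # running powers f^m, g^m
        s = 0.0
        for m in range(2, n):             # empty loop when n == 2
            fp, gp = cauchy(fp, f), cauchy(gp, g)
            s += f[m] * gp[n] - g[m] * fp[n]
        g[n] = (f[n] * g[1] ** n - g[1] * f[n] + s) / (f[1] ** n - f[1])
    return g

def compose(a, b):
    """Coefficients of a(b(x)) for series with zero constant term."""
    out = [0.0] * (N + 1)
    bp = b[:]
    for m in range(1, N + 1):
        for n in range(N + 1):
            out[n] += a[m] * bp[n]
        bp = cauchy(bp, b)
    return out
```

As a check, the half-iterate composed with itself should reproduce $f$, and $t = 2$ should agree with $f \circ f$, coefficient by coefficient.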
1. The limit formula will simply fail to converge when the fixed point is no longer attracting, no? (Unless you reverse it, but then you get a different function, so this cannot be interpreted as a continuation of the formula.) If so, that just marks the region of convergence of that limit formula; it does not imply a natural boundary, just as the failure of a Taylor series to converge beyond its radius of convergence does not imply that the radius is a natural boundary.

2. If $e^{1/e}$ is a singularity, as you seem to suspect, the derivative should explode at that point -- consider $\frac{d}{dx} \sqrt{x}$ as $x \rightarrow 0^{+}$. Though I just realized it need not be the first derivative; a higher one may do it. It can't be a smooth point. (Though you have to watch out (!) not to be misled by numerical or convergence error, especially considering the slowdown of convergence as $e^{1/e}$ is approached, and also the rounding errors of numerical differentiation, if you're differentiating numerically.)

3. Wouldn't it be $\frac{\log(a)^2}{2}$? Or do you mean the series for f expanded about the fixed point?
Not sure if this will help anything, but here's a graph showing the root test for the first 33 Taylor coefficients in $b$ of $^{1/2} b$, expanded about $b = 1.42$. I used the regular formula

$^{t} b = \lim_{n \rightarrow \infty} \log_b^{\circ n}\left(F - \left(F - \exp_b^{\circ n}(1)\right) \log(F)^t\right)$

(this is just a rearrangement of the formula mentioned in my original post to this thread)

with $n = 8192$ and 2048 digits of precision (just to be safe), using simple, straightforward numerical differentiation via the difference quotient (delta step $10^{-20}$). It took about an hour(!) to compute due to the extreme precision.

I could probably do some more if I could parallelize the job, but Pari/GP doesn't have parallel computation facilities (no threads, no MPI, nothing like that). I suppose I could recode the formula as a C or Fortran program for a parallel job, interfaced with an arbitrary-precision package like GMP or MPFR (I think Pari/GP uses something like this under the hood), and crunch even more coefficients, because crunching these coefficients is tediously slow, especially if I have to add more precision. (I'm pretty sure everything up to the 30th or 31st coefficient is right; I'm not sure about the accuracy of the last 2, though only a recalculation with more precision will tell.)
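For readers who want to experiment without Pari/GP, the limit formula above can be sketched in plain Python. This is my own illustrative rewrite, not the thread's code: it works only in double precision (so $n$ must stay small -- the pullback logarithms amplify rounding error by $\log(a)^{-1}$ per step), and it finds the fixed point $F$ by plain iteration instead of Lambert $W$:

```python
import math

def tet_regular(b, t, n=20):
    """Regular-iteration tetration ^t b for real 1 < b < e^(1/e),
    via the limit formula
      ^t b = lim_n log_b^n( F - (F - exp_b^n(1)) * log(F)^t ),
    where F is the lower (attracting) fixed point of x -> b^x.
    Double precision only; the thread's actual computation used
    Pari/GP with thousands of digits and n = 8192."""
    # attracting fixed point of exp_b, by plain iteration from 1
    F = 1.0
    for _ in range(200):
        F = b ** F
    lam = math.log(F)          # multiplier log(a) at the fixed point, |lam| < 1
    x = 1.0
    for _ in range(n):         # exp_b^n(1)
        x = b ** x
    v = F - (F - x) * lam ** t
    for _ in range(n):         # pull back with n base-b logarithms
        v = math.log(v) / math.log(b)
    return v
```

Integer heights give a quick consistency check: $^{0}b = 1$, $^{1}b = b$, $^{2}b = b^b$, and fractional heights land monotonically in between.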

Coeffs:
Code:
a_0 = 1.246220033102832033637391168
a_1 = 0.4479211001478450057502788291
a_2 = -0.1944285662380565757140341278
a_3 = 0.1431678738611080328964523214
a_4 = -0.1449673997741212873243507213
a_5 = 0.1821593012238922403824879947
a_6 = -0.2634074264259034150890239199
a_7 = 0.4170984777615359795375604795
a_8 = -0.7022041478879047165932451401
a_9 = 1.235350783631857820858433698
a_10 = -2.247058971388908630541707138
a_11 = 4.196782696009382012696224987
a_12 = -8.009130240912803756451562486
a_13 = 15.56210572774662623695298001
a_14 = -30.70293022611311616071677783
a_15 = 61.37465434532134439666317750
a_16 = -124.0937577676786262630256434
a_17 = 253.4276217341761588591503345
a_18 = -522.1524296306182164337119282
a_19 = 1084.317735417423446029022288
a_20 = -2267.611476993890273952854904
a_21 = 4772.099072723291344681868206
a_22 = -10098.44557704069780712228148
a_23 = 21465.93884981489863800142138
a_24 = -45708.11493010257551050715076
a_25 = 96243.61297567708695783556454
a_26 = -185475.2857026224205407755534
a_27 = 186483.8146778817735118583707
a_28 = -451252.6193153016119357011782
a_29 = 73309968.03777933438145964027
a_30 = -1983477241.135794800710911594
a_31 = 19226263007.10400347804764423
a_32 = 346411096888.4104287944798334

Graph of root test (testing for convergence radius, as you mention):

[attachment=632]

The big question, of course, is: does it dip below 0.42, the expected radius of convergence, and stay below it out to infinity? If it does, that would indicate a closer singularity. If it drops to 0.02466... then that would indicate a singularity at $b = e^{1/e}$. (Of course, the graph alone isn't a proof.)
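The root test used here is just the radius estimate $|a_n|^{-1/n}$ from the Cauchy-Hadamard formula. A minimal helper (my own sketch; plug in the coefficient list above to reproduce the graph's data points):

```python
def radius_estimates(coeffs):
    """Root-test estimates 1/|a_n|^(1/n) of the radius of convergence,
    given Taylor coefficients a_0, a_1, ... (a_0 and zero coefficients
    are skipped). The true radius is the limsup limit of these values."""
    return [abs(a) ** (-1.0 / n)
            for n, a in enumerate(coeffs) if n > 0 and a != 0]

# on a geometric series with known radius 1/2 (a_n = 2^n)
# every estimate is exactly 0.5
est = radius_estimates([2.0 ** n for n in range(10)])
assert all(abs(e - 0.5) < 1e-12 for e in est)
```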
(11/13/2009, 02:22 AM)mike3 Wrote: [ -> ]1. The limit formula will simply fail to converge when the fixpoint is no longer attracting, no? (unless you reverse it, but then you get a different function so this cannot be interpreted as continuation of the formula) If so, that just marks the region of convergence of that limit formula, it does not imply that it is a natural boundary, just as the failure of the convergence of a Taylor series beyond its radius of convergence does not imply that such radius is a natural boundary.

Yes, yes, I know my argumentation is very weak. It's more a feeling. Let's see whether sometime I can make this more precise, or whether it just turns out to be misleading.

Quote:2. If $e^{1/e}$ is a singularity, as you seem to suspect, the derivative should explode at that point -- consider $\frac{d}{dx} \sqrt{x}$ as $x \rightarrow 0^{+}$. Though I just realized, it need not be the first derivative, but a higher one may do it. It can't be a smooth point.

I don't think so: even if all derivatives exist when approaching from one sector, the function can still be non-analytic at that point. In this case one says $f$ has an asymptotic power series development. This is particularly true for the non-integer iteration of $e^x-1$. You can derive a power series at 0, and you have an (or rather two) analytic function(s) whose derivatives are compatible with the power series development at 0 when approaching 0 from one direction. But the function itself is not analytic, because the two analytic functions defined in opposite sectors are not continuations of each other.
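The formal half-iterate of $e^x - 1$ mentioned here is easy to compute term by term. Note the multiplier is $f_1 = 1$ (parabolic fixed point), so the recursion from earlier in the thread degenerates ($f_1^n - f_1 = 0$); instead one compares coefficients in $g(g(x)) = f(x)$ directly, where the unknown $g_n$ enters with coefficient 2. A sketch of mine in exact rational arithmetic (the divergence of this series is a classical result, I believe due to Baker; the code only verifies the coefficients, not the divergence):

```python
from fractions import Fraction
from math import factorial

N = 12
# f(x) = e^x - 1, parabolic fixed point at 0 (f_1 = 1), exact rationals
f = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

def cauchy(a, b):
    """Product of two power series with zero constant term, truncated at N."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

def compose(a, b):
    """Coefficients of a(b(x)) for series with zero constant term."""
    out = [Fraction(0)] * (N + 1)
    bp = b[:]
    for m in range(1, N + 1):
        for n in range(N + 1):
            out[n] += a[m] * bp[n]
        bp = cauchy(bp, b)
    return out

# solve g(g(x)) = f(x) coefficient by coefficient: with g_1 = 1 the
# unknown g_n appears as 2*g_n, everything else uses g_1 .. g_{n-1}
g = [Fraction(0)] * (N + 1)
g[1] = Fraction(1)
for n in range(2, N + 1):
    gp = g[:]
    rest = Fraction(0)
    for m in range(2, n):
        gp = cauchy(gp, g)        # g^m
        rest += g[m] * gp[n]
    g[n] = (f[n] - rest) / 2
```

The first terms come out as $g(x) = x + \frac{x^2}{4} + \frac{x^3}{48} - \cdots$, and by construction $g \circ g = f$ holds exactly to order $N$.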

Quote:3. Wouldn't it be $\frac{\log(a)^2}{2}$? Or do you mean the series for f expanded about the fixed point?

Mike, I explained in my previous posts which functions I work with. The function $f$ in the power series formula (which is $g$ in my original post) is $\ln(a)(e^x-1)$, where $a$ is the lower fixed point.
This is a linear conjugation of $b^x$ which moves the lower fixed point to 0.
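The conjugation can be made explicit and checked numerically. With $\phi(x) = a + x\,a/\ln(a)$ (an affine map sending 0 to the fixed point $a$ and rescaling by the factor $a/\ln(a)$), one gets $\phi^{-1}(b^{\phi(x)}) = \ln(a)(e^x-1)$. A small verification sketch of mine, using the concrete base $b = 1.3$:

```python
import math

b = 1.3
# lower fixed point a of x -> b^x (attracting since 1 < b < e^(1/e))
a = 1.0
for _ in range(500):
    a = b ** a
la = math.log(a)                     # multiplier; equals a*log(b) at the fixed point

phi    = lambda x: a + x * a / la    # affine map sending 0 to the fixed point
phiinv = lambda y: (y - a) * la / a

def conj(x):                         # should equal ln(a)*(e^x - 1)
    return phiinv(b ** phi(x))

# conjugation check at a few sample points
for x in (-0.3, 0.0, 0.2):
    assert abs(conj(x) - la * (math.exp(x) - 1.0)) < 1e-12
```

The key identity behind it is $b^a = a$, hence $\ln(a) = a\ln(b)$, which makes the exponent simplify exactly.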
Hmm.

Oh yeah, about the thing with the "$g$": I was thinking the function was $b^x$, not $\log(a) (e^x - 1)$, although now that I read it again I see it. I guess the inconsistent notation with the paper was throwing me off. So would I be right in interpreting "$g$" in the posts as what is called "$f$" in the paper, and "$g^{\circ t}$" in the posts as what is called "$g$" in the paper? (I.e., I'm trying to figure out how to map the notation from your posts to that of the paper so I can get the recursive generating formula going.)
(11/13/2009, 10:33 AM)mike3 Wrote: [ -> ]So then would I be right in interpreting "$g$" in the posts as what is called "$f$" in the paper, and what is called "$g^{\circ t}$" in the posts as what is called "$g$" in the paper?

Exactly.