03/10/2012, 06:59 AM (This post was last modified: 03/10/2012, 07:00 AM by mike3.)
Neat stuff. I'm curious: how did you get the method to work? That is, when we take the Abel function of the Taylor approximation at z = 0 for the Fourier integral to get the warping map (the \( \theta(z) \) mapping), how do you make sure that the values going into the Abel function are within its range of convergence? That seems to be the tricky part that makes it difficult to extend the method to various other complex bases.
03/10/2012, 08:53 PM (This post was last modified: 03/10/2012, 08:53 PM by sheldonison.)
(03/10/2012, 06:59 AM)mike3 Wrote: Neat stuff. I'm curious: how did you get the method to work? That is, when we take the Abel function of the Taylor approximation at z = 0 for the Fourier integral to get the warping map (the \( \theta(z) \) mapping), how do you make sure that the values going into the Abel function are within its range of convergence? That seems to be the tricky part that makes it difficult to extend the method to various other complex bases.
Hey Mike,
Thanks for commenting. As far as the Abel function goes, I do that evaluation over a unit length from -0.5 to 0.5, and I extended the imaginary delta to 0.175i for the upper superfunction theta, and -0.175i for the lower superfunction theta. This helps remove ambiguity about "which logarithm branch" to use, and I also have some code tweaks to make sure the inverse superfunction (or Abel function) is mapping to the same period for all sample points. The other problem is deciding how accurate the Schroder function needs to be. The program iterates logarithms (for the repelling case) before evaluating the Schroder function, until the sample point is within some bounds of L, where the Schroder function is accurate. One difficulty is knowing what to use as the bounds; I'm still tweaking that, with adjustments for bases near eta, and will put an update of the code online shortly. Finally, I initialize and renormalize the superfunction so that B^sexp(-0.5+/-0.175i)=sexp(0.5+/-0.175i), which at least gives continuity to the theta calculations. This dramatically improves the number of bits of precision improvement I get at each iteration.
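The iterate-logarithms-then-Schröder step described above can be sketched numerically. Below is a minimal illustrative sketch, not the actual tetcomplex code: it uses the real base b = sqrt(2), whose repelling fixed point L = 4 has multiplier λ = L·ln(b), and approximates the Schröder function to first order by S(z) ≈ z − L, which is accurate precisely in the regime the iterated logarithms pull the sample point into:

```python
import cmath
import math

# Illustrative sketch (not the actual tetcomplex code) of the iterated-
# logarithm step.  Base b = sqrt(2): repelling fixed point L = 4 of b^z,
# with multiplier lam = L*ln(b) = 2*ln(2), |lam| > 1.
b = math.sqrt(2)
L = 4.0
lam = L * math.log(b)

def abel(z, tol=1e-9, max_iter=200):
    """Approximate Abel function of b^z at the repelling fixed point L.

    log_b is attracting at L, so iterate it until z is within tol of L,
    then use the first-order Schroder approximation S(z) ~ z - L, giving
    alpha(z) ~ log(z_k - L)/log(lam) + k, where k is the iteration count."""
    z = complex(z)
    k = 0
    while abs(z - L) > tol and k < max_iter:
        z = cmath.log(z) / math.log(b)   # inverse of b^z, pulls toward L
        k += 1
    return cmath.log(z - L) / cmath.log(lam) + k

# Consistency check -- the Abel equation itself: alpha(b^z) = alpha(z) + 1
print(abel(b ** 3.0) - abel(3.0))   # approximately 1 + 0j
```

A production version would replace the first-order S(z) with a truncated Schröder Taylor series, as the post describes, and would need the branch bookkeeping discussed below.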
Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that.
- Sheldon
03/11/2012, 12:27 AM (This post was last modified: 03/11/2012, 10:07 PM by mike3.)
(03/10/2012, 08:53 PM)sheldonison Wrote:
(03/10/2012, 06:59 AM)mike3 Wrote: Neat stuff. I'm curious: how did you get the method to work? That is, when we take the Abel function of the Taylor approximation at z = 0 for the Fourier integral to get the warping map (the \( \theta(z) \) mapping), how do you make sure that the values going into the Abel function are within its range of convergence? That seems to be the tricky part that makes it difficult to extend the method to various other complex bases.
Hey Mike,
Thanks for commenting. As far as the Abel function goes, I do that evaluation over a unit length from -0.5 to 0.5, and I extended the imaginary delta to 0.175i for the upper superfunction theta, and -0.175i for the lower superfunction theta. This helps remove ambiguity about "which logarithm branch" to use, and I also have some code tweaks to make sure the inverse superfunction (or Abel function) is mapping to the same period for all sample points. The other problem is deciding how accurate the Schroder function needs to be. The program iterates logarithms (for the repelling case) before evaluating the Schroder function, until the sample point is within some bounds of L, where the Schroder function is accurate. One difficulty is knowing what to use as the bounds; I'm still tweaking that, with adjustments for bases near eta, and will put an update of the code online shortly. Finally, I initialize and renormalize the superfunction so that B^sexp(-0.5+/-0.175i)=sexp(0.5+/-0.175i), which at least gives continuity to the theta calculations. This dramatically improves the number of bits of precision improvement I get at each iteration.
What do you mean by "mapping to the same period"?
(03/10/2012, 08:53 PM)sheldonison Wrote: Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that.
- Sheldon
Thanks for commenting. As far as the Abel function goes, I do that evaluation over a unit length from -0.5 to 0.5, and I extended the imaginary delta to 0.175i for the upper superfunction theta, and -0.175i for the lower superfunction theta. This helps remove ambiguity about "which logarithm branch" to use, and I also have some code tweaks to make sure the inverse superfunction (or Abel function) is mapping to the same period for all sample points.....
What do you mean by "mapping to the same period"?
What I was referring to was the last step in evaluating the inverse superfunction (or Abel function) to generate theta, after evaluating the Schroder function and adjusting for iterated logarithms. This last step is:
\( \log(\text{Schroder}(z-L))/\log(L\times\log(\text{base})) \)
This logarithm of the Schroder function can be ambiguous, especially if fixup is required due to the accuracy range of the Schroder function. And the easiest fix is to compare adjacent points for the inverse superfunction, and to add or subtract the period so that the adjacent points are near each other.
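The adjacent-point fixup can be sketched as a one-pass unwrap. This is a hypothetical illustration (the function name and data are made up): each inverse-superfunction sample is shifted by an integer multiple of the period so it lands nearest its predecessor, analogous to phase unwrapping:

```python
# Hypothetical sketch of the adjacent-point fixup described above: shift each
# Abel-function sample by an integer multiple of the period so consecutive
# samples stay near each other (analogous to phase unwrapping).
def unwrap(samples, period):
    fixed = [samples[0]]
    for z in samples[1:]:
        # project the gap onto the period direction and round to the
        # integer multiple of the period that brings z nearest fixed[-1]
        n = round(((fixed[-1] - z) / period).real)
        fixed.append(z + n * period)
    return fixed

# Synthetic data: a smooth ramp with branch jumps of one period injected.
period = 4.18j                       # purely imaginary period, as for b < eta
true = [0.1 * k + 0.05j * k for k in range(8)]
jumpy = [t + (period if k in (3, 4, 6) else 0) for k, t in enumerate(true)]
out = unwrap(jumpy, period)
print(max(abs(u - t) for u, t in zip(out, true)))   # prints 0.0
```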
Quote:
(03/10/2012, 08:53 PM)sheldonison Wrote: Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that.
- Sheldon
What did you find?
I'm still investigating and learning, and will post more details later. The experiment is to develop Taylor series for all of the coefficients of sexp_b(z), developed around complex sexp(z) in the neighborhood of base=2. The conundrum, which I'm beginning to be able to explain, is that the numerical results are way too good, and it's hard to see the effects of the branch point at eta. In fact, numerical convergence seems limited by the more distant singularity for sexp_b(z), at b=1, and not at all by the closer singularity at eta. So far, I have a Taylor series for all the coefficients for the neighborhood of base=2, and the resulting sexp(z) is accurate to 33 decimal digits. At eta, it is accurate to 31 decimal digits. Theory says the extrapolated sexp_b(z) function shouldn't even converge if the radius is outside 2-eta, but for the truncated series, that is not at all the case. For example, results for base e are still accurate to 19 decimal digits, and for base 1.3, the results are accurate to about 20 decimal digits. Precision declines nearly linearly from the sample radius (2-eta)=~0.555 towards the more distant singularity at b=1, with radius=1. More details later. I also put a new version of the code in the first post, with improvements near eta, and convergence for ultra high precision results for \p 67.
- Sheldon
03/16/2012, 07:42 PM (This post was last modified: 03/19/2012, 10:52 PM by sheldonison.)
Quote:... The experiment is to develop Taylor series for all of the coefficients of sexp_b(z), developed around complex sexp(z) in the neighborhood of base=2. The conundrum, which I'm beginning to be able to explain, is that the numerical results are way too good, and it's hard to see the effects of the branch point at eta. In fact, numerical convergence seems limited by the more distant singularity for sexp_b(z), at b=1, and not at all by the closer singularity at eta....
I have a good way of explaining why the branch point at \( \eta=\exp(1/e) \) is so slight. The graph below is for base \( \eta-0.25\approx1.195 \). Start with these formulas for the merged sexp(z), after rotating around eta. Of course, there is a corresponding formula in terms of the lower repelling superfunction too, but the upper attracting superfunction is of primary interest here. Note that the attracting superfunction is real valued at the real axis; however, the theta(z) mapping means that sexp(z) has a small negative imaginary component at the real axis for the counterclockwise sexp(z), and a small positive imaginary component for the clockwise sexp(z).
\( \text{sexp}_{+\pi}(z)=\text{superf}_u(z+\theta_u(z)) \) (counterclockwise)
\( \text{sexp}_{-\pi}(z)=\text{superf}_u(z+\overline{\theta_u(z)}) \) (clockwise)
Here, the graph on the left is for the superfunction from the attracting fixed point, and the two graphs in the middle show the merged sexp(z) from both fixed points, rotating counterclockwise around eta vs. rotating clockwise around eta. The sexp(z) function looks much more like the attracting fixed point superfunction than the repelling fixed point superfunction on the right. In fact, for \( b=\eta-0.25 \) at the real axis, \( \theta_u(z) \) is an analytic function, dominated almost entirely by the first overtone, and very nearly equal to zero, with a peak negative imaginary component of about 7.06*10^-13. The graph below shows the 1-cyclic \( \theta_u(z) \) function at the real axis, with its negative imaginary component in magenta, where the attracting superfunction has been centered so that superfunction(0)=1.
\( \theta_u(z) \) is very different from \( \theta_l(z) \), which belongs to the repelling superfunction and has a singularity at the real axis. So the clockwise and counterclockwise functions are both nearly identical at the real axis, since both are very nearly identical to the attracting superfunction, to the extent that \( \theta_u(z) \) is small. First of all, \( \theta_u(z) \) by definition converges to a constant as imag(z) goes to infinity.
\( \theta_u(z)=\sum_{n=0}^{\infty}t_n\times\exp(z\times 2n\pi i) \)
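That decay-to-a-constant claim is easy to check on a toy version of this series. The coefficients t_n below are made up for illustration (the real θ_u coefficients come from the superfunction); only their decay matters:

```python
import cmath

# Toy 1-cyclic theta(z) = sum_{n>=0} t_n * exp(2*pi*i*n*z) with made-up,
# geometrically decaying coefficients t_n; the decay argument is the same
# for the real theta_u.
t = [0.5] + [0.1 ** n for n in range(1, 8)]

def theta(z):
    return sum(tn * cmath.exp(2j * cmath.pi * n * z) for n, tn in enumerate(t))

# 1-cyclic: theta(z + 1) = theta(z)
print(abs(theta(0.3 + 0.2j) - theta(1.3 + 0.2j)))   # ~ 0
# As imag(z) -> +infinity every n >= 1 term dies off, so theta -> t_0
print(abs(theta(0.3 + 5j) - t[0]))                  # tiny
```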
It seems logical to assume that somewhere between the period of the attracting superfunction and the period of the repelling superfunction, \( \theta_u(z) \) will have its singularity, and this matches the computational results. Also, the periods of the attracting and repelling superfunctions are both functions of L: \( \text{period}=2\pi i/\log(\log(L)) \). In the neighborhood of \( \eta \), L(b) itself is an analytic function (real valued for b<eta) of \( \sqrt{\eta-b} \), and approaches e at eta itself, so log(log(L)) will also be an analytic function of \( \sqrt{\eta-b} \) and will approach zero. So the period approaches infinity in the neighborhood of eta. And since \( \theta_u(z)\approx \exp(-2\pi |\text{period}|) \), theta_u is very small. The imaginary component of theta reaches its minimum at z=-0.5. Empirically, the following approximation holds for the difference between the clockwise and counterclockwise functions at sexp(-0.5).
\( \text{sexp}_{+\pi}(-0.5)-\text{sexp}_{-\pi}(-0.5)\approx 2\times\theta_u(-0.5)\approx (1/3)\times\exp(-2\pi |\text{period}|) \).
So to the left of the branch point, the clockwise and counterclockwise functions both approach the attracting superfunction very closely, and hence come very close to each other, with the difference between them proportional to theta; and as the base approaches eta from the left, the branch point becomes very slight.
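The magnitudes above can be reproduced in a few lines. This sketch assumes the same first-order estimates: it finds the attracting fixed point L for b = η − 0.25 by Newton's method (b^L = L is equivalent to ln(L) = L·ln(b)), computes |period| = |2πi/log(log L)|, and evaluates the exp(−2π|period|) scale:

```python
import math

# Reproduce the magnitudes above for base b = eta - 0.25.
eta = math.exp(1 / math.e)
b = eta - 0.25                       # ~ 1.1947

def fixed_point(b, x=1.0):
    """Attracting fixed point L of b^z: Newton on g(L) = ln(L) - L*ln(b)."""
    for _ in range(50):
        g = math.log(x) - x * math.log(b)
        dg = 1 / x - math.log(b)
        x -= g / dg
    return x

L = fixed_point(b)                          # ~ 1.249
lam = math.log(L)                           # multiplier; = L*ln(b) since b^L = L
period = abs(2 * math.pi / math.log(lam))   # |2*pi*i / log(log L)| ~ 4.18
estimate = math.exp(-2 * math.pi * period)  # ~ 4e-12
print(L, period, estimate)
```

The exp(−2π|period|) value lands at the same scale as the 7.06·10^-13 peak quoted above, which is the point of the estimate.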
This delta corresponds to the difference in values between the clockwise sexp(z) and the counterclockwise sexp(z), which limits how accurate the truncated taylor series for the derivatives can be.
Finally, here is the best data I have for the Taylor series of the first derivative of sexp_b(z), developed around b=2. The samples used to generate this series were accurate to better than 10^-41. The a0 coefficient is the first derivative of \( \text{sexp}_2(z) \). I have similar Taylor series approximations for all of the other derivatives. Here, the Taylor series gives the first derivative of sexp(z) for bases in the neighborhood of base=2. Notice how the behavior changes around the 115th Taylor series term, as the series switches from being dominated by the singularity at b=1 to being dominated more and more by the branch singularity at eta, which is closer. The more terms of the Taylor series are included, the more it is limited by the branch singularity at eta, which leads to the difference between the clockwise and counterclockwise sexp(z) functions. In this example, I was attempting to calculate sexp(z) for bases in the neighborhood of b=2. Up to a certain point, which appears to be around 30 decimal digits accuracy, there is a pseudo convergence limited by the singularity at b=1, and the branch singularity at eta has little effect. But after that, the branch singularity at eta affects numerical results, and eventually limits the convergence radius to 2-eta.
This set of series for each of the Taylor series coefficients, developed in the neighborhood of base=2, is more accurate than the earlier one I posted. It gives improved results, with accuracy around 1E-35 for bases inside the radius of convergence, and accuracy of 3E-33 for base eta.
I also generated the Taylor series for sexp(-0.5) as the base is varied, centered around base=2. The pattern changes near the 94th Taylor series coefficient, when the closer singularity at base=eta begins contributing more to the Taylor series than the more distant singularity at base=1. Here, a0=0.544764121459556733980121885825724470, which is sexp(-0.5) for base=2, and the Taylor series is accurate to about 35 decimal digits.
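The pseudo convergence can be mimicked with a toy series: a unit-strength pole at radius 1 plus a square-root branch point at radius 0.555 whose amplitude is tiny, standing in for the clockwise/counterclockwise delta. The radii and the 1e-12 amplitude below are chosen to echo the numbers above, not derived from sexp:

```python
# Toy model of pseudo convergence: a unit-strength pole at x=1 plus a very
# weak square-root branch point at radius r=0.555 (amplitude ~1e-12).  The
# true radius of convergence is 0.555, but truncated series act as if it
# were 1, just like the truncated sexp_b coefficient series above.
r, eps, N = 0.555, 1e-12, 60

# Taylor coefficients of (1 - x/r)**0.5 via the binomial recurrence.
branch = [1.0]
for n in range(1, N):
    branch.append(branch[-1] * (0.5 - (n - 1)) / n * (-1 / r))

coeff = [1.0 + eps * c for c in branch]   # 1/(1-x) contributes 1 to each term

def series(x):
    return sum(c * x ** n for n, c in enumerate(coeff))

# Inside the true radius, the truncation matches the exact function:
print(abs(series(0.4) - (1 / (1 - 0.4) + eps * (1 - 0.4 / r) ** 0.5)))
# Outside radius 0.555 but inside 1, the 60-term truncation still tracks
# the pole term closely; the weak branch point barely registers:
print(abs(series(0.8) - 1 / (1 - 0.8)))   # ~ 1e-5
```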
- Sheldon
02/06/2016, 01:37 AM (This post was last modified: 02/06/2016, 02:02 AM by Gottfried.)
Hmm, today I tried tetcomplex.gp with init(I). Unfortunately the program hangs/loops infinitely after the message
generating Schroder2 taylor series for isuperf2 function, scnt2 27
Using a real base, say init(1.44) works immediately.
What to do?
update: I found that it hangs in the routine "loop" when called from the "init(I)" procedure. (Surprisingly, "init()" doesn't provide an argument to the routine "loop", while that routine has a formal parameter "t".)
update2: it enters "thetaup" and does not come back...
02/06/2016, 04:04 PM (This post was last modified: 02/06/2016, 04:35 PM by sheldonison.)
(02/06/2016, 01:37 AM)Gottfried Wrote: Hmm, today I tried tetcomplex.gp with init(I). Unfortunately the program hangs/loops infinitely after the message
generating Schroder2 taylor series for isuperf2 function, scnt2 27
Using a real base, say init(1.44) works immediately.
What to do?
update: I found that it hangs in the routine "loop" when called from the "init(I)" procedure. (Surprisingly, "init()" doesn't provide an argument to the routine "loop", while that routine has a formal parameter "t".)
update2: it enters "thetaup" and does not come back...
Gottfried
hmmm, I haven't used tetcomplex in a while. It is much, much more limited than the newer fatou.gp program in terms of which bases it will converge for. For example, tetcomplex has no hope of converging anywhere near base(i), whereas fatou.gp works just fine. Also, the old program has no "fallback" algorithm to use for rationally indifferent fixed points on the Shell-Thron boundary, whereas the newer algorithm can work (although with less precision) without any Schroeder function whatsoever. The newer algorithm is much better, except for bases<eta, where someday I may allow for rotation angles >180 degrees in fatou.gp, but not yet. By the way, I finally had some time to post my answer to sexp base(i) on mathstack; http://math.stackexchange.com/questions/...35#1643235. I would like to post more about fatou.gp here on the tetration forum as well.
Code:
\r fatou.gp
setmaxconvergence(); /* base i is hard to compute */
sexpinit(i);
sexp(0.5)
1.07571355731392 + 0.873217399108003*I
02/06/2016, 05:36 PM (This post was last modified: 02/06/2016, 07:27 PM by Gottfried.)
[update] Ah, got it working with Pari/GP v. 2.7 in a winxp-32bit virtual machine. Great, so I can recompute my example in MSE. I'll look into the reasons for the incompatibility and report them later.[/update]
Here is the comparison of the regular/Schröder solution and the (extended/generalized) Kneser mechanism (see bottom of posting).
(This is the link to the discussion in MSE : http://math.stackexchange.com/questions/...ing-us-all )
02/07/2016, 05:27 AM (This post was last modified: 02/07/2016, 01:23 PM by Gottfried.)
Discussing the (extended) Kneser method, the question of fixpoints is relevant. Here I have produced a picture of the fixpoints of tetration to base(i); I found two simple fixpoints (attracting for exp, attracting for log, both used for the Kneser method), and three periodic points, making things a bit more complicated. The fixpoints were sought using the Newton algorithm for the threefold iterated logarithm \( f(z)= \log_i(\log_i(\log_i(z))) \) and the iteration \( z_{k+1} = f(z_k) \).
This means, for example, that for the point in the top-left corner with the blue color, having the z-value z_0=-5-5i, the Newton algorithm using the iteration \( z_{k+1} = f(z_k) \) arrives at the fixpoint labeled 3.0 (having the complex value of about -1.14+0.71i) in a moderate number of iterations. The blue point at coordinate z_0=-2.5+1i needs fewer iterations, and the slightly lighter blue points near the periodic point 3.0 need even fewer.
Perhaps this post should be moved into a discussion of the Kneser-method or of the general problem of fixpoints.
02/07/2016, 12:28 PM (This post was last modified: 02/07/2016, 12:37 PM by sheldonison.)
(02/06/2016, 05:36 PM)Gottfried Wrote: Discussing the (extended) Kneser method, the question of fixpoints is relevant. Here I have produced a picture of the fixpoints of tetration to base(i); I found two simple fixpoints (attracting for exp, attracting for log, both used for the Kneser method), and three periodic points, making things a bit more complicated. The fixpoints were sought using the Newton algorithm for the threefold iterated logarithm \( f(z)= \log_i(\log_i(\log_i(z))) \) and the iteration \( z_{k+1} = f(z_k) \).
This means, for example, that for the point in the top-left corner with the blue color, having the z-value z_0=-5-5i, the Newton algorithm using the iteration \( z_{k+1} = f(z_k) \) arrives at the fixpoint labeled 3.0 (having the complex value of about -1.14+0.71i) in a moderate number of iterations. The blue point at coordinate z_0=-2.5+1i needs fewer iterations, and the slightly lighter blue points near the periodic point 3.0 need even fewer.
Perhaps this post should be moved into a discussion of the Kneser-method or of the general problem of fixpoints.
Gottfried
Here is the picture:
Yes, this thread should be moved; it has to do with fatou.gp, an MSE question for tetration base(i), and the two primary fixed points for Henryk Trappmann's uniqueness sickle.
In the case at hand, the other fixed point, for the lower half of the complex plane for sexp(z), is -1.862-0.411i, which Gottfried has, strangely, listed as 3-periodic, whereas it is a simple repelling fixed point for \( i^z \). The Abel/slog uniqueness sickle connects the two primary fixed points. For base e, both fixed points of exp(z) are repelling. But if you move slowly from base(e) to base(i), you see the lower fixed point becomes -1.862-0.411i, which is still repelling, while the upper fixed point becomes attracting: 0.4383+0.3606i. The solution I posted on MSE is based on generating the slog(z) exactly between the two primary fixed points, which is what the fatou.gp program does. My answer on MSE includes the Taylor series for p(z), which turns out to have a remarkably mild singularity at the two fixed points; finding that analytic Taylor series is the basis for the fatou.gp program solution, which leads directly to Henryk's uniqueness sickle.
\( \alpha(z)=\frac{\ln(z-l_1)}{\ln(\lambda_1)} + \frac{\ln(z-l_2)}{\ln(\lambda_2)} + p(z)\;\; \) Abel function
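The two primary fixed points and their multipliers \( \lambda=\ln(i)\times L \) quoted above can be verified with a short Newton iteration; the starting guesses below are taken from the quoted values:

```python
import cmath

# Verify the two primary fixed points of f(z) = i^z = exp(z*log(i)) quoted
# above, and classify them by the multiplier f'(L) = log(i)*i^L = log(i)*L.
logi = cmath.log(1j)               # = i*pi/2, principal branch

def newton_fixed_point(z):
    """Newton's method on i^z - z = 0 from a starting guess z."""
    for _ in range(60):
        f = cmath.exp(z * logi) - z
        df = logi * cmath.exp(z * logi) - 1
        z -= f / df
    return z

upper = newton_fixed_point(0.4 + 0.4j)   # ~ 0.4383 + 0.3606i
lower = newton_fixed_point(-1.8 - 0.4j)  # ~ -1.862 - 0.411i
print(upper, abs(logi * upper))          # |multiplier| < 1: attracting
print(lower, abs(logi * lower))          # |multiplier| > 1: repelling
```

These are the \( l_1, l_2 \) (with multipliers \( \lambda_1, \lambda_2 \)) appearing in the Abel function formula above.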