complex base tetration program

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
03/10/2012, 06:59 AM (This post was last modified: 03/10/2012, 07:00 AM by mike3.)

Neat stuff. I'm curious: how did you get the method to work? That is, when we take the Abel function of the Taylor approximation at z = 0 for the Fourier integral to get the warping map (the $\theta(z)$ mapping), how do you make sure that the values going into the Abel function are within its range of convergence? That seems to be the tricky part that makes it difficult to extend the method to various other complex bases.

sheldonison (Long Time Fellow, Posts: 630, Threads: 22, Joined: Oct 2008)
03/10/2012, 08:53 PM (This post was last modified: 03/10/2012, 08:53 PM by sheldonison.)

(03/10/2012, 06:59 AM) mike3 Wrote: Neat stuff. I'm curious: how did you get the method to work? That is, when we take the Abel function of the Taylor approximation at z = 0 for the Fourier integral to get the warping map (the $\theta(z)$ mapping), how do you make sure that the values going into the Abel function are within its range of convergence? That seems to be the tricky part that makes it difficult to extend the method to various other complex bases.

Hey Mike, thanks for commenting. As far as the Abel function goes, I do that evaluation over a unit length from -0.5 to 0.5, and I extended the imaginary delta to 0.175i for the upper superfunction theta, and -0.175i for the lower superfunction eta. This helps remove ambiguity about which logarithm branch to use, and I also have some code tweaks to make sure the inverse superfunction (or Abel function) is mapping to the same period for all sample points. The other problem is deciding how accurate the Schroder function needs to be. The program iterates logarithms (for the repelling case) before evaluating the Schroder function, until the sample point is within some bounds of L, where the Schroder function is accurate.
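The iterate-logarithms-then-Schroder step Sheldon describes can be sketched numerically. The following Python fragment is only an illustration (not code from fatou.gp): it uses base $\sqrt{2}$, whose repelling fixed point L = 4 is conveniently real, and a deliberately crude first-order Schroder series, so that pulling the sample point close to L is what supplies the accuracy.

```python
import cmath, math

# Base sqrt(2): the repelling real fixed point of b**z is L = 4
b = math.sqrt(2)
L = 4.0
lam = L * math.log(b)        # multiplier at L, = 2*ln(2) ~ 1.386 > 1 (repelling)

def schroder(z):
    # First-order truncation of the Schroder series, psi(z) ~ (z - L);
    # only accurate once z has been pulled very close to L
    return z - L

def abel(z, radius=1e-6):
    """Iterate log base b (the inverse of b**z) until z is within `radius`
    of L, where the truncated Schroder series is accurate, then add back
    the number of inverse iterations taken."""
    n = 0
    while abs(z - L) > radius:
        z = cmath.log(z) / math.log(b)   # one inverse iteration of b**z
        n += 1
    return cmath.log(schroder(z)) / math.log(lam) + n
```

With this, abel(b**z) - abel(z) comes out very close to 1, the defining property of an Abel function, even though the Schroder series was truncated after one term.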
One difficulty is knowing what to use as the bounds, and I'm still tweaking that, with adjustments for bases near eta; I will put an update of the code online shortly. Finally, I initialize and renormalize the superfunction so that B^sexp(-0.5+/-0.175i) = sexp(0.5+/-0.175i), which at least gives continuity to the theta calculations. This dramatically improves the number of bits of precision I gain at each iteration. Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that.

- Sheldon

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
03/11/2012, 12:27 AM (This post was last modified: 03/11/2012, 10:07 PM by mike3.)

(03/10/2012, 08:53 PM) sheldonison Wrote: Hey Mike, thanks for commenting. As far as the Abel function goes, I do that evaluation over a unit length from -0.5 to 0.5, and I extended the imaginary delta to 0.175i for the upper superfunction theta, and -0.175i for the lower superfunction eta. This helps remove ambiguity about which logarithm branch to use, and I also have some code tweaks to make sure the inverse superfunction (or Abel function) is mapping to the same period for all sample points. The other problem is deciding how accurate the Schroder function needs to be.
The program iterates logarithms (for the repelling case) before evaluating the Schroder function, until the sample point is within some bounds of L, where the Schroder function is accurate. One difficulty is knowing what to use as the bounds, and I'm still tweaking that, with adjustments for bases near eta; I will put an update of the code online shortly. Finally, I initialize and renormalize the superfunction so that B^sexp(-0.5+/-0.175i) = sexp(0.5+/-0.175i), which at least gives continuity to the theta calculations. This dramatically improves the number of bits of precision I gain at each iteration.

What do you mean by "mapping to the same period"?

(03/10/2012, 08:53 PM) sheldonison Wrote: Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that. - Sheldon

What did you find?

sheldonison (Long Time Fellow, Posts: 630, Threads: 22, Joined: Oct 2008)
03/12/2012, 04:20 AM (This post was last modified: 03/12/2012, 04:30 AM by sheldonison.)

(03/11/2012, 12:27 AM) mike3 Wrote: ..... What do you mean by "mapping to the same period"?

What I was referring to was the last step in evaluating the inverse superfunction (or Abel function) to generate theta, after evaluating the Schroder function and adjusting for iterated logarithms.
This last step is:

$\log(\text{Schroder}(z-L))/\log(L\times\log(\text{base}))$

This logarithm of the Schroder function can be ambiguous, especially if fixup is required due to the accuracy range of the Schroder function. The easiest fix is to compare adjacent points for the inverse superfunction, and to add or subtract the period so that the adjacent points are near each other.

Quote: (03/10/2012, 08:53 PM) sheldonison Wrote: Right now, I'm numerically investigating the branch point singularity at eta, which is incredibly mild, as you and others have noticed, and I will post some surprising results about that. - Sheldon

What did you find?

I'm still investigating and learning, and will post more details later. The experiment is to develop Taylor series for all of the coefficients of sexp_b(z), developed around complex sexp(z) in the neighborhood of base=2. The conundrum, which I'm beginning to be able to explain, is that the numerical results are way too good, and it's hard to see the effects of the branch point at eta. In fact, numerical convergence seems limited by the more distant singularity for sexp_b(z), at b=1, and not at all by the closer singularity at eta. So far, I have a Taylor series for all the coefficients for the neighborhood of base=2, and the resulting sexp(z) is accurate to 33 decimal digits. At eta, it is accurate to 31 decimal digits. Theory says the extrapolated sexp_b(z) function shouldn't even converge if the radius is outside 2-eta, but for the truncated series, that is not at all the case. For example, results for base e are still accurate to 19 decimal digits, and for base 1.3, the results are accurate to about 20 decimal digits. Precision declines nearly linearly from the sample radius (2-eta)~0.555 towards the more distant singularity at b=1, with radius=1. More details later. I also put a new version of the code in the first post, with improvements near eta, and convergence for ultra high precision results beyond \p 67.
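The "add or subtract the period" fix described in this post can be sketched as a simple branch-unwrapping pass over the sample points. This is a Python illustration; `unwrap` and its argument names are hypothetical, not fatou.gp internals.

```python
def unwrap(samples, period):
    """Adjust inverse-superfunction samples by integer multiples of the
    (generally complex) period so that adjacent sample points land near
    each other, resolving the 'which logarithm branch' ambiguity."""
    out = [samples[0]]
    for z in samples[1:]:
        # choose the integer branch offset that brings z closest to
        # the previous (already fixed-up) sample point
        n = round(((out[-1] - z) / period).real)
        out.append(z + n * period)
    return out
```

The first sample anchors the branch; every later sample is snapped to the branch nearest its predecessor, so a smooth set of theta samples stays smooth.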
- Sheldon

sheldonison (Long Time Fellow, Posts: 630, Threads: 22, Joined: Oct 2008)
03/16/2012, 07:42 PM (This post was last modified: 03/19/2012, 10:52 PM by sheldonison.)

Quote: ... The experiment is to develop Taylor series for all of the coefficients of sexp_b(z), developed around complex sexp(z) in the neighborhood of base=2. The conundrum, which I'm beginning to be able to explain, is that the numerical results are way too good, and it's hard to see the effects of the branch point at eta. In fact, numerical convergence seems limited by the more distant singularity for sexp_b(z), at b=1, and not at all by the closer singularity at eta. ...

I have a good way of explaining why the branch point at $\eta=\exp(1/e)$ is so slight. The graph below is for base $\eta-0.25\approx1.195$. Start with this formula for the merged sexp(z), after rotating counterclockwise around eta. Of course, there is a corresponding formula in terms of the lower repelling superfunction too, but the upper attracting superfunction is of primary interest here. Note that the attracting superfunction is real valued at the real axis; however, the theta(z) means that the sexp(z) has a small negative imaginary component at the real axis for the counterclockwise sexp(z), and a small positive imaginary component for the clockwise sexp(z).

$\text{sexp}_{+\pi}(z)=\text{superf}_u(z+\theta_u(z))$ (counterclockwise)
$\text{sexp}_{-\pi}(z)=\text{superf}_u(z+\overline{\theta_u(z)})$ (clockwise)

Attached Files Image(s): [graphs]

Here, the graph on the left is for superf from the attracting fixed point, and the two graphs in the middle show the merged sexp(z) from both fixed points, rotating counterclockwise around eta vs. rotating clockwise around eta. The sexp(z) function looks much more like the attracting fixed point superfunction than the repelling fixed point superfunction on the right.
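To make the attracting/repelling distinction concrete, here is a small Python sketch (not part of fatou.gp) for the base $b=\eta-0.25$ used in these graphs: direct iteration of $x \mapsto b^x$ finds the attracting fixed point, and iterating logarithms, as described earlier in the thread, finds the repelling one.

```python
import math

eta = math.exp(1 / math.e)   # the branch point base, ~1.44467
b = eta - 0.25               # the base used in the post's graphs, ~1.195

# Attracting fixed point: iterate x -> b**x directly
x = 1.0
for _ in range(200):
    x = b ** x
L_attr = x                        # smaller real fixed point, ~1.25
mult_attr = L_attr * math.log(b)  # multiplier; |mult| < 1 means attracting

# Repelling fixed point of b**x: iterate the inverse map x -> log_b(x),
# which attracts to it (the "iterate logarithms" trick from this thread)
y = 10.0
for _ in range(200):
    y = math.log(y) / math.log(b)
L_rep = y                         # larger real fixed point (repelling)
mult_rep = L_rep * math.log(b)    # multiplier > 1 means repelling
```

Since $b^L = L$ at either fixed point, the multiplier $L\log b$ equals $\log L$ there, which is why the period formula later in the thread can be written in terms of $\log(\log(L))$.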
In fact, for $b=\eta-0.25$ at the real axis, $\theta_u(z)$ is an analytic function, dominated almost entirely by the first overtone, and very nearly equal to zero, with a peak negative imaginary component of about 7.06*10^-13. The graph below shows the 1-cyclic $\theta_u(z)$ function at the real axis, with its negative imaginary component in magenta, where the attracting superfunction has been centered so that superfunction(0)=1.

Attached Files Image(s): [graph of $\theta_u(z)$]

$\theta_u(z)$ is very different from $\theta_l(z)$, the corresponding function for the repelling superfunction, which has a singularity at the real axis. So the clockwise and counterclockwise functions are both nearly identical at the real axis, since both are very nearly identical to the attracting superfunction, to the extent that $\theta_u(z)$ is small. First of all, $\theta_u(z)$ by definition converges to a constant as z goes to $i\infty$:

$\theta_u(z)=\sum_{n=0}^{\infty}t_n\times\exp(z\times 2n\pi i)$

It seems logical to assume that somewhere between the period of the attracting superfunction and the period of the repelling superfunction, $\theta_u(z)$ will have its singularity, and this matches the computational results. Also, the periods of the attracting and repelling superfunctions are both functions of L:

$\text{period}=2\pi i / \log(\log(L))$

In the neighborhood of $\eta$, L(b) itself is an analytic function (real valued for b < $\eta$) ..... >180 degrees for fatou.gp, but not yet.

By the way, I finally had some time to post my answer to sexp base(i) on mathstack; http://math.stackexchange.com/questions/...35#1643235. I would like to post more about fatou.gp here on the tetration forum as well.

Code:
\r fatou.gp
setmaxconvergence(); /* base i is hard to compute */
sexpinit(i);
sexp(0.5)
1.07571355731392 + 0.873217399108003*I

- Sheldon

Gottfried (Ultimate Fellow, Posts: 757, Threads: 116, Joined: Aug 2007)
02/06/2016, 05:36 PM (This post was last modified: 02/06/2016, 07:27 PM by Gottfried.)

[update] Ah, got it working with Pari/GP v.
2.7 in a winxp-32bit virtual machine. Great, so I can recompute my example in MSE. I'll look for the incompatibility reasons and shall tell them later. [/update]

Here is the comparison of the regular/Schröder-solution and the (extended/generalized) Kneser-mechanism (see bottom of posting). (This is the link to the discussion in MSE: http://math.stackexchange.com/questions/...ing-us-all )

-------------------------------------------------------

Hi Sheldon - this is what I've done and what I've got with the just-downloaded fatou.gp (this is the old Pari/GP 2.2.11 version):

Code:
\r f:\download\fatou.gp
    seriesprecision = 21 significant terms
    format = g0.15
help(); help2(); andrewjay(); for other functions
\p 38  /* precis=38; 32-35 digits.  default \p 28 ~=24 decimal digits; */
/* generates Abel function for iterating z <= exp(z)-1+k; f(z) */
loop(k,nlim,nskip,looplim);
sexpinit(b); /* b=exp(exp(k-1)); */
loop(1);  sexpinit(exp(1));  /* two examples for tetration for base e */
slog(z); sexp(z); abel(z); invabel(z,est);
sexptaylor(center,radius,samples); slogtaylor(c,r,s); invabeltaylor(c,r,s); abeltaylor(c,r,s);
fmode=0:abel  1:invabel  2:slog  3:sexp
MakeGraph(width,height,x0,y0,x1,y1,filename, n); /* f(z); fmode */
debugprint=0; quietmode=0; x2mode=0; /* x2mode=1; iterate z^2+z+k */
prtpoly(wtaylor,t,name);
setmaxconvergence(); /* base i is hard to compute */
thlogk=1; ctr=19/20; ir=57/64; ctfactor=85/100; disabautoctfactor=1; staylorstop=40;
sexpinit(I);
    seriesprecision = 21 significant terms
    format = g0.15
1 0.474349095548301 0.0458093729068993 23 4 20
2 0.236926728615837 0.409974883179566 41 6 40
3 3.91014217666629 E144 4.33918410729562 E143 61 7 60
  *** vector: negative number of components in vector.
sexp(0.5)
  *** if: incorrect type in comparison.
Attached Files Image(s): [graph]

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 757, Threads: 116, Joined: Aug 2007)
02/07/2016, 05:27 AM (This post was last modified: 02/07/2016, 01:23 PM by Gottfried.)

Discussing the (extended) Kneser-method, the question of fixpoints is relevant. Here I have produced a picture of the fixpoints of tetration to base i. I found two simple fixpoints (one attracting for exp, one attracting for log, both used for the Kneser-method), and three periodic points, making things a bit more complicated. The fixpoints were sought using the Newton-algorithm for the joint threefold exponentiation $f(z)= \log_i(\log_i(\log_i(z)))$ and the iteration $z_{k+1} = f(z_k)$. This means, for example, that for the blue point in the top-left edge with the z-value z_0 = -5-5i, the iteration $z_{k+1} = f(z_k)$ arrives at the fixpoint 3.0 (having the complex value of about -1.14+0.71i) in a moderate number of iterations. The blue point at coordinate z_0 = -2.5+1i needs fewer iterations, and the slightly lighter blue points near the periodic point 3.0 need even fewer. Perhaps this post should be moved into a discussion of the Kneser-method or of the general problem of fixpoints.

Gottfried

Here is the picture:

Attached Files Image(s): [fixpoint basin picture]

Gottfried Helms, Kassel

sheldonison (Long Time Fellow, Posts: 630, Threads: 22, Joined: Oct 2008)
02/07/2016, 12:28 PM (This post was last modified: 02/07/2016, 12:37 PM by sheldonison.)

(02/06/2016, 05:36 PM) Gottfried Wrote: Discussing the (extended) Kneser-method, the question of fixpoints is relevant. Here I have produced a picture of the fixpoints of tetration to base i. I found two simple fixpoints (one attracting for exp, one attracting for log, both used for the Kneser-method), and three periodic points, making things a bit more complicated.
The fixpoints were sought using the Newton-algorithm for the joint threefold exponentiation $f(z)= \log_i(\log_i(\log_i(z)))$ and the iteration $z_{k+1} = f(z_k)$. ..... Here is the picture:

Yes, this thread should be moved; it has to do with fatou.gp, and an MSE question for tetration base(i), and the two primary fixed points for Henryk Trappmann's uniqueness sickle. In the case at hand, the other fixed point, for the lower half of the complex plane for sexp(z), is -1.862-0.411i, which Gottfried has listed (strangely) as 3-periodic, whereas it is a simple repelling fixed point for $i^z$. The Abel/slog uniqueness sickle connects the two primary fixed points. For base e, both fixed points are repelling for exp(z). But if you move slowly from base e to base i, you see the lower fixed point becomes -1.862-0.411i, which is still repelling, but the upper fixed point becomes attracting: 0.4383+0.3606i. The solution I posted on MSE is based on generating the slog(z) exactly between the two primary fixed points, which is what the fatou.gp program does. My answer on MSE includes the Taylor series for p(z), which turns out to have a remarkably mild singularity at the two fixed points; finding that analytic Taylor series is the basis for the fatou.gp program solution, which leads directly to Henryk's uniqueness sickle.
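The two primary fixed points quoted in this post are easy to confirm independently. Here is a Python sketch using Newton's method on $g(z) = i^z - z$ with the principal branch, a different route than Gottfried's iterated threefold logarithm; the seed values are chosen near the expected roots.

```python
import cmath

LN_I = cmath.log(1j)    # principal log of the base i, = i*pi/2

def fixed_point(z, steps=60):
    """Newton's method on g(z) = i^z - z (principal branch),
    as an independent check on the two primary fixed points."""
    for _ in range(steps):
        g = cmath.exp(LN_I * z) - z
        dg = LN_I * cmath.exp(LN_I * z) - 1   # derivative of g
        z -= g / dg
    return z

upper = fixed_point(0.5 + 0.4j)    # attracting for i^z: ~0.4383 + 0.3606i
lower = fixed_point(-1.8 - 0.4j)   # repelling: ~ -1.862 - 0.411i
```

The multiplier of $i^z$ at a fixed point $l$ is $\ln(i)\cdot l$, so checking its modulus against 1 confirms which fixed point is attracting and which is repelling.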
$\alpha(z)=\frac{\ln(z-l_1)}{\ln(\lambda_1)} + \frac{\ln(z-l_2)}{\ln(\lambda_2)} + p(z)$ (Abel function)

- Sheldon
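As a rough illustration of this Abel function, the two logarithmic singular terms can be written down directly in Python, assuming the approximate fixed-point values quoted earlier in the thread. The analytic correction p(z) from Sheldon's MSE answer is omitted here, so this is only the leading singular behavior, not the full slog.

```python
import cmath

LN_I = cmath.log(1j)                 # log of the base i
l1 = 0.4383 + 0.3606j                # upper primary fixed point (approximate)
l2 = -1.8620 - 0.4110j               # lower primary fixed point (approximate)
lam1 = LN_I * l1                     # multiplier of i^z at l1
lam2 = LN_I * l2                     # multiplier of i^z at l2

def alpha_skeleton(z):
    # the two logarithmic singularities of the Abel function; p(z) omitted
    return (cmath.log(z - l1) / cmath.log(lam1)
            + cmath.log(z - l2) / cmath.log(lam2))
```

Close to either fixed point the omitted p(z) varies slowly, so the skeleton alone nearly satisfies Abel's equation alpha(i^z) = alpha(z) + 1 there.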
