[AIS] (alternating) Iteration series: Half-iterate using the AIS? - Printable Version
+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: [AIS] (alternating) Iteration series: Half-iterate using the AIS? (/showthread.php?tid=760)

RE: Iteration series: Half-iterate using the infinite iteration-series? - Nasser - 12/14/2012
A half-iterate can only be defined if we define something in the third hyper-operation called a "power level", because we always assume these levels are integers, for example 2^x, 2^(2^x), etc. In the example, the number 2 is raised to the first power level and also to the second power level, but we should study it for non-integer levels as well. This study must be related to the third hyper-operation (the exponential function) before we jump to the fourth hyper-operation (tetration). If we can find solutions for that, it means the half-iterate could be defined. For me it is clear that: (n^^x)^(*)(n^^x) = x, where * means "to the half power level".

RE: Iteration series: Half-iterate using the infinite iteration-series? - Gottfried - 12/14/2012
Here is some explanation, in terms of Pari/GP code, of how the serial alternating iteration sums asum(x) can be expressed/computed with the help of power series. Preliminaries: global variables for the base, the log of the base, and dim for the dimension; then the standard calls for the exponential/logarithm with a variable (but integer) height parameter:

Code: `[tb = 1.3, tl=log(tb)]`

First, the formula for the alternating iteration series; this is "asum(x)". It is simply the summation of the iterated exponentials/logarithms. Because we want to compare this with a true power-series solution (via the Carleman-matrix concept), where we have to separate it into two parts, we do this here too: the alternating iteration series towards fp0 is asump(x), and that towards fp1 is asumn(x). After adding the two segments we have to reduce the joint sum by x, because it was doubly accounted for:

Code: `asump(x)= sumalt(k=0,(-1)^k*nextx(x, k))`

Now we create Carleman matrices for these sums, which means we get power series for them. First we need the Carleman matrices for tb^x-1, one developed around fp0 and one developed around fp1; these are the "Carl0" and "Carl1" matrices. The matrices which provide the coefficients of the power series for the alternating sums are then created using the Neumann-series expression for those Carleman matrices (CarlAsp, CarlAsn). Note the small difference between the formula for CarlAsn and that for CarlAsp.

\\ Series and Carleman-matrix related to fixpoint fp0 (= 0)
Code: `coeffs0 = polcoeffs(exp(tl*(x+fp0))-1 - fp0, dim); coeffs0[1]=0;`

If we ignore the integer-height iterations needed to shift an argument x sufficiently towards the fixpoints, then the formulae for the power-series solutions follow, where the coefficients p are taken from the coeffs_asp array, the coefficients n are taken from the coeffs_asn array, and t denotes the upper fixpoint fp1.
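For readers without Pari/GP at hand, the serial computation just described can be sketched in Python (my own illustration, not Gottfried's code; `sumalt` is imitated by summing the tail regularized at the fixed point, which matches the value sumalt computes for such alternating series; the names `asump`, `asumn`, `asum` follow the post, everything else is assumed):

```python
import math

TB = 1.3                    # base tb from the post
TL = math.log(TB)           # tl = log(tb)

def f(x):
    """One exponential step, f(x) = tb^x - 1; fp0 = 0 is its attracting fixed point."""
    return TB ** x - 1.0

def finv(x):
    """One logarithmic (inverse) step, log(1+x)/log(tb); its iterates climb to fp1."""
    return math.log(1.0 + x) / TL

def upper_fixpoint():
    """The upper fixed point fp1 > 0 with tb^fp1 - 1 = fp1, located by bisection."""
    lo, hi = 1.0, 20.0      # f(1) < 1 and f(20) > 20, so fp1 lies between
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < mid else (lo, mid)
    return 0.5 * (lo + hi)

FP1 = upper_fixpoint()      # about 8.63 for tb = 1.3

def asump(x, n=600):
    """asump(x) = sum_{k>=0} (-1)^k f^[k](x); the iterates fall to fp0 = 0,
    so this alternating series converges as it stands."""
    s, sign = 0.0, 1.0
    for _ in range(n):
        s += sign * x
        x, sign = f(x), -sign
    return s

def asumn(x, n=600):
    """asumn(x) = sum_{k>=0} (-1)^k finv^[k](x). The terms tend to fp1, not to 0,
    so the series diverges; it is summed in the regularized (Abel) sense
    fp1/2 + sum (-1)^k (y_k - fp1), whose tail does converge."""
    s, sign = 0.0, 1.0
    for _ in range(n):
        s += sign * (x - FP1)
        x, sign = finv(x), -sign
    return s + 0.5 * FP1

def asum(x):
    """Both halves together; the k = 0 term x appears in both, so subtract it once."""
    return asump(x) + asumn(x) - x
```

A quick consistency check: replacing x by f(x) shifts every index by one, so asump(f(x)) = x - asump(x) and the whole series flips sign, asum(f(x)) = -asum(x); this antisymmetry is what makes asum behave like a sine wave in the iteration height later in the thread.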
Now we test these functions:

Code: `\\ test that at x0=1.0`

The errors, with the power series truncated to 64 terms, are far smaller than 1e-100. So we should accept this as a very good approximation, and I think it is a reasonable hypothesis to assume it is a true analytical solution. However, for me it is somewhat unusual to have two power series to include in the computation, and also the initial "shift" of the x-argument towards the respective fixpoints by integer iterations, so that the power series converge. I have no idea how, for instance, this process could then be inverted to get a power series (even if only a formal power series) assigned to it... Here are the first few coefficients of the power series for reference, if you want to check this: columns 1 and 2 for the b^x-1 resp. log(1+x)/log(b) expressions, then column 3 for asump_mat and column 4 for asumn_mat.

RE: Iteration series: Half-iterate using the infinite iteration-series? - sheldonison - 12/15/2012
(12/13/2012, 02:49 AM)sheldonison Wrote: (12/11/2012, 01:16 AM)Gottfried Wrote: Hi Sheldon - ...it's an interesting summation, and probably leads to an analytic abel function.

Gottfried, this new superfunction is intriguing to me, so I generated a Taylor series for it. This alternating-sum function seems to be clearly a different solution than anything we've seen before, and it is analytic in the complex plane. I generated a Taylor series for the superfunction generated from the asum inverse abel function. This is the inverse of the abel function generated via the formula I posted earlier. The Taylor series is posted below, and you can cut and paste it into Pari/GP. The code to generate it was involved and complicated, but the results below are accurate to about 32 decimal digits. My initial observations are that this function is 7x closer to the upper fixed point than it is to the lower fixed point; it is not at the midway point, nor is it the average of the two functions. The nearest singularity appears to be near z=-0.45075+2.5642i. I don't understand exactly what causes it, but the graphs clearly show the singularity there, and I assume there are probably other singularities nearby. I only worked with one case, which is also conjugate to a tetration base, so I was able to use the kneser.gp code to generate the two Schroder-based superfunctions from the upper and lower fixed points. The algorithm I used to generate the alternating-sum superfunction involves generating the inverse of the abel function for complex z. Empirically, the amplitude=0.012759644622895086738385654211903016. Start with the equation for the abel function; use an iterative search for the inverse asum to find the corresponding value of z, for each individual point in a unit circle in the complex plane. Then I used those results in a Cauchy integral to generate the Taylor series.
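Sheldon's empirical recipe (find the maximum amplitude of asum, then invert asum numerically to read off fractional iteration heights) can be imitated in a rough Python sketch. This is my own reconstruction, not the thread's kneser.gp-based code; it assumes the sinusoidal model sigma*asum(f^[h](z0)) = A*sin(pi*h), and all names below are mine. It also demonstrates the half-iterate idea in the thread title: stepping by height 0.5 twice reproduces one whole iteration of f.

```python
import math

TB = 1.3
TL = math.log(TB)

def f(x):                              # one step of x -> tb^x - 1
    return TB ** x - 1.0

def finv(x):                           # inverse step log(1+x)/log(tb)
    return math.log(1.0 + x) / TL

lo, hi = 1.0, 20.0                     # bisect the upper fixed point fp1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < mid else (lo, mid)
FP1 = 0.5 * (lo + hi)

def asum(x, n=200):
    """Gottfried's alternating series asump + asumn - x, with the divergent
    half regularized around fp1 (the value Pari's sumalt assigns)."""
    sp, xx, sign = 0.0, x, 1.0
    for _ in range(n):
        sp += sign * xx
        xx, sign = f(xx), -sign
    sn, xx, sign = 0.5 * FP1, x, 1.0
    for _ in range(n):
        sn += sign * (xx - FP1)
        xx, sign = finv(xx), -sign
    return sp + sn - x

def bisect(g, a, b, iters=100):
    """Locate the (single) sign change of g on [a, b] by bisection."""
    ga = g(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if (g(m) > 0) == (ga > 0):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Height zero: asum changes sign exactly once between f(1) and 1,
# because asum(f(x)) = -asum(x).
z0 = bisect(asum, f(1.0), 1.0)
sigma = 1.0 if asum(0.5 * (z0 + f(z0))) > 0 else -1.0

# Amplitude: sigma*asum is unimodal on [f(z0), z0]; ternary-search its maximum.
a, b = f(z0), z0
for _ in range(200):
    m1, m2 = a + (b - a) / 3.0, b - (b - a) / 3.0
    if sigma * asum(m1) < sigma * asum(m2):
        a = m1
    else:
        b = m2
ystar = 0.5 * (a + b)                  # the extremum, height 0.5 above z0
A = sigma * asum(ystar)                # empirical maximum amplitude

# Half-iterate plumbing: a point at height 0.25, its half-step at height
# 0.75, then a further half-step at height 1.25, each found by inverting asum.
t = A * math.sin(0.25 * math.pi)
x25 = bisect(lambda u: sigma * asum(u) - t, ystar, z0)         # height 0.25
x75 = bisect(lambda u: sigma * asum(u) - t, f(z0), ystar)      # height 0.75
x125 = bisect(lambda u: sigma * asum(u) + t, f(ystar), f(z0))  # height 1.25
```

The computed A should land very close to Sheldon's empirical 0.0127596446..., and x125 should agree with f(x25) to near machine precision: composing two half-height steps reproduces one whole iteration. Note that this last check follows from the antisymmetry asum(f(x)) = -asum(x) alone, so it verifies the construction's self-consistency rather than Sheldon's analyticity claims.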
My algorithm involved the intermediate step of generating a Fourier series as well; I could post the results separately if anyone is interested. - Sheldon

Code: `{asumseries=`

RE: Iteration series: Half-iterate using the infinite iteration-series? - Gottfried - 12/15/2012
(12/14/2012, 07:16 PM)Gottfried Wrote: Here is some explanation in terms of Pari/GP-code, how the serial alternating iteration sums asum(x) can be expressed/computed with the help of power series.

Just a short but useful addendum: if we assume identity of the serial computation of asum(x) via the Pari/GP sumalt procedure and that via the Neumann-series matrices asum_mat(x) and its power series, then one should consider using the second method as the standard. I toyed a bit with differentiating and integrating using asum(x) and found that asum_mat(x) needs only about 1/20 of the computation time: my example integral needed 80 000 msec with the serial implementation of asum(x), but only 4 000 msec with the power-series implementation. This should also be useful for computing the inverse of asum(x), as long as we need to interpolate it by binary search/Newton iteration, where many function calls are needed. Moreover, I think this will prove useful once we step further, analytically continuing the range for x and for the base b outside the "safe intervals" and entering the realm of truly divergent series for asum(x). Gottfried

Additional reading: an early (2008) discussion of this method and some of the problems which we seemingly can resolve now, but also a (very natural) view into regions of bases outside the Euler-summable range for the serial computation of asum(x): http://go.helms-net.de/math/tetdocs/Tetraseriesproblem.pdf - and an involved discussion (2007) of the ability of the Neumann-type matrix for asum(x) to represent an analytic continuation for the divergent cases; the matrix ansatz was crosschecked against a Shanks summation in the range where the Shanks summation was computable: http://go.helms-net.de/math/tetdocs/IterationSeriesSummation_1.htm

RE: Iteration series: Half-iterate using the infinite iteration-series? - Gottfried - 12/15/2012
(12/15/2012, 04:47 AM)sheldonison Wrote: Gottfried,

Hi Sheldon - this sounds really great; at the moment (Saturday morning, 7 o'clock) it's a bit over my head, but I'll come back to it later. Cool that you have already taken the view into the complex plane; that is all very curious... What about the cosine instead of the sine? You introduced the cosine in the formula: is there some optimization argument for it? I think it shifts the height-zero definition from the zero crossing to the extremum position, and must be compensated elsewhere in the formulae? Gottfried

RE: Iteration series: Half-iterate using the infinite iteration-series? - sheldonison - 12/15/2012
(12/15/2012, 06:49 AM)Gottfried Wrote: Hi Sheldon -

Hey Gottfried, I think using cosine vs. sine is completely arbitrary. The asum of the superfunction has a period of 2, so switching from cosine to sine only requires a shift of -0.5. To generate the function, I needed to empirically calculate a very accurate maximum amplitude for your asum function, so I think that's when I started centering on the cosine(z) maximum amplitude of asum, as opposed to the sin(z) zero crossing of asum. I also noticed the maximum occurs approximately at the midway point between the two fixed points, so I kept working with cosine after that. Here is the equivalent series, shifted by -0.5. If you want the series centered anywhere else (within +/-2.35i in the complex plane), I can provide that too, with equivalent accuracy.

Code: `{asumsine=`

RE: Iteration series: Half-iterate using the infinite iteration-series? - tommy1729 - 12/15/2012
I agree with Sheldon.

regards

tommy1729

RE: Iteration series: Half-iterate using the infinite iteration-series? - Gottfried - 12/15/2012
(12/15/2012, 01:43 PM)sheldonison Wrote: Hey Gottfried,

Hi Sheldon - OK, it might be that there was some advantage which I had not seen... For me the best option seemed to be the sine, because I feel that norming x0 where asum(x0)=0 has some flair of naturality, and then searching left and right from there to the extrema to find the half-iterates alludes nicely to a sine wave. One day I must learn to understand the Cauchy-integral and Riemann-mapping subjects - perhaps I can ask you another day for some tutoring? Most often I need only the first key step into a matter and can then proceed on my own. (However, at the moment I may not get my head free enough; the two classes need some attention and special preparation before the break around Christmas and New Year - but let's see.) So far only a short response - Gottfried

RE: Iteration series: Half-iterate using the infinite iteration-series? - Gottfried - 12/16/2012
(12/15/2012, 01:43 PM)sheldonison Wrote:

Hi Sheldon - I just crosschecked your function against my asum_hgh(x) function:

Code: `h1 = 0.5`

It's Sunday; I'll come back to this, perhaps in the evening. Thanks so far! Gottfried

RE: Iteration series: Half-iterate using the infinite iteration-series? - sheldonison - 12/17/2012
I decided to compare the asum function for tetration base sqrt(2) to the "bummer" thread on this forum. The function sqrt(2)^z has an upper fixed point of 4 and a lower fixed point of 2. You can compare this image to the last image in the post I linked to. Base sqrt(2) has been discussed a great deal on this forum, so I thought it would make a good comparison study. The differences between these functions are extremely tiny, so they are scaled by 10^25 to fit on the same plot, which shows a little bit of the superfunction going from the upper fixed point of 4 down towards the lower fixed point of 2, with the graph centered at 3. [attachment=981] Once again, the asum superfunction is not an average of the superfunctions from the two fixed points. As you can see from the graph, the asum superfunction is much closer to the upper fixed point superfunction, which is entire, than to the lower fixed point superfunction, which is also imaginary-periodic in the complex plane but has logarithmic singularities. At the real axis in the neighborhood of 3, the asum itself has a very small amplitude of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example. Also, the asum needs a lot more iterations for the sqrt(2) base than for Gottfried's example, because of slower convergence towards the fixed points. If you take the asum of the upper fixed point superfunction, then that asum has a period of 2, and the third harmonic has a magnitude of 1.186E-36. The even harmonics of the asum of the superfunction from the fixed point are zero, since by definition asum(z)=-asum(z+1), and this antisymmetry is satisfied only by the odd harmonics, not the even harmonics. As imag(z) increases towards >8.5*I, the differences between these three functions become larger and larger in amplitude, becoming macroscopic instead of microscopic, growing with a fixed scaling factor.
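The sqrt(2) case can be partially reproduced with a short sketch (my own, not the thread's kneser.gp workflow). The twist here is that neither half of the series converges as an ordinary sum, since forward iterates go to 2 and inverse iterates to 4, so both halves are regularized around their fixed points:

```python
import math

R2 = math.sqrt(2.0)
LR2 = math.log(R2)

def f(x):
    """One step of x -> sqrt(2)^x; fixed points 2 (attracting) and 4 (repelling)."""
    return R2 ** x

def finv(x):
    """Inverse step log(x)/log(sqrt(2)); its iterates climb towards 4."""
    return math.log(x) / LR2

def asum(x, n=300):
    """Alternating iteration series for base sqrt(2). Both halves diverge as
    ordinary series (forward iterates -> 2, inverse iterates -> 4), so each is
    summed in the regularized sense fp/2 + sum (-1)^k (iterate - fp), mirroring
    what sumalt returns for such divergent alternating series."""
    sp, xx, sign = 1.0, x, 1.0         # 1.0 = 2/2, lower fixed point share
    for _ in range(n):
        sp += sign * (xx - 2.0)
        xx, sign = f(xx), -sign
    sn, xx, sign = 2.0, x, 1.0         # 2.0 = 4/2, upper fixed point share
    for _ in range(n):
        sn += sign * (xx - 4.0)
        xx, sign = finv(xx), -sign
    return sp + sn - x                 # the k = 0 term x was counted twice

# One full period of asum along the orbit is two iteration steps;
# scan it near x = 3 for the maximum amplitude.
x_hi = 3.0
x_lo = f(f(3.0))
amp = max(abs(asum(x_lo + (x_hi - x_lo) * i / 1500.0)) for i in range(1501))
```

With this normalization the scanned maximum should come out near the 7.66E-12 Sheldon quotes. Note the massive cancellation: the two regularized halves are each of order 1 and cancel to about eleven digits, which illustrates why this base needs more iterations and more numerical care than the 1.3 example.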
At some point the higher harmonics become visible too, and one expects the three functions to diverge once the upper fixed point superfunction is no longer converging towards 2 as real(z) increases. Eventually the asum superfunction is probably dominated by singularities; but I conjecture that the asum superfunction is not imaginary-periodic like the two fixed-point superfunctions. - Sheldon