[AIS] (alternating) Iteration series: Half-iterate using the AIS?

Nasser, Junior Fellow. Posts: 9, Threads: 3, Joined: Nov 2012. 12/14/2012, 05:06 PM

The half-iterate can only be defined if we define something in the third hyperoperation called the "power level", because we always assume these levels to be integers, for example 2^x, 2^2^x, etc. In the example, the number 2 is raised to the first power level and also to the second power level, but we should study it for non-integer levels as well. This study must be related to the third hyperoperation (the exponential function) before we jump to the fourth hyperoperation (tetration). If we can find solutions for that, it means the half-iterate can be defined. For me it is clear that:

(n^^x)^(*)(n^^x) = x, where * means "to the half power level".

Gottfried, Ultimate Fellow. Posts: 764, Threads: 118, Joined: Aug 2007. 12/14/2012, 07:16 PM (This post was last modified: 03/26/2015, 03:12 PM by Gottfried.)

Here is some explanation, in terms of Pari/GP code, of how the serial alternating iteration sum asum(x) can be expressed and computed with the help of power series.

Preliminaries: global variables for the base tb, the log tl of the base, and dim for the dimension. Then the standard call for the exponential/logarithm with a variable (but integer) height parameter:

Code:
[tb = 1.3, tl=log(tb)]
dim=64    \\ for power series expansion to "dim" terms, and matrix-size
\ps 64

\\ for direct access to the iteration height:
exph(x,h=1) = for(k=1,h, x= exp(x*tl)-1);for(k=1,-h,x=log(1+x)/tl);return(x);

glx = 1.0    \\ for sequential access to the iteration by one step
{nextx(x,h=0)= if (h==0, glx=x;return(glx));
    glx=if(h>0,exph(glx,1)
              ,exph(glx,-1));
  return(glx);
}

First, the formula for the alternating iteration series; this is "asum(x)". It is simply the alternating summation of the iterated exponentials/logarithms.
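For readers without Pari/GP, the exph helper can be sketched in Python; this is a minimal re-implementation (assuming base tb = 1.3, as above), checking that one forward step and one backward step cancel:

```python
from math import exp, log

TB = 1.3          # Gottfried's tb
TL = log(TB)      # tl

def exph(x, h=1):
    """h-fold iterate of f(x) = TB^x - 1 for h > 0, of log(1+x)/log(TB) for h < 0."""
    for _ in range(h):            # forward steps (empty loop if h <= 0)
        x = exp(x * TL) - 1.0
    for _ in range(-h):           # backward steps (empty loop if h >= 0)
        x = log(1.0 + x) / TL
    return x

# a forward step followed by a backward step returns the argument,
# and forward iterates fall towards the lower fixpoint 0
assert abs(exph(exph(1.0, 1), -1) - 1.0) < 1e-12
assert 0 < exph(1.0, 5) < 0.01
```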
Because we want to compare this with a true power-series solution (via the Carleman-matrix concept), where we have to separate the sum into two parts, we do the same here: the alternating iteration series towards fp0 is asump(x), and that towards fp1 is asumn(x). After adding the two segments we have to reduce the joint sum by x, because x is counted twice:

Code:
asump(x)= sumalt(k=0,(-1)^k*nextx(x, k))
asumn(x)= sumalt(k=0,(-1)^k*nextx(x,-k))
asum(x) = asump(x)+asumn(x) - x
\\ -------------------------------------------------------------

Now we create Carleman matrices for these sums; that means we shall get power series for them. First we need the Carleman matrices for tb^x-1 developed around fp0 and around fp1; these are the matrices "Carl0" and "Carl1". Then the matrices which provide the coefficients for the power series of the alternating sums are created using the Neumann-series expression for those Carleman matrices (CarlAsp, CarlAsn). Note the small difference between the formula for CarlAsn and that for CarlAsp.

Code:
\\ Series and Carleman matrix related to fixpoint fp0 (= 0)
coeffs0 = polcoeffs(exp(tl*(x+fp0))-1 - fp0, dim); coeffs0[1]=0;
print(coeffs0)    \\ display coeffs0 in the gp dialogue
Carl0   = matfromser(coeffs0);      \\ Carleman matrix for tb^x-1 by power series around fp0 (=0)
CarlAsp = (matid(dim) + Carl0)^-1   \\ Neumann-series matrix for Carl0, giving series coefficients for "asump" (= the alternating series towards fp0)
coeffs_asp = CarlAsp[,2]            \\ the coefficients in a vector, for computation of "asp_mat(x-fp0)+fp0" = "asp_mat(x)" because fp0=0

\\ Series and Carleman matrix related to fixpoint fp1
coeffs1 = polcoeffs(exp(tl*(x+fp1))-1 - fp1, dim); coeffs1[1]=0;
print(coeffs1)    \\ display coeffs1 in the gp dialogue
Carl1 = matfromser(coeffs1);        \\ Carleman matrix for tb^(x+fp1)-1-fp1 by power series around fp1 (=???)
CarlAsn = (matid(dim) + Carl1^-1)^-1   \\ Neumann-series matrix for Carl1^-1, giving coefficients for "asumn" (= the alternating series towards fp1)
coeffs_asn = CarlAsn[,2]               \\ the coefficients in a vector, for computation of "asn_mat(x-fp1)+fp1"

{asump_mat(x)=local(h,su);
   while(abs(x-fp0)>0.5,    \\ move the argument x for the power series towards fp0
           su=su+x; x=exph(x,1);
           su=su-x; x=exph(x,1);
    );
   x = x-fp0;
   su= su + sum(k=0,dim-1,x^k*coeffs_asp[1+k]) ;  \\ evaluate asp_mat at x_h
   su= su + fp0/2;
   return(su);
}

{asumn_mat(x)=local(h,su);
   while(abs(x-fp1)>0.5,    \\ move the argument x for the power series towards fp1
           su=su+x; x=exph(x,-1);
           su=su-x; x=exph(x,-1);
    );
   x = x-fp1;
   su= su + sum(k=0,dim-1,x^k*coeffs_asn[1+k]) ;  \\ evaluate asn_mat at x_(-h)
   su= su + fp1/2;
   return(su);
}

asum_mat(x)=asump_mat(x)+asumn_mat(x)-x
\\ ==============================================

If we ignore the integer-height iterations which shift the argument x sufficiently towards the fixpoints, the formulae for the power-series solutions are

$\text{asump}(x) = \sum_{k=0}^{\infty} p_k \cdot x^k$

where the coefficients $p_k$ are taken from the coeffs_asp array, and

$\text{asumn}(x) =\frac{t}{2}+ \sum_{k=0}^{\infty} n_k \cdot (x-t)^k$

where the coefficients $n_k$ are taken from the coeffs_asn array and t denotes the upper fixpoint fp1.

Now we test these functions:

Code:
\\ test at x0=1.0
x0=1.0
[aser=asump(x0),amat=asump_mat(x0),err=aser-amat]
[aser=asumn(x0),amat=asumn_mat(x0),err=aser-amat]
[aser=asum (x0),amat=asum_mat (x0),err=aser-amat]

\\         serial            matrix-based       difference
%1269 = [0.764698042872,   0.764698042872,   7.30924528016 E-132]
%1270 = [0.245204215222,   0.245204215222,   1.43545677556 E-118]
%1271 = [0.00990225809450, 0.00990225809450, 1.43545677556 E-118]

The errors with the power series truncated to 64 terms are far smaller than 1e-100.
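As a cross-check for readers without Pari/GP, the serial sums can be re-computed with a short Python sketch. Pari's sumalt is replaced here by repeated averaging of partial sums (an Euler-type transform; this stand-in is mine and is not part of the original code). The last lines also sum the leading coeffs_asp entries from the coefficient table further down, which at x = 1.0 should reproduce the serial asump(1.0):

```python
from math import exp, log

TB, TL = 1.3, log(1.3)
f  = lambda x: exp(x * TL) - 1.0   # tb^x - 1, iterates fall towards fp0 = 0
fi = lambda x: log(1.0 + x) / TL   # inverse step, iterates rise towards fp1

def asump(x, n=120):
    # terms tend to 0, so the alternating series converges directly
    s, sign = 0.0, 1.0
    for _ in range(n):
        s += sign * x
        sign, x = -sign, f(x)
    return s

def asumn(x, n=80, rounds=40):
    # terms tend to fp1, so the raw alternating series diverges; repeated
    # averaging of the partial sums (an Euler-type transform) recovers the
    # value that Pari's sumalt assigns to it
    sums, s, sign = [], 0.0, 1.0
    for _ in range(n):
        s += sign * x
        sums.append(s)
        sign, x = -sign, fi(x)
    for _ in range(rounds):
        sums = [(a + b) / 2.0 for a, b in zip(sums, sums[1:])]
    return sums[0]

def asum(x):
    return asump(x) + asumn(x) - x

print(asump(1.0))   # ~0.764698042872
print(asumn(1.0))   # ~0.245204215222
print(asum(1.0))    # ~0.00990225809450

# spot-check: the leading coeffs_asp entries tabulated below, summed at x = 1.0,
# reproduce the serial asump(1.0) to roughly 12 digits
coeffs_asp = [0.0, 0.792164376122, -0.0255084462139, -0.00188959012829,
              -0.0000721182735413, 0.00000283330292899, 0.000000868177178099,
              0.000000108306933352, 0.0000000106042541295, 0.000000000903757476731,
              6.72094524757e-11, 3.96472142050e-12, 9.51525791516e-14,
              -2.18803878944e-14]
print(sum(coeffs_asp))   # the truncated series evaluated at x = 1.0
```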
So we should accept this as a very good approximation, and I think it is a reasonable hypothesis to assume it is a true analytical solution. However, for me it is somewhat unusual to have two power series involved in the computation, and also the initial "shift" of the x-argument towards the respective fixpoints by integer iterations, such that the power series converge. I have no idea how, for instance, this process could then be inverted and a power series (even if only a formal power series) assigned to...

Here are the first few coefficients of the power series for reference, if you want to check this. Columns 1 and 2 give the coefficients of the tb^x-1 and tb^(x+fp1)-1-fp1 expressions, column 3 those of asump_mat, and column 4 those of asumn_mat.

$\small \begin{array} {r|r|r|r}
tb^x-1 & tb^{(x+fp_1)}-1-fp_1 & coeffsasp(x) & coeffsasn(x) \\ \hline \\
0 & 0 & 0 & 0 \\
0.262364264467 & 2.52769252337 & 0.792164376122 & 0.716528582529 \\
0.0344175036348 & 0.331588094847 & -0.0255084462139 & 0.0127206425725 \\
0.00300997434198 & 0.0289989555369 & -0.00188959012829 & -0.000764040945147 \\
0.000197427426075 & 0.00190207240994 & -0.0000721182735413 & 0.0000509718272678 \\
0.0000103595802856 & 0.0000998071657596 & 0.00000283330292899 & -0.00000353974018266 \\
0.000000452997276970 & 0.00000436430560552 & 0.000000868177178099 & 0.000000246153936735 \\
0.0000000169786139111 & 0.000000163576832872 & 0.000000108306933352 & -0.0000000164875359311 \\
0.000000000556822693809 & 0.00000000536458943004 & 0.0000000106042541295 & 0.000000000995691627932 \\
1.62322640556E-11 & 1.56386284443E-10 & 0.000000000903757476731 & -4.44139406950E-11 \\
4.25876601958E-13 & 4.10301724906E-12 & 6.72094524757E-11 & -4.06144047569E-13 \\
1.01577092206E-14 & 9.78622820588E-14 & 3.96472142050E-12 & 4.80559320387E-13 \\
2.22084992361E-16 & 2.13963047096E-15 & 9.51525791516E-14 & -8.90297750191E-14 \\
4.48208966694E-18 & 4.31817365188E-17 & -2.18803878944E-14 & 1.27988586030E-14 \\
\ldots & \ldots & \ldots & \ldots
\end{array}$

Gottfried Helms, Kassel

sheldonison, Long Time Fellow. Posts: 640, Threads: 22, Joined: Oct 2008. 12/15/2012, 04:47 AM (This post was last modified: 12/23/2012, 05:53 PM by sheldonison.)

(12/13/2012, 02:49 AM)sheldonison Wrote:
(12/11/2012, 01:16 AM)Gottfried Wrote: Hi Sheldon - ... Then the empirical observation that (it) is sort of "hybrid", ... and the fractional iteration based on it should then as well be taken as a "hybrid" of the developments at the two different fixpoints .... I have no idea how, for instance, this process could then be inverted and a power series (even if only a formal power series) obtained...
It's an interesting summation, and probably leads to an analytic Abel function via $\alpha(z)=\frac{1}{\pi}\cos^{-1}(\frac{\text{asum}(z)}{\text{amplitude}})$.

Gottfried,

This new superfunction is intriguing to me, so I generated a Taylor series for it. This alternating-sum function is clearly a different solution from anything we have seen before, and it is analytic in the complex plane. I generated a Taylor series for the superfunction obtained from the asum inverse Abel function, such that $\alpha^{-1}(z)=\text{superfunction}(z)=1.3^{\text{superfunction}(z-1)}-1$. This is the inverse of the Abel function generated via the formula I posted earlier, $\text{asum}(\alpha^{-1}(z))=\text{amplitude}\times \cos(\pi z)$. The Taylor series is posted below, and you can cut and paste it into pari-gp. The code to generate the Taylor series was involved and complicated, but the results below are accurate to about 32 decimal digits. My initial observation is that this function is about 7x closer to the upper fixed point than to the lower fixed point; it is not at the midway point, nor the average of the two functions.
The nearest singularity appears to be near z=-0.45075+2.5642i. I don't understand exactly what causes it, but the graphs clearly show the singularity there, and I assume there are probably other singularities nearby. I only worked with one case, $f(x)=1.3^x-1$, which is also conjugate to tetration base $1.3^{1/1.3}$, so I was able to use the kneser.gp code to generate the two Schroeder-based superfunctions from the upper and lower fixed points. The algorithm I used to generate the alternating-sum superfunction involves generating the inverse of the Abel function such that $\text{asum}(\alpha^{-1}(z))=\text{amplitude}\times\cos(\pi z)$ for complex z. Empirically, the amplitude = 0.012759644622895086738385654211903016. Start with the equation $\alpha^{-1}(z)=\text{asum}^{-1}(\text{amplitude}\times\cos(\pi z))$. Use an iterative search for the inverse asum to find the corresponding value, for each individual z in a unit circle in the complex plane. Then I used those results in a Cauchy integral to generate the Taylor series. My algorithm involved the intermediate step of generating a Fourier series for $\theta(z)$ such that $\alpha^{-1}(z)=\alpha^{-1}_{\text{upperfixed}}(z+\theta(z))$. I could post the $\theta(z)$ results separately if interested.
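Sheldon's side remark that $f(x)=1.3^x-1$ is conjugate to tetration base $1.3^{1/1.3}$ can be made concrete with a quick numeric check. The conjugacy map h below is my reconstruction (it is not spelled out in the post): with h(x) = 1.3(x+1), one gets h(f(x)) = beta^(h(x)) for beta = 1.3^(1/1.3):

```python
# Conjugacy sketch: h(f(x)) = g(h(x)) with
#   f(x) = 1.3^x - 1,  g(y) = beta^y,  beta = 1.3^(1/1.3),  h(x) = 1.3*(x+1).
# h is a reconstruction for illustration; both sides equal 1.3^(x+1).
beta = 1.3 ** (1.0 / 1.3)
f = lambda x: 1.3 ** x - 1.0
g = lambda y: beta ** y
h = lambda x: 1.3 * (x + 1.0)

for x in (-0.5, 0.0, 1.0, 2.7):
    assert abs(h(f(x)) - g(h(x))) < 1e-12
```

In particular, h carries the fixed points of f to the fixed points of g, so the two Schroeder-based superfunctions correspond under h as well.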
- Sheldon Code:{asumseries=         4.4326681493968770505571112763721 +x^ 1* -2.3127880792234992209576805587190 +x^ 2* -0.14084258491474819923457781223460 +x^ 3*  0.20335747585717425959215788217730 +x^ 4*  0.039275901513416003532439382454487 +x^ 5* -0.017009465238482374843375971269215 +x^ 6* -0.0068287020794466075850802628591481 +x^ 7*  0.00076014644212133151176635681156923 +x^ 8*  0.00086248272210835707107513090941173 +x^ 9*  0.000071520117050006623094494822900916 +x^10* -0.000077629314222049548560922960618619 +x^11* -0.000022505555300052370351182879813873 +x^12*  0.0000035401828994520591869141075019558 +x^13*  0.0000031483563900756350868343285614899 +x^14*  0.00000032157140688878365331284588360017 +x^15* -0.00000028558473156302855917778930719633 +x^16* -0.00000010334120760488941005250412797733 +x^17*  0.000000012943612612189077177828480927098 +x^18*  0.000000015360572479134856021220865544942 +x^19*  0.0000000011036019290515980548941065490902 +x^20* -0.0000000016730425874414457995630327124141 +x^21* -0.00000000036721227769768456656696964685516 +x^22*  1.4162975891198095985445333625383 E-10 +x^23*  5.8592098019216975332064455548172 E-11 +x^24* -8.0401538929341986731328293884869 E-12 +x^25* -7.3416487850088416632084519252185 E-12 +x^26* -1.5722225445945495608665708043910 E-13 +x^27*  7.8120308439624867309714945272883 E-13 +x^28*  1.5236013728170180703553240190130 E-13 +x^29* -6.6440204264056669379101686704698 E-14 +x^30* -3.3256155209233008361117491546045 E-14 +x^31*  2.7883958953618392771683428904884 E-15 +x^32*  5.4048795650959664939329068427242 E-15 +x^33*  4.8862748181200020531353685294020 E-16 +x^34* -7.2850552648689176555739757750074 E-16 +x^35* -1.6903839033767469660939690985276 E-16 +x^36*  8.1615817130139369058635852175326 E-17 +x^37*  3.2441786112344646197997328754328 E-17 +x^38* -7.0117617473585778251979936641602 E-18 +x^39* -4.8770242396322354595201016762943 E-18 +x^40*  2.7736370929525121569426683591711 E-19 +x^41*  6.0867216158279521431841056375529 
E-19 +x^42*  5.4221678758070323009283591239994 E-20 +x^43* -6.1144436818779516533573016247449 E-20 +x^44* -1.7898429546152626958145407293999 E-20 +x^45*  3.9506967114524661122627881771652 E-21 +x^46*  3.3861201509178531139520486671443 E-21 +x^47*  1.5559234721851701604010643412265 E-22 +x^48* -5.0631042609964385159471991321737 E-22 +x^49* -1.1394185585099365557198284259904 E-22 +x^50*  6.2904948098171521570692828155407 E-23 +x^51*  2.5827247624986099858687811008930 E-23 +x^52* -6.2471872957876823012867141466869 E-24 +x^53* -4.3587354373751214441358203074525 E-24 +x^54*  3.8884884436503892161650861946872 E-25 +x^55*  6.0943046646291791424395564521200 E-25 +x^56*  1.8439958337928800380369839903195 E-26 +x^57* -7.0859034973425997345978358446834 E-26 +x^58* -1.1632853102487036751983700912011 E-26 +x^59*  6.2601315833494088275748746969203 E-27 +x^60*  2.5199563986304743008057629022862 E-27 +x^61* -2.2323646504238188509264319710687 E-28 +x^62* -4.0557631158394751876949181749196 E-28 +x^63* -6.4627958522674708557831244169246 E-29 +x^64*  5.3241698056956264676968949735094 E-29 +x^65*  2.0886771988617750086971635596413 E-29 +x^66* -5.5811667984264629192614685069187 E-30 +x^67* -4.0897559936926817027829612691371 E-30 +x^68*  3.8119737006650043902694847671351 E-31 +x^69*  6.4047286261518541864231624031981 E-31 +x^70*  1.1022926017021090185661192397339 E-32 }

Gottfried, Ultimate Fellow. Posts: 764, Threads: 118, Joined: Aug 2007. 12/15/2012, 06:37 AM (This post was last modified: 12/15/2012, 07:11 AM by Gottfried.)

(12/14/2012, 07:16 PM)Gottfried Wrote: Here is some explanation in terms of Pari/GP code, how the serial alternating iteration sum asum(x) can be expressed/computed with the help of power series.

Just a short but useful addendum: if we assume identity of the serial computation of asum(x) via the Pari/GP sumalt procedure and the computation via the Neumann-series matrices, asum_mat(x) and its power series, then one should consider using that second method as the standard basis.
I toyed around a bit with differentiating and integrating using asum(x), and found that asum_mat(x) needs only about 1/20 of the computation time: my example integral needed 80 000 msec with the serial implementation of asum(x), but only 4 000 msec with the power-series implementation. This should also be useful for the computation of the inverse of asum(x), as long as we need to interpolate it by binary search or the Newton method, where many function calls are needed. Moreover, I think this will prove useful once we step further, analytically continuing the range for x and for the base b outside the "safe intervals" and entering the realm of truly divergent series for asum(x).

Gottfried

Additional reading:

An early (2008) discussion of this method and some of the problems, which we seemingly can resolve now, but also a (very natural) view into regions of bases outside the Euler-summable range for the serial computation of asum(x):
http://go.helms-net.de/math/tetdocs/Tetr...roblem.pdf

An involved discussion (2007) about the ability of the Neumann-type matrix for asum(x) to represent an analytic continuation in the divergent cases; the matrix ansatz was cross-checked against a Shanks summation in the range where the Shanks summation was computable:
http://go.helms-net.de/math/tetdocs/Iter...tion_1.htm

Gottfried Helms, Kassel

Gottfried, Ultimate Fellow. Posts: 764, Threads: 118, Joined: Aug 2007. 12/15/2012, 06:49 AM

(12/15/2012, 04:47 AM)sheldonison Wrote: Gottfried, This new superfunction is intriguing to me, so I generated a Taylor series for it. This alternating sum function seems to be clearly a different solution than anything we've seen before, and it is analytic in the complex plane.

Hi Sheldon - this sounds really great; at the moment (Saturday morning, 7 o'clock) it's a bit over my head, but I'll come back to it later. Cool that you already did the view into the complex plane; that is all very curious...
What about the cosine instead of the sine? You introduced the cosine in the formula: is there some optimization argument for it? I think it shifts the height-zero definition from the $x_0$- to the $x_{0.5}$-position and must be compensated elsewhere in the formulae?

Gottfried

Gottfried Helms, Kassel

sheldonison, Long Time Fellow. Posts: 640, Threads: 22, Joined: Oct 2008. 12/15/2012, 01:43 PM (This post was last modified: 12/15/2012, 02:25 PM by sheldonison.)

(12/15/2012, 06:49 AM)Gottfried Wrote: Hi Sheldon - this sounds really great ... What about the cosine instead of the sine? You introduced the cosine in the formula: is there some optimization argument for it? I think it shifts the height-zero definition from the $x_0$- to the $x_{0.5}$-position and must be compensated elsewhere in the formulae? Gottfried

Hey Gottfried, I think the choice of cosine vs. sine is completely arbitrary. $\text{asum}(\alpha^{-1}(z))$ has a period of 2, so switching from cosine to sine requires a shift of -0.5. To generate the $\alpha^{-1}$ function, I needed to empirically calculate a very accurate maximum amplitude for your asum function, so I think that is when I started centering on the cosine maximum amplitude of asum, as opposed to the sine zero crossing of asum. I also noticed the maximum occurs approximately at the midway point between the two fixed points, so I kept working with cosine after that. Here is the equivalent series, shifted by -0.5, for $\text{asum}(\alpha^{-1}(z))=\text{amplitude}\times\sin(\pi z)$. If you want the series centered anywhere else, up to +/-2.35i in the complex plane, I can provide that too, with equivalent accuracy.
Code:{asumsine=         5.5313086758882852195782696632367 +x^ 1* -2.0430674012650642389618054781319 +x^ 2* -0.37225775792009989386677189320438 +x^ 3*  0.099658659136533363777013664824123 +x^ 4*  0.056164249815831421690267039207707 +x^ 5*  0.0024341739796340503006223611990660 +x^ 6* -0.0048821136236917176881522235140457 +x^ 7* -0.0013625033138659058581259860969190 +x^ 8*  0.00013860229935952324746169580217982 +x^ 9*  0.00016031197013218517260959597928241 +x^10*  0.000026233716766403365591594754596210 +x^11* -0.0000078397741621291476048490513062752 +x^12* -0.0000041654570665036359076289439137305 +x^13* -0.00000043429845750234876856499448438055 +x^14*  0.00000020220255943526951948556412163547 +x^15*  0.00000010469795440929394604162385515811 +x^16*  0.000000022104640668047825697063550269138 +x^17* -0.0000000050752778367022374662152498065127 +x^18* -0.0000000054268147119071794970623507258578 +x^19* -0.00000000088377028240205109643729838693901 +x^20*  0.00000000053724808383268656860483599781710 +x^21*  2.2695303649482206760147509826254 E-10 +x^22* -2.0755300318667016920518969613493 E-11 +x^23* -2.8606106192156205211842427520580 E-11 +x^24* -2.4115669661617778343439526060037 E-12 +x^25*  2.4097061226690732664291635948851 E-12 +x^26*  5.3024239886580580464040016659410 E-13 +x^27* -1.3352860389778966943126997119541 E-13 +x^28* -4.4676066586863015132548102097061 E-14 +x^29*  5.6825914165455658434390405670152 E-15 +x^30* -4.8592335192576129248775913955152 E-16 +x^31* -1.2100990011140146543285064592901 E-15 +x^32*  7.6111952883565188094575760424171 E-16 +x^33*  4.0350523000744874273361378890307 E-16 +x^34* -1.2914109236182899564695839116964 E-16 +x^35* -8.9756371000000830488253273137664 E-17 +x^36*  1.1989065889716564053858711596068 E-17 +x^37*  1.4933359276677119637947130427580 E-17 +x^38* -2.6806360741610229880740798420225 E-19 +x^39* -2.0268972881933589191981413633055 E-18 +x^40* -1.2667505083540508761525831833105 E-19 +x^41*  2.3992792073090946823641233544269 E-19 +x^42*  
2.7280436906032629343367213777381 E-20 +x^43* -2.7155583649715904622682398258307 E-20 +x^44* -3.3406372025307868881770171768099 E-21 +x^45*  3.3583042746061851597680641775725 E-21 +x^46*  2.8315393198749754587922156142229 E-22 +x^47* -4.9201511971234663439574586786040 E-22 +x^48* -1.9937960921207882784670984182728 E-23 +x^49*  7.9867782170632207111309500434223 E-23 +x^50*  2.8277790598852084735133034329599 E-24 +x^51* -1.2866120668308479467449444353412 E-23 +x^52* -8.0009133087978621182694530691203 E-25 +x^53*  1.9452214442357667538533694067711 E-24 +x^54*  1.9749461280904030199079122668892 E-25 +x^55* -2.7421549570850430614725716263261 E-25 +x^56* -3.8853147981092745620598414279056 E-26 +x^57*  3.6832634896487287902937856325878 E-26 +x^58*  6.4643562945915383055244186623733 E-27 +x^59* -4.8954244803017602327236947884728 E-27 +x^60* -9.6822083644359538571851444600130 E-28 +x^61*  6.6905004127327106773176137800281 E-28 +x^62*  1.3903608890787339354290348605861 E-28 +x^63* -9.5633563001034297889480127198725 E-29 +x^64* -2.0299634580180042177140372334119 E-29 +x^65*  1.4105236281490162329783786025153 E-29 +x^66*  3.1114882040622484025316768678442 E-30 +x^67* -2.0873558246659861647640896462484 E-30 +x^68* -4.9639057430754083874748766760233 E-31 +x^69*  3.0323328554893664432512738086894 E-31 +x^70*  7.9943649080493773011295509130754 E-32 +x^71* -4.2928725325635863410096873706367 E-32 +x^72* -1.2676013386130031721900449722132 E-32 }

tommy1729, Ultimate Fellow. Posts: 1,370, Threads: 335, Joined: Feb 2009. 12/15/2012, 10:06 PM

I agree with Sheldon.

regards

tommy1729

Gottfried, Ultimate Fellow. Posts: 764, Threads: 118, Joined: Aug 2007. 12/15/2012, 11:27 PM

(12/15/2012, 01:43 PM)sheldonison Wrote: Hey Gottfried, I think the choice of cosine vs. sine is completely arbitrary.

Hi Sheldon - OK, it might have been that there was some advantage which I had not seen ...
For me the best option seemed to be the sine, because I feel that norming the x0 where asum(x0)=0 has some flair of naturality, and then searching left and right from there for the extrema to find the half-iterates alludes much to a sine wave. One day I must learn to understand the Cauchy-integral and Riemann-mapping subjects; perhaps I can ask you another day for some tutoring? Most often I need only the first key step into a matter and can then proceed on my own. (However, at the moment I may not get my head free enough; the two classes need some attention and special preparation before the break around Christmas and New Year, but let's see.)

So far only a short response -

Gottfried

Gottfried Helms, Kassel

Gottfried, Ultimate Fellow. Posts: 764, Threads: 118, Joined: Aug 2007. 12/16/2012, 12:05 PM

(12/15/2012, 01:43 PM)sheldonison Wrote:
Code:
{asumsine(x)=   5.5313086758882852195782696632367
+x^ 1* -2.0430674012650642389618054781319
+x^ 2* -0.37225775792009989386677189320438
+x^ 3*  0.099658659136533363777013664824123
(...)

Hi Sheldon - I just crosschecked your function against my asumhgh(x) function:

Code:
h1 = 0.5
x1 = asumsine(h1)
h_chk = asumhgh(x1)+1   \\ I'd centered h0 to be one period distant from asumsine(0)
h1 - h_chk
%1290 = 0E-125

A perfect match! Extremely cool :-) It's Sunday; I'll come back to this perhaps in the evening. Thanks so far!

Gottfried

Gottfried Helms, Kassel

sheldonison, Long Time Fellow. Posts: 640, Threads: 22, Joined: Oct 2008. 12/17/2012, 10:19 PM (This post was last modified: 12/17/2012, 11:01 PM by sheldonison.)

I decided to compare the asum function for tetration base sqrt(2) with the "bummer" thread on this forum. Iterating $\sqrt{2}^{z}$ from 3, there is an upper fixed point of 4 and a lower fixed point of 2. You can compare this image to the last image in the post I linked to. The base sqrt(2) has been discussed a great deal on this forum, so I thought it would be a good comparison study.
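The two fixed points mentioned here are easy to verify numerically; a minimal Python sketch (illustration only, not Sheldon's kneser.gp machinery):

```python
from math import log, sqrt

B = sqrt(2.0)
f  = lambda x: B ** x           # one iteration step sqrt(2)^x
fi = lambda x: log(x) / log(B)  # one inverse step

# 2 and 4 are both fixed points of sqrt(2)^x
assert abs(f(2.0) - 2.0) < 1e-12 and abs(f(4.0) - 4.0) < 1e-12

# between them, forward iteration is attracted to the lower fixed point 2,
# while inverse iteration is attracted to the upper fixed point 4
x = y = 3.0
for _ in range(200):
    x, y = f(x), fi(y)
assert abs(x - 2.0) < 1e-9 and abs(y - 4.0) < 1e-9
```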
The differences between these functions are extremely tiny, so they are scaled by 10^25 to fit on the same plot, which shows a little bit of the superfunction going from the upper fixed point of 4 down towards the lower fixed point of 2, with the graph centered at 3.

Once again, the asum superfunction is not an average of the superfunctions from the two fixed points. As you can see from the graph, the asum superfunction is much closer to the superfunction of the upper fixed point, which is entire, than to that of the lower fixed point, which is also imaginary-periodic in the complex plane but has logarithmic singularities. At the real axis in the neighborhood of 3, the asum itself has a very small amplitude of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example. Also, the asum needs many more iterations for the sqrt(2) base than for Gottfried's example, because of slower convergence towards the fixed points. If you take the asum of the upper fixed point superfunction, then the asum has a period of 2, and the third harmonic has a magnitude of 1.186E-36. The even harmonics of the asum of the superfunction from the fixed point are zero, since by definition asum(z)=-asum(z+1), and this holds only for the odd harmonics, not the even ones. As imag(z) increases towards >8.5*I, the differences between these three functions become larger and larger in amplitude, becoming macroscopic instead of microscopic. The differences increase with a scaling factor of $\exp(2\pi \Im z)$. At some point the higher harmonics become visible too, and one expects the three functions to diverge once the upper fixed point superfunction is no longer converging towards 2 as real(z) increases. Eventually the asum superfunction is probably dominated by singularities; but I conjecture that the asum superfunction is not imaginary-periodic like the two fixed-point superfunctions.

- Sheldon
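The sign-flip relation used above, asum(z) = -asum(z+1), meaning that one iteration step negates the alternating sum, can be verified numerically in Gottfried's base-1.3 example, where convergence is fast. As before, Pari's sumalt is imitated by repeated averaging of partial sums, which is a stand-in of mine rather than code from the thread:

```python
from math import exp, log

TL = log(1.3)
f  = lambda x: exp(x * TL) - 1.0   # one step of 1.3^x - 1
fi = lambda x: log(1.0 + x) / TL   # one inverse step

def alt(step, x, n=80, rounds=40):
    # alternating series of iterates of `step`, summed by repeated
    # averaging of partial sums (handles the divergent branch too)
    sums, s, sign = [], 0.0, 1.0
    for _ in range(n):
        s += sign * x
        sums.append(s)
        sign, x = -sign, step(x)
    for _ in range(rounds):
        sums = [(a + b) / 2.0 for a, b in zip(sums, sums[1:])]
    return sums[0]

def asum(x):
    return alt(f, x) + alt(fi, x) - x

# advancing the argument by one iteration step flips the sign of asum:
for x0 in (0.5, 1.0, 2.0):
    assert abs(asum(f(x0)) + asum(x0)) < 1e-8
```

This sign flip is exactly what makes the parametrization asum(α⁻¹(z)) = amplitude×cos(πz) consistent: one unit step in the height z negates the cosine, so the composite has period 2.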

