[AIS] (alternating) Iteration series: Half-iterate using the AIS?
#1
I'm still looking at the behaviour of series of iterates, like \( s(x)=x-b^x+b^{b^x} -b^{b^{b^x} } + ... - ... \), and considering the potential of such series for the definition of half-iterates, independently of the selection of fixpoints and of the fractional "regular tetration" and similar goodies. The best approach so far to make this compatible with criteria like summability (we cannot have convergence in the following definitions) seems to be the following ansatz.

We take a base, say b=1.3, and use the decremented exponentiation \( f:x \to b^x-1 \), such that we have two real fixpoints. Then we can select any abscissa \( x_0 \) between those fixpoints and iterate infinitely to positive heights and to negative heights as well, getting \( x_1,x_2,... \) and \( x_{-1},x_{-2},... \). The alternating series of all these iterates is then Cesaro- or Euler-summable to some finite value \( S(x) \).

Obviously this is periodic with \( S(f^{[2+h]}(x))=S(f^{[h]}(x)) \) and also sinusoidal. With that base b=1.3 I find for instance at \( x_0 \sim 0.427734366938 \) that \( S(x_0)=0 \).

After this it is surely natural to assume that, beginning at the first iterate, we also have \( S(x_1)=0 \); but it seems natural to assume as well that the maxima or minima of the sinusoidal curve over \( S(x_0)...S(x_1) \) lie at the half-iterates between them.

Well, this is the crucial assumption for my discussion here, which must prove to be sensible. Now if I let Pari/GP search for the first extremum in that curve, I get the abscissa \( x_{\min\text{(serial)}} \sim 0.2273401 546757 \). If I compute the half-iterate using the regular tetration via the square root of the formal power series/the Schröder-function mechanism, I get \( x_{0.5 \small \text{(regular)}} \sim 0.2273401 704241 \), which is very close, but only to some leading digits. The values of the infinite series beginning at these values differ only by 1e-16 and smaller, so maybe the non-match is an artifact (which I do not believe).

Do you have any opinion about this, or even any idea how to proceed to make it an interesting item (say, we find some method where \( x_{0.5\,(\text{your method})} \sim x_{\min(\text{serial})} \)) - or simply that it would be better to put this all aside, for a good reason?

(I can provide the Pari/GP-routines if this would be convenient)

Gottfried

[update]: I adapted the title to improve the organization of the threads-list
Gottfried Helms, Kassel
#2
(12/06/2012, 12:10 AM)Gottfried Wrote: Obviously this is periodic with \( S(f^{[2+h]}(x))=S(f^{[h]}(x)) \) and also sinusoidal. With that base b=1.3 I find for instance at \( x_0 \sim 0.427734366938 \) that \( S(x_0)=0 \).

After this it is surely natural to assume that, beginning at the first iterate, we also have \( S(x_1)=0 \); but it seems natural to assume as well that the maxima or minima of the sinusoidal curve over \( S(x_0)...S(x_1) \) lie at the half-iterates between them.

The problem with this is that it's a "not even wrong" statement.

Just like one could use ALMOST ANY 1-periodic real function to make a new superexp.

Gottfried assumed that they - the extrema - must be half-iterates ... but half-iterates based upon which method?
After all, all methods agree on what an integer iteration is, but most disagree about what a half-iterate is. Not to mention requirements such as analyticity and properties such as multiple fixpoints.

So without properties or uniqueness criteria, it is just a half-iterate by its OWN method.
Notice that \( s(x)/2 \) also satisfies \( F(b^x-1) = -F(x) \), so the *half* is a "not even wrong" statement, and even its value is a "not even wrong" statement.

The disease seems to be randomness without proven uniqueness or properties.

Notice that F(x) = F(A(x)) has the general solution F(x) = B( ... + C(D(x)) + C(A(D(x))) + ... ) for an uncountable number of functions B, C and D.
In fact all solutions are like that. Working with G(inversesuperA(x)) for a suitable periodic G is just an equivalent form.

Although different viewpoints are interesting and this is an interesting question, I do not see a big solution in this for now.
Since Gottfried's example is about a function with 2 fixpoints, it means it is a solution that is not both analytic and satisfying the functional equation everywhere -- since such a function does not exist.

Besides, we - if I may say we - are mainly looking for superfunctions of functions that do not have 2 distinct real fixpoints, but rather no real fixpoints at all (such as exp(x)).

Also I do not think summability methods preserve properties such as simultaneously being analytic and satisfying the functional equation ... apart from perhaps a few trivial exceptions.

There are many uniqueness criteria made up for tetration, so it might be possible to find F(x) = B( ...+ C(D(x)) + C(z^(D(x))) + ...) such that these hold.

Otherwise at the moment I see no potential.
#3
Thank you for your consideration of my proposal!

(12/08/2012, 06:27 PM)tommy1729 Wrote: Gottfried assumed that they - the extrema - must be half-iterates ... but half-iterates based upon which method ?


:-) - just based upon this method...: the method is designed to define a half-iterate in the first place.
Consequently we have the "uniqueness criterion" that it is simply compatible with the alternating iteration series (call it "\( asum(x) \)" in the following) ...

... where the half-iterates are defined to be at the extrema of \( asum(x) \).
The idea for generalizing this to continuous real heights is to define any real height via the inverse sine function: we evaluate \( asum(x) \) and find some \( x_k \) where \( asum(x_k)=0 \). Because \( asum(x) \) is periodic with two iterations of \( b^x-1 \), we can choose one convenient such *x* to be \( x_0 \), for which we define the height *h* to be zero. Nearby we find one extremum of \( asum(x) \), whose *y*-value is the general amplitude of that periodicity. Then we evaluate \( asum(x) \) for *x* from \( x_0 \) to \( x_2=b^{b^{x_0}-1}-1 \) and define the height to be the arcsine of \( {asum(x) \over amplitude} \), thus

\( h_x = \frac{1}{\pi}\,\sin^{-1}\!\left(\frac{\operatorname{asum}(x)}{\text{amplitude}}\right) \)

I hope that some advantage of this definition over other "naive" methods and 2-periodic (but otherwise arbitrary) functions will come out of the fact that the resulting curve of the relation from x to \( h_x \) is smooth, and the relation from \( h_x \) to \( asum(x_h) \) is a perfect sine curve, from which we could compute x from h using \( \sin \) and \( asum^{-1}(x) \), if...
... yes, if we also had an explicit inverse of the \( asum(x) \)-function. Currently I'm replacing that missing inverse by a binary search/Newton iteration, in the same way as Andy Robbins does with his own sexp()-method; but that Newton method is not safe near the half-iterates, where the derivative is near zero. I'm currently investigating a better interpolation there.
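On that inversion problem: a guarded bisection avoids the breakdown of Newton steps where the derivative vanishes, since the bracket halves every step no matter how flat the curve is. A generic Python illustration (editor's sketch; a plain sine stands in for the normalized asum curve, this is not the actual routine):

```python
import math

# Stand-in for asum(x)/amplitude: one monotone branch of a sine, h in [-1/2, 1/2].
def g(h):
    return math.sin(math.pi * h)

def invert_by_bisection(target, lo=-0.5, hi=0.5, iters=200):
    # g is increasing on [lo, hi]; bisection halves the bracket every step,
    # regardless of how flat g is near its extrema (where Newton's 1/g' blows up).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# a target very close to the extremum at h = 1/2, the bad region for Newton
h = invert_by_bisection(0.999999)
```

The trade-off is speed: bisection gains one binary digit per step, while Newton is quadratic where it is safe, so a practical routine might switch between the two.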


It is, btw., very near to the regular iteration of decremented exponentiation developed at the attracting fixpoint 0 (I've not yet checked the difference to the regular dxp at the repelling fixpoint; maybe the asum-method defined here lies, for instance, in the middle of those two methods). For the example base b=1.3, the difference to the regular dxp at fractional heights is smaller than about 1e-6.


Quote: Afterall , all methods agree on what an integer iteration is , but most disagree about what a half-iterate is. Not to mention requirements such as analytic and properties such as multiple fixpoints.
The comment on the half-iterate is above; the analyticity is a delicate point. I'll look at it once I have a full working implementation.


Quote:So without properties or uniqueness criteria, it is just a half-iterate by its OWN method.


Yes, this is an OWN method, and so are its interpolated fractional heights/function values. Surely the method can only be more than a fancy game if its definition of fractional iteration has some interesting relations to other places in number theory.

Quote:... no potential.
Well, thanks anyway.

The picture shows the relation of x_h and h for the base b=1.3. The upper (repelling) fixpoint is at about x=8.63, asymptotic to infinite negative height h (or: beyond the left side of the picture); the lower (attracting) fixpoint is at x=0, asymptotic to infinite positive height (or: beyond the right side of the picture).


Attached Files Thumbnail(s): [plot of \( x_h \) versus *h* for base b=1.3]
Gottfried Helms, Kassel
#4
Hmm, that is a surprise.
I just checked the vague impression that asum(x) is just "in the middle" of the matrix-based versions of that alternating sum, when I use the fixpoint-0 Bell matrix for the half of the alternating sum whose terms go to the fixpoint 0, and the fixpoint-1 Bell matrix for the other half of the alternating sum (whose terms go to the fixpoint fp1).

Bingo!

It is as simple as that. So I can determine asum(x) via the matrix method (= Neumann series of the Carleman matrix) as well, which seemed to be impossible before.
I don't yet know what it means that the interpolation to fractional heights by this method involves both real fixpoints - perhaps this defines a meaningful alternative to the situation in regular tetration, where we have two different and seemingly unrelated fractional heights depending on the selection of the fixpoint.



But another aspect might be more interesting: the matrix-based method seems to be able to analytically continue that alternating iteration series to bases outside the Euler range, as I discussed elsewhere already in 2007 or 2008 (for a reminder: it was worked out that the matrix method employed here is very likely able to assign a reasonable value to iteration series of the type \( S(x) = x - e^x + e^{e^x} - e^{e^{e^x}} + \cdots \pm \cdots \); see http://go.helms-net.de/math/tetdocs/Iter...tion_1.htm ). So I now have to check whether the current discovery can also be used to extend the asum-based fractional tetration to other bases...

other relevant posts in the forum:
http://math.eretrandre.org/tetrationforu...php?tid=38
http://math.eretrandre.org/tetrationforu...hp?tid=692

[update 31.3.2015] link to the discussion of the analytic continuation on my tetration-pages added
Gottfried Helms, Kassel
#5
(12/06/2012, 12:10 AM)Gottfried Wrote: I'm still looking at the whereabouts of series of iterates, like \( s(x)=x-b^x+b^{b^x} -b^{b^{b^x} } + ... - ... \) (...) The alternating series of all that iterates is then Cesaro- or Euler-summable to some finite value \( S(x) \). (...)

I'm interested in your method, but I can't seem to reproduce the results. I tried using a series of positive and negative iterates like

\( S(x)\ \stackrel{\mathrm{pseudo}}{=}\ \cdots - f^{-3}(x) + f^{-2}(x) - f^{-1}(x) + x - f(x) + f^2(x) - f^3(x) + \cdots \)

(where the "pseudo-equality" implies that we are considering equality via divergent summation as opposed to a "true" equality of a convergent series)

and summing via Cesaro summation by averaging the partial sums

\( P_1(x) = x \)
\( P_2(x) = -f^{-1}(x) + x - f(x) \)
\( P_3(x) = f^{-2}(x) - f^{-1}(x) + x - f(x) + f^2(x) \)
...

all where \( f(x) = b^x - 1 \) and \( f^{-1}(x) = \log_b(x + 1) \) for \( b = 1.3 \), but it seems to approach 0 or something close to it. Now, this is not strictly wrong since \( S(x) = 0 \) would satisfy \( S(f^{h+2}(x)) = S(f^h(x)) \), but it's not what we want. I've tried other ways of arranging the partial sums (like Cesaro-summing the positive-"degree" terms and the negative-"degree" terms separately), but it doesn't seem to help. Do I need to use Euler summation?
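For what it's worth, plain Cesaro averaging of this arrangement does seem to converge, but only at rate O(1/n), so with few terms the result is dominated by the slowly-settling constant part (roughly half the upper fixpoint) and the oscillation of interest is invisible. An editorial Python sketch (term counts are ad hoc) checking the backward, repelling-direction half alone, once by (C,1) averaging of partial sums and once by the shortcut of summing \( (-1)^k(x_{-k}-fp1) \) convergently and adding \( \eta(0)\,fp1 = fp1/2 \):

```python
import math

b = 1.3
lb = math.log(b)

def f_inv(x):               # backward iteration: x_{-1} = log_b(1+x)
    return math.log(1 + x) / lb

# upper fixpoint fp1 of b^x - 1 (attracting under f_inv)
fp1 = 1.0
for _ in range(500):
    fp1 = f_inv(fp1)

x = 0.427734366938
n = 200000

# Cesaro (C,1): average the partial sums of x - f^{-1}(x) + f^{-2}(x) - ...
t, sign, partial, acc = x, 1.0, 0.0, 0.0
for _ in range(n):
    partial += sign * t
    acc += partial
    t = f_inv(t)
    sign = -sign
cesaro = acc / n            # converges like O(fp1/n)

# regularized value: sum (-1)^k (x_{-k} - fp1) converges, plus eta(0)*fp1 = fp1/2
t, sign, eta_val = x, 1.0, 0.5 * fp1
for _ in range(400):
    eta_val += sign * (t - fp1)
    t = f_inv(t)
    sign = -sign
```

The two agree to roughly fp1/(2n), which illustrates why a naive Cesaro run with a few hundred terms looks like "0 or something close to it" relative to its own error.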
#6
(12/10/2012, 03:33 AM)mike3 Wrote: (...)
I've tried other ways of arranging the partial sums (like Cesaro-summing the positive-"degree" terms and the negative-"degree" terms separately), but it doesn't seem to help. Do I need to use Euler summation?

Hi Mike, I did not check Cesaro-summability; I just used the sumalt-procedure in Pari/GP:

Code:
fmt(200,12)   \\ set internal float precision to 200 and display digits to 12
                    \\                   (by user defined function)

tb=1.3 \\ set the exponential base in global variable
tl = log(tb) \\ set the log of the base

\\ procedure to allow sequential access to consecutive iterates via sumalt()
\\ nextx(x,0) - initializes glx
\\ nextx(x,h) - if h>0 gives the next iterate towards the attracting fixpoint
\\ nextx(x,h) - if h<0 gives the next iterate towards the repelling fixpoint

      glx=0   \\ global x-variable
nextx(x,h=1)=if(h==0,glx=x,if(h>0,glx=exp(glx*tl)-1,glx=log(1+glx)/tl));return(glx)
    \\   nextx(1.5, 0)  \\ example call

\\ == procedure for doubly infinite alternating iteration series beginning at x
{asum(x)=local(su0,su1,su);
     su0  = sumalt(k=0,(-1)^k*nextx(x,k));
     su1  = sumalt(k=0,(-1)^k*nextx(x,-k));
     su   = su0+su1-x;
return(su); }
    \\ asum(0.6) \\ example

[update]: Even simpler: the alternating sum towards the fixpoint fp0=0 converges outright; but also the alternating sum towards the upper fixpoint fp1 can be obtained by separating off the convergent sum of the \( \pm (x_{-h} - fp1) \) and then adding half of the fixpoint (Dirichlet's eta at 0 is 0.5):
Code:
sum(h=0,100,(-1)^h*(nextx(1.4,-h) - fp1) )  + 0.5*fp1

(take a meaningful upper limit for the sum instead of 100)
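The routine above can also be rendered in Python; the following editorial sketch replaces sumalt by direct summation of the (convergent) forward part and by the eta-regularization of the backward part described in the update. Function names and term counts are ad hoc:

```python
import math

tb = 1.3                  # exponential base, as in the Pari/GP code
tl = math.log(tb)

def nxt(x):               # one iterate towards the attracting fixpoint 0
    return math.exp(x * tl) - 1

def prv(x):               # one iterate towards the repelling fixpoint fp1
    return math.log(1 + x) / tl

fp1 = 1.0                 # upper fixpoint, attracting under prv
for _ in range(500):
    fp1 = prv(fp1)

def asum(x):
    # forward half: x - f(x) + f^2(x) - ...  (terms -> 0, converges outright)
    su0, t, s = 0.0, x, 1.0
    for _ in range(300):
        su0 += s * t
        t, s = nxt(t), -s
    # backward half, eta-regularized: sum (-1)^k (x_{-k} - fp1) + 0.5*fp1
    su1, t, s = 0.5 * fp1, x, 1.0
    for _ in range(300):
        su1 += s * (t - fp1)
        t, s = prv(t), -s
    return su0 + su1 - x  # x was counted in both halves

x0 = 0.427734366938       # the zero of asum reported in post #1
```

Shifting the starting point by one iterate flips the sign of the whole series, so asum(nxt(x)) = -asum(x) and asum(nxt(nxt(x))) = asum(x) hold exactly; that makes a convenient numerical check of the implementation.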
Gottfried Helms, Kassel
#7
(12/10/2012, 10:38 AM)Gottfried Wrote: Hi Mike, I did not check Cesaro-summability; I just used the sumalt-procedure in Pari/GP: (...)

Ah... now I see, it seems I wasn't using enough terms in the Cesaro summation. But dang, this thing is close to 0. (EDIT: Ah! I see the scale in your graph... I didn't see that second scale on the right hand side -- guess I'm all OK here with this, now.)

Using the sumalt, or even better convergent summation, seems to yield the same values, so I guess it's rescued.
#8
(12/06/2012, 12:10 AM)Gottfried Wrote: ....
We take a base, say b=1.3, and use the decremented exponentiation \( f:x \to b^x-1 \), such that we have two real fixpoints....
... If I compute the halfiterate using the regular tetration via the squareroot of the formal powerseries/the Schröder-function mechanism, I get \( x_{0.5(regular)} \sim 0.2273401 704241 \) which is very close, but only to some leading digits. The values of the infinite series beginning at these values differ only by 1e-16 and smaller, so maybe the non-match is an artifact (which I do not believe).
When analyzing iterated exponentiation for a base less than eta, and looking at the behavior between the two fixed points, there is inherent ambiguity, which may or may not be relevant to Gottfried's current example.

Working with b=1.3 for decremented exponentiation, \( f:x \to b^x-1 \), is conjugate to working with tetration for base c=1.3^(1/1.3)=1.22362610172251..., \( g:x \to c^x \), where \( f^{[z]} = \frac{g^{[z]}}{1.3}-1 \). So this problem is exactly analogous to looking at tetration base c, \( g:x \to c^x \), which has two fixed points, 1.3 and 12.52457261... Correspondingly, \( f:x \to b^x-1 \) also has two fixed points, which are translated by the formula I gave, \( f^{[z]} = \frac{g^{[z]}}{1.3}-1 \), to 0 and 8.634286622...
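Sheldon's conjugacy is easy to check numerically. In the editorial Python sketch below, the affine map \( \varphi(x)=1.3\,(x+1) \) is an explicit form of the translation implied by \( f^{[z]} = g^{[z]}/1.3-1 \), so that \( g = \varphi\circ f\circ\varphi^{-1} \):

```python
import math

b = 1.3
c = b ** (1 / b)          # = 1.22362610172251..., the conjugate tetration base

def f(x):                 # decremented exponentiation
    return b**x - 1

def g(y):                 # plain exponentiation, base c
    return c**y

def phi(x):               # conjugacy map: f-coordinates -> g-coordinates
    return b * (x + 1)

def phi_inv(y):
    return y / b - 1

# the conjugacy identity g = phi o f o phi^{-1}, checked at a few sample points
conj_err = max(abs(g(y) - phi(f(phi_inv(y)))) for y in (0.5, 1.3, 3.0, 7.0))

# upper fixed point of g, found by iterating log_c (attracting in that direction)
p = 5.0
for _ in range(300):
    p = math.log(p) / math.log(c)
```

The lower fixed point of g is exactly b, since c^b = (b^(1/b))^b = b, and phi maps the fixed points 0 and 8.634286... of f to 1.3 and 12.52457... of g.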

So then this is tetration for base c, which is less than eta, and it seems similar to Henryk's and Dimitrii's paper on base sqrt(2), which also has two real fixed points. For Gottfried's current example, we would expect the two real fixed points to lead to slightly different superfunctions, depending on which fixed point is used to generate the Schroeder function. Both superfunctions can be defined to be real valued on the real axis, with values ranging from 0 to 8.634286..., but the two have different imaginary periods in the complex plane. From the lower fixed point of zero, with period=4.695878i, we expect a logarithmic singularity, and from the upper fixed point, with period=6.775735i, we get superexponential growth. I was curious which of the two superfunctions Gottfried's current algorithm was closer to. It is also possible to have a hybrid of these two superfunctions, which I posted on here, http://math.eretrandre.org/tetrationforu...hp?tid=515.
- Sheldon

#9
(12/11/2012, 12:44 AM)sheldonison Wrote: (...) I was curious which of the two superfunctions Gottfried's current algorithm was closer to. It is also possible to have a hybrid of these two superfunctions, which I posted on here, http://math.eretrandre.org/tetrationforu...hp?tid=515.
- Sheldon

Hi Sheldon -

what I found using the Bell matrices for the lower and the upper fixpoint is that the current method combines the results for the alternating iteration series developed at those fixpoints.
Say, the Bell matrix B0 for the fixpoint-development at fp0=0 gives, via \( AS_0 = (I + B_0)^{-1} \), the coefficients of a power series that determines the alternating iteration series beginning at x0 towards fp0 (let's call it asp(x)) correctly - but not towards the other direction (!). On the other hand, the Bell matrix B1 for the fixpoint-development at the higher fixpoint fp1 gives, via \( AS_1 = (I + B_1^{-1})^{-1} \), the coefficients of a power series that determines the alternating iteration series beginning at x0 towards fp1 (call it asn(x)).
Then the empirical observation that indeed asp(x)+asn(x)-x = asum(x) shows that asum(x) (which is obtained from the two-way infinite iteration series, ignoring the Bell matrices with their fixpoint-specificity) is a sort of "hybrid" ... and the fractional iteration based on it should then likewise be taken as a "hybrid" of the developments at the two different fixpoints (where one should still explicate the concrete meaning of this "hybridity" in a formalized algebraic description).
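The forward half \( asp(x)=\sum_{k\ge 0} (-1)^k f^{[k]}(x) \) can indeed be produced as a power series. Rather than building the Bell/Carleman matrix and inverting \( I+B_0 \), the editorial Python sketch below (ad-hoc truncation parameters) sums the equivalent Neumann series \( \sum_k (-1)^k B_0^k \) term by term, i.e. adds up the Taylor coefficients of the iterates of f at the fixpoint 0, and compares the result against the direct orbit sum:

```python
import math

b, N, K = 1.3, 24, 90         # base, truncation order, number of Neumann terms
lb = math.log(b)

# Taylor coefficients of f(x) = b^x - 1 = sum_{k>=1} (lb*x)^k / k!   (f(0) = 0)
fc = [0.0] + [lb**k / math.factorial(k) for k in range(1, N)]

def mul(p, q):                # truncated product of two power series
    r = [0.0] * N
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j < N:
                    r[i + j] += pi * qj
    return r

def compose(a, c):            # a(c(x)) for series with zero constant term (Horner)
    acc = [0.0] * N
    for k in range(N - 1, 0, -1):
        acc = mul(acc, c)
        acc[0] += a[k]
    return mul(acc, c)

# Neumann-style sum: asp-coeffs = sum_{k=0}^{K-1} (-1)^k coeffs(f^{[k]})
asp = [0.0] * N
cur = [0.0] * N
cur[1] = 1.0                  # f^{[0]} = identity
sign = 1.0
for _ in range(K):
    for j in range(N):
        asp[j] += sign * cur[j]
    cur = compose(fc, cur)    # next iterate: f^{[k+1]} = f o f^{[k]}
    sign = -sign

def eval_series(coeffs, x):   # Horner evaluation of the truncated series
    y = 0.0
    for cf in reversed(coeffs):
        y = y * x + cf
    return y

x = 0.3
series_val = eval_series(asp, x)

# direct orbit sum x - f(x) + f(f(x)) - ... for comparison
t, s, direct_val = x, 1.0, 0.0
for _ in range(200):
    direct_val += s * t
    t = b**t - 1
    s = -s
```

Note the linear coefficient of asp is the geometric sum \( \sum_k (-\lambda)^k = 1/(1+\lambda) \) with \( \lambda=\ln b \), which is exactly the (1,1)-behaviour one expects from \( (I+B_0)^{-1} \) at a fixpoint with multiplier \( \lambda \).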

Perhaps it would be fruitful to explicate this much more. Possibly I can provide some examples tomorrow. Also, if someone needs ready-made Pari/GP procedures to check this for him/herself, I can provide my current function definitions.

Gottfried
Gottfried Helms, Kassel
#10
(12/11/2012, 01:16 AM)Gottfried Wrote: Hi Sheldon -

... Then the empirical observation that (it) is a sort of "hybrid", ... and the fractional iteration based on it should then likewise be taken as a "hybrid" of the developments at the two different fixpoints (where one should explicate the concrete meaning of this "hybridity" in a formalized algebraic description).
Thanks Gottfried. I think it's an interesting summation, and probably leads to an analytic Abel function via \( \alpha(z)=\frac{1}{\pi}\sin^{-1}\left(\frac{\text{asum}(z)}{\text{amplitude}}\right) \). My hunch is that the superfunction, \( \alpha^{-1}(z) \), would have fractal singularities as imag(z) increases, approaching Period/2 for the lower fixed point; but that is only a pure intuitive hunch, not based on any data or numerical results.
- Sheldon

