[AIS] (alternating) Iteration series: Half-iterate using the AIS?
#21
(12/15/2012, 06:37 AM)Gottfried Wrote: I think moreover, that this shall prove useful, once we shall step further to analytically continue the range for the x and for the base b outside the "safe intervals" and enter the realms of truly divergent series for the asum(x).


Thanks for the links, I remember reading about your alternating series a while ago, and forgot where it went! I also remember having some interesting thoughts last time, but can't remember what they were. I will re-read them and see if I can make some comments. I feel in the mood for tetration this season!


Andrew Robbins
#22
(12/17/2012, 10:19 PM)sheldonison Wrote: At the real axis near the neighborhood of 3, the asum itself has a very small amplitude, of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example.
Here is an earlier plot showing how the amplitude decreases as the base b goes to 2:

[attachment: plot of the asum amplitude as the base b approaches 2]
Gottfried Helms, Kassel
#23
(12/18/2012, 11:17 AM)Gottfried Wrote:
(12/17/2012, 10:19 PM)sheldonison Wrote: At the real axis near the neighborhood of 3, the asum itself has a very small amplitude, of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example. (typo: off by 2x - the amplitude is approximately 1.533E-11)
Here is an earlier plot showing how the amplitude decreases as the base b goes to 2:
I have an equation for the approximate log(amplitude) of the alternating sum, iterating b^z-1. The approximation works for bases < e, and becomes more accurate as the base gets closer to e, with the adjust factor seeming to have a value of about 1.685 (log base e).
\( \log(\text{amplitude})=\frac{\pi^2}{\log(\log(b))}+\text{constant} \)
edit: I shouldn't have used "constant"; adjust(b) seems ok:
\( \log(\text{amplitude})=\frac{\pi^2}{\log(\log(b))}+\text{adjust}(b) \)
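For orientation, a quick numeric check of this formula in Pari/GP (a throwaway sketch; treating the adjust value of about 1.685 quoted above as a constant is just a simplification for this check):
Code:
b = 2.0;
log_amp = Pi^2/log(log(b)) + 1.685;   \\ approximate log(amplitude), about -25.2
exp(log_amp)                          \\ roughly 1.1E-11 for b = 2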

First, I reproduced Gottfried's graph:
[attachment: reproduction of Gottfried's amplitude graph]
Then I extended the calculation to b=2.65; the graph here is shown up to b=2.5, with both the calculated values and the approximation:
[attachment: log(amplitude) up to b=2.5 - calculated values and approximation]

The key to the approximation, and to calculating the asum for such tiny amplitudes, lies in the complex plane. I prefer to work with the upper fixed point, because the superfunction generated there is entire, rather than with the lower fixed point. The following Fourier series works for \( f^{[z]}(m) \), where \( f(z)=b^z-1 \) is the decremented exponential, m is the midpoint between the upper fixed point and the lower fixed point of zero, and the superfunction \( f^{[z]}(m) \) was generated from the upper fixed point.

\( \text{asum}(f^{[z]}(m)) = \sum_{n=1,3,5,7...} a_n\exp(n\pi i z) + \overline{a_n}\exp(-n\pi i z) \)

On the real axis this is a 2-periodic, real-valued Fourier series, and it has only odd terms, since asum(z)=-asum(z+1). The asum may take very tiny values at the real axis, as the example for b=2 shows, but it grows exponentially as imag(z) increases or decreases away from the real axis. This graph is for b=2, with the upper fixed point = 1 and m = 0.5. The graph is the asum of the upper fixed point superfunction, generated at 1/2 the period from the lower fixed point, where \( \Im(z)=\frac{-\pi}{\log(\log(2))}\approx 8.572 \). On the left is asum(z), and on the right is the superfunction, \( f^{[z-\pi i/\log(\log(2))]}(0.5) \). Real is graphed in red, imaginary in green.

[attachment: left, asum(z); right, the superfunction, for b=2; real in red, imaginary in green]

The ideal asum(z) = amplitude*cos(pi z), which grows exponentially as the imaginary part of z increases: \( \cos(\pi z)=0.5\,(\exp(\pi i z) + \exp(-\pi i z)) \). If it has some defined limiting behavior as the base approaches e at the height \( \Im(z)=\frac{-\pi}{\log(\log(b))} \), then the value at the real axis must be scaled from that value by \( \exp(\frac{\pi^2}{\log(\log(b))}) \). That is where the approximation comes from. At base e the period is infinite, but for the upper fixed point there is still a defined point where the upper fixed point best approximates the lower fixed point, see http://math.eretrandre.org/tetrationforu....php?fid=3. You can generate the asum of this function, which goes to 0 at both +/- real infinity. I'll post more when I calculate it.
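Spelled out, the scaling argument is this: with \( \operatorname{asum}(z)\approx A\cos(\pi z) \) one has \( |\operatorname{asum}(z)|\approx \tfrac{A}{2}\,e^{\pi\,\Im(z)} \) for large \( \Im(z) \); requiring this to stay of order 1 at the height \( \Im(z)=\frac{-\pi}{\log(\log(b))} \) gives \( \log A \approx \frac{\pi^2}{\log(\log(b))}+\text{const} \), i.e. the approximation above.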


One final plot. Eventually \( f^{[z]}(0.5) \) ceases to converge towards the upper fixed point and instead goes to infinity. This is a singularity for asum(\( f^{[z]} \)), where \( f^{[z]} \) is generated from the upper fixed point. The analytic boundary of the asum, as expressed by the Fourier series, occurs when \( f^{[z]}=1+\frac{2\pi i}{\log(2)} \), because then \( f^{[z+1]}=f(1+\tfrac{2\pi i}{\log 2})=2\,e^{2\pi i}-1=1 \), which is the upper fixed point of f since 2^1-1=1. Arbitrarily close to this point (but above it) is a point where \( f^{[z+n]} \) goes to infinity. It is fairly straightforward to calculate the corresponding z, which occurs at 0.882829631453880483797+8.92366740700108685788i. This is about +0.35i bigger than the period/2. Here is a graph of asum(\( f^{[z]} \)) at its analytic limit. Here the asum is still continuous, but you can clearly see the singularity. On the left is the asum, and on the right is the superfunction \( f^{[z]} \). Real is graphed in red, imaginary in green.
[attachment: left, asum at its analytic limit; right, the superfunction; real in red, imaginary in green]

Maybe I'll post more later. I also figured out some approximations for the singularities of the asum based superfunction, which eventually hits a wall of fractal singularities as |imag(z)| increases. The analytic limit for the asum based superfunction is a little bit less than the analytic limit of the asum from the upper fixed point, which was shown here.
- Sheldon



#24
Hi Sheldon -

your post really shows a great deal of work!
I'm not yet able to follow all of that; I'm still concerned (and stuck) with a much more basic question about the possibility of defining an optimized procedure to compute the derivatives using power series instead of the iteration series itself. I'll describe this (and ask for help) in another post.
(12/22/2012, 04:12 PM)sheldonison Wrote: I have an equation for the approximate log(amplitude), for the alternating sum, iterating b^z-1. The approximation works for bases<e, and becomes more accurate as the base gets closer to e. The constant is about 0.73.
\( \log(\text{amplitude})=\frac{\pi^2}{\log(\log(b))}+\text{constant} \)

Well, we can see the slight change of slope at the left of your reproduced graph for the amplitude. I thus suspected that your proposed formula for the amplitude is too simple; just look at the smaller bases. To improve on the mere visual impression I took, for instance, base b=1.05 and got an amplitude of 4.388... Smaller bases give values that are again greater in absolute value, but changing in sign (possibly reflecting the "phase" graph in my older article; I'll see about extracting it and putting it here for the current discussion).

I'm extremely intrigued by your comments concerning the complex-valued arguments and the various bases - I'll look at it more deeply after I have solved/optimized my derivative problem: I'm not yet able to express the computation of the derivatives d asum(x)/dx in terms of my (Bell-)matrix ansatz so as to speed up computations (and complete the matrix formalism). I'll insert such a posting later...

Gottfried
Gottfried Helms, Kassel
#25
(12/23/2012, 09:01 AM)Gottfried Wrote: Well, we can see the slight change of slope at the left of your reproduced graph for the amplitude. I thus suspected that your proposed formula for the amplitude is too simple; just look at the smaller bases.
Hey Gottfried,

I should've used "approximate". I've since changed the equation to put an adjust(b) term in there. The graph showing the estimate alongside the actual values shows the adjust(b) required, although there it is in base 10; here the adjust term is base e.
\( \log(\text{amplitude})=\frac{\pi^2}{\log(\log(b))}+\text{adjust}(b) \)

Below is a graph showing the required adjust(b) term. Notice that at b=e the log(asum_amplitude) has an arbitrarily large negative singularity, but the adjust term is still defined. I actually plan to calculate the asum and the adjust term for base e, iterating exp(z)-1. It has an infinite period, but the asum from the upper superfunction is defined at \( \frac{\pi i}{3} \), which I think turns out to be equivalent. If I have time, I'll post more later. Good luck with your matrix and derivative calculations.
- Sheldon
[attachment: graph of the required adjust(b) term]
#26
In this post I describe a current problem of mine in trying to compute the derivative of asum(x) with respect to x using my matrix ansatz.
Remark: It might look like a step back behind Sheldon's power-series solution and the possibilities for computation/analysis derived from it, and so might seem superfluous or even ignorant. But it's not meant that way - I just want to complete the formulation of the Carleman/Neumann matrix ansatz for this group of questions as well. However, here I stumble over a tiny but mind-bending problem. To expose everything as cleanly as possible I split the planned post into two: first a simple, but more complete than before, restatement of the matrix representation, and then the next post with the problem/question concerning derivatives.


I begin with a restatement of the general problem: this iteration series in its usual computation via Cesaro/Euler summation of the iteration terms, and then in terms of two power series, using the Bell matrices developed at the two fixpoints and the Neumann versions of those Bell matrices for the iteration series.
We begin with the formulation

\( \operatorname{asum}(x) = \ldots + x_{-m} - x_{-m+1} + \ldots - x_{-1} + x_0 - x_{1} + \ldots - x_{n-1} + x_n - \ldots \)


Here \( x_k \) denotes \( x_k = b^{x_{k-1}}-1 \): increasing k means iterating by \( x_{k+1}=f(x_k)=b^{x_k}-1 \), and decreasing k means \( x_{k-1} = g(x_k-t_1)+t_1 \), where \( g(x) = \log(1+x+t_1)/\log(b) -t_1 \) and \( t_1 \) is the upper fixpoint.

Thanks to our powerful computers this can be approximated(!) in reasonable time, say about a second using sumalt() in Pari/GP at, say, a standard precision of 200 digits.
The conversion of this into a problem using power series (besides the advantage of having it in common analytical terms) provides a much faster computation; it needs only about a 20th of the time.
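For concreteness, here is a minimal Pari/GP sketch of this direct sumalt() evaluation (the variable names and the example base b=2 are ad hoc, not those of my actual implementation):
Code:
default(realprecision, 200);
b = 2.0;
f(t) = b^t - 1;                    \\ forward iteration, towards the lower fixpoint 0
g(t) = log(1 + t)/log(b);          \\ inverse iteration, towards the upper fixpoint t1 (= 1 for b = 2)
xk(t, k) = if(k >= 0, for(j=1, k, t = f(t)), for(j=1, -k, t = g(t))); t
\\ asum(x) = ... + x_{-2} - x_{-1} + x_0 - x_1 + x_2 - ... as two alternating sums;
\\ sumalt() also returns the regularized value for the branch whose terms tend to t1 instead of 0
asum(x) = sumalt(k=0, (-1)^k * xk(x, k)) + sumalt(k=1, (-1)^k * xk(x, -k));
asum(0.5)                          \\ very small for b = 2; the amplitude is of the order 1E-11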
The principle of the idea is to separate the iteration series into three parts

\( \begin{array} {llll}
\operatorname{asum}(x) &=& \ldots +x_{-2} - x_{-1} &+ x_0 \\
& & &+ x_0 - x_1 + x_2 - \ldots \\
& & & - x_0
\end{array}
\)


find power series for the two infinite partial iteration series and correct for the doubly occurring term \( x_0 \) by subtracting one instance.
I'd already solved this problem; I'll repeat the statement here because the open problem which I want to address below is directly associated with it.

First, let's denote the partial iteration series with positive indexes, whose terms go towards the lower fixpoint, as " \( \operatorname{asuma}(x) \) ", and the one with negative indexes, going towards the upper fixpoint, as " \( \operatorname{asumb}(x) \) ".

Second, we denote the Bell matrix for \( f(x) = b^x - 1 \), developed at the lower fixpoint \( t_0=0 \), as \( U_0 \), and that for \( g(x)=\log(1+x+t_1)/\log(b)-t_1 \), developed at the upper fixpoint \( t_1>0 \), as \( U_1 \).

Third, we identify the matrices which provide the power series for the partial iteration series towards the lower fixpoint, \( \operatorname{asuma}(x) \), and towards the upper fixpoint, \( \operatorname{asumb}(x) \), as

\( A_0 = (I + U_0)^{-1} \)

and
\( A_1 = (I + U_1)^{-1} \)



Then -in principle- we can compute:

\( \begin{array} {rcl} \operatorname{asuma}(x) &=& \sum_{k=0}^\infty a_{0,k}*x^k \\
\operatorname{asumb}(x) &=& \frac {t_1}2 + \sum_{k=0}^\infty a_{1,k}*(x-t_1)^k \end{array} \)


(using the \( a_{0,k} \) from the second column of \( A_0 \) and the \( a_{1,k} \) from the second column of \( A_1 \) ) and the asum(x) by

\( \operatorname{asum}(x) = \operatorname{asuma}(x) + \operatorname{asumb}(x) - x \)
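To make the matrix step concrete, here is a small Pari/GP sketch of the lower-fixpoint part (truncation size, variable names and the example base are chosen ad hoc). The convention used is that the row vector of powers \( (1, x, x^2, \ldots) \) times \( U_0 \) gives the powers of \( f(x) \), so the "second column" is column 2 of \( A_0 \); \( A_1 \) and \( \operatorname{asumb}() \) are built in exactly the same way from g.
Code:
default(realprecision, 60);
N = 64;                                   \\ truncation of the Bell matrix / number of coefficients
b = 2.0;
fs = exp(log(b)*x + O(x^N)) - 1;          \\ power series of f(x) = b^x - 1 at the fixpoint 0
\\ Bell matrix U0: column j holds the series coefficients of f(x)^(j-1)
pw = vector(N); pw[1] = 1 + O(x^N);
for(j=2, N, pw[j] = pw[j-1]*fs);
U0 = matrix(N, N, i, j, polcoeff(pw[j], i-1));
A0 = (matid(N) + U0)^(-1);                \\ Neumann series  I - U0 + U0^2 - U0^3 + ...
a0 = A0[, 2];                             \\ the "second column": the coefficients a_{0,k}
asuma(t) = sum(k=0, N-1, a0[k+1]*t^k);
\\ cross-check against the directly summed alternating series  x - f(x) + f(f(x)) - ...
f(t) = b^t - 1;
it(t, n) = for(j=1, n, t = f(t)); t
x0 = 0.1;                                 \\ keep the argument near the fixpoint
[asuma(x0), sumalt(n=0, (-1)^n * it(x0, n))]   \\ the two values should agree well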



The two occurring power series seem to be entire, but that's not yet known. Anyway, to get convergence to reasonable precision with finitely many terms, say 64, the argument x should be near the fixpoints. Fortunately this can be achieved exactly, by a couple of integer iterations of x using the original functions \( f(x) \) and \( g(x) \). So if we find that we need n iterations to shift x sufficiently near to the lower fixpoint (by iterating f(x)) and m iterations to shift it to the upper fixpoint (by iterating g(x)), then we can rewrite the three parts of \( \operatorname{asum}(x) \) in the following way. Using


\( \operatorname{asumae}(x_m,x_n) = \sum_{h=m}^n (-1)^h x_{h} \)


we get for the whole expression

\( \operatorname{asum}(x) = \operatorname{asuma}(x_{n}) + \operatorname{asumae}(x_{n-1},x_{-m+1}) + \operatorname{asumb}(x_{-m}) \)


where, by the implemented procedure, the m and n are to be determined such that the two power series converge sufficiently fast (for instance, I've chosen them such that the resulting x is less than 0.1 away from the respective fixpoint).


In my practical computations this speeds up the computation by a factor of about 20, so that bulk analyses can be done in reasonable time.
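As a cross-check of this bookkeeping, here is a small Pari/GP sketch (ad hoc variable names) comparing the shifted decomposition with the direct two-sided evaluation; note that n and m are taken even here, so that the signs of the two partial series line up with the formula exactly as written above:
Code:
default(realprecision, 60);
b = 2.0;  t1 = 1.0;                       \\ for b = 2 the upper fixpoint is exactly 1
f(t) = b^t - 1;   g(t) = log(1 + t)/log(b);
xk(t, k) = if(k >= 0, for(j=1, k, t = f(t)), for(j=1, -k, t = g(t))); t
asuma(t) = sumalt(j=0, (-1)^j * xk(t,  j));    \\ regularized tail towards the lower fixpoint 0
asumb(t) = sumalt(j=0, (-1)^j * xk(t, -j));    \\ regularized head towards the upper fixpoint t1
asum(t)  = asuma(t) + asumb(t) - t;
asum_shifted(t, n=6, m=6) =
{
  asuma(xk(t, n))                              \\ tail starting at x_n
  + sum(h = -(m-1), n-1, (-1)^h * xk(t, h))    \\ finite middle part asumae(x_{n-1}, x_{-m+1})
  + asumb(xk(t, -m))                           \\ head starting at x_{-m}
}
x0 = 0.5;
[asum(x0), asum_shifted(x0)]                   \\ the two values should agree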


The next step is to use this scheme also for the computation of the derivative (for instance, to find the zero and the extrema of \( \operatorname{asum}() \) as a function of x via Newton iteration).
Everything (except time consumption, and the limitation in precision) is fine when I compute the derivatives by numerical differentiation and the serial/sumalt evaluation of \( \operatorname{asum}(x) \). But trying the same matrix ansatz as above for the derivatives, using the technique of the \( \operatorname{asumae}(x_{n-1},x_{1-m}) \), fails when that sum contains more than the "trivial" term x itself. See the next posting.

Gottfried Helms, Kassel
#27
(Part 2 of 2 posts)

The practical reason for using the derivatives is, in my case, that I want to apply Newton iteration to find zeros and extrema to high/arbitrary accuracy. Because the matrix-based approach is powerful for computing the original values of \( \operatorname{asum}() \), I want to apply it here too.
The naive approach for the derivative at some x, with \( h \to 0 \), is of course

\( \operatorname{asum}(x)'={(\operatorname{asum}(x+\frac h2)-\operatorname{asum}(x- \frac h2)) \over h} \)


This can be evaluated using the numerical evaluation of the iteration series via sumalt() or, better, using the matrix-based method for the numerical evaluation.
But because the derivative of the series is the series of the derivatives of its terms, and the terms have an analytical expression for the derivative, we could try to base an evaluation on the derivatives of the iterates. Again, the matrix method is superior here; since the partial iteration series \( \operatorname{asuma}() \) and \( \operatorname{asumb}() \) are expressible as analytic power series, we can simply insert the coefficients for the term-by-term differentiation, such that in a first step we would write:

\( \begin{array} {rcl} \operatorname{asuma}(x)' &=& \sum_{k=1}^\infty k * a_{0,k}*x^{k-1} \\
\operatorname{asumb}(x)' &=& 0 + \sum_{k=1}^\infty k*a_{1,k}*(x-t_1)^{k-1} \end{array} \)


and the asum(x) by

\( \operatorname{asum}(x)' = \operatorname{asuma}(x)' + \operatorname{asumb}(x)' - 1 \)


This works very well in principle; even if I use some x near 3.5, both series seem to converge sufficiently well with n=64 terms. However, to improve the approximation we should again shift the x towards the fixpoints for each partial series and also compute the derivatives of the individual terms around \( x=x_0 \) by the explicit analytical formulae for those terms, such that we would have something like

\( \operatorname{asum}(x)' = \operatorname{asuma}(x_n)' + \operatorname{asumae}(x_{n-1},x_{1-m})' + \operatorname{asumb}(x_{-m})' \)



But now: I cannot make this computation correct if I take more terms for the middle part of the series. Everything is still fine if I use the numerically approximated derivatives for the two partial series

\( \begin{array} {rcl}
\operatorname{asum}(x)' &\sim& {\operatorname{asuma}(f(x+h/2,n))-\operatorname{asuma}(f(x-h/2,n)) \over h}\\
& + & {\operatorname{asumb}(g(x+h/2-t_1,m)+t_1) - \operatorname{asumb}(g(x-h/2-t_1,m)+t_1) \over h} \\ &+& \operatorname{asumae}(x_{n-1},x_{1-m})' \end{array} \)


where in the \( \operatorname{asumae}() \) I can use the analytical expressions for its individual terms, but for the h in the evaluations of the other partial series I can only go down to something like \( h=10^{-12} \), and not much smaller, because of loss of precision.
This is especially unsatisfactory because an analytical expression seems to be within very close reach!

I tried for a couple of days to find appropriate expressions for the arguments of the matrix-based analytical derivatives \( \operatorname{asuma}()' \) and \( \operatorname{asumb}()' \), but always got stuck.

Gottfried
Gottfried Helms, Kassel
#28
Well, I've now worked out the method for finding the analytic formulae for the first and second derivatives of asum(x) in terms of power series. It is always

\( \operatorname{asum}(x) =
\operatorname{asum}_0(x,p) +
\operatorname{asum}_c(x,p,q) +
\operatorname{asum}_1(x,q)
\)
\( \operatorname{asum}'(x) =
\operatorname{asum}'_0(x,p) +
\operatorname{asum}'_c(x,p,q) +
\operatorname{asum}'_1(x,q)
\)
\( \operatorname{asum}''(x) =
\operatorname{asum}''_0(x,p) +
\operatorname{asum}''_c(x,p,q) +
\operatorname{asum}''_1(x,q)
\)


where
  • the \( \operatorname{asum}_0(x,p) \) and \( \operatorname{asum}_1(x,q) \) are expressed as power series in x around the respective fixpoint,
  • the parameters p and q indicate initial shifts by integer iterations towards the respective fixpoint, and
  • the \( \operatorname{asum}_c(x,p,q) \) contains the remaining finite alternating sum over the integer iterates \( x_{-(q-1)} \cdots x_{p-1} \) around the center \( x_0=x \).



This requires computing the first and second derivatives of \( x_{-q},x_{-q+1},\ldots,x_{-1},x,x_1,x_2,\ldots,x_p \), and for the second derivative \( \operatorname{asum}''(x) \) some rule of combination - I can provide the details if this is of interest; since the derivatives can be computed recursively, this is not much computational effort. (For the recursion for the first derivatives I amazingly found an early reference in Ramanujan's notebooks, but not yet for the second derivatives, so this all remains based on pattern recognition so far, and the inductive proofs will have to follow another day...)

The point of this part of the investigation is to now have the possibility of invoking Newton iteration for the zeros and the extrema of the asum without needing the basic, but costly, limit formula \( \lim_{h \to 0} { \operatorname{asum}(x+h/2)-\operatorname{asum}(x-h/2) \over h} \), which seemed unsatisfactory to me.

I'd now like to relate this to Sheldon's earlier posted solution as a single power series, where some reservation was expressed concerning the achievable accuracy of computation (something like ~32 decimal digits). Can that power series be made arbitrarily precise (at least in principle)? And if so, what would be the amount of computation? And does this include the possibility of a power series for the inverse of asum(x)?

Gottfried Helms, Kassel
#29
Continuing the previous post in a more general way: it is a somewhat ironic outcome that - after introducing the asum() as a provider for fractional iteration because it seemed to be independent of the fixpoint problematic (which we encounter once we start to construct power series for fractional iteration), since we need only integer-height iterations - we now find even two formulae which depend on the fixpoints ...

Well, but let's leave this aside for a moment. The crucial aspect for the correctness of the representation of the iteration series by one (or two) power series, derived from the Neumann series of the Carleman matrices for the function and for its inverse, seems to be that the fixpoints must be attracting; so we must center the function and the inverse around that specific fixpoint which makes it an attracting one.

If we want to generalize the whole concept to the case of x beyond the upper fixpoint, we then have the interval for x from \( t_1 \ldots \infty \), where \( t_1 \) is still attracting for the inverse function \( f^{\circ -1}(x) = \log_b(1+x) \), but infinity is now attracting for f(x). Can we develop f(x) around \( \infty \)?
Or can we understand/interpret such x as iterations with complex heights (as I had proposed for the "regular iteration" in other threads), for instance using Sheldon's power series?

In general, I'm beginning to look at the same principle for other/simpler functions than exp(x)-1, for instance linear functions, polynomials and such, which might even lack any finite fixpoint while the alternating iteration series can still be expressed by power series - so that one might derive some common behaviour, and thus some insight for the case of infinity as a fixpoint, from those simpler examples for the case in question here ...

Gottfried
Gottfried Helms, Kassel
#30
Hmm, perhaps I have made some basic error now, or there is something which I did not understand correctly from the beginning.
It is clear that \( \operatorname{asum}(x) \) is 2-periodic, meaning \( \operatorname{asum}(x_{-2})=\operatorname{asum}(x)=\operatorname{asum}(x_2)=\ldots \)

The same seems obvious to me for the derivatives of the asum over those periods. But I get different values for the first and second derivatives when I simply shift the center by 2 iterations. Here is a numerical protocol, where I use x1=3.2 (just a random value) and then compute the zeroth, first and second derivative:
Code:
.     [       x2=exph(x1,0),      asum(x2),     asum_deriv(x2,1), asum_deriv(x2,2)]      
      %697 = [3.20000000000, -0.00119822450167, 0.0175377529574, 0.000817628416425]        
      
      [      x2=exph(x1,2),      asum(x2),      asum_deriv(x2,1), asum_deriv(x2,2)]        
     %698 = [0.412136407584, -0.00119822450167, 0.0779236358328,   -0.129878114856]

Can someone crosscheck and possibly explain that? Or do I have only a knot in my head?


Well, I asked that question also on MSE, where it might be a bit more instructive. Possibly I'm beginning to understand - but I'm still not getting all of its consequences. I'll continue these observations/thoughts here later... See the question on MSE:
http://math.stackexchange.com/questions/...d-infinite

Hmm, after some hours of thinking about it, this becomes a bit like the situation when a little kitten finds its own tail for the first time and begins to run after it in circles... :-) Obviously I must already have the answer to this in my own analytic formulae for the evaluation of the asum and its derivatives.

Gottfried
Gottfried Helms, Kassel

