[AIS] (alternating) Iteration series: Half-iterate using the AIS?

andydude (Long Time Fellow, Posts: 509, Threads: 44, Joined: Aug 2007)
12/18/2012, 06:05 AM

(12/15/2012, 06:37 AM) Gottfried Wrote: I think, moreover, that this will prove useful once we step further to analytically continue the range for x and for the base b outside the "safe intervals" and enter the realms of truly divergent series for the asum(x).

Thanks for the links. I remember reading about your alternating series a while ago, and forgot where it went! I also remember having some interesting thoughts last time, but can't remember what they were. I will re-read them and see if I can make some comments. I feel in the mood for tetration this season!

Andrew Robbins

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
12/18/2012, 11:17 AM

(12/17/2012, 10:19 PM) sheldonison Wrote: At the real axis near the neighborhood of 3, the asum itself has a very small amplitude, of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example.

Here is an earlier plot of the amplitude decreasing as the base b goes to 2:

[plot: amplitude of the asum shrinking as the base b approaches 2]

Gottfried Helms, Kassel

sheldonison (Long Time Fellow, Posts: 684, Threads: 24, Joined: Oct 2008)
12/22/2012, 04:12 PM (This post was last modified: 01/06/2013, 03:01 PM by sheldonison.)

(12/18/2012, 11:17 AM) Gottfried Wrote: (12/17/2012, 10:19 PM) sheldonison Wrote: At the real axis near the neighborhood of 3, the asum itself has a very small amplitude, of approximately 7.66E-12, as compared to an amplitude of 0.012 for Gottfried's asum(1.3^z-1) example. [typo: off by 2x, the amplitude is approximately 1.533E-11] Here is an earlier plot of the amplitude decreasing as the base b goes to 2.

I have an equation for the approximate log(amplitude) for the alternating sum, iterating b^z-1.
The approximation works for bases …

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
(Part 1 of 2 posts)

First we identify the Carleman matrix of $f(x)=b^x-1$, developed around the lower fixpoint $0$, as $U_0$, and second the Carleman matrix of the inverse $g(x)=\log_b(1+x)$, developed around the upper fixpoint $t_1>0$, as $U_1$. Third we identify the matrices which provide the power series for the partial iteration series towards the lower fixpoint, $\operatorname{asuma}(x)$, and towards the upper fixpoint, $\operatorname{asumb}(x)$, as

$A_0 = (I + U_0)^{-1}$ and $A_1 = (I + U_1)^{-1}$

Then - in principle - we can compute:

$\begin{array} {rcl} \operatorname{asuma}(x) &=& \sum_{k=0}^\infty a_{0,k}\,x^k \\ \operatorname{asumb}(x) &=& \frac {t_1}2 + \sum_{k=0}^\infty a_{1,k}\,(x-t_1)^k \end{array}$

(using the $a_{0,k}$ from the second column of $A_0$ and the $a_{1,k}$ from the second column of $A_1$), and the asum(x) by

$\operatorname{asum}(x) = \operatorname{asuma}(x) + \operatorname{asumb}(x) - x$

The two occurring power series seem to be entire, but that is not yet proven. Anyway, to get convergence to reasonable precision with finitely many terms, say 64, the argument x should be near one of the fixpoints. Fortunately this can be achieved by exact computations and a couple of iterations of x using the original functions $f(x)$ and $g(x)$. So if we find that we need n iterations to shift x sufficiently near the lower fixpoint (by iterating f(x)) and m iterations to shift it towards the upper fixpoint (by iterating g(x)), then we can rewrite the three parts of the $\operatorname{asum}(x)$ in the following way. Using

$\operatorname{asumae}(x_m,x_n) = \sum_{h=m}^n (-1)^h x_{h}$

we get for the whole expression

$\operatorname{asum}(x) = \operatorname{asuma}(x_{n}) + \operatorname{asumae}(x_{n-1},x_{-m+1}) + \operatorname{asumb}(x_{-m})$

where the m and n are determined by the implemented procedure such that the two power series converge sufficiently fast (for instance, I have taken them such that the resulting x is less than 0.1 away from the respective fixpoint). In my practical computations this speeds up the computations by a factor of about 20, so that bulks of analyses can be done in reasonable time.
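The scheme just described can also be checked numerically in a few lines. The following Python sketch is my own reconstruction, not Gottfried's Pari/GP matrix implementation: it evaluates the alternating iteration series directly, Abel-summing the divergent half toward the upper fixpoint (the resulting constant $-t_1/2$ plays the role of the $t_1/2$ in asumb above); the names `f`, `g`, `t1`, `asum` are assumptions of this sketch.

```python
import math

b = 1.3                                       # sample base from the thread, 1 < b < e
f = lambda x: b**x - 1                        # fixpoint 0 is attracting for f
g = lambda x: math.log(1 + x) / math.log(b)   # inverse; upper fixpoint attracts for g

t1 = 5.0                                      # upper fixpoint t1 > 0 of b^x - 1 = x:
for _ in range(200):                          # attracting under g, so just iterate g
    t1 = g(t1)

def asum(x, n=200):
    """sum_{h in Z} (-1)^h x_h with x_{h+1} = f(x_h).  The forward half converges
    classically (x_h -> 0); the backward half has x_{-h} -> t1, so it is
    Abel-summed: subtract t1 from each term and add the constant -t1/2."""
    total, xh, sign = -t1 / 2, x, 1.0
    for _ in range(n):                        # h = 0, 1, 2, ...
        total += sign * xh
        xh, sign = f(xh), -sign
    xh, sign = x, 1.0
    for _ in range(n):                        # h = -1, -2, ...
        xh, sign = g(xh), -sign
        total += sign * (xh - t1)
    return total
```

A built-in consistency check of the regularization: the series satisfies asum(f(x)) = -asum(x), hence asum is invariant under the 2-step shift f∘f.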
The next step is to use this scheme also for the computation of the derivative (for instance, to find the zero and the extrema of the $\operatorname{asum}()$ as a function of x via Newton-iteration). Everything (except time consumption and the limitation in precision) is fine when I compute the derivatives by numerical differentiation together with the serial/sumalt-evaluation of the $\operatorname{asum}(x)$. But trying the same matrix-ansatz as above for the derivatives, using the technique of the $\operatorname{asumae}(x_{n-1},x_{1-m})$, fails when that sum contains more than the "trivial" term x itself. See next posting.

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
12/23/2012, 05:59 PM (This post was last modified: 12/24/2012, 01:05 AM by Gottfried.)

(Part 2 of 2 posts)

The practical reason for using the derivatives is, in my case, that I want to apply the Newton-iteration to find zeros and extrema to high/arbitrary accuracy. Because the matrix-based approach is powerful for computing the original values of the $\operatorname{asum}()$, I want to apply it here too. The naive approach for the derivative at some x, with $\lim_{h \to 0}$, is of course

$\operatorname{asum}(x)'={(\operatorname{asum}(x+\frac h2)-\operatorname{asum}(x- \frac h2)) \over h}$

This can be evaluated using the numerical evaluation of the iteration-series via sumalt(), or - better - via the matrix-based method. But because the derivative of the series is the series of the derivatives of its terms, and the terms have an analytical expression for the derivative, we could try to base an evaluation on the derivatives of the iterates.
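Before the analytic alternatives, it is worth seeing why the difference quotient has a hard precision floor in double precision: the truncation error of the symmetric quotient falls like $h^2$ while the roundoff error grows like $\varepsilon/h$, so the total error bottoms out near $h \approx \varepsilon^{1/3} \approx 10^{-5}$. A generic illustration with sin (any smooth function behaves the same way; the helper name is mine):

```python
import math

def central_diff(fn, x, h):
    """Symmetric difference quotient, as in asum'(x) ~ (asum(x+h/2)-asum(x-h/2))/h."""
    return (fn(x + h / 2) - fn(x - h / 2)) / h

# Exact derivative of sin at 1 is cos(1).  The error shrinks down to h ~ 1e-5
# and then grows again for smaller h, as cancellation in the numerator dominates.
for h in (1e-2, 1e-5, 1e-8, 1e-12):
    err = abs(central_diff(math.sin, 1.0, h) - math.cos(1.0))
    print(f"h={h:.0e}  error={err:.2e}")
```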
Again, the matrix-method is superior here; since the partial iteration series $\operatorname{asuma}()$ and $\operatorname{asumb}()$ are expressible as analytic power series, we can simply insert the coefficients for the term-by-term differentiation, such that in a first step we would write:

$\begin{array} {rcl} \operatorname{asuma}(x)' &=& \sum_{k=1}^\infty k \, a_{0,k}\,x^{k-1} \\ \operatorname{asumb}(x)' &=& 0 + \sum_{k=1}^\infty k\,a_{1,k}\,(x-t_1)^{k-1} \end{array}$

and the asum(x) by

$\operatorname{asum}(x)' = \operatorname{asuma}(x)' + \operatorname{asumb}(x)' - 1$

This works very well in principle; even if I use some x near 3.5, both series seem to converge sufficiently well with n=64 terms. However, to improve the approximation we should again shift the x towards the fixpoints for each partial series, and compute the derivatives of the individual terms around $x=x_0$ by the explicit analytical formulae, such that we had something like

$\operatorname{asum}(x)' = \operatorname{asuma}(x_n)' + \operatorname{asumae}(x_{n-1},x_{1-m})' + \operatorname{asumb}(x_{-m})'$

But now: I cannot make this computation correct if the middle part of the series contains more than one term. Everything is still fine if I use the numerically approximated derivatives for the two partial series

$\begin{array} {rcl} \operatorname{asum}(x)' &\sim& {\operatorname{asuma}(f(x+h/2,n))-\operatorname{asuma}(f(x-h/2,n)) \over h}\\ & + & {\operatorname{asumb}(g(x+h/2-t_1,m)+t_1) - \operatorname{asumb}(g(x-h/2-t_1,m)+t_1) \over h} \\ &+& \operatorname{asumae}(x_{n-1},x_{1-m})' \end{array}$

where in the $\operatorname{asumae}()$ I can use the analytical expressions for its single terms, but for the h in the evaluations of the other partial series I can only go down to something like $h=10^{-12}$ and not much smaller, because of loss of precision. This is especially unsatisfactory because an analytical expression seems to be very close at hand!
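One way to avoid the $h=10^{-12}$ floor altogether is to differentiate the iterates rather than the partial power series: each $x_h'$ is a chain-rule product of $f'$ (for positive h) or $g'$ (for negative h) along the orbit, and for the sample base b = 1.3 both derivative series converge classically, since the regularizing Abel constant differentiates to zero. A Python sketch of this idea (my own reconstruction, not Gottfried's matrix-based $\operatorname{asuma}()'/\operatorname{asumb}()'$; all names are assumptions):

```python
import math

b, lnb = 1.3, math.log(1.3)
f  = lambda x: b**x - 1
fp = lambda x: lnb * b**x                # f'(x)
g  = lambda x: math.log(1 + x) / lnb
gp = lambda x: 1 / ((1 + x) * lnb)       # g'(x)

def asum_prime(x, n=200):
    """Term-by-term derivative of the alternating iteration series:
    x_h' is the chain-rule product of f' (resp. g') along the orbit.
    No Abel term remains: the regularizing constant -t1/2 is constant in x."""
    total, xh, d, sign = 0.0, x, 1.0, 1.0
    for _ in range(n):                   # h = 0, 1, 2, ...
        total += sign * d
        d *= fp(xh)
        xh, sign = f(xh), -sign
    xh, d, sign = x, 1.0, 1.0
    for _ in range(n):                   # h = -1, -2, ...
        d *= gp(xh)
        xh, sign = g(xh), -sign
        total += sign * d
    return total
```

Differentiating the identity asum(f(x)) = -asum(x) gives the self-check asum'(f(x))·f'(x) = -asum'(x), which the sketch satisfies to machine precision.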
I tried for a couple of days to find appropriate expressions for the arguments of the matrix-based analytical derivatives $\operatorname{asuma}()'$ and $\operatorname{asumb}()'$, but always got stuck.

Gottfried

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
01/02/2013, 03:28 PM (This post was last modified: 01/02/2013, 05:58 PM by Gottfried.)

Well, I have now worked out the method for finding the analytic formulae for the first and second derivative of the asum(x), in terms of power series. It is always

$\operatorname{asum}(x) = \operatorname{asum}_0(x,p) + \operatorname{asum}_c(x,p,q) + \operatorname{asum}_1(x,q)$

$\operatorname{asum}'(x) = \operatorname{asum}'_0(x,p) + \operatorname{asum}'_c(x,p,q) + \operatorname{asum}'_1(x,q)$

$\operatorname{asum}''(x) = \operatorname{asum}''_0(x,p) + \operatorname{asum}''_c(x,p,q) + \operatorname{asum}''_1(x,q)$

where
* the $\operatorname{asum}_0(x,p)$ and $\operatorname{asum}_1(x,q)$ are expressed as power series in x around the respective fixpoint,
* the parameters p and q indicate initial shifts by integer iterations towards the respective fixpoint, and
* the $\operatorname{asum}_c(x,p,q)$ contains the remaining finite alternating sum over the integer iterates $x_{-(q-1)} \cdots x_{p-1}$ around the center $x_0=x$.

This requires computing the first and second derivatives of $x_{-q},x_{-q+1},\ldots,x_{-1},x,x_1,x_2,\ldots,x_p$, and for the second derivative $\operatorname{asum}''(x)$ some rule of combination - I can provide the details if this is of interest. Since the derivatives can be computed recursively, this does not take much computation. (For the recursion for the first derivatives I amazingly found an early reference in Ramanujan's notebooks, but not yet for the second derivatives, so this all remains based on pattern recognition so far, and the inductive proofs should follow another day...)
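The recursion alluded to above is presumably the chain-rule pair $x_{h+1}' = f'(x_h)\,x_h'$ and $x_{h+1}'' = f''(x_h)\,(x_h')^2 + f'(x_h)\,x_h''$ (Faà di Bruno at order two), and likewise with g for the backward orbit. A sketch that sums both derivative series directly, in one pass per direction (my reconstruction, not Gottfried's code; the function names are assumptions):

```python
import math

b, lnb = 1.3, math.log(1.3)
f,  fp,  fpp = (lambda x: b**x - 1), (lambda x: lnb * b**x), (lambda x: lnb**2 * b**x)
g   = lambda x: math.log(1 + x) / lnb
gp  = lambda x: 1 / ((1 + x) * lnb)
gpp = lambda x: -1 / ((1 + x)**2 * lnb)

def asum_d12(x, n=200):
    """First and second derivatives of the alternating iteration series via the
    orbit recursion x'' -> f''(x_h) x_h'^2 + f'(x_h) x_h'' (g for negative h);
    both derivative series converge classically, so no regularization is needed."""
    s1 = s2 = 0.0
    xh, d1, d2, sign = x, 1.0, 0.0, 1.0
    for _ in range(n):                       # h = 0, 1, 2, ...
        s1 += sign * d1
        s2 += sign * d2
        d2 = fpp(xh) * d1 * d1 + fp(xh) * d2   # update d2 before d1 (uses old d1)
        d1 = fp(xh) * d1
        xh, sign = f(xh), -sign
    xh, d1, d2, sign = x, 1.0, 0.0, 1.0
    for _ in range(n):                       # h = -1, -2, ...
        d2 = gpp(xh) * d1 * d1 + gp(xh) * d2
        d1 = gp(xh) * d1
        xh, sign = g(xh), -sign
        s1 += sign * d1
        s2 += sign * d2
    return s1, s2
```

Differentiating asum(f(x)) = -asum(x) twice gives the self-check asum''(f(x))·f'(x)² + asum'(f(x))·f''(x) = -asum''(x), which the recursion satisfies.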
The point of this part of the investigation is to gain the possibility of invoking the Newton-iteration for the zeros and extrema of the asum without the basic, but costly, limit formula

$\lim_{h \to 0} { \operatorname{asum}(x+h/2)-\operatorname{asum}(x-h/2) \over h}$

which seemed unsatisfactory to me. I would now like to relate this to Sheldon's earlier posted solution as a single power series, where some reservation was expressed concerning the achievable accuracy of computation (something like ~32 decimal digits). Can that power series be made arbitrarily precise (at least in principle)? And if so, what would be the amount of computation? And does this include the possibility of a power series for the inverse of the asum(x)?

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
01/02/2013, 04:05 PM (This post was last modified: 01/02/2013, 04:09 PM by Gottfried.)

Continuing the previous post in a more general way: it is a somewhat ironic outcome that - after introducing the asum() as a provider for fractional iteration precisely because it seemed independent of the fixpoint problematic (which we encounter once we start to construct power series for fractional iteration), since it needs only integer-height iterations - we now find even two formulae which depend on the fixpoints ... Well, leave that aside for a moment. The crucial aspect for the correctness of representing the iteration-series by one (or two) power series derived from the Neumann series of the Carleman matrices for the function and for its inverse seems to be that the fixpoints must be attracting: we must center the function and the inverse each around the specific fixpoint which makes it an attracting one.
If we want to generalize that whole concept to the case of x beyond the upper fixpoint, we then have the interval $t_1 \ldots \infty$ for x, where $t_1$ is still attracting for the inverse $f^{\circ -1}(x) = \log_b(1+x)$, but infinity is now attracting for f(x). Can we develop f(x) around $\infty$? Or can we understand/interpret such x as iterations with complex heights (as I had proposed for the "regular iteration" in other threads), for instance using Sheldon's power series? In general, I am beginning to look at the same principle for other/simpler functions than exp(x)-1, for instance linear functions, polynomials and the like, which may even lack any finite fixpoint while the alternating iteration series can still be expressed by power series - so that one might derive some common behavior, and thus some insight into the case of infinity as fixpoint, from those simpler examples, also for the case in question here ...

Gottfried

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow, Posts: 871, Threads: 127, Joined: Aug 2007)
01/03/2013, 07:07 AM (This post was last modified: 01/03/2013, 04:00 PM by Gottfried.)

Hmm, perhaps I have now made some basic error, or there is something which I did not understand correctly from the beginning. It is clear that the $\operatorname{asum}(x)$ is 2-periodic, that means $\operatorname{asum}(x_{-1})=\operatorname{asum}(x)=\operatorname{asum}(x_1)=\ldots$ The same seems obvious to me for the asum-derivatives over those periods. But I get different values for the first and second derivatives when I simply shift the center by 2 iterations. Here is a numerical protocol, where I use x1=3.2 (just a random value) and compute the zeroth, first and second derivative:

Code:
    [x2=exph(x1,0), asum(x2), asum_deriv(x2,1), asum_deriv(x2,2)]
    %697 = [3.20000000000, -0.00119822450167, 0.0175377529574, 0.000817628416425]

    [x2=exph(x1,2), asum(x2), asum_deriv(x2,1), asum_deriv(x2,2)]
    %698 = [0.412136407584, -0.00119822450167, 0.0779236358328, -0.129878114856]

Can someone crosscheck and possibly explain that? Or do I just have a knot in my head? Well, I asked that question also on MSE, where it may be a bit more instructive. Possibly I am beginning to understand - but still not in all of its consequences. I will continue these observations/thoughts here later. See the question on MSE: http://math.stackexchange.com/questions/...d-infinite

Hmm, after some hours of thinking it becomes a bit similar to the situation when a little kitten finds its own tail for the first time and begins to run after it in circles... :-) Obviously I must already have the answer to this in my own analytic formulae for the evaluation of the asum and its derivatives.

Gottfried

Gottfried Helms, Kassel
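The two protocol lines are in fact consistent once the chain rule is accounted for: asum is invariant under the 2-step shift, asum(x) = asum(F(x)) with F = f∘f, so the derivative taken at the shifted point x_2 relates to the one at x by the factor F'(x) = f'(x_1)·f'(x), not by equality. The same machinery also gives a workable Newton iteration for zeros of the asum. A self-contained Python sketch (my own names and Abel-regularized implementation, not the thread's Pari/GP code):

```python
import math

b, lnb = 1.3, math.log(1.3)
f,  fp = (lambda x: b**x - 1), (lambda x: lnb * b**x)
g,  gp = (lambda x: math.log(1 + x) / lnb), (lambda x: 1 / ((1 + x) * lnb))

t1 = 5.0
for _ in range(200):             # upper fixpoint of b^x - 1 = x, attracting for g
    t1 = g(t1)

def asum(x, n=200):
    # alternating iteration series; backward half Abel-summed (constant -t1/2)
    total, xh, sign = -t1 / 2, x, 1.0
    for _ in range(n):
        total += sign * xh
        xh, sign = f(xh), -sign
    xh, sign = x, 1.0
    for _ in range(n):
        xh, sign = g(xh), -sign
        total += sign * (xh - t1)
    return total

def asum_prime(x, n=200):
    # term-by-term derivative: chain-rule products of f' resp. g' along the orbit
    total, xh, d, sign = 0.0, x, 1.0, 1.0
    for _ in range(n):
        total += sign * d
        d *= fp(xh)
        xh, sign = f(xh), -sign
    xh, d, sign = x, 1.0, 1.0
    for _ in range(n):
        d *= gp(xh)
        xh, sign = g(xh), -sign
        total += sign * d
    return total

def newton_zero(x0, tol=1e-13, itmax=50):
    # Newton iteration for asum(x) = 0 with the analytic derivative
    x = x0
    for _ in range(itmax):
        step = asum(x) / asum_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

With x1 = 3.2 as in the protocol, the identity asum'(x) = asum'(x_2)·f'(x_1)·f'(x) reconciles the two printed derivatives (0.0779236·f'(x_1)·f'(x) ≈ 0.0175): the apparent contradiction is only a change of variable.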

