Parabolic Iteration

andydude (Long Time Fellow, Posts: 509, Threads: 44, Joined: Aug 2007), 08/10/2007, 08:39 PM

Introduction

Definition: An analytic function with a parabolic rationally neutral fixed point is a function f(x) that is complex-analytic in x and satisfies $f(x_0) = x_0$, $f'(x_0) = 1$. D. Geisler calls the iteration of this kind of function parabolic iteration.

Notation: We write $f(x) = x + \sum_{k=2}^{\infty} f_k (x - x_0)^k$, since this form ensures that $f(x_0) = x_0$ and $f'(x_0) = 1$.

Examples of analytic functions with a parabolic rationally neutral fixed point include the sine function $\sin(x)$ $(x_0=0)$, the decremented exponential $e^x - 1$ $(x_0=0)$, the second hyper-power $x^x$ $(x_0=1)$, and so on. Each of these functions can therefore be iterated with parabolic iteration.

Iterate-Derivative Matrices

Definition: An iterate-derivative matrix (IDM) of f(x) about $x=x_0$ is a matrix $\text{IDM}[f](x_0)$ with entries $\text{IDM}_{jk}[f](x_0) = (f^{[j]})^{(k)}(x_0)/k!$ for $0 \le j,k < \infty$. For approximations, $\text{IDM}[f](x_0)_n$ restricts the indices to $0 \le j,k \le n$. If f(x) is an unnamed expression, then $\text{IDM}[f] = \text{IDM}_x[f(x)]$; if no expansion point is given, zero is assumed: $\text{IDM}[f] = \text{IDM}[f](0)$. As the name suggests, entry $(j,k)$ comes from the $j$-th iterate $f^{[j]}$ and its $k$-th derivative.
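To make the definition concrete, here is a minimal sketch (SymPy code of my own, not from the post; the helper name `idm` is an arbitrary choice) that builds a truncated IDM for the decremented exponential $e^x - 1$ by composing it with itself and Taylor-expanding each iterate:

```python
# Minimal sketch: build a truncated IDM[f](0) row by row.
# Row j holds the Taylor coefficients of the j-th iterate f^[j] about 0.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) - 1  # parabolic: f(0) = 0, f'(0) = 1

def idm(f, n):
    rows = []
    g = x  # f^[0](x) = x
    for j in range(n + 1):
        series = sp.expand(sp.series(g, x, 0, n + 1).removeO())
        rows.append([series.coeff(x, k) for k in range(n + 1)])
        g = f.subs(x, g)  # f^[j+1] = f o f^[j]
    return sp.Matrix(rows)

M = idm(f, 4)
# Column 2 follows the pattern j*f_2, here with f_2 = 1/2:
print([M[j, 2] for j in range(5)])  # -> [0, 1/2, 1, 3/2, 2]
```

For this $f$, entry $(2,3)$ comes out as $2f_2^2 + 2f_3 = 5/6$, in agreement with the symbolic matrix that follows.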
For an example of an IDM for this kind of function, using $f_k = f^{(k)}(x_0)/k!$:

$\text{IDM}[f](x_0)_4 = \left( \begin{array}{ccccc} x_0 & 1 & 0 & 0 & 0 \\ x_0 & 1 & f_2 & f_3 & f_4 \\ x_0 & 1 & 2f_2 & 2f_2^2 + 2f_3 & f_2^3 + 5f_2f_3 + 2f_4 \\ x_0 & 1 & 3f_2 & 6f_2^2 + 3f_3 & 9f_2^3 + 15f_2f_3 + 3f_4 \\ x_0 & 1 & 4f_2 & 12f_2^2 + 4f_3 & 30f_2^3 + 30f_2f_3 + 4f_4 \end{array} \right)$

Using Lagrange interpolation, every column can be expressed as a finite polynomial in $j$:

Column 0: $x_0$
Column 1: $1$
Column 2: $jf_2$
Column 3: $2\binom{j}{2}f_2^2 + jf_3$
Column 4: $\left(6\binom{j}{3} + \binom{j}{2}\right)f_2^3 + 5\binom{j}{2}f_2f_3 + jf_4$

One way of doing parabolic iteration is simply to use the IDM and Lagrange interpolation. Since column $k$ is a polynomial in $j$ of degree $k-1$, only $k$ points are needed for the Lagrange interpolating polynomial. Putting it all together (using $f_k = f^{(k)}(x_0)/k!$), we get:

$f^{[t]}(x) = x_0 + (x - x_0) + t f_2 (x - x_0)^2 + \left( 2\binom{t}{2}f_2^2 + t f_3 \right)(x - x_0)^3 + \cdots$

which expresses the continuous iteration of an analytic function with a parabolic rationally neutral fixed point. For this formula to apply to continuous rather than only discrete iteration, the binomial coefficients must either be evaluated with the gamma function or first expanded into polynomials in $t$. Notice that we obtained this formula using the IDM and Lagrange interpolation; we will compare it with other formulas derived later.

Power-Derivative Matrices

Definition: A power-derivative matrix (PDM) of f(x) about $x=0$ is a matrix $\text{PDM}[f]$ with entries $\text{PDM}_{jk}[f] = (f^j)^{(k)}(0)/k!$ for $0 \le j,k < \infty$. For approximations, $\text{PDM}[f]_n$ restricts the indices to $0 \le j,k \le n$. If f(x) is an unnamed expression, then $\text{PDM}[f] = \text{PDM}_x[f(x)]$.
If $x_0$ is given, then $\text{PDM}[f](x_0) = \text{PDM}_x[x-x_0]\,\text{PDM}[f]\,\text{PDM}_x[x+x_0]$. The PDM about $x=x_0$ is defined such that $\text{PDM}[f](x_0)^n = \text{PDM}_x[x-x_0]\,\text{PDM}[f]^n\,\text{PDM}_x[x+x_0]$, which is analogous to using the conjugate function $g(x) = f(x + x_0) - x_0$, which satisfies $g(0) = 0$ if and only if $f(x_0) = x_0$. Notice that $g^{[2]}(x) = f(f(x + x_0) - x_0 + x_0) - x_0 = f^{[2]}(x + x_0) - x_0$, supporting this property.

The reason it is analogous is that the PDM is best known for turning function composition into matrix multiplication; this is also the reason for the $1/k!$ in the definition. This is best seen in the following identities:

$\text{PDM}_x[f(g(x))] = \text{PDM}[f]\,\text{PDM}[g]$
$\text{PDM}_x[c f(x)] = \text{PDM}_x[cx]\,\text{PDM}[f]$
$\text{PDM}[f^{[n]}] = \text{PDM}[f]^n$

As you can see, the PDM turns composition into matrix multiplication and iteration into matrix powers. This is the key to understanding PDMs: once it is understood, one can begin thinking in linear algebra instead of function theory, for they become one and the same.

History: Some variants of power-derivative matrices (PDMs) are also known as Bell matrices (if $f(0) = 0$), Carleman matrices, linearization matrices, or embedding matrices. As mentioned earlier, they turn iteration into evaluating a matrix power. Matrix powers are easy to evaluate for integer exponents, but for non-integer exponents the problem can be posed as one of matrix diagonalization. There are three types of matrices that are easy to handle this way: $2\times2$ matrices, $3\times3$ matrices, and triangular matrices.

History: According to Bennett, the first use of PDMs for continuous iteration was by Koch. Kowalski et al. call the more general form a Carleman matrix, and along with Aldrovandi et al. refer to the special case $f(0) = 0$ as a Bell matrix. Historically, Bell matrices and Carleman matrices have been transposes of each other.
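The composition and iteration identities can be checked mechanically. The sketch below is my own code (the `pdm` helper is not from the post); truncation is exact here because $f(0) = g(0) = 0$ keeps the matrices triangular:

```python
# Check: PDM turns composition into matrix multiplication and iteration into powers.
import sympy as sp

x = sp.symbols('x')

def pdm(f, n):
    """Truncated PDM[f]: entry (j, k) is the k-th Taylor coefficient of f(x)^j."""
    rows = []
    for j in range(n + 1):
        series = sp.expand(sp.series(f**j, x, 0, n + 1).removeO())
        rows.append([series.coeff(x, k) for k in range(n + 1)])
    return sp.Matrix(rows)

n = 4
f = x + x**2   # parabolic fixed point at 0
g = sp.sin(x)  # also fixes 0 with derivative 1

# PDM_x[f(g(x))] = PDM[f] PDM[g]
assert pdm(f.subs(x, g), n) == pdm(f, n) * pdm(g, n)
# PDM[f^[2]] = PDM[f]^2
assert pdm(f.subs(x, f), n) == pdm(f, n) ** 2
print("composition -> multiplication and iteration -> power hold on 5x5 truncations")
```

For general analytic $f$ and $g$ the identities hold for the full infinite matrices; the triangular (parabolic, fixed point at zero) case is what makes the finite truncations agree exactly.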
Also, Bell matrices usually start with index 1 whereas Carleman matrices start with index 0. Here we use a common starting index of 0 for all PDMs, since it also accommodates the more general case.

For an example of a PDM for this kind of function, using $f_k = f^{(k)}(x_0)/k!$:

$\text{PDM}[f](x_0)_3 = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ x_0 & 1 & f_2 & f_3 \\ x_0^2 & 2x_0 & 1 + 2x_0f_2 & 2f_2 + 2x_0f_3 \\ x_0^3 & 3x_0^2 & 3x_0 + 3x_0^2f_2 & 1 + 6x_0f_2 + 3x_0^2f_3 \end{array} \right)$

and an example of the simplified case when the fixed point is zero $(x_0=0)$:

$\text{PDM}[f]_4 = \left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & f_2 & f_3 & f_4 \\ 0 & 0 & 1 & 2f_2 & f_2^2 + 2f_3 \\ 0 & 0 & 0 & 1 & 3f_2 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right)$

As you can see, the PDM for $x_0=0$ is a triangular matrix, which as mentioned before lends itself easily to evaluating matrix powers. To illustrate, here is the same matrix raised to an arbitrary power:

$\text{PDM}[f]_4^t = \left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & tf_2 & 2\binom{t}{2}f_2^2 + tf_3 & \left(6\binom{t}{3} + \binom{t}{2}\right)f_2^3 + 5\binom{t}{2}f_2f_3 + tf_4 \\ 0 & 0 & 1 & 2tf_2 & (3t^2 - 2t)f_2^2 + 2tf_3 \\ 0 & 0 & 0 & 1 & 3tf_2 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right)$

Another way of finding the continuous iteration of this special kind of function is thus to use PDMs and matrix powers. Since this matrix represents increasing powers of a function and its derivatives, we only need the first power of the function to find its coefficients, which corresponds to the first row (not the zeroth). Putting it all together (using $f_k = f^{(k)}(0)/k!$), we get:

$f^{[t]}(x) = x + t f_2 x^2 + \left( 2\binom{t}{2}f_2^2 + t f_3 \right)x^3 + \cdots$

just like with the IDM method, only with $x_0=0$. With PDMs, however, there is an alternative to evaluating matrix powers.
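The arbitrary-power matrix above can be sanity-checked at an integer exponent. This sketch is my own code (the `pdm` helper is not from the post): it compares the claimed row 1 at $t = 3$ against a directly computed matrix cube, keeping $f_2, f_3, f_4$ symbolic:

```python
# Sanity check: row 1 of PDM[f]^t, as displayed, against a direct matrix cube (t = 3).
import sympy as sp

x, t = sp.symbols('x t')
f2, f3, f4 = sp.symbols('f2 f3 f4')
f = x + f2*x**2 + f3*x**3 + f4*x**4

def pdm(f, n):
    """Truncated PDM[f]: entry (j, k) is the k-th Taylor coefficient of f(x)^j."""
    rows = []
    for j in range(n + 1):
        series = sp.expand(sp.series(f**j, x, 0, n + 1).removeO())
        rows.append([series.coeff(x, k) for k in range(n + 1)])
    return sp.Matrix(rows)

C = sp.binomial
# Row 1 of PDM[f]_4^t as claimed in the text:
row1 = [sp.S(0), sp.S(1), t*f2,
        2*C(t, 2)*f2**2 + t*f3,
        (6*C(t, 3) + C(t, 2))*f2**3 + 5*C(t, 2)*f2*f3 + t*f4]

M3 = pdm(f, 4) ** 3  # integer matrix power, computed directly
assert all(sp.expand(row1[k].subs(t, 3) - M3[1, k]) == 0 for k in range(5))
print("row 1 of PDM[f]^3 matches the claimed formula at t = 3")
```

At $t = 3$ the last entry reduces to $9f_2^3 + 15f_2f_3 + 3f_4$, which is exactly row 3, column 4 of the IDM example earlier in the post.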
By representing the matrix multiplication as a single multiplication (instead of $t-1$ multiplications) we can turn evaluating a matrix power into a recurrence equation:

$\text{PDM}^t_{1k}[f] = \sum_{j=0}^{\infty}\text{PDM}^{t-1}_{1j}[f]\,\text{PDM}_{jk}[f]$

which after solving gives:

$f^{[t]}(x) = x + \sum_{k=2}^{\infty}x^k\, \text{PDM}^t_{1k}[f]$

as required.

Double-Binomial Expansions

The first to devote a full-length paper to this method was S. C. Woon. The expansion is based on the binomial theorem and, as such, is easy to understand. Woon originally described his method in terms of the iteration of operators, but here we describe it in terms of functions:

$\begin{array}{rl} f^{[t]}(x) & = (w + (f - w))^{[t]}(x) = w^t(1 + (f/w - 1))^{[t]}(x) \\ & = w^t\left(\sum_{n=0}^{\infty} \binom{t}{n} \left[\frac{1}{w}f - 1\right]^{[n]}\right)(x) \\ & = w^t\left(\sum_{n=0}^{\infty} \binom{t}{n} \sum_{m=0}^{n} \binom{n}{m}(-1)^{n-m}\left(\frac{1}{w}f\right)^{[m]}\right)(x) \\ & = w^t \sum_{n=0}^{\infty} \binom{t}{n} \sum_{m=0}^{n} \binom{n}{m}(-1)^{n-m} w^{-m} f^{[m]}(x) \end{array}$

where $f^{[0]}(x) = x$, $w$ is a parameter, and of course $\binom{t}{n} = \frac{1}{n!} \prod_{k=0}^{n-1} (t - k)$. The convergence of Woon's series depends on $w$, but its convergence properties are not well known. Woon's expansion works for many analytic functions in general; for those with parabolic fixed points, another expansion can be used. If $f(0) = 0$ and $f'(0) = 1$, then:

$f^{[t]}(x) = \sum_{n=1}^{\infty} x^n \sum_{k=0}^{n-1} \binom{t}{k} \binom{t - k - 1}{n - k - 1} (f^{[k]})^{(n)}(0)/n!$

as noted by Trappmann. This expansion expresses the series coefficients of non-integer iterations in terms of the series coefficients of integer iterates. This brings us full circle, since the last part of the above expression is exactly the definition of an IDM.
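Returning to Woon's expansion above: for integer $t \ge 0$ the series terminates, since $\binom{t}{n} = 0$ for $n > t$, which gives a quick consistency check. The following sketch is my own code, with the arbitrary choices $w = 1$, $t = 2$, $f(x) = e^x - 1$; it verifies that the expansion reproduces $f^{[2]}$ exactly:

```python
# Sketch of Woon's double-binomial expansion, checked at an integer t where it terminates.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) - 1

def iterate(f, m):
    """m-th compositional iterate f^[m], with f^[0](x) = x."""
    g = x
    for _ in range(m):
        g = f.subs(x, g)
    return g

def woon(f, t, w, N):
    """w^t * sum_{n<=N} C(t,n) sum_{m<=n} C(n,m) (-1)^(n-m) w^(-m) f^[m](x)."""
    w = sp.sympify(w)  # keep arithmetic exact
    total = sp.Integer(0)
    for n in range(N + 1):
        inner = sum(sp.binomial(n, m) * (-1)**(n - m) * w**(-m) * iterate(f, m)
                    for m in range(n + 1))
        total += sp.binomial(t, n) * inner
    return w**t * total

approx = woon(f, 2, 1, 4)  # terms with n > 2 vanish since C(2, n) = 0
direct = iterate(f, 2)     # f(f(x)) = e^(e^x - 1) - 1
print(sp.simplify(approx - direct))  # -> 0
```

For non-integer $t$ the series does not terminate, and, as noted above, its convergence then depends on the parameter $w$.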
These three methods are very similar and interconnected, and as such, are all tools of parabolic iteration.

References

R. Aldrovandi, Special Matrices of Mathematical Physics, World Scientific, 2001.
R. Aldrovandi and L. P. Freitas, Continuous Iteration of Dynamical Maps, Journal of Math. Phys. 39, 5324, 1998.
E. T. Bell, The Iterated Exponential Integers, The Annals of Mathematics, 2nd Ser., Vol. 39, No. 3 (Jul., 1938), pp. 539-557.
T. Carleman, Acta Mathematica 59, 63, 1932.
D. Geisler, http://Tetration.org (accessed 2006).
H. v. Koch, Bih. t. Sv. Vet. Ak. Handl. 1, Math. 25, mém. no. 5, pp. 1-24, 1900.
H. Trappmann, Arborescent Numbers: Higher Arithmetic Operations and Division Trees, http://blafoo.de/tree-aoc/main1176.pdf (accessed 2006).
H. Trappmann and I. Dahl, Tetration extended to real exponents, sci.math.research, http://mathforum.org/kb/message.jspa?mes...0&tstart=0, 2006.
P. Gralewicz and K. Kowalski, Continuous time evolution from iterated maps and Carleman linearization, http://arxiv.org/abs/math-ph/0002044.
H. S. Wilf, Generatingfunctionology, Academic Press, 1990.
S. C. Woon, Analytic Continuation of Operators -- Operators acting complex s-times, http://arxiv.org/abs/hep-th/9707206.

bo198214 (Administrator, Posts: 1,389, Threads: 90, Joined: Aug 2007), 08/10/2007, 10:04 PM

Oh damn, you were faster than me. I was just preparing a similar post! So there remains for me the task of adding and correcting.

First, in the formula for the double-binomial expansion the coefficient $(-1)^{n-1-i}$ is missing.
The correct formula is (I follow the usage of Ecalle and write $f^{\circ t}$ for the $t$-th iteration instead of $f^{[t]}$ as you write it):

${f^{\circ t}}_n=\sum_{i=0}^{n-1} (-1)^{n-1-i}\binom{t}{i}\binom{t-1-i}{n-1-i}{f^{\circ i}}_{n}$

A criterion for the convergence of this expression is given by Jabotinsky and Erdös in [1]:

Theorem 1: If the radius of convergence of the series $L(z)=\frac{\partial f^{\circ s}(z)}{\partial s}\big|_{s=0}$, where $\left(\frac{\partial f^{\circ s}}{\partial s}\big|_{s=0}\right)_n = \sum_{i=1}^{n-1} \frac{(-1)^{i+1}}{i} {f^{\circ i}}_n$, is $\varrho>0$, then there are radii $\varrho(s)>0$ such that $f^{\circ s}(z)$ is analytic in $s$ and in $z$ for all finite complex $s$ and for $|z|<\varrho(s)$.

Theorem 2: If the radius of convergence of $L(z)$ is 0, then the radius of convergence of $f^{\circ s}(z)$ is 0 for almost all complex $s$ and for almost all real $s$.

A famous example of this second case is $f(x)=e^x-1$, see [2]. It has a fixed point at 0 and is parabolic: $(e^x-1)'(0)=e^0=1$. But the parabolic iterates $f^{\circ s}$ converge only for integer $s$. However, this looks worse than it is, because there are so-called asymptotic expansions: the series does not converge at the development point, but the function approximates the (formal) power series there in a certain sense, see [5]. For this case Ecalle [3] showed that there is a unique continuous iteration that has the formal continuous iteration as its asymptotic expansion (for series with $f_0=0$ and $f_1=1$). The first to treat this case, however, was Szekeres in [4].

[1] P. Erdös and E. Jabotinsky, On analytic iteration, J. Analyse Math. 8, 1960/1961, 361-376.
[2] I. N. Baker, Zusammensetzungen ganzer Funktionen, Math. Z. 69, 1958, 121-163.
[3] J. Ecalle, Théorie des invariants holomorphes, Publications mathématiques d'Orsay 67-74 09, 1974.
[4] G. Szekeres, Regular iteration of real and complex functions, Acta Math. 100, 1958, 203-258.
[5] W. Balser, From divergent power series to analytic functions, Lecture Notes in Mathematics, Springer, 1994.

jaydfox (Long Time Fellow, Posts: 440, Threads: 31, Joined: Aug 2007), 08/15/2007, 08:18 PM

bo198214 Wrote: However, this looks worse than it is, because there are so-called asymptotic expansions. [...] For this case Ecalle [3] showed that there is a unique continuous iteration that has the formal continuous iteration as its asymptotic expansion (for series with $f_0=0$ and $f_1=1$). The first to treat this case, however, was Szekeres in [4].

Sweet, I had concluded the same thing and was trying to turn my conceptual proof into a formal proof. But if the work's been done already, then I can breathe easy.

Essentially, if $k$ is the number of terms at which we truncate the series expansion, then there is a non-zero radius within which the series is initially convergent (i.e., the root test for terms 1 through $k$ would all be less than 1). As $k$ increases, the radius of initial convergence decreases towards zero, but the evaluation of the series well inside that radius (e.g., within half that radius) converges asymptotically, and we can define an alternating Cauchy sequence (two terms for each $k$: a least upper bound and a greatest lower bound) that has a definite limit. Within the half radius, for example, we can exponentially constrain the change in each successive term of the sequence (up to $k$), allowing us to define a least upper bound and a greatest lower bound for the asymptote, with the distance between these bounds decreasing as $k$ increases.
Sounds nice, but showing it formally is proving difficult given my lack of rigorous formal training in mathematics. Regardless, if the proof has already been given, then combined with my change-of-base formula, we now have a unique solution to tetration for bases greater than eta.

By the way, for the reference to Ecalle, where can I get a copy, and more importantly, is there an English translation available?

~ Jay Daniel Fox

bo198214 (Administrator, Posts: 1,389, Threads: 90, Joined: Aug 2007), 08/16/2007, 07:51 PM (last modified: 08/16/2007, 07:54 PM by bo198214)

Moderator's note: I moved the subsequent posts into the thread Change of base formula for Tetration.

