Iteration exercises: f(x)=x^2 - 0.5 ; Fixpoint-irritation...

tommy1729 Ultimate Fellow Posts: 1,358 Threads: 330 Joined: Feb 2009 06/04/2011, 09:43 PM (This post was last modified: 06/04/2011, 10:26 PM by tommy1729.)

(06/04/2011, 01:13 PM)Gottfried Wrote: Sometimes we find easter-eggs even after easter... For the alternating iteration-series $sn(x,p)=\sum_{h=0}^{\infty} (-1)^h \, g(x,h)^p \qquad \text{where } g(x)=\sqrt{0.5+x}, \quad g(x,h)=g(g(x,h-1)), \quad g(x,1)=g(x)$ (definitions as copied and extended from the previous post, see below) we find a rational polynomial for p=4. That means $x^4 - g(x)^4 + g(x,2)^4 - g(x,3)^4 + \cdots - \cdots = sn(x,4) = 1/8 - x^2 + x^4$ (maybe this is trivial and only a telescoping sum; I didn't check this thoroughly.)
Another one: $sn(x,1)+sn(x,2) = -0.25 + x^2$
Code:
\\ define function f(x) for forward iteration and g(x) for backward iteration (= negative height)
\\ (an additional parameter h for positive integer heights is possible)
f(x,h=1) = for(k=1,h, x = x^2 - 0.5); return(x);
g(x,h=1) = for(k=1,h, x = sqrt(0.5 + x)); return(x);

\\ do the analysis at a central value for the alternating sums
x0 = 1
x  = 1.0
sp(x) = sumalt(h=0, (-1)^h * f(x,h))
sn(x) = sumalt(h=0, (-1)^h * g(x,h))
y(x)  = sp(x) + sn(x) - x

this is not my expertise ... yet. but i think i have seen those before in some far past. for starters, i related your sums to equations of the type f(x) = f(g(x)). also, ergodic theory studies averages of the type F(x) = lim_{n->oo} 1/n * (f^[0](x) + f^[1](x) + ... + f^[n](x)). hidden telescoping can indeed occur, and sometimes we can rewrite the average as an integral. but again, this is not my expertise yet. you gave me extra questions instead of an answer :p in particular i do not understand your matrix idea in this thread. my guess is that when you start at 1.0 you use Carleman matrices to compute the sum, and one Carleman matrix will not converge (lies outside the radius) for 1.0; so one is wrong and the other is not. talking about alternating series: 1/2 - 1/3 + 1/5 - 1/7 + 1/11 - ... i believe this has a closed form/name, and if i recall correctly it's called the first mertens constant ... there was something else i wanted to say ... forgot :s

edit: i do not know how to rewrite an average as a sum or superfunction (i do know the integral and perhaps an infinite product) ... i say that because it might be useful to see the link with the "ergodic average" (or whatever it's called). it bothers me, i want to get rid of this "lim **/n" term for averages.
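The two polynomial identities quoted above can also be checked without Pari/GP's sumalt. Here is a minimal Python sketch (my own, not from the thread): since the iterates g(x,h) tend to the fixpoint t = (1+√3)/2, the terms of the alternating series do not go to 0, and the series only has an Euler/Abel value; averaging two consecutive partial sums recovers exactly that value.

```python
import math

def g(x):
    return math.sqrt(0.5 + x)

def sn(x, p, n=200):
    """Alternating iteration-series sum_{h>=0} (-1)^h * g(x,h)^p.

    The iterates tend to the fixpoint t = (1+sqrt(3))/2, so the terms do not
    vanish; averaging two consecutive partial sums yields the Euler/Abel
    value of the series.
    """
    s, xh, sign = 0.0, x, 1
    for _ in range(n):
        s += sign * xh**p
        sign, xh = -sign, g(xh)
    return s + sign * xh**p / 2        # = (s_n + s_{n+1}) / 2

x = 1.0
print(sn(x, 4), 1/8 - x**2 + x**4)           # both ≈ 0.125
print(sn(x, 1) + sn(x, 2), -0.25 + x**2)     # both ≈ 0.75
```

The iterates converge to the fixpoint at a geometric rate (|g'(t)| ≈ 0.37), so 200 terms are far more than enough for double precision.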
(might also be of benefit for number theory and statistics)

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 06/05/2011, 10:50 AM

(06/04/2011, 09:43 PM)tommy1729 Wrote: in particular i do not understand your matrix idea in this thread. You may look at "alternating sum of iterates (here: of exponential function)". There I describe the method for the first time, however with another function as basis: the exponential function. The problem of convergence of series of matrices surfaces there, and especially the question of convergence of the shortcut formula for the geometric series. Nearly everything was completely new for me, so that article should be rewritten; anyway, in its naivety it might be a good introductory impulse to understand the key idea of that matrix-method, and possibly to engage in the area which I now call "iteration-series", in resemblance to "powerseries" and "dirichletseries". Gottfried

Gottfried Helms, Kassel

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 06/05/2011, 11:40 AM (This post was last modified: 06/05/2011, 12:35 PM by Gottfried.)

Looking back at the article on the alternating iteration-series of the exponential, some confirmation of the matrix-based method was missing. While I could use the serial summation (Abel- or Euler-summation of the explicit iterates) as a cross-check of the matrix-method for the bases where the powertower of infinite height converges, I could not do that for the other bases, due to the too fast growth of the terms/iterated exponentials. But well, if I take the (complex) fixpoint t as initial value, then the alternating series $spa_b(t)= t-b^t+ b^{b^t}-\cdots$ reduces to $spa_b(t)= t-t+t-\cdots = t/2 \text{ (Euler sum)}$, which should be meaningful for each base, whether its exponential fixpoint is real or not. With this I now have (at least) one check-value by serial summation for the comparison with the matrix-method.
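The check-value $spa_e(t)=t/2$ is easy to reproduce numerically for base e. A short Python sketch (mine, not from the thread): find the primary complex fixpoint t of exp by Newton iteration and halve it.

```python
import cmath

# Newton iteration for the primary complex fixpoint t of exp, i.e. e^t = t
t = 0.3 + 1.3j                       # rough starting value near the fixpoint
for _ in range(50):
    t -= (cmath.exp(t) - t) / (cmath.exp(t) - 1)

print(t)        # ≈ 0.318132 + 1.337236j
print(t / 2)    # ≈ 0.159066 + 0.668618j, the Euler sum t - t + t - ...
```

The value t/2 matches the 32x32 matrix-method result quoted below to the stated three/four digits.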
The matrix-method, dimension 32x32, for instance for base e, which has a divergent iteration-series, comes out near the expected result $spa_e(t)=t/2 \approx 0.159066 + 0.668618\,I$ to three/four digits, and the same was true for the conjugate of t. If the convergence could be accelerated, this would give another confirmation of the applicability of this method to the iteration-series.

Gottfried Helms, Kassel

tommy1729 Ultimate Fellow Posts: 1,358 Threads: 330 Joined: Feb 2009 06/05/2011, 01:45 PM

(03/03/2009, 12:15 PM)Gottfried Wrote: serial summation 0.709801988103 towards 2'nd fixpoint: $\sum_{h=0}^{\infty} (-1)^h f^{\circ h}(1.0)$ 0.419756033790 towards 1'st fixpoint: $\sum_{h=0}^{\infty} (-1)^h f^{\circ -h}(1.0)$ Matrix-method: 0.580243966210 towards 2'nd fixpoint // incorrect, doesn't match serial summation 0.419756033790 towards 1'st fixpoint // matches serial summation

a reason might be this: the vandermonde matrix must have a determinant ≠ 1 for almost all functions. hence the determinants for f^h(x) and f^-h(x) cannot both lie within the radius (determinant < 1 = within radius 1) for (1 - A)^-1. basically just the Taylor series radius argument, applied to matrices. have you considered this? if i am correct about that, the question becomes: what if the determinant of f(x) is 1? will the matrix method agree on both fixpoints?

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 06/05/2011, 05:17 PM

(06/05/2011, 01:45 PM)tommy1729 Wrote: if i am correct about that, the question becomes: what if the determinant of f(x) is 1? will the matrix method agree on both fixpoints? How do you compute, or at least estimate, the determinant of an (infinite-sized) Carleman-matrix (as simply the transpose of my "matrix-operators")? Gottfried

Gottfried Helms, Kassel

tommy1729 Ultimate Fellow Posts: 1,358 Threads: 330 Joined: Feb 2009 06/06/2011, 11:01 AM

i've noticed we used both the terms Vandermonde and Carleman matrix.
of course it's the Carleman matrix and not Vandermonde! also note that the two matrix-method numbers must sum to 1!! 0.580243966210 + 0.41975603379 = 0.9999999999... = 1, simply because 1/(1+x) + 1/(1+(1/x)) = 1. - which also shows the importance of the determinant !! - because of this sum = 1, the matrix methods cannot both match the serial summation.(*) this is similar to my determinant argument made before, just an equivalent restatement.

(*) the sum of both serial summations is related to the equation f(g(x)) = f(x), whereas the sum of the matrix methods just gives 1 for all x.

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 06/06/2011, 12:47 PM

(06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210 + 0.41975603379 = 0.9999999999 = 1 simply because 1/(1+x) + 1/(1+(1/x)) = 1. Yes, that observation was exactly what I was discussing when I presented these considerations here from 2007 on; especially I had a conversation with Andy on this. The next obvious step is to search for the reason why the powerseries-based methods disagree with the serial summation - and always in only one of the results. And then possibly for some adaption/cure, so that the results can be made to match. For instance, Ramanujan-summation for divergent series includes one integral term to correct for the change of order of summation, which is an internal detail of that summation method; possibly we should find something analogous here. Quote: also note that the 2 matrix-method number must sum to 1 !! - which also shows the importance of the determinant !! - Thank you for the double exclamation marks. They don't introduce a determinant of an infinite-sized matrix, but they make much noise, which I do not like, as you know from my earlier conversations in sci.math. So I'll stop this small conversation on your postings here, as I don't have much more of relevance to say at the moment for the other occasional and interested reader.
Gottfried

Gottfried Helms, Kassel

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 10/19/2017, 10:38 AM (This post was last modified: 10/19/2017, 10:40 AM by Gottfried.)

(06/06/2011, 12:47 PM)Gottfried Wrote: (06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210 + 0.41975603379 = 0.9999999999 = 1 simply because 1/(1+x) + 1/(1+(1/x)) = 1. Yes, that observation was exactly what I was discussing when I presented these considerations here since 2007; especially I had a conversation with Andy on this. The next step which is obviously to do, is to search for the reason why powerseries-based methods disagree with the serial summation - and always only one of the results. (...)

It should be mentioned in this thread as well that the reason for this problem of matching the Carleman-based and the simple serial-summation-based results is simple, and simply correctable.

1) The Carleman-matrix is always based on the power series of a function f(x), more specifically of a function g(x+t_0)-t_0 where t_0 is the attracting fixpoint for the function f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.

2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x), and then take the Neumann series of this Carleman matrix. This then again evaluates correctly and in concordance with the serial summation. (Of course, "serial summation" always means possibly including Cesàro or Euler summation or the like.)

So with the correct adaptation of the two required Carleman-matrices and their Neumann series, we reproduce the iteration-series in question correctly in both directions.
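The two directional serial summations compared in the 2011 posts are easy to reproduce directly. A Python sketch (my own; the thread's values come from Pari's sumalt): both directions are evaluated in the Euler/Abel sense by averaging two consecutive partial sums, and their total visibly differs from 1, which is tommy1729's point about a single Neumann series.

```python
import math

def abel_alt_sum(step, x, n=400):
    # Euler/Abel value of sum_{h>=0} (-1)^h * x_h with x_{h+1} = step(x_h):
    # the terms tend to a fixpoint, so we average two consecutive partial sums.
    s, sign = 0.0, 1
    for _ in range(n):
        s += sign * x
        sign, x = -sign, step(x)
    return s + sign * x / 2            # = (s_n + s_{n+1}) / 2

sp = abel_alt_sum(lambda x: x*x - 0.5, 1.0)            # forward iterates f^h(1.0)
sn = abel_alt_sum(lambda x: math.sqrt(0.5 + x), 1.0)   # backward iterates f^-h(1.0)
print(sp)       # ≈ 0.709801988103
print(sn)       # ≈ 0.419756033790
print(sp + sn)  # ≈ 1.1296, not 1 — one fixed Neumann series cannot give both
```

With the 2017 correction, the Neumann series of the Carleman matrix developed at the fixpoint that is attracting for the respective direction reproduces each of these two values.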
Gottfried

Gottfried Helms, Kassel

sheldonison Long Time Fellow Posts: 626 Threads: 22 Joined: Oct 2008 10/19/2017, 04:50 PM (This post was last modified: 10/19/2017, 05:21 PM by sheldonison.)

(10/19/2017, 10:38 AM)Gottfried Wrote: ... 1) The Carleman-matrix is always based on the power series of a function f(x) and more specifically of a function g(x+t_0)-t_0 where t_0 is the attracting fixpoint for the function f(x). For that option the Carleman-matrix-based and the serial summation approach evaluate to the same value. 2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1]() we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ... So with the correct adapation of the required two Carleman-matrices and their Neumann-series we reproduce correctly the iteration-series in question in both directions. Gottfried

Is there a connection between the Carleman-matrix and Schröder's equation, $\Psi(f(z))=\lambda\cdot\Psi(z)$? Here lambda is the derivative at the fixed point, $\lambda=f'(t_0)$, and the iterated function g(x+1)=f(g(x)) can then be generated from the inverse Schröder equation: $g(z)=t_0+\Psi^{-1}(\lambda^z)$. Does the solution of the Carleman matrix give you the power series for $\Psi^{-1}$? I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series of both $\Psi$ and $\Psi^{-1}$, iterating with Pari/GP's polynomials, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for. - Sheldon

Gottfried Ultimate Fellow Posts: 757 Threads: 116 Joined: Aug 2007 10/19/2017, 09:33 PM (This post was last modified: 10/23/2017, 11:56 PM by Gottfried.)

(10/19/2017, 04:50 PM)sheldonison Wrote: (10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman-matrix is always based on the power series of a function f(x) and more specifically of a function g(x+t_0)-t_0 where t_0 is the attracting fixpoint for the function f(x). For that option the Carleman-matrix-based and the serial summation approach evaluate to the same value. 2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1]() we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ... So with the correct adapation of the required two Carleman-matrices and their Neumann-series we reproduce correctly the iteration-series in question in both directions. Gottfried

Is there a connection between the Carleman-matrix and Schröder's equation, $\Psi(f(z))=\lambda\cdot\Psi(z)$? Here lambda is the derivative at the fixed point; $\lambda=f'(t_0)$, and then the iterated function g(x+1)=f(g(x)) can be generated from the inverse Schröder equation: $g(z)=t_0+\Psi^{-1}(\lambda^z)$. Does the solution to the Carleman matrix give you the power series for $\Psi^{-1}$? I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series for both $\Psi$ and $\Psi^{-1}$, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for.

Hi Sheldon - yes, that connection is exceptionally simple. The Schröder-function is simply expressed by the eigenvector-matrices which occur in the diagonalization of the Carleman-matrix for the function f(x). In my notation, with a Carleman matrix F for your function f(x), we have with a vector V(x) = [1, x, x^2, x^3, ...]
$V(x) * F = V(f(x))$

Then by diagonalization we find a solution in M and D such that $F = M * D * M^{-1}$. The software must take care that the eigenvectors in M are correctly scaled: for instance in the triangular case (where f(x) has no constant term), the diagonal of M is the unit diagonal, such that M is indeed in Carleman-form. (Using M=mateigen(F) in Pari/GP does not suffice; you must scale the columns of M appropriately - I've built my own eigen-solver for triangular matrices, which I can provide to you.) Then we have

$V(x) * F = V(x) * M * D * M^{-1} \\ = V(\Psi(x)) * D * M^{-1} \\ = V(\Psi(x)) * {}^dV(\lambda) * M^{-1} \\ = V(\Psi(x) * \lambda) * M^{-1} \\ = V(\Psi^{-1}(\lambda * \Psi(x)))$

We only need to pay attention to the problem that non-triangular Carleman matrices of finite size - the only ones available to our software packages - do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman-matrices, for instance $f(x)=ax+b$, $f(x) = qx/(1+qx)$ or $f(x) = t^x-1$ or the like, where the coefficient at the linear term is neither zero nor 1. For the non-triangular matrices, for instance for $f(x)=b^x$, the diagonalization gives only rough approximations to an - in some sense - "best-possible" solution for fractional iterations, and its eigenvector-matrices are in general not Carleman or truncated Carleman. But they nonetheless give real-to-real solutions also for $b > \eta$, and seem to approximate the Kneser solution as the size of the matrices increases.
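This identity chain can be illustrated with a concrete triangular example. The following Python/numpy sketch is my own construction, not Gottfried's toolbox: it takes $f(x)=2x/(1+2x)$, which fixes 0 with $\lambda=f'(0)=2$ and has the closed-form Schröder function $\Psi(x)=x/(1-2x)$ (one checks $\Psi(f(x))=2\Psi(x)$ directly). The Carleman matrix P of Ψ plays the role of M, and F·P = P·D with D = diag(λ^j) holds exactly on truncations, because all matrices involved are triangular.

```python
import numpy as np

N = 8  # truncation size

def carleman(coeffs, N):
    """Truncated Carleman matrix: column j holds the coefficients of the
    j-th power of the series, so that V(x) * C = V(series(x))."""
    C = np.zeros((N, N))
    col = np.array([1.0])                 # series^0 = 1
    for j in range(N):
        C[:min(len(col), N), j] = col[:N]
        col = np.convolve(col, coeffs)    # next power of the series
    return C

# f(x) = 2x/(1+2x) = 2x - 4x^2 + 8x^3 - ... : fixpoint 0, lambda = f'(0) = 2
f_c   = np.array([0.0] + [(-1.0)**(k - 1) * 2.0**k for k in range(1, N)])
# Schroeder function, known in closed form here: Psi(x) = x/(1-2x)
psi_c = np.array([0.0] + [2.0**(k - 1) for k in range(1, N)])

F = carleman(f_c, N)
P = carleman(psi_c, N)
D = np.diag(2.0 ** np.arange(N))          # ^dV(lambda) with lambda = 2

# Matrix form of Schroeder's equation: F * P = P * D, i.e. F = P * D * P^{-1}
print(np.max(np.abs(F @ P - P @ D)))      # ~ 0 up to float rounding
```

The triangularity is exactly why the truncation is harmless here, matching the remark that the triangular cases are the right ones to learn the method on.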
You can have my Pari/GP toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization for $t^x-1$ such that the eigenvector-matrices are of Carleman-type and true truncations of the $\Psi$-powerseries for the Schröder-function (which the built-in eigensolver in Pari/GP does not take care of). If you are interested, it is perhaps better to contact me via email, because the set of routines should come with some explanations, and I expect some need for didactical hints.
For a "preview" of that toolbox see perhaps page 21 ff in http://go.helms-net.de/math/tetdocs/Cont...ration.pdf which discusses the diagonalization for $t^x -1$ with its Schröder-function (and the "matrix-logarithm" method for the $e^x - 1$ and $\sin(x)$ functions, which have no diagonalization in the finite-size case).

Gottfried Helms, Kassel

