Iteration exercises: f(x)=x^2 - 0.5 ; Fixpoint-irritation...
#11
(06/04/2011, 01:13 PM)Gottfried Wrote: Sometimes we find easter-eggs even after easter...

For the alternating iteration-series


(definitions as copied and extended from previous post, see below)

we find a rational polynomial for p=4. That means




(maybe this is trivial and only a telescoping sum; I didn't check this thoroughly)
<hr>
Another one:

<hr>

Code:
\\ define function f(x) for forward iteration and g(x) for backward iteration (=negative height)
\\(additional parameter h for positive integer heights is possible)
f(x,h=1) = for(k=1,h,x = x^2 - 0.5 ); return (x) ;
g(x,h=1) = for(k=1,h,x = sqrt(0.5 + x) ); return (x) ;

\\ do analysis at central value for alternating sums x0=1
x = 1.0
sp(x) = sumalt(h=0,(-1)^h * f(x , h))
sn(x) = sumalt(h=0,(-1)^h * g(x , h))
y(x) = sp(x) + sn(x) - x
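For readers without Pari/GP, the definitions above can be mirrored in Python; this is a rough sketch, not part of the original post. The sumalt stand-in uses the Cohen-Rodriguez Villegas-Zagier acceleration for alternating series (the same algorithm Pari's sumalt is built on); the term count of 100 is my own choice:

```python
import math

def f(x, h=1):
    # forward iteration of f(x) = x^2 - 0.5, h steps
    for _ in range(h):
        x = x * x - 0.5
    return x

def g(x, h=1):
    # backward iteration (negative height), the inverse branch of f
    for _ in range(h):
        x = math.sqrt(0.5 + x)
    return x

def sumalt(a):
    # Cohen-Rodriguez Villegas-Zagier acceleration for sum_{k>=0} (-1)^k a[k];
    # a holds the UNSIGNED terms, unlike Pari's sumalt where the sign is explicit
    n = len(a)
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a[k]
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d

x0 = 1.0                                      # central value, as in the post
sp = sumalt([f(x0, h) for h in range(100)])   # sum_h (-1)^h f^[h](x0)
sn = sumalt([g(x0, h) for h in range(100)])   # sum_h (-1)^h g^[h](x0)
y = sp + sn - x0
print(sp, sn, y)
```

Note the convention difference: here the list holds the unsigned terms f^[h](x0), while in the Pari code the factor (-1)^h is written explicitly inside sumalt.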

This is not my expertise ... yet.

But I think I have seen sums like these before, somewhere in the far past.

For starters, I related your sums to equations of the type f(x) = f(g(x)).

Also, ergodic theory studies averages of the type

F(x) = lim_{n->oo} (1/n) * ( f^[0](x) + f^[1](x) + ... + f^[n](x) )
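For this thread's f(x) = x^2 - 0.5 that average is easy to check numerically: the orbit of a starting point in the basin converges to the attracting fixpoint t_0 = (1-sqrt(3))/2, so the Birkhoff average converges to t_0 itself. A small sketch (the function name is my own, not from the thread):

```python
import math

def birkhoff_average(f, x, n):
    # (1/n) * (f^[0](x) + f^[1](x) + ... + f^[n-1](x))
    total = 0.0
    for _ in range(n):
        total += x
        x = f(x)
    return total / n

f = lambda x: x * x - 0.5
t0 = (1 - math.sqrt(3)) / 2   # attracting fixpoint: f(t0) = t0, |f'(t0)| < 1
avg = birkhoff_average(f, 1.0, 100000)
print(avg, t0)
```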

Hidden telescoping can indeed occur.

And sometimes we can rewrite the sum as an integral.

But again, this is not my expertise yet.

You gave me extra questions instead of an answer :p

In particular I do not understand your matrix idea in this thread.

My guess is that when you start at 1.0 you use Carleman matrices to compute the sums, and one of the two Carleman matrices will not converge for 1.0 (it lies outside the radius); so one result is wrong and the other is not.

Talking about alternating series: 1/2 - 1/3 + 1/5 - 1/7 + 1/11 - ...

I believe this has a closed form/name; if I recall correctly it is called the first Mertens constant ...

There was something else I wanted to say ... forgot :s

Edit: I do not know how to rewrite an average as a sum or a superfunction (I do know an integral and perhaps an infinite product) ... I say that because it might be useful to see the link with the "ergodic average" (or whatever it is called).

It bothers me; I want to get rid of this "lim **/n" term for averages. (That might also be of benefit for number theory and statistics.)
#12
(06/04/2011, 09:43 PM)tommy1729 Wrote: in particular i do not understand your matrix idea in this thread.
You may look at "alternating sum of iterates" (here: of the exponential function). There I describe the method for the first time, though with another function as its basis: the exponential.

There the problem of convergence of series of matrices surfaces, and especially the question of convergence of the shortcut formula for the geometric series.

Nearly everything was completely new to me then, so that article should be rewritten; anyway, in its naivety it might be a good introductory impulse for understanding the key idea of the matrix method, and possibly for engaging in the area which I now call "iteration-series", in resemblance to "power series" and "Dirichlet series".

Gottfried
Gottfried Helms, Kassel
#13
Looking back at the article on the alternating iteration-series of the exponential, some confirmation for the matrix-based method was still missing. While I could use serial summation (Abel or Euler summation of the explicit iterates) to crosscheck the matrix method for those bases where the powertower of infinite height converges, I could not do so for the other bases, due to the too fast growth of the terms/iterated exponentials.

But if I take the (complex) fixpoint t as the initial value, then the alternating series reduces to t - t + t - ... = t/2 (in the regularized sense), which should be meaningful for each base, whether its exponential fixpoint is real or not.

With this I now have (at least) one check value by serial summation for the comparison with the matrix method.

The matrix method (dimension 32x32), for instance for base e, which has a divergent iteration-series, comes out near the expected result to three or four digits, and the same was true for the conjugate of t. If the convergence could be accelerated, this would give another confirmation of the applicability of the method to iteration-series.
Gottfried Helms, Kassel
#14
(03/03/2009, 12:15 PM)Gottfried Wrote: serial summation:
0.709801988103 towards 2nd fixpoint
0.419756033790 towards 1st fixpoint

Matrix-method:
0.580243966210 towards 2nd fixpoint // incorrect, does not match the serial summation
0.419756033790 towards 1st fixpoint // matches the serial summation

A reason might be this: the Vandermonde matrix must have a determinant <> 1 for almost all functions.

Hence the determinants for f^[h](x) and f^[-h](x) cannot both satisfy the condition for the geometric-series shortcut (1 - A)^{-1} (determinant < 1, i.e. within radius 1).

Basically this is just the Taylor-series radius argument, applied to matrices.

Have you considered this?

If I am correct about that, the question becomes: what if the determinant for f(x) is 1? Will the matrix method then agree at both fixpoints?
#15
(06/05/2011, 01:45 PM)tommy1729 Wrote: if i am correct about that , the question becomes : what if the determinant of f(x) is 1 ? will the matrix method agree on both fixpoints ?

How do you compute, or at least estimate, the determinant of an (infinite-sized) Carleman matrix (which is simply the transpose of my "matrix operators")?

Gottfried
Gottfried Helms, Kassel
#16
I've noticed that we have used both the terms Vandermonde and Carleman matrix.

Of course it's a Carleman matrix, not a Vandermonde matrix!

Also note that the two matrix-method numbers must sum to 1:

0.580243966210 + 0.419756033790 = 1.000000000000

simply because 1/(1+x) + 1/(1+(1/x)) = 1 (the second term equals x/(1+x)).

- which also shows the importance of the determinant! -

Because this sum is 1, the two matrix-method results cannot both match the serial summation.(*)

This is similar to my determinant argument made before, just an equivalent restatement.

(*) The sum of the two serial summations is related to the equation f(g(x)) = f(x), whereas the sum of the two matrix-method results gives 1 for all x.
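The identity is quick algebra, since 1/(1+(1/x)) = x/(1+x), so the two terms sum to (1+x)/(1+x) = 1. A short numerical check of the identity and of the quoted values (my own sketch, not from the thread):

```python
# check the identity 1/(1+x) + 1/(1+1/x) = 1 for a few sample points,
# and that the two matrix-method values quoted above indeed sum to 1
for x in (0.5, 2.0, 3.14159, 42.0):
    assert abs(1 / (1 + x) + 1 / (1 + 1 / x) - 1.0) < 1e-12
total = 0.580243966210 + 0.419756033790
print(total)
```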
#17
(06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210 + 0.419756033790 = 1

simply because 1/(1+x) + 1/(1+(1/x)) = 1.


Yes, that observation is exactly what I have been discussing since I presented these considerations here in 2007; in particular I had a conversation with Andy about it. The obvious next step is to search for the reason why the powerseries-based methods disagree with the serial summation - and always in only one of the two results.
And then possibly for some adaptation/cure, so that the results can be made to match. For instance, Ramanujan summation of divergent series includes an integral term to correct for the change of order of summation, which is an internal detail of that summation method; possibly we should find something analogous here.

Quote:also note that the 2 matrix-method number must sum to 1 !!

- which also shows the importance of the determinant !! -
Thank you for the double exclamation marks. They don't introduce a determinant of an infinite-sized matrix, but they do make much noise, which I do not like, as you know from my earlier conversations in sci.math. So I'll stop this small conversation on your postings here, as I don't have much more of relevance to say at the moment for the other occasional, interested reader.

Gottfried
Gottfried Helms, Kassel
#18
(06/06/2011, 12:47 PM)Gottfried Wrote:
(06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210 + 0.419756033790 = 1

simply because 1/(1+x) + 1/(1+(1/x)) = 1.

Yes, that observation is exactly what I have been discussing since I presented these considerations here in 2007; in particular I had a conversation with Andy about it. The obvious next step is to search for the reason why the powerseries-based methods disagree with the serial summation - and always in only one of the two results.
(...)

It should also be mentioned in this thread that the reason for this problem of matching the Carleman-based and the simple serial-summation-based results is simple, and simply correctable.

1) The Carleman matrix is always based on the power series of a function f(x), more specifically of the function g(x) = f(x+t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.

2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](x), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x), and then take the Neumann series of this Carleman matrix. This again evaluates correctly, in concordance with the serial summation. (Of course "serial summation" always means possibly including Cesàro or Euler summation or the like.)

So with the correct adaptation of the two required Carleman matrices and their Neumann series we reproduce the iteration-series in question correctly in both directions.

Gottfried
Gottfried Helms, Kassel
#19
(10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman matrix is always based on the power series of a function f(x), more specifically of the function g(x) = f(x+t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.

2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](x), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...

So with the correct adaptation of the two required Carleman matrices and their Neumann series we reproduce the iteration-series in question correctly in both directions.

Gottfried

Is there a connection between the Carleman matrix and Schröder's equation, λ·ψ(x) = ψ(f(x))? Here lambda is the derivative at the fixed point, λ = f'(x_0), and the iterated function g, with g(x+1) = f(g(x)), can then be generated from the inverse Schröder function: g(x) = ψ^{-1}(λ^x).

Does the solution for the Carleman matrix give you the power series for ψ(x)?
I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series of both ψ(x) and ψ^{-1}(x), iterating with Pari/GP's polynomials, but a matrix solution would be easier to port to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for :)
- Sheldon
#20
(10/19/2017, 04:50 PM)sheldonison Wrote:
(10/19/2017, 10:38 AM)Gottfried Wrote: (...)

Is there a connection between the Carleman matrix and Schröder's equation, λ·ψ(x) = ψ(f(x))? Here lambda is the derivative at the fixed point, λ = f'(x_0), and the iterated function g, with g(x+1) = f(g(x)), can then be generated from the inverse Schröder function: g(x) = ψ^{-1}(λ^x).

Does the solution for the Carleman matrix give you the power series for ψ(x)?
I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series of both ψ(x) and ψ^{-1}(x), iterating with Pari/GP's polynomials, but a matrix solution would be easier to port to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for :)

Hi Sheldon - yes, that connection is exceptionally simple. The Schröder function is simply expressed by the eigenvector matrices which occur in the diagonalization of the Carleman matrix for the function f(x).

In my notation, with a Carleman matrix F for your function f(x) and a vector V(x) = [1, x, x^2, x^3, ...], we have

V(x) * F = V(f(x))

Then by diagonalization we find a solution in M and D such that

F = M * D * M^{-1},   D = diag(1, λ, λ^2, λ^3, ...)

The software must take care that the eigenvectors in M are correctly scaled: for instance in the triangular case (where f(x) has no constant term) the diagonal of M must be the unit diagonal, such that M is indeed in Carleman form. (Using M = mateigen(F) in Pari/GP does not suffice; you must scale the columns of M appropriately - I've built my own eigen-solver for triangular matrices, which I can provide to you.)

Then we have

V(x) * M = V(ψ(x))

so the second column of M carries the power series of the Schröder function ψ, and V(x) * F^h = V(x) * M * D^h * M^{-1} = V(f^[h](x)).

We only need to pay attention to the problem that non-triangular Carleman matrices of finite size - the only ones available to our software packages - do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman matrices, for instance $f(x)=ax+b$, $f(x) = qx/(1+qx)$ or $f(x) = t^x-1$, or the like, where the coefficient at the linear term is neither zero nor 1.
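As a minimal self-contained illustration of the triangular case (my own example, not from the toolbox): for f(x) = x/2 + x^2 the Carleman matrix is lower triangular with the eigenvalues lam^j, lam = 1/2, on its diagonal, and the eigenvector matrix with the Carleman scaling (unit diagonal, computed by forward substitution instead of mateigen) carries the Schröder-function coefficients in its second column:

```python
import numpy as np

N = 24
lam = 0.5                                         # multiplier f'(0) at the fixpoint 0
fser = np.zeros(N); fser[1], fser[2] = lam, 1.0   # series of f(x) = x/2 + x^2

def polmul(a, b, n):
    # truncated Cauchy product of two coefficient vectors
    c = np.zeros(n)
    for i, ai in enumerate(a[:n]):
        if ai != 0.0:
            m = min(n - i, len(b))
            c[i:i + m] += ai * b[:m]
    return c

# Carleman matrix F: column j = coefficients of f(x)^j, so that V(x) * F = V(f(x))
F = np.zeros((N, N))
p = np.zeros(N); p[0] = 1.0
for j in range(N):
    F[:, j] = p
    p = polmul(p, fser, N)

# eigenvector matrix M with the Carleman scaling M[j,j] = 1:
# column j solves F m = lam^j m by forward substitution
# (F is lower triangular with the distinct eigenvalues lam^j on its diagonal)
M = np.zeros((N, N))
for j in range(N):
    M[j, j] = 1.0
    for i in range(j + 1, N):
        M[i, j] = F[i, j:i] @ M[j:i, j] / (lam ** j - F[i, i])

psi = M[:, 1]                     # power series of the Schröder function

def peval(c, x):
    # evaluate a truncated power series by Horner's scheme
    r = 0.0
    for ck in c[::-1]:
        r = r * x + ck
    return r

x = 0.05                          # well inside the radius of convergence
# Schröder's equation: psi(f(x)) = lam * psi(x)
print(peval(psi, lam * x + x * x), lam * peval(psi, x))
```

The last line prints psi(f(x)) and lam*psi(x), which agree to truncation accuracy, i.e. the scaled eigenvector column really solves Schröder's equation.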

For the non-triangular matrices, for instance for $f(x)=b^x$, the diagonalization gives only rough approximations to an - in some sense - "best possible" solution for fractional iteration, and its eigenvector matrices are in general not Carleman or truncated-Carleman. But they nonetheless give real-to-real solutions, also for $b > \eta$, and seem to approximate the Kneser solution as the size of the matrices increases.

You can have my Pari/GP toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization of $t^x-1$ such that the eigenvector matrices are of Carleman type and true truncations of the $\psi$-powerseries for the Schröder function (which the built-in eigensolver in Pari/GP does not take care of). If you are interested, it is perhaps better to contact me via email, because the set of routines should come with some explanations, and I expect some need for didactic hints.
<hr>
For a "preview" of that toolbox see perhaps page 21 ff. in http://go.helms-net.de/math/tetdocs/Cont...ration.pdf which discusses the diagonalization of $t^x - 1$ with its Schröder function (and the "matrix logarithm" method for the $e^x - 1$ and $\sin(x)$ functions, which have no diagonalization in the finite-size case).
Gottfried Helms, Kassel

