Posts: 1,491
Threads: 355
Joined: Feb 2009
06/04/2011, 09:43 PM
(This post was last modified: 06/04/2011, 10:26 PM by tommy1729.)
(06/04/2011, 01:13 PM)Gottfried Wrote: Sometimes we find Easter eggs even after Easter...
For the alternating iterationseries
(definitions as copied and extended from previous post, see below)
we find a rational polynomial for p=4. That means
(maybe this is trivial and a telescoping sum only, didn't check this thoroughly)
<hr>
Another one:
<hr>
Code: \\ define function f(x) for forward iteration and g(x) for backward iteration (= negative height)
\\ (an additional parameter h for positive integer heights is possible)
f(x,h=1) = {for(k=1,h, x = x^2 - 0.5); return(x);}
g(x,h=1) = {for(k=1,h, x = sqrt(0.5 + x)); return(x);}
\\ do analysis at central value for alternating sums, x0 = 1
x = 1.0
sp(x) = sumalt(h=0, (-1)^h * f(x,h))
sn(x) = sumalt(h=0, (-1)^h * g(x,h))
y(x) = sp(x) + sn(x) - x
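The same sums can be sketched in plain Python. The sketch below is an illustrative re-implementation, not the original code: instead of Pari's sumalt it regularizes by hand, using the fact that the iterates converge geometrically to the attracting fixpoint L, so the alternating sum can be taken as L/2 plus the absolutely convergent series over (-1)^h (f^[h](x) - L).

```python
from math import sqrt

def f(x):            # one forward iteration step
    return x * x - 0.5

def g(x):            # one backward step: the inverse of f on the sqrt branch
    return sqrt(0.5 + x)

def alt_sum(step, x, fix, n=200):
    """Regularized alternating sum  sum_{h>=0} (-1)^h step^[h](x).

    Splitting off the limit `fix` of the iterates:
        sum (-1)^h a_h  "="  fix/2 + sum (-1)^h (a_h - fix),
    where the right-hand series converges absolutely (geometric rate).
    """
    s, sign = fix / 2.0, 1.0
    for _ in range(n):
        s += sign * (x - fix)
        x = step(x)
        sign = -sign
    return s

# attracting fixpoints: f has (1-sqrt(3))/2, its inverse g has (1+sqrt(3))/2
t_f = (1 - sqrt(3)) / 2
t_g = (1 + sqrt(3)) / 2

def sp(x): return alt_sum(f, x, t_f)   # forward-iteration series
def sn(x): return alt_sum(g, x, t_g)   # backward-iteration series
def y(x):  return sp(x) + sn(x) - x    # the h=0 term x is counted twice

print(sp(1.0), sn(1.0), y(1.0))
```

Any such regularized sum S satisfies S(x) = x - S(f(x)), which makes a good consistency check; at the fixpoint itself every iterate equals the fixpoint, and the sum collapses to fix/2.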
this is not my expertise ... yet.
but i think i have seen those before in some far past.
for starters , i related your sums to equations of type f(x) = f(g(x)).
also, ergodic theory studies averages of the type
F(x) = lim_{n->oo} 1/n * ( f^[0](x) + f^[1](x) + ... + f^[n](x) )
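As a tiny numeric illustration of such an average (a sketch only; the quadratic f(x) = x^2 - 0.5 from the quoted code is used here as a stand-in for a general f): for a map with an attracting fixpoint, the average simply converges to that fixpoint, with the transient contributing only O(1/n).

```python
from math import sqrt

def f(x):
    return x * x - 0.5

def birkhoff_average(x, n):
    """(1/n) * (f^[0](x) + f^[1](x) + ... + f^[n-1](x))."""
    total = 0.0
    for _ in range(n):
        total += x
        x = f(x)
    return total / n

t0 = (1 - sqrt(3)) / 2                 # attracting fixpoint of f
print(birkhoff_average(1.0, 10_000))   # close to t0
```

Since the iterates approach t0 geometrically, the deviation of the average from t0 is (a constant) / n, so doubling n roughly halves the error.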
hidden telescoping can indeed occur.
and sometimes we can rewrite to an integral.
but again , this is not my expertise yet.
you gave me extra questions instead of an answer :p
in particular i do not understand your matrix idea in this thread.
my guess is that when you start at 1.0, you use carleman matrices to compute the sum, and one carleman matrix will not converge (lies outside the radius) for 1.0; so one is wrong and the other is not.
talking about alternating series: 1/2 - 1/3 + 1/5 - 1/7 + 1/11 - ...
i believe this has a closed form/name, and if i recall correctly it's called the first mertens constant ...
there was something else i wanted to say ... forgot :s
edit : i do not know how to rewrite an average as a sum or superfunction (i do know an integral and perhaps an infinite product form) ... i say that because it might be useful to see the link with the "ergodic average" (or whatever it's called).
it bothers me, i wanna get rid of this "lim **/n" term for averages. (might also be of benefit for number theory and statistics)
Posts: 787
Threads: 121
Joined: Aug 2007
(06/04/2011, 09:43 PM)tommy1729 Wrote: in particular i do not understand your matrix idea in this thread.
You may look at the thread "alternating sum of iterates (here: of exponential function)". There I describe the method for the first time, however with another function as basis: the exponential function.
The problem of convergence of series of matrices surfaces there, and especially the question of convergence of the shortcut formula for the geometric series.
Nearly everything was completely new for me, so that article should be rewritten; anyway, in its naivety it might be a good introductory impulse to understand the key idea of that matrix method, and possibly to engage in the area which I now call "iterationseries", in resemblance to "powerseries" and "dirichletseries".
Gottfried
Gottfried Helms, Kassel
Posts: 787
Threads: 121
Joined: Aug 2007
06/05/2011, 11:40 AM
(This post was last modified: 06/05/2011, 12:35 PM by Gottfried.)
Looking back at the article on the alternating iterationseries of the exponential, there was some confirmation for the matrix-based method missing. While I could use the serial summation (Abel- or Euler-summation of the explicit iterates) to crosscheck the matrix method for the bases where the powertower of infinite height converges, I could not do that for the other bases, due to the too fast growth of the iterated exponentials.
But well, if I take the (complex) fixpoint t as initial value, then the alternating series reduces to t - t + t - ... = t/2, which should be meaningful for each base, whether its exponential fixpoint is real or not.
With this I have now (at least) one check value by serial summation for the comparison with the matrix method.
The matrix method, dimension 32x32, for instance for base e, which has a divergent iterationseries, comes out near the expected result to three/four digits, and the same was true for the conjugate of t. If the convergence could be accelerated, this would give another confirmation of the applicability of this method to iterationseries.
Gottfried Helms, Kassel
Posts: 1,491
Threads: 355
Joined: Feb 2009
(03/03/2009, 12:15 PM)Gottfried Wrote: serial summation:
0.709801988103 towards 2'nd fixpoint
0.419756033790 towards 1'st fixpoint
Matrix method:
0.580243966210 towards 2'nd fixpoint // incorrect, doesn't match serial summation
0.419756033790 towards 1'st fixpoint // matches serial summation
a reason might be this: the vandermonde matrix must have a determinant <> 1 for almost all functions.
hence the determinants for f^[h](x) and f^[-h](x) cannot both satisfy being within the radius (determinant < 1 = within radius 1) for (1 - A)^(-1).
basically just the taylor series radius argument, applied to matrices.
have you considered this?
if i am correct about that, the question becomes: what if the determinant of f(x) is 1? will the matrix method agree on both fixpoints?
Posts: 787
Threads: 121
Joined: Aug 2007
(06/05/2011, 01:45 PM)tommy1729 Wrote: if i am correct about that , the question becomes : what if the determinant of f(x) is 1 ? will the matrix method agree on both fixpoints ?
How do you compute or at least estimate the determinant of an (infinite sized) Carlemanmatrix (as simply transposed of "matrixoperators")?
Gottfried
Gottfried Helms, Kassel
Posts: 1,491
Threads: 355
Joined: Feb 2009
i've noticed we used both the terms vandermonde and carleman matrix.
of course it's a carleman matrix and not vandermonde !
also note that the 2 matrix-method numbers must sum to 1 !!
0.580243966210
+
0.41975603379
=0.9999999999 = 1
simply because 1/(1+x) + 1/(1+(1/x)) = 1.
 which also shows the importance of the determinant !! 
because of this sum = 1, the matrix methods cannot both match the serial summation. (*)
this is similar to my determinant argument made before, just an equivalent restatement.
* the sum of both serial summations is related to the equation f(x) = f(g(x)), whereas the sum of the matrix methods just gives 1 for all x.
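the scalar analogue of this can be checked directly: if the geometric-series shortcut (1 + A)^(-1) contributes 1/(1+λ) per eigenvalue λ in the forward direction, the backward direction (eigenvalue 1/λ) gets 1/(1+1/λ), and the two always sum to 1 for any λ other than 0 and -1. a quick sanity check (illustration only, not the matrix computation itself):

```python
def neumann_value(lmbda):
    """Scalar stand-in for the shortcut (1 + A)^(-1), taken per eigenvalue."""
    return 1.0 / (1.0 + lmbda)

# forward direction has eigenvalue lmbda, backward direction 1/lmbda
for lmbda in (0.25, 0.732, 3.0, -0.5):
    total = neumann_value(lmbda) + neumann_value(1.0 / lmbda)
    print(lmbda, total)   # total is always 1
```

algebraically 1/(1+λ) + 1/(1+1/λ) = 1/(1+λ) + λ/(λ+1) = 1, which is exactly why the two matrix results are forced to sum to 1 regardless of what the serial summations do.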
Posts: 787
Threads: 121
Joined: Aug 2007
(06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210
+
0.41975603379
=0.9999999999 = 1
simply because 1/(1+x) + 1/(1+(1/x)) = 1.
Yes, that observation is exactly what I have been discussing when presenting these considerations here since 2007; especially, I had a conversation with Andy on this. The next step, which is obviously to do, is to search for the reason why the powerseries-based methods disagree with the serial summation, and why it is always only one of the two results.
And then possibly for some adaption/cure, so that the results can be made to match. For instance, Ramanujan summation for divergent series includes an integral term to correct for the change of order of summation, which is an internal detail of that summation method; possibly we should find something analogous here.
Quote:also note that the 2 matrixmethod number must sum to 1 !!
 which also shows the importance of the determinant !! 
Thank you for the double exclamation marks. They don't introduce a determinant of an infinite-sized matrix, but they do make much noise, which I do not like, as you know from my earlier conversations in sci.math. So I'll stop this small conversation on your postings here, as I don't have much more of relevance to say at the moment for the other occasional and interested reader.
Gottfried
Gottfried Helms, Kassel
Posts: 787
Threads: 121
Joined: Aug 2007
10/19/2017, 10:38 AM
(This post was last modified: 10/19/2017, 10:40 AM by Gottfried.)
(06/06/2011, 12:47 PM)Gottfried Wrote: (06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210
+
0.41975603379
=0.9999999999 = 1
simply because 1/(1+x) + 1/(1+(1/x)) = 1.
Yes, that observation was exactly what I was discussing when I presented these considerations here since 2007; especially I had a conversation with Andy on this. The next step which is obviously to do, is to search for the reason why powerseriesbased methods disagree with the serial summation  and always only one of the results.
(...)
It should be mentioned also in this thread that the reason for this problem of matching the Carleman-based and the simple serial-summation-based results is simple, and simply correctable.
1) The Carleman matrix is always based on the power series of a function f(x), and more specifically of the conjugate g(x) = f(x + t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x), and then form the Neumann series of this Carleman matrix. This then again evaluates correctly, in concordance with the serial summation. (Of course, "serial summation" always means possibly including Cesaro or Euler summation or the like.)
So with the correct adaptation of the two required Carleman matrices and their Neumann series, we reproduce the iterationseries in question correctly in both directions.
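This recipe can be illustrated numerically. The sketch below is my own toy reconstruction (using the thread's earlier example f(x) = x^2 - 0.5 rather than an exponential): develop the conjugate F(u) = f(u + t_0) - t_0 at the attracting fixpoint t_0, build a truncated Carleman matrix C with C[i][j] = coefficient of u^j in F(u)^i, and compare the Neumann-series value, obtained here by solving (I + C) y = V(u), with the plain serial summation of the alternating iterates.

```python
from math import sqrt

N  = 32                      # truncation size of the Carleman matrix
t0 = (1 - sqrt(3)) / 2       # attracting fixpoint of f(x) = x^2 - 0.5

# conjugate at the fixpoint: F(u) = f(u + t0) - t0 = 2*t0*u + u^2 (no constant term)
F_poly = [0.0, 2 * t0, 1.0]

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists), truncated to N terms."""
    out = [0.0] * N
    for i, ai in enumerate(a):
        if i >= N or ai == 0.0:
            continue
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

# Carleman matrix: row i holds the coefficients of F(u)^i
C = []
row = [0.0] * N
row[0] = 1.0                 # F(u)^0 = 1
for i in range(N):
    C.append(row)
    row = poly_mul(row, F_poly)

def neumann_alt_series(u):
    """Regularized sum (-1)^h F^[h](u) via the Neumann series of C.

    Since F has no constant term, C[i][j] = 0 for j < i, so (I + C) is
    triangular and (I + C) y = V(u) solves by back-substitution; y[1] is
    the value of the alternating iteration series.
    """
    v = [u ** j for j in range(N)]
    y = [0.0] * N
    for i in range(N - 1, -1, -1):
        s = v[i] - sum(C[i][j] * y[j] for j in range(i + 1, N))
        y[i] = s / (1.0 + C[i][i])
    return y[1]

def serial_alt_series(u, n=200):
    """Plain serial summation of the same alternating iterates of F."""
    s, sign = 0.0, 1.0
    for _ in range(n):
        s += sign * u
        u = 2 * t0 * u + u * u   # one application of F
        sign = -sign
    return s

u = 0.3
print(neumann_alt_series(u), serial_alt_series(u))  # should agree closely
```

Because F has no constant term, the truncated matrix products are exact here, which is why this toy case matches the serial summation; the delicate cases discussed in the thread are precisely the non-triangular ones.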
Gottfried
Gottfried Helms, Kassel
Posts: 683
Threads: 24
Joined: Oct 2008
10/19/2017, 04:50 PM
(This post was last modified: 10/19/2017, 05:21 PM by sheldonison.)
(10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman matrix is always based on the power series of a function f(x), and more specifically of the conjugate g(x) = f(x + t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...
So with the correct adapation of the required two Carlemanmatrices and their Neumannseries we reproduce correctly the iterationseries in question in both directions.
Gottfried
Is there a connection between the Carlemann matrix and Schröder's equation, ψ(f(x)) = λ·ψ(x)? Here lambda is the derivative at the fixed point, λ = f'(x_0), and then the iterated function g(x+1) = f(g(x)) can be generated from the inverse Schröder function: g(x) = ψ^(-1)(λ^x).
Does the solution to the Carlemann matrix give you the power series for ψ(x)?
I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series for both ψ(x) and ψ^(-1)(x), iterating using Pari/GP's polynomials, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carlemann solution might be what I'm looking for.
 Sheldon
Posts: 787
Threads: 121
Joined: Aug 2007
10/19/2017, 09:33 PM
(This post was last modified: 10/23/2017, 11:56 PM by Gottfried.)
(10/19/2017, 04:50 PM)sheldonison Wrote: (10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman matrix is always based on the power series of a function f(x), and more specifically of the conjugate g(x) = f(x + t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...
So with the correct adaptation of the two required Carleman matrices and their Neumann series, we reproduce the iterationseries in question correctly in both directions.
Gottfried
Is there a connection between the Carlemann matrix and Schröder's equation, ψ(f(x)) = λ·ψ(x)? Here lambda is the derivative at the fixed point, λ = f'(x_0), and then the iterated function g(x+1) = f(g(x)) can be generated from the inverse Schröder function: g(x) = ψ^(-1)(λ^x).
Does the solution to the Carlemann matrix give you the power series for ψ(x)?
I would like a matrix solution for Schröder's equation. I have a Pari/GP program for the formal power series for both ψ(x) and ψ^(-1)(x), iterating using Pari/GP's polynomials, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carlemann solution might be what I'm looking for.
Hi Sheldon - yes, that connection is exceptionally simple. The Schröder function is simply expressed by the eigenvector matrices which occur in the diagonalization of the Carleman matrix for the function f(x).
In my notation, with a Carleman matrix F for your function f(x), we have, with a vector V(x) = [1, x, x^2, x^3, ...], the relation V(x)~ * F = V(f(x))~.
Then by diagonalization we find a solution in M and D such that F = M * D * M^(-1).
The software must take care that the eigenvectors in M are correctly scaled: for instance in the triangular case (where f(x) has no constant term), the diagonal of M must equal that of the identity matrix, such that M is indeed in Carleman form. (Using M = mateigen(F) in Pari/GP does not suffice; you must scale the columns of M appropriately. I've built my own eigensolver for triangular matrices, which I can provide to you.)
Then we have V(x)~ * M = V(ψ(x))~ and V(x)~ * M^(-1) = V(ψ^(-1)(x))~, with D = diag(1, λ, λ^2, ...); so the second column of M holds the coefficients of the Schröder function ψ(x).
We need here only to pay attention to the problem that non-triangular Carleman matrices of finite size (the only ones available to our software packages) do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman matrices, so for instance f(x) = ax + b, f(x) = qx/(1+qx), or f(x) = t^x - 1, or the like, where also the coefficient at the linear term is not zero and not 1.
For the non-triangular matrices, for instance for f(x) = b^x, the diagonalization gives only rough approximations to an in some sense "best-possible" solution for fractional iterations, and its eigenvector matrices are in general not Carleman or truncated Carleman. But they nonetheless give real-to-real solutions also for b > eta, and seem to approximate the Kneser solution as the size of the matrices increases.
You can have my Pari/GP toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization for t^x - 1 such that the eigenvector matrices are of Carleman type and true truncations of the power series of the Schröder function ψ (which the built-in eigensolver in Pari/GP does not take care of). If you are interested, it is perhaps better to contact me via email, because the set of routines should come with some explanations, and I expect some need for didactic hints.
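To make the connection concrete without any eigensolver: the sketch below (my own illustration, not taken from the toolbox) computes the power series of the Schröder function directly from ψ(f(x)) = λ·ψ(x) by comparing coefficients, for a sample function with a triangular Carleman matrix, f(x) = λx + x^2. These are exactly the coefficients that a correctly scaled eigenvector computation has to reproduce.

```python
# Schröder power series for f(x) = lam*x + x^2 at the fixpoint 0:
# psi(f(x)) = lam * psi(x), normalized with psi'(0) = 1.
N   = 25
lam = -0.732          # sample multiplier with 0 < |lam| < 1 (illustrative choice)
f_poly = [0.0, lam, 1.0]

def poly_mul(a, b, n=N):
    """Multiply two polynomials (coefficient lists), truncated to n terms."""
    out = [0.0] * n
    for i, ai in enumerate(a):
        if ai == 0.0:
            continue
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

# powers of f as truncated polynomials: fpow[k] = coefficients of f(x)^k
fpow = [[0.0] * N for _ in range(N)]
fpow[0][0] = 1.0
for k in range(1, N):
    fpow[k] = poly_mul(fpow[k - 1], f_poly)

# comparing coefficients of x^n in psi(f(x)) = lam*psi(x):
#   sum_k a_k * [x^n] f(x)^k = lam * a_n
#   =>  a_n = ( sum_{k<n} a_k * [x^n] f(x)^k ) / (lam - lam^n)
a = [0.0, 1.0] + [0.0] * (N - 2)
for n in range(2, N):
    a[n] = sum(a[k] * fpow[k][n] for k in range(1, n)) / (lam - lam ** n)

def psi(x):
    return sum(a[j] * x ** j for j in range(N))

def f(x):
    return lam * x + x * x

x = 0.05
print(psi(f(x)), lam * psi(x))   # should agree closely
```

The denominators λ - λ^n never vanish when 0 < |λ| < 1, which is the same non-resonance condition that makes the triangular diagonalization well defined.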
<hr>
For a "preview" of that toolbox, see perhaps page 21 ff in http://go.helmsnet.de/math/tetdocs/Cont...ration.pdf which discusses the diagonalization for t^x - 1 with its Schröder function (and the "matrix logarithm" method for the e^x - 1 and sin(x) functions, which have no diagonalization in the finite-size case).
Gottfried Helms, Kassel
