Iteration exercises: f(x)=x^2 - 0.5 ; Fixpoint-irritation...
#21
(10/19/2017, 09:33 PM)Gottfried Wrote:
(10/19/2017, 04:50 PM)sheldonison Wrote:
(10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman matrix is always based on the power series of a function f(x), more specifically of the conjugate g(x) = f(x+t_0) - t_0, where t_0 is the attracting fixpoint of f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.

2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](x), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...

So with the correct adaptation of the two required Carleman matrices and their Neumann series we correctly reproduce the iteration series in question in both directions.

Gottfried

Is there a connection between the Carleman matrix and Schröder's equation, $\psi(f(x)) = \lambda\,\psi(x)$?  Here lambda is the derivative at the fixed point, $\lambda = f'(x_0)$; and then the iterated function $g(x+1) = f(g(x))$ can be generated from the inverse Schröder function: $g(z) = \psi^{-1}(\lambda^z)$.

Does the solution to the Carleman matrix give you the power series for $\psi$?
I would like a matrix solution for the Schröder equation.  I have a pari-gp program for the formal power series for both $\psi$ and $\psi^{-1}$, iterating using Pari-gp's polynomials, but a matrix solution would be easier to port over to a more accessible programming language, and I thought maybe your Carleman solution might be what I'm looking for.

Hi Sheldon - yes, that connection is exceptionally simple. The Schröder function is simply expressed by the eigenvector matrices which occur in the diagonalization of the Carleman matrix for the function f(x).

In my notation, with a Carleman matrix F for your function f(x) and a vector V(x) = [1, x, x^2, x^3, ...] we have

$ V(x) \cdot F = V(f(x)) $

Then by diagonalization we find a solution in M and D such that

$ F = M \cdot D \cdot M^{-1} $

The software must take care that the eigenvectors in M are correctly scaled: for instance in the triangular case (where f(x) has no constant term) the diagonal of M must be the unit diagonal, such that M is indeed of Carleman form. (Using M=mateigen(F) in Pari/GP does not suffice; you must scale the columns of M appropriately. I've built my own eigen-solver for triangular matrices which I can provide to you.)
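To make the convention concrete for readers who want to port this, here is a minimal Python/numpy sketch (my own names and code, not part of Gottfried's toolbox) that builds a truncated Carleman matrix by repeated polynomial multiplication and checks the identity V(x)·F = V(f(x)) numerically:

```python
import numpy as np

def carleman(coeffs, n):
    """Truncated n x n Carleman matrix of f: column c holds the first n
    power-series coefficients of f(x)^c (column 0 is f^0 = 1)."""
    F = np.zeros((n, n))
    f = np.zeros(n)
    f[:min(len(coeffs), n)] = coeffs[:n]
    p = np.zeros(n)
    p[0] = 1.0                      # coefficients of f(x)^0
    for c in range(n):
        F[:, c] = p
        p = np.convolve(p, f)[:n]   # multiply by f, truncate to order n
    return F

# f(x) = x^2 + (1 - sqrt(3))*x, the lower-fixpoint example from this thread
lam = 1 - np.sqrt(3)
F = carleman([0.0, lam, 1.0], 6)

# V(x) . F = V(f(x)), exact in the early columns where truncation does not bite
x = 0.1
V = x ** np.arange(6)
print(V @ F[:, 1], x**2 + lam * x)   # both equal f(0.1)
```

Since f has no constant term, F comes out lower triangular, which is exactly the case where the finite truncation is harmless.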

Then we have

$ V(x) \cdot M = V(\psi(x)) \qquad \text{and} \qquad V(x) \cdot M^{-1} = V(\psi^{-1}(x)) $

so the second column of M holds the power-series coefficients of the Schröder function $\psi$, and the second column of M^{-1} those of its inverse.

We only need to pay attention to the problem that non-triangular Carleman matrices of finite size - the only ones available to our software packages - do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman matrices, for instance $f(x)=ax+b$, $f(x) = qx/(1+qx)$ or $f(x) = t^x-1$, or the like, where moreover the coefficient at the linear term is neither zero nor 1.

For the non-triangular matrices, for instance for $f(x)=b^x$, the diagonalization gives only rough approximations to an, in some sense, "best-possible" solution for fractional iteration, and its eigenvector matrices are in general not Carleman or truncated Carleman. But they nonetheless give real-to-real solutions also for $b > \eta$ and seem to approximate the Kneser solution as the size of the matrices increases.

You can have my Pari/GP toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization for $t^x-1$ such that the eigenvector matrices are of Carleman type and true truncations of the power series of the Schröder function $\psi$ (which the builtin eigensolver in Pari/GP does not take care of). If you are interested it is perhaps better to contact me via email, because the set of routines should come with some explanations and I expect some need for didactic hints.
-------
For a "preview" of that toolbox see page 21 ff. in http://go.helms-net.de/math/tetdocs/Cont...ration.pdf which discusses the diagonalization for $t^x - 1$ with its Schröder function (and the "matrix-logarithm" method for the $e^x - 1$ and $\sin(x)$ functions, which have no diagonalization in the finite-size case).

For example, here is the pari-gp program for the formal inverse Schröder function.  I don't know how to turn this into a matrix function, but not many programming languages support the powerful polynomial functions that pari-gp has.

Code:
formalischroder(fx,n) = {
  local(lambda,i,j,z,f1t,f2t,ns,f1s);
  lambda = polcoeff(fx,1);
  f1t=x;
  i=2;
  while (i<=n,
    f1s=f1t;
    f1t=f1t+acoeff*x^i+O(x^(i+1));
    f2t=subst(f1t,x,lambda*x)-subst(fx+O(x^(i+1)),x,f1t);
    z = polcoeff(f2t, i);
    z = subst(z,acoeff,x);
    ns=-polcoeff(z,0)/polcoeff(z,1);
    f1t=f1s+ns*x^i;
    i++;
  );
  return(Pol(f1t));
}
fz1=x^2+(1-sqrt(3))*x;
lambda1=polcoeff(fz2,1);
fs1=formalischroder(fz2,20);
superfunction1(z)=subst(fs2,x,lambda2^z);
- Sheldon
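Since portability to a more accessible language is the stated goal, here is a hedged Python/numpy sketch of the same formal computation (my own port and naming, not Sheldon's program): each coefficient of the inverse Schröder function sigma is determined from sigma(lam*x) = f(sigma(x)) by matching powers of x.

```python
import numpy as np

def compose(f, g, n):
    """First n+1 coefficients of f(g(x)) for formal series, assuming g[0] = 0."""
    out = np.zeros(n + 1)
    gp = np.zeros(n + 1)
    gp[0] = 1.0                                # g(x)^0
    garr = np.zeros(n + 1)
    garr[:min(len(g), n + 1)] = g[:n + 1]
    for fk in f[:n + 1]:
        out += fk * gp
        gp = np.convolve(gp, garr)[:n + 1]     # next power of g
    return out

def inverse_schroder(fcoeffs, n):
    """Coefficients s[0..n] of the formal inverse Schröder function
    sigma(x) = x + s2*x^2 + ..., solving sigma(lam*x) = f(sigma(x)).
    fcoeffs[k] is the x^k coefficient of f; fcoeffs[0] must be 0."""
    lam = fcoeffs[1]
    s = [0.0, 1.0] + [0.0] * (n - 1)
    for i in range(2, n + 1):
        # [x^i] of f(sigma(x)) with the unknown s[i] still set to 0;
        # s[i] itself would only enter through the linear term lam*s[i]
        ti = compose(fcoeffs, s, i)[i]
        s[i] = ti / (lam**i - lam)
    return s

lam = 1 - np.sqrt(3)
s = inverse_schroder([0.0, lam, 1.0], 5)
print(s)   # s[2] ≈ 0.788675134595, s[3] ≈ 4.64273441009, as in fs1 above
```

This matching-of-powers is what the acoeff/subst trick in the Pari code accomplishes symbolically.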
#22
(10/20/2017, 06:00 PM)sheldonison Wrote: For example, here is the pari-gp program for the formal inverse Schröder function.  I don't know how to turn this into a matrix function, but not many programming languages support the powerful polynomial functions that pari-gp has.


Sheldon - I find some unexplained terms: what is "acoeff"? How is it defined before it is used in "f1t"?

The same with "fz2" in "lambda1=pol..." and "fs2" in "superfunction1...subst(fs2..."

-------
Hmm, I replaced those unexplained terms with the best guesses I could make ("acoeff" as an indeterminate first, which is later given its value by reference in "subst(...)"):

Code:
\\ Sheldon's code::
fz1=x^2+(1-sqrt(3))*x
lambda1=polcoeff(fz1,1)
fs1=formalischroder(fz1,20)
superfunction1(z)=subst(fs1,x,lambda1^z)
This gives for me:
Code:
fs1 = x + 0.788675134595*x^2 + 4.64273441009*x^3 + 9.72047587679*x^4 + 51.2905072562*x^5 + O(x^6)  

so it seems my insertions are probably correct.

Now here is the computation with my matrix-toolbox:

Code:
fz1 = x^2 + (1- sqrt(3))*x            \\ use your function
F = mkCarleman ( polcoeffs(fz1,32) )  \\ procedure to make a carlemanmatrix from polynomial or series
F_eigen = tri_eigen(F)                \\ my diagonalization for triangular matrices gives M,D,M^-1 in
                                      \\ a result vector as components 2,3 and 4
Schr    = F_eigen[2][,2]              \\ put the coeffs for the Schröder function
                                      \\   from column 2 of M into separate vector                 
lam     = F_eigen[3][2]               \\ put the eigenvalue from D-component into variable "lam"
SchrInv = F_eigen[4][,2]              \\ put the coeffs for the Inverse Schröder function
                                      \\   from column 2 of M^-1 into separate vector                 
Ser(SchrInv) + O(x^6)                 \\ display inverse Schroeder function as a series
\\ ------
%77 = x + 0.788675134595*x^2 + 4.642734410*x^3 + 9.72047587679*x^4 + 51.2905072562*x^5 + O(x^6)
Gottfried Helms, Kassel
#23
(10/20/2017, 07:55 PM)Gottfried Wrote:
(10/20/2017, 06:00 PM)sheldonison Wrote: For example, here is the pari-gp program for the formal inverse Schröder function.  I don't know how to turn this into a matrix function, but not many programming languages support the powerful polynomial functions that pari-gp has.


Sheldon - I find some unexplained terms: what is "acoeff"? How is it defined before it is used in "f1t"?

The same with "fz2" in "lambda1=pol..." and "fs2" in "superfunction1...subst(fs2..."
acoeff is just an unknown, like "x" in the equation "x^3+2*x^2+3*x+1".  In the routine, Pari-gp treats acoeff as an unassigned variable just like "x"; it happens to be the multiplier for the i'th coefficient we are trying to determine.

The others are typos: I was originally going to post both superfunctions, from both fixed points.  The other fixed-point equation would be fz2=x^2+(1+sqrt(3))*x.  Whether |lambda|>1 or |lambda|<1 determines where the superfunction equation converges: for either negative or positive values of $z$.

Actually, the other fixed point also converges better, so that's what I meant to post.  Here superfunction1 converges quite well:
Code:
fz1=x^2+(1+sqrt(3))*x;
lambda1=polcoeff(fz1,1);
fs1=formalischroder(fz1,20);
superfunction1(z)=subst(fs1,x,lambda1^z);
- Sheldon
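To illustrate the convergence claim, here is a small Python sketch (my own, not Sheldon's program; for this quadratic f the coefficient matching in sigma(lam*x) = f(sigma(x)) collapses to a one-line recurrence) which evaluates the superfunction at the upper fixpoint and checks g(z+1) = f(g(z)) for negative z:

```python
import numpy as np

# inverse Schröder series for f(x) = x^2 + lam*x at the upper fixpoint; for this
# quadratic f, coefficient matching in sigma(lam*x) = f(sigma(x)) reduces to
# s_i = [x^i](sigma^2) / (lam^i - lam)
lam = 1 + np.sqrt(3)
N = 20
s = np.zeros(N + 1)
s[1] = 1.0
for i in range(2, N + 1):
    s[i] = np.convolve(s[:i], s[:i])[i] / (lam**i - lam)

def superfunction(z):
    """g(z) = sigma(lam^z); for |lam| > 1 this converges for negative z."""
    t = lam ** z
    return sum(s[k] * t**k for k in range(N + 1))

g, g1 = superfunction(-3.0), superfunction(-2.0)
print(g1, g**2 + lam * g)   # g(z+1) = f(g(z)) up to tiny truncation error
```

For negative z the argument lam^z is small, so the truncated series is already very accurate with 20 terms.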
#24
Ah, so I assumed correctly.

Ok, here is the solution for the upper fixpoint:
Code:
fz2=x^2+(1+sqrt(3))*x       \\ using upper fixpoint  
F = mkCarleman ( polcoeffs(fz2,32) )  \\ carleman matrix of size 32x32 (power series will have 32 coeffs)
F_eigen = tri_eigen(F) ;
Ser(F_eigen[4][,2])+O(x^6)   \\ inverse Schröder function at upper fixpoint
 -----------------
%82 = x + 0.2113248654*x^2 + 0.0239322565748*x^3 + 0.00174634543176*x^4 + 0.00009103437192*x^5 + O(x^6)
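As a sanity check on the leading coefficient (my own aside, not part of the toolbox): matching the x^2 terms in sigma(lam*x) = f(sigma(x)) with sigma(x) = x + a2*x^2 + ... gives a2*lam^2 = 1 + lam*a2, i.e. a2 = 1/(lam^2 - lam), which for lam = 1+sqrt(3) equals (3-sqrt(3))/6:

```python
import math

lam = 1 + math.sqrt(3)       # multiplier at the upper fixpoint
a2 = 1 / (lam**2 - lam)      # from matching x^2 in sigma(lam*x) = f(sigma(x))
print(a2)                    # ≈ 0.2113248654, the x^2 coefficient printed above
```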
Added: here are some basic Pari/GP procedures used so far:
Code:
{ polcoeffs(lp, flgd=-2, flgNoConst=0) = local(llp, lpd, lv, lv1, maxd, oldps, s);
      maxd=if(flgd==-1|flgd==-2,n,flgd);
            oldps=default(seriesprecision,,1);  \\ save old series-precision and adjust for current query
            s=default(seriesprecision,maxd,1);

      llp=Pol(lp);lpd=poldegree(llp);    \\ convert input into formal POL
            s=default(seriesprecision,oldps,1);  \\ reset old series precision

      if(flgd==-2,maxd=max(0,lpd+1));
      lv=vector(maxd);
           for(k=0,min(lpd,maxd-1),lv[1+k]=polcoeff(llp,k));
      if(flgNoConst,lv[1]=0);      \\ make a true ZERO if constant should vanish

      return(lv); }

\\ polcoeffs(lp, flgd=-2,flgNoConst=0) : uses a scalar entry containing a
\\ polynomial, converts it into a vector of coefficients (calling "polcoeff()" implicitly)
\\ flgd=-2: return poldegree+1 coefficients
\\ flgd=-1: return default (=n) coefficients
\\ flgd>=0: return flgd coefficients.


{ mkCarleman(lvec, serprec=n) = local(dim=length(lvec), tmpv, tmpser, tmpM, oldserprec);
              oldserprec=default(seriesprecision,,1);
              if(serprec<>oldserprec,default(seriesprecision,serprec,1));

     tmpv = Ser(lvec);
     tmpser = tmpv + O(x^dim);
     tmpM = matid(dim);
        for(c=2,dim,
             tmpM[,c]=polcoeffs(tmpser,dim)~ ;
             tmpser = tmpser*tmpv  + O(x^dim)
           );
     if(serprec<>oldserprec,default(seriesprecision,oldserprec,1));
     return(tmpM); }

\\ mkCarleman (vector-of-coefficients, seriesprecision = n)
\\ creates from a vector of coefficients (assumed to be of the power series of some function f(x))
\\ a matrix of the Carleman type (transposed relative to the Wikipedia convention)
\\ serprec : if the dimension should differ from the order of the supplied polynomial, give it in "serprec"


{ tri_eigen(U, dim=9999, numfmt=1) = local(u=U[2,2], UEW, UEWi, t=exp(u), uv);
       dim=max(1,min(rows(U),dim));

       uv=vectorv(dim,r,U[r,r]);
       UEW=numfmt*matid(dim);  \\ is also the Carlemanmatrix for Schröder-function
           for(c=1,dim-1,
                for(r=c+1,dim,
                      UEW[r,c]=sum(k=c,r-1,U[r,k]*UEW[k,c])/(uv[c]-uv[r])
              ));

       UEWi=numfmt*matid(dim); \\ UEWi shall become the inverse of UEW, is also the
                                      \\ Carleman-matrix of the inverse Schröder-function
       for(r=2,dim,
            forstep(c=r-1,1,-1,
                 UEWi[r,c]=sum(k=0,r-1-c,U[r-k,c]*UEWi[r,r-k])/(uv[r]-uv[c])
           ));

       return([[u,t,exp(u/t)],UEW,uv,UEWi]);}

\\ tri_eigen(U, dim=9999, numfmt=1) - performs diagonalization of lower triangular matrices
\\ if inputmatrix is of Carlemantype, then the Eigenmatrices are of the Carlemantype as well
\\ returns vector of [coeffs, M, D, M^-1], M being the eigenvector matrix, D the vector of eigenvalues (the diagonal)
\\                                        coeffs being [u=log(t), t, b=exp(u/t)=t^(1/t)]
\\    the diagonalization can also be performed if the input matrix is Carleman and its parameter u is only symbolic
\\
\\ dim : if 9999 use size of input matrix
\\     : if smaller use size of top-left submatrix
\\
\\ numfmt : 1   : return integer values if possible; sometimes faster, sometimes slower than with real values
\\          1.0 : compute in float format (with standard precision in Pari/GP). If large factorials
\\                occur in large-dimension matrices then using numfmt=1.0 prevents gigantic integers
\\                or rational numbers with gigantic components.

{ fmt(realprec=-1, dispdigits=-1, flg=1) = local(oldp, oldf);
     oldp=precision(0.0);
     oldf=default(format,,1);
     if(realprec>-1,default(realprecision,realprec,1);oldp=realprec);
     if(dispdigits>-1,oldf=Str("g0.",dispdigits));
     oldf=default(format,oldf,1);
     if(flg,print("prec=",oldp," display=",oldf)); }
\\ fmt(200,12) sets Pari/GP real precision to 200 and display digits to 12
\\ if only one parameter should be changed, the other stays untouched. fmt(,8) ; fmt(400) etc


\\ additional remarks:
\\  I use the global variable "n" for the default size of matrices and vectors. (It is "n"
\\  simply because I'm an old statistician... )
\\  By default, I set in my session-initialization n=32 . With this size all standard
\\  matrix-constants are also precomputed to be permanently available in the session.
\\  If "n" should be changed either then the session-initialization should
\\  be recalled so the matrix-constants are recomputed too (if they are needed at all).
\\
\\  I've implemented the short routine "fmt(internal-precision,display-precision)" and
\\  initialize sessions with "fmt(200,12)" to have by default 200 dec digits precision for
\\  real arithmetic. This can be changed at any time, but remember that any real
\\  constants already computed (like bases for exponentiation, computed fixpoints etc.)
\\  should then be recomputed as well.
\\
\\  The full set of routines for my (dynamical) matrix-toolbox-procedures is too large
\\  to be added here, I might send it on personal request.
Gottfried
Gottfried Helms, Kassel

