Taylor polynomial. System of equations for the coefficients.
Tetration Forum (https://math.eretrandre.org/tetrationforum) + Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1) + Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3) + Thread: Taylor polynomial. System of equations for the coefficients. (/showthread.php?tid=993)

RE: Taylor polynomial. System of equations for the coefficients.  marraco  05/06/2015

(05/05/2015, 07:40 AM) Gottfried Wrote: P*A = A*Bb

I think that we are speaking of different things. Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem, looking for the same solution. But as I understand it, the Carleman matrix A only contains powers of the a_i coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to have products of the a_i coefficients. Maybe it is a power of A*Bb, or something like A^Bb?

The Pascal matrix on the blue side is the exponential of a much simpler matrix. Maybe the equation can be greatly simplified by taking a logarithm of both sides.

RE: Taylor polynomial. System of equations for the coefficients.  Gottfried  05/06/2015

(05/06/2015, 02:42 PM) marraco Wrote: (05/05/2015, 07:40 AM) Gottfried Wrote: P*A = A*Bb

No, no... In your convolution formula you have, inside the double sum, powers of power series (the red-colored formula in your first posting) with the coefficients of the a() function (not of its single coefficients), and if I decode this correctly, then this matches perfectly the composition

V(x)*A*Bb = (V(x)*A)*Bb = [1, a(x), a(x)^2, a(x)^3, ...]*Bb = V(a(x))*Bb

Only, after removing the left V(x) vector, we do things in a different order:

V(x)*A*Bb = V(x)*(A*Bb)

and I discuss the remaining matrix in the parentheses on the rhs. That V(x) can be removed on the rhs and on the lhs of the matrix equation must be justified; if divergent series occur anywhere, this becomes difficult, but as long as we have nonzero intervals of convergence for all dot products, this exploitation of associativity can be done / should be possible (as far as I think).
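Gottfried's identity V(x)*A*Bb = V(a(x))*Bb rests on Carleman matrices composing multiplicatively. A minimal numeric sketch of that property (my names and code, in Python rather than the Pari/GP used in the thread): with the convention that column j of M[f] holds the coefficients of f(x)^j, so that V(x)*M[f] = V(f(x)), the compositionally inverse pair f(x) = x/(1-x) and g(x) = x/(1+x) must satisfy M[f]*M[g] = I, and the check is exact on truncated matrices because both are lower triangular.

```python
from fractions import Fraction

N = 8  # truncation size

def mul_trunc(p, q, n=N):
    """Multiply two truncated power series (lists of coefficients)."""
    r = [Fraction(0)] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < n:
                r[i + j] += a * b
    return r

def carleman(f, n=N):
    """Column j holds the coefficients of f(x)^j, so V(x)*M = V(f(x))."""
    col = [Fraction(1)] + [Fraction(0)] * (n - 1)  # f^0 = 1
    M = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            M[i][j] = col[i]
        col = mul_trunc(col, f)
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# f(x) = x/(1-x) = x + x^2 + ..., g(x) = x/(1+x) = x - x^2 + ...
f = [Fraction(0)] + [Fraction(1)] * (N - 1)
g = [Fraction(0)] + [Fraction((-1) ** (k - 1)) for k in range(1, N)]

# g(f(x)) = x, so M[f]*M[g] should be the identity (exactly, since both
# matrices are lower triangular and truncation introduces no error here).
P = matmul(carleman(f), carleman(g))
assert all(P[i][j] == (1 if i == j else 0) for i in range(N) for j in range(N))
```

With a function whose constant term is nonzero (like exp), the matrices are no longer triangular and truncation only gives approximations, which is exactly the caveat Gottfried raises about justifying the removal of V(x).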
(The goal of all this is, of course, to improve the computability of A, for instance by diagonalization of P or Bb and algebraic manipulation of the occurring matrix factors.) Anyway, I hope I didn't actually misread you (which is always possible, given the lot of coefficients...) Gottfried

RE: Taylor polynomial. System of equations for the coefficients.  marraco  05/07/2015

I misinterpreted what the Carleman matrix was. I thought that it contained the powers of the derivatives of a function (evaluated at zero), but it contains the derivatives of the powers of a function, so it actually has the products of the aᵢ coefficients (of the bᵢ in your notation).
________________
I tried to use this method to find the coefficients for exponentiation: bˣ = Σ bᵢxⁱ. The condition is b⁽ˣ⁺¹⁾ = b·bˣ, that is, Σ bᵢ(x+1)ⁱ = b·Σ bᵢxⁱ, which translates into P·[bᵢ] = b·[bᵢ], or [P - b·I]·[bᵢ] = 0. The solution should be bᵢ = ln(b)ⁱ/i!. I found bᵢ = c·(ln(b)ⁱ/i!), where c is an arbitrary constant, because, obviously, c·b⁽ˣ⁺¹⁾ = b·(c·bˣ).
I was bugged by the fact that every equation for solving tetration I tried seems to have at least one degree of freedom. I think now that it should be explained by at least one arbitrary constant in the solution. This looks analogous to the constants found in the solutions of differential equations, so I wonder whether the evolvent of the curves generated by the constant is also a solution, and what its meaning is.

RE: Taylor polynomial. System of equations for the coefficients.  marraco  01/13/2016

So, we want the vector [aᵢ] from the matrix equation, where "r" is the row index of the first matrix on the left, and "i" its column index. Note that in the last equation, both r and i start counting from zero for the first row and column.
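marraco's eigenvector observation, that the coefficient vector of bˣ satisfies P·[bᵢ] = b·[bᵢ] with bᵢ = ln(b)ⁱ/i!, is easy to check numerically. A small sketch (my code; I use the convention that the shift x → x+1 sends coefficient j to Σᵢ C(i,j)·bᵢ, and truncation makes the identity only approximate):

```python
import math

b = 2.0
L = math.log(b)        # ln(b)
N = 40                 # truncation size

# Coefficients of b^x = sum_i (ln b)^i / i! * x^i
coeffs = [L**i / math.factorial(i) for i in range(N)]

# Applying x -> x+1 to the series: new_j = sum_i C(i,j) * b_i
shifted = [sum(math.comb(i, j) * coeffs[i] for i in range(N))
           for j in range(N)]

# b^(x+1) = b * b^x, so the shifted coefficients must equal b * coeffs
for j in range(10):
    assert abs(shifted[j] - b * coeffs[j]) < 1e-12
```

Scaling coeffs by any constant c leaves the check intact, which is marraco's point about the arbitrary constant: the condition only pins down the eigenvector up to scale.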
______________________________________
P(i) is the partition function. The first few values of the partition function are (starting with p(0)=1): 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... (sequence A000041 in OEIS; the link has valuable information about the partition function).
______________________________________
The multiplicity mⱼ is the number of repetitions of the integer j in the partition of the number i.
______________________________________
Solving the equation
If we make the substitution, we simplify the first equation.
______________________________________
Special base.
This equation suggests a special number, which is m = 1.7632228343518967102252017769517070804...
m is defined by ln(m) = 1/m (equivalently, m = e^(1/m)). For the base a = m, the equation gets simplified. But let's forget about m for now.
______________________________________
We are now very close to the solution. The only obstacle remaining is the product. If we can find a substitution that gets rid of it, we have the solution. At this point we only need to substitute, where f is arbitrary. The choice of f, very probably, determines the value for °a, and the branch of tetration.

RE: Taylor polynomial. System of equations for the coefficients.  marraco  01/14/2016

^^ Sorry. I made a big mistake. We cannot make that substitution, of course. Maybe it would work as an approximation, because we know that the coefficient sequence tends very rapidly to a line on a logarithmic scale. Anyway, it would be of little use. We know that the aᵢ are the derivatives (at zero), so a Fourier or Laplace transform would turn the derivatives into products. But that would mess with the rest of the equation.

RE: Taylor polynomial. System of equations for the coefficients.  marraco  01/14/2016

Here I expand a row, in the hope that it helps somebody to digest the equation.
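The partition numbers quoted from A000041 are easy to regenerate; a minimal dynamic-programming sketch (my code, not from the thread), counting partitions with parts up to k for k = 1..n:

```python
def partition_numbers(n):
    """p[i] = number of partitions of i."""
    p = [1] + [0] * n
    for k in range(1, n + 1):        # allow parts of size k
        for i in range(k, n + 1):
            p[i] += p[i - k]
    return p

p = partition_numbers(30)
assert p[:10] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
assert p[30] == 5604
```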
(01/13/2016, 04:32 AM) marraco Wrote: We are now very close to the solution. The only obstacle remaining is the product.
The product is what I called "the integer divisor".
(05/03/2015, 04:35 AM) marraco Wrote: ^^
Here I expanded the row for i=9 of the equation, after the substitution. The problematic terms come from those factors. The q! divisors may emerge not from the term raised to q; the q! could emerge from the absence of the other terms.

RE: Taylor polynomial. System of equations for the coefficients.  marraco  03/13/2016

(01/13/2016, 04:32 AM) marraco Wrote: So, we want the vector [aᵢ] from the matrix equation.
Thanks to Daniel's advice, it is easy to see that the red side can be derived as a direct application of Faà di Bruno's formula to the blue equation.
(04/30/2015, 03:24 AM) marraco Wrote: We want the coefficients aᵢ of this Taylor expansion.

RE: Taylor polynomial. System of equations for the coefficients.  Gottfried  08/23/2016

(05/01/2015, 01:57 AM) marraco Wrote: This is a numerical example for base a=e
Quote: It is a non-linear system of equations, and the solution for this particular case is:
This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you've just applied things analogously to how I do them, only you didn't express it in matrix formulae. To explain the basic idea of a Carleman matrix: consider a power series. We express it in terms of the dot product of two infinite-sized vectors, where the column vector A_1 contains the coefficients and the row vector V(x) contains the powers of x. Now, to make that idea valuable for function composition / iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input V(x). This leads to the idea of Carleman matrices: we just collect the vectors, where vector number j contains the coefficients for the j'th power of f(x), such that ...
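marraco's question about where the q! divisors come from is answered by the multinomial theorem: [xⁿ] (Σₖ aₖxᵏ)^q is a sum over the partitions of n into exactly q parts, each partition contributing (q!/∏ⱼ mⱼ!)·∏ⱼ aⱼ^(mⱼ), where mⱼ is the multiplicity of the part j; the missing repeated factors are what shrink q! to a multinomial coefficient. A sketch checking that against plain convolution (my helper names, arbitrary small integer coefficients):

```python
from collections import Counter
from math import factorial, prod

a = [0, 3, 1, 4, 1, 5, 9, 2, 6, 5]   # arbitrary coefficients, a_0 = 0
n, q = 9, 4

# Left side: [x^n] f(x)^q by repeated truncated multiplication
def mul_trunc(p, r, size):
    out = [0] * size
    for i, x in enumerate(p):
        for j, y in enumerate(r):
            if i + j < size:
                out[i + j] += x * y
    return out

f_pow = [1] + [0] * n
for _ in range(q):
    f_pow = mul_trunc(f_pow, a, n + 1)
lhs = f_pow[n]

# Right side: sum over partitions of n into exactly q parts (each >= 1)
def partitions(n, q, max_part):
    if q == 0:
        if n == 0:
            yield []
        return
    for part in range(min(n - q + 1, max_part), 0, -1):
        for rest in partitions(n - part, q - 1, part):
            yield [part] + rest

rhs = 0
for parts in partitions(n, q, n):
    m = Counter(parts)                       # m[j] = multiplicity of part j
    multinomial = factorial(q) // prod(factorial(c) for c in m.values())
    rhs += multinomial * prod(a[j] ** c for j, c in m.items())

assert lhs == rhs
```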
in a matrix, getting the operation V(x)*M = V(f(x)). Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions for a fairly wide range of algebra. For instance, the operation INC and its h'th iterate ADD are then only a problem of powers of P. The operation MUL needs a diagonal Vandermonde vector. The operation DEXP (= exp(x) - 1) needs the matrix of Stirling numbers of the 2nd kind, similarity-scaled by factorials; and, as an exercise, we see that if we right-compose this with the INC operation, we get the ordinary EXP operator, for which I give the matrix name B. Of course, iterations of EXP then require only powers of the matrix B.
To see that this is really useful, we need a lemma on the uniqueness of power series. That is, in the new matrix notation: if a function is continuous for an (even small) continuous range of the argument x, then the coefficients in A_1 are uniquely determined. That uniqueness of the coefficients in A_1 is the key that lets us look at the compositions of Carleman matrices alone, without reference to the notation with the dot product by V(x); for instance, we can make use of the analysis of Carleman matrix decompositions and can analyze them directly, for instance to arrive at the operation LOGP: log(1+x).
<hr>
Now I relate this to the derivation which I've quoted from marraco's post.
(05/01/2015, 01:57 AM) marraco Wrote: This is a numerical example for base a=e
First we see the Pascal matrix P on the lhs in action, then the coefficients of the Abel function in a vector, say, A_1. So the left-hand side is P*A_1. To make things smoother, we first assume A as the complete Carleman matrix, expanded from A_1.
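Gottfried's exercise, that right-composing DEXP with INC yields the EXP operator B, can be checked exactly on truncated matrices, because M[e^x - 1] is lower triangular and each column of M[x+1] has only finitely many nonzero entries. A sketch with exact rationals (my code, same column convention as above: column j of M[f] holds the coefficients of f(x)^j; the identity used is (e^x - 1)^j = j!·Σᵢ S2(i,j)·xⁱ/i!):

```python
from fractions import Fraction
from math import comb, factorial

N = 12

# Stirling numbers of the 2nd kind by the usual recurrence
S2 = [[0] * N for _ in range(N)]
S2[0][0] = 1
for i in range(1, N):
    for k in range(1, i + 1):
        S2[i][k] = k * S2[i - 1][k] + S2[i - 1][k - 1]

# DEXP: [x^i] (e^x - 1)^j = j! * S2(i, j) / i!  (factorially scaled Stirling)
dexp = [[Fraction(factorial(j) * S2[i][j], factorial(i)) for j in range(N)]
        for i in range(N)]

# INC: [x^i] (x + 1)^j = C(j, i)
inc = [[Fraction(comb(j, i)) for j in range(N)] for i in range(N)]

# B = M[dexp] * M[inc] should be the Carleman matrix of exp:
# column j holds the coefficients of e^(j*x), i.e. [x^i] e^(j*x) = j^i / i!
B = [[sum(dexp[i][k] * inc[k][j] for k in range(N)) for j in range(N)]
     for i in range(N)]
for i in range(N):
    for j in range(N):
        assert B[i][j] == Fraction(j**i, factorial(i))
```

The composition order matters: with V(x)*M[f] = V(f(x)), composing f first and then g corresponds to the product M[f]*M[g], so exp = inc∘dexp becomes B = M[dexp]*M[inc].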
If we "complete" that left-hand side to discuss it in power series, we have P*A. It is very likely that the author wanted to derive the solution for the tetration equation; so we would have A*B for the right-hand side, and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to the descriptions in the rhs of the quoted matrix formula. What we can now do depends on the above uniqueness lemma: we can discard the V(x) reference, just writing P*A = A*B, and looking at the second column of B only, we get the system shown in the quoted post. So indeed, the system of equations of the initial post is expressible by P*A = A*B, and the OP searches for a solution for A.
<hr>
While I do not yet have a solution for A this way, we can, for instance, note that if A is invertible, then the equation can be put in a Jordan form: B = A^-1 * P * A, which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix; and, having a Jordan solver for finite matrix sizes, one could try whether, on increasing the matrix size, the Jordan solutions converge to some limit matrix A.
As an alternative: looking at the "regular tetration" and the Schröder function (including recentering the power series around the fixed point), one gets a simple solution just by the diagonalization formulae for triangular Carleman matrices, which follow the same formal analysis using the "matrix toolbox" and which can, for finite size and numerical approximations, nicely be constructed using the matrix features of Pari/GP.
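To illustrate the "regular tetration" route Gottfried mentions: recentering base √2 at its lower fixed point 2 gives f(x) = (√2)^(x+2) - 2 = 2(e^(cx) - 1) with c = ln(√2), so f(0) = 0 and f'(0) = ln 2, and the truncated Carleman matrix of f is lower triangular. Its principal triangular square root is then the Carleman matrix of the regular half-iterate. This is only a sketch of the idea in Python (the thread itself works in Pari/GP; all names are mine), using a direct triangular square root instead of explicit diagonalization:

```python
import math

N = 16
c = math.log(2) / 2            # ln(sqrt(2))

# f(x) = 2*(e^(c*x) - 1), i.e. (sqrt(2))^(x+2) - 2 recentred at the fixed point
f = [0.0] + [2 * c**k / math.factorial(k) for k in range(1, N)]

def mul_trunc(p, q):
    r = [0.0] * N
    for i in range(N):
        for j in range(N - i):
            r[i + j] += p[i] * q[j]
    return r

def carleman(s):
    """Column j holds the coefficients of s(x)^j (lower triangular here)."""
    M = [[0.0] * N for _ in range(N)]
    col = [1.0] + [0.0] * (N - 1)
    for j in range(N):
        for i in range(N):
            M[i][j] = col[i]
        col = mul_trunc(col, s)
    return M

M = carleman(f)

# Principal square root of a lower-triangular matrix, diagonal by diagonal
X = [[0.0] * N for _ in range(N)]
for i in range(N):
    X[i][i] = math.sqrt(M[i][i])          # = (ln 2)^(i/2)
for d in range(1, N):
    for j in range(N - d):
        i = j + d
        s = sum(X[i][k] * X[k][j] for k in range(j + 1, i))
        X[i][j] = (M[i][j] - s) / (X[i][i] + X[j][j])

# Column 1 of X holds the coefficients of the half-iterate g, with g(g(x)) = f(x)
def evaluate(coeffs, x):
    return sum(a * x**k for k, a in enumerate(coeffs))

g = [X[i][1] for i in range(N)]
x = 0.1
assert abs(evaluate(g, evaluate(g, x)) - 2 * (math.exp(c * x) - 1)) < 1e-9
```

Because everything is triangular, the first N coefficients of the half-iterate come out exactly (up to roundoff), in contrast to the non-triangular P*A = A*B problem for base e, where only increasing the matrix size gives the approximations discussed above.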