Taylor polynomial. System of equations for the coefficients. - Printable Version
+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Taylor polynomial. System of equations for the coefficients. (/showthread.php?tid=993)
RE: Taylor polynomial. System of equations for the coefficients. - marraco - 05/06/2015

(05/05/2015, 07:40 AM)Gottfried Wrote: P*A = A*Bb

I think that we are speaking of different things. Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem, looking for the same solution.

But as I understand it, the Carleman matrix A only contains powers of the aᵢ coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to contain products of different aᵢ coefficients (like a₁·a₂).

The Pascal matrix on the blue side is the exponential of a much simpler matrix (the matrix whose only nonzero entries are 1, 2, 3, ... on the subdiagonal). Maybe the equation can be greatly simplified by taking a logarithm of both sides.

RE: Taylor polynomial. System of equations for the coefficients. - Gottfried - 05/06/2015

(05/06/2015, 02:42 PM)marraco Wrote:
(05/05/2015, 07:40 AM)Gottfried Wrote: P*A = A*Bb

No, no ... In your convolution-formula you have, in the inner of the double sum, powers of power series (the red-colored formula):

V(x)*A * Bb = (V(x)*A) * Bb = [1, a(x), a(x)^2, a(x)^3, ...] * Bb = V(a(x))*Bb

Only that, after removing the left V(x)-vector, we do things in a different order: V(x)*A * Bb = V(x)*(A * Bb), and I discuss the remaining matrix in the parentheses on the rhs.

That V(x) can be removed on the rhs and on the lhs of the matrix-equation must be justified; if divergent series occur anywhere this becomes difficult, but as long as we have nonzero intervals of convergence for all dot-products, this exploitation of associativity can / should be possible (as far as I think). (The goal of all this is of course to improve the computability of A, for instance by diagonalization of P or Bb and algebraic manipulation of the occurring matrix-factors.)

Anyway - I hope I didn't actually misread you (which is always possible given the lot of coefficients... )

Gottfried

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 05/07/2015

I misinterpreted what the Carleman matrix was. I thought that it contained the powers of the derivatives of a function (evaluated at zero), but it contains the derivatives of the powers of a function, so it actually does contain the products of the aᵢ coefficients (the bᵢ in your notation).

________________

I tried to use this method to find the coefficients for exponentiation: bˣ = Σ bᵢ·xⁱ

The condition is b⁽ˣ⁺¹⁾ = b·Σ bᵢ·xⁱ, which translates into P·[bᵢ] = b·[bᵢ], or [P - b·I]·[bᵢ] = 0.

The solution should be bᵢ = ln(b)ⁱ / i!

I found bᵢ = c·(ln(b)ⁱ / i!), where c is an arbitrary constant, because, obviously, c·b⁽ˣ⁺¹⁾ = b·(c·bˣ).

I was bothered by the fact that every equation for solving tetration I tried seems to have at least one degree of freedom. I now think it should be explained by (at least) one arbitrary constant in the solution. This looks analogous to the constants found in the solutions of differential equations, so I wonder whether the evolvent of the family of curves generated by varying the constant is also a solution, and what its meaning is.

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/13/2016

So, we want the vector [aᵢ] that solves the system, where "r" is the row index of the first matrix on the left, and "i" is its column index. Note that in the last equation, both r and i start counting from zero for the first row and column.
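A minimal sketch of the point conceded above, in Python/sympy rather than the Pari/GP toolbox mentioned later in the thread (the truncation order N and the helper name carleman_entry are illustrative choices, not the thread's): the entries of a Carleman matrix are the coefficients of the powers of a(x), so they do contain mixed products of different aᵢ, which is presumably why the partition function appears just below.

Code:
import sympy as sp

N = 5                                    # truncation order (illustrative choice)
x = sp.symbols('x')
a = sp.symbols('a0:%d' % N)              # generic coefficients a0 .. a4
series = sum(a[i] * x**i for i in range(N))

# One common convention: entry (r, k) of the Carleman matrix is the
# coefficient of x^r in a(x)^k, truncated at order N.
def carleman_entry(r, k):
    return sp.expand(series**k).coeff(x, r)

# The entry for row r = 3, column k = 2 is a sum of mixed products:
print(carleman_entry(3, 2))              # 2*a0*a3 + 2*a1*a2 (up to term order)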
______________________________________

P(i) is the partition function. The first few values of the partition function are (starting with p(0)=1):

1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... (sequence A000041 in OEIS; the link has valuable information about the partition function).

______________________________________

Solving the equation

If we do the substitution

______________________________________

Special base.

This equation suggests a special number, which is m = 1.7632228343518967102252017769517070804...

m is defined by ln(m) = 1/m (equivalently, m·ln(m) = 1). For the base a = m, the equation gets simplified to:

But let's forget about m for now.

______________________________________

We are now very close to the solution. The only obstacle remaining is the product:

If we can do a substitution that gets rid of it, we have the solution:

At this point we only need to substitute ... and we get:

The choice of f, very probably, determines the value for °a, and the branch of tetration.

(01/03/2016, 11:24 PM)marraco Wrote:

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/14/2016

^^ Sorry. I made a big mistake. We cannot substitute

Maybe

We know that the

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/14/2016

Here I make an expansion of a row, in the hope that it helps somebody to digest the equation.

(01/13/2016, 04:32 AM)marraco Wrote: We are now very close to the solution. The only obstacle remaining is the product:

The product is what I called "the integer divisor".

(05/03/2015, 04:35 AM)marraco Wrote:

^^ Here I expanded the row for i=9 of the equation, after the substitution

The problematic terms come from the factors

For example, the term

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 03/13/2016

(01/13/2016, 04:32 AM)marraco Wrote: So, we want the vector

Thanks to Daniel's advice, it is easy to see that the red side can be derived as a direct application of Faà di Bruno's formula to the blue/red equation.

(04/30/2015, 03:24 AM)marraco Wrote: We want the coefficients aᵢ of this Taylor expansion:

It is easy to see on the red side that

RE: Taylor polynomial. System of equations for the coefficients. - Gottfried - 08/23/2016

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e
Quote: It is a non-linear system of equations, and the solution for this particular case is:

This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix-toolbox, because you've just applied things analogously to how I do this, only you didn't express it in matrix-formulae.

To explain the basic idea of a Carleman matrix: consider a power series f(x). We express this in terms of the dot-product of two infinite-sized vectors, f(x) = V(x)·A_1, where V(x) is the row vector [1, x, x^2, x^3, ...] and the column-vector A_1 contains the coefficients of f.

Now, to make that idea valuable for function-composition / -iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input (a vector V(f(x)) rather than the scalar f(x)). This leads to the idea of Carleman matrices: we just generate the vectors of coefficients of the powers of f and collect them as the columns of a matrix.

Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions for a fairly wide range of algebra.
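Before the individual toolbox operations, a small self-contained sketch of the idea just described, again in Python/sympy as a stand-in for the Pari/GP toolbox (the test series f and g, the size N and the helper name carleman are illustrative choices): a truncated Carleman matrix A_f satisfies V(x)*A_f = V(f(x)), so composition of functions becomes multiplication of matrices.

Code:
import sympy as sp

x = sp.symbols('x')
N = 6                                     # truncation order (illustrative choice)

def carleman(expr, n=N):
    # A[r][k] = coefficient of x^r in expr^k, so that V(x)*A = V(expr(x)).
    return sp.Matrix(n, n, lambda r, k:
                     sp.expand(sp.series(expr**k, x, 0, n).removeO()).coeff(x, r))

f = x + x**2        # f(0) = 0 keeps the truncated matrices triangular and exact
g = 2*x + x**3

A_f, A_g = carleman(f), carleman(g)

# V(x)*A_f*A_g = V(f(x))*A_g = V(g(f(x))), so A_f*A_g must be the Carleman
# matrix of the composition g(f(x)) (exactly, up to the truncation order).
gof = sp.expand(sp.series(g.subs(x, f), x, 0, N).removeO())
assert A_f * A_g == carleman(gof)
print("A_f * A_g equals the Carleman matrix of g(f(x)) up to order", N)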
For instance, the operation INC and its h'th iteration ADD is then only a problem of powers of P.

The operation MUL needs a diagonal Vandermonde vector:

The operation DEXP (= exp(x)-1) needs the matrix of Stirling numbers of the 2nd kind, similarity-scaled by factorials:

and as an exercise we see that if we right-compose this with the INC operation, we get the ordinary EXP operator, for which I give the matrix-name B:

Of course, iterations of the EXP then require only powers of the matrix B.

To see that this is really useful, we need a lemma on the uniqueness of power series. That is, in the new matrix-notation: if a function is represented by a power series V(x)·A_1 on some nonzero interval of convergence, then the coefficient vector A_1 is uniquely determined. That uniqueness of the coefficients in A_1 is the key: we can look at the compositions of Carleman matrices alone, without reference to the notation with the dot-product by V(x), and, for instance, we can make use of the analysis of Carleman-matrix decompositions and analyze directly, for instance to arrive at the operation LOGP : log(1+x).

______________________________________

Now I relate this to the derivation which I've quoted from marraco's post.

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e

First we see the Pascal matrix P on the lhs in action, then the coefficients. To make things smoother, we first assume A as a complete Carleman matrix, expanded from A_1. If we "complete" that left hand side to discuss this in power series, we have

It is very likely that the author wanted to derive the solution for the equation, and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to that description on the rhs of the quoted matrix-formula.

What we can now do depends on the above uniqueness-lemma: we can discard the V(x)-reference and write the relation between the matrices alone. So indeed, that system of equations of the initial post is expressible by P*A = A*B, and the OP searches for a solution for A.

______________________________________

While I have - at the moment - no solution for A this way, we can, for instance, note that if A is invertible, then the equation can be put into a Jordan-like form B = A^-1 * P * A, which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix. Having a Jordan-solver for finite matrix sizes, one could then try whether, as the matrix size increases, the Jordan-solutions converge to some limit-matrix A.

For the alternative: looking at the "regular tetration" and the Schröder function (including recentering the power series around the fixpoint), one gets a simple solution just by the diagonalization-formulae for triangular Carleman matrices, which follow the same formal analysis using the "matrix-toolbox" and which can, for finite size and numerical approximations, nicely be constructed using the matrix-features of Pari/GP.
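To make the closing remark concrete, here is a hedged sketch of the "regular iteration by diagonalization" route for a toy series with fixed point 0, again in Python/sympy rather than Pari/GP (the test function f = x/2 + x^2, the size N and all helper names are illustrative choices, not the thread's). For tetration itself one would first recenter the power series around its fixpoint, as noted above; the sketch only shows the mechanics.

Code:
import sympy as sp

x = sp.symbols('x')
N = 6
f = x/2 + x**2                      # fixed point at 0, multiplier f'(0) = 1/2

def carleman(expr, n=N):
    # A[r][k] = coefficient of x^r in expr^k, so that V(x)*A = V(expr(x)).
    return sp.Matrix(n, n, lambda r, k:
                     sp.expand(sp.series(expr**k, x, 0, n).removeO()).coeff(x, r))

A = carleman(f)                     # lower triangular, diagonal 1, 1/2, 1/4, ...
Q, D = A.diagonalize()              # A = Q*D*Q^-1, distinct eigenvalues (1/2)^k

# Half iterate: take the square root of each eigenvalue and transform back.
D_half = sp.diag(*[sp.sqrt(D[i, i]) for i in range(N)])
A_half = Q * D_half * Q.inv()

# Column k = 1 of A_half holds the Taylor coefficients of g = f^[1/2].
g = sum(A_half[r, 1] * x**r for r in range(N))

# Check: g(g(x)) reproduces f(x) up to the truncation order.
gg = sp.expand(sp.series(g.subs(x, g), x, 0, N).removeO())
assert sp.simplify(gg - f) == 0
print("f^[1/2](x) =", sp.expand(g))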