Taylor polynomial. System of equations for the coefficients.

marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
05/06/2015, 02:42 PM (This post was last modified: 05/06/2015, 02:52 PM by marraco.)

(05/05/2015, 07:40 AM) Gottfried Wrote: P*A = A*Bb

I think that we are speaking of different things. Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem, looking for the same solution.

But as I understand it, the Carleman matrix A only contains powers of the a_i coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to have products of a_i coefficients (like ). Maybe it is a power of A.Bb, or something like A^Bb?

The Pascal matrix on the blue side is the exponential of a much simpler matrix. Maybe the equation can be greatly simplified by taking the logarithm of both sides.

I have the result, but I do not yet know how to get it.

Gottfried   Ultimate Fellow   Posts: 787   Threads: 121   Joined: Aug 2007
05/06/2015, 04:17 PM (This post was last modified: 05/06/2015, 06:49 PM by Gottfried.)

(05/06/2015, 02:42 PM) marraco Wrote: [...] Maybe it is a power of A.Bb, or something like A^Bb?

No, no ...
In your convolution formula you have, inside the double sum, powers of power series (the red-colored formula in your first posting) with the coefficients of the a()-function (not of its single coefficients), and if I decode this correctly, then this matches perfectly the composition

V(x)*A * Bb = (V(x)*A) * Bb = [1, a(x), a(x)^2, a(x)^3, ...] * Bb = V(a(x)) * Bb

Only, after removing the left V(x)-vector, we do things in a different order:

V(x)*A * Bb = V(x)*(A * Bb)

and I discuss the remaining matrix in the parentheses on the rhs.

That V(x) can be removed on the rhs and on the lhs of the matrix equation must be justified; if divergent series occur anywhere, this becomes difficult, but as far as we have nonzero intervals of convergence for all dot products, this exploitation of associativity can be done / should be possible (as far as I think).

(The goal of all this is of course to improve the computability of A, for instance by diagonalization of P or Bb and algebraic manipulation of the occurring matrix factors.)

Anyway - I hope I didn't actually misread you (which is always possible given the lot of coefficients... )

Gottfried
Gottfried Helms, Kassel

marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
05/07/2015, 09:45 AM (This post was last modified: 05/07/2015, 10:29 AM by marraco.)

I misinterpreted what the Carleman matrix was. I thought that it contained the powers of the derivatives of a function (evaluated at zero), but it contains the derivatives of the powers of a function, so it actually has the products of the aᵢ coefficients (of the bᵢ in your notation).
________________
I tried to use this method to find the coefficients for exponentiation: bˣ = Σ bᵢ.xⁱ

The condition is b^(x+1) = b.Σ bᵢ.xⁱ, which translates into P.[bᵢ] = b.[bᵢ], or [P - b.I].[bᵢ] = 0

The solution should be bᵢ = ln(b)ⁱ/i!. I found bᵢ = c.(ln(b)ⁱ/i!), where c is an arbitrary constant, because, obviously, c.b⁽ˣ⁺¹⁾ = b.(c.bˣ).

It bugged me that every equation I tried for solving tetration seems to have at least one degree of freedom. I now think it is explained by (at least) one arbitrary constant in the solution. This looks analogous to the constants found in the solutions of differential equations, so I wonder whether the evolvent of the curves generated by the constant is also a solution, and what its meaning is.

I have the result, but I do not yet know how to get it.

marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
01/13/2016, 04:32 AM (This post was last modified: 01/14/2016, 12:41 AM by marraco.)

So, we want the vector , from the matrix equation:

where "r" is the row index of the first matrix on the left, and "i" its column index. Note that in the last equation, both r and i start counting from zero for the first row and column.
______________________________________
p(i) is the partition function. The first few values (starting with p(0)=1) are:

1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, … (sequence A000041 in the OEIS; the link has valuable information about the partition function).
______________________________________
is the number of repetitions of the integer j in the partition of the number i
______________________________________
Solving the equation

If we make the substitution , we simplify the first equation to:
______________________________________
Special base

This equation suggests a special number, m = 1.7632228343518967102252017769517070804...

m is defined by

For the base a = m, the equation simplifies to:

But let's forget about m for now.
______________________________________
We are now very close to the solution.
The only obstacle remaining is the product:

If we can find a substitution that rids us of it, we have the solution. At this point we only need to substitute , where f is arbitrary, to get: ... and we get:

The choice of f very probably determines the value of °a, and the branch of tetration.

(01/03/2016, 11:24 PM) marraco Wrote:

I have the result, but I do not yet know how to get it.

marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
01/14/2016, 12:47 AM (This post was last modified: 01/14/2016, 12:58 AM by marraco.)

^^ Sorry. I made a big mistake. We cannot substitute , of course.

Maybe it would work as an approximation, because we know that tends very rapidly to a line on a logarithmic scale. Anyway, it would be of little use.

We know that the are the derivatives of , so a Fourier or Laplace transform would turn the derivatives into products. But that would mess with the rest of the equation.

I have the result, but I do not yet know how to get it.

marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
01/14/2016, 01:22 AM (This post was last modified: 01/14/2016, 01:24 AM by marraco.)

Here I expand a row, in the hope that it helps somebody digest the equation.

(01/13/2016, 04:32 AM) marraco Wrote: We are now very close to the solution. The only obstacle remaining is the product:

The product is what I called "the integer divisor"

(05/03/2015, 04:35 AM) marraco Wrote:

^^ Here I expanded the row for i=9 of the equation, after the substitution :

The problematic terms come from the factors . The q! divisors may emerge not from the term raised to q; q! could emerge from the absence of the other terms: . For example, the term is actually

I have the result, but I do not yet know how to get it.
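The partition numbers p(i) quoted in the 01/13/2016 post can be reproduced with Euler's pentagonal-number recurrence. A minimal sketch in Python (the function name `partitions` is mine, not from the thread):

```python
def partitions(n_max):
    # Euler's pentagonal-number recurrence:
    # p(n) = sum_{k>=1} (-1)^(k+1) * ( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) )
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

# partitions(10) -> [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

The output matches the list quoted from OEIS A000041, e.g. p(30) = 5604.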
marraco   Fellow   Posts: 100   Threads: 12   Joined: Apr 2011
03/13/2016, 02:58 AM

(01/13/2016, 04:32 AM) marraco Wrote: So, we want the vector , from the matrix equation:

Thanks to Daniel's advice, it is easy to see that the red side can be derived as a direct application of Faà di Bruno's formula to the blue equation,

(04/30/2015, 03:24 AM) marraco Wrote: We want the coefficients aᵢ of this Taylor expansion: They should match this equation:

It is easy to see on the red side that

I have the result, but I do not yet know how to get it.

Gottfried   Ultimate Fellow   Posts: 787   Threads: 121   Joined: Aug 2007
08/23/2016, 11:25 AM (This post was last modified: 08/23/2016, 01:01 PM by Gottfried.)

(05/01/2015, 01:57 AM) marraco Wrote: This is a numerical example for base a=e (...) So you get this system of equations (blue on the left, and red on the right):

Code:
[1 1 1 1 1  1  1  1  1]    [ 1]   [e]
[0 1 2 3 4  5  6  7  8]    [a₁]   [e.a₁]
[0 0 1 3 6 10 15 21 28]    [a₂]   [e.a₂+e/2.a₁²]
[0 0 0 1 4 10 20 35 56]    [a₃]   [e.a₃+e.a₁.a₂+e/6.a₁³]
[0 0 0 0 1  5 15 35 70]  * [a₄] = [e.a₄+e/2.a₂²+e.a₃.a₁+e/2.a₂.a₁²+e/24.a₁⁴]
[0 0 0 0 0  1  6 21 56]    [a₅]   [...]
[0 0 0 0 0  0  1  7 28]    [a₆]   [...]
[0 0 0 0 0  0  0  1  8]    [a₇]   [...]
[0 0 0 0 0  0  0  0  1]    [a₈]   [...]

Quote: It is a non-linear system of equations, and the solution for this particular case is:

Code:
a₀ =  1.00000000000000000
a₁ =  1.09975111049169000
a₂ =  0.24752638354178700
a₃ =  0.15046151104294100
a₄ =  0.12170896032120000
a₅ =  0.16084324512292400
a₆ = -0.02254254634348470
a₇ = -0.10318144159688800
a₈ =  0.06371479195361670
(...)

This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you have just applied things analogously to how I do them, only you didn't express it in matrix formulae.
To explain the basic idea of a Carleman matrix: consider a power series

We express this in terms of the dot product of two infinite-sized vectors, where the column vector A_1 contains the coefficients and the row vector

Now, to make that idea valuable for function composition and iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input ( ). This leads to the idea of Carleman matrices: we just generate the vectors, where the vector contains the coefficients for the powers of f(x), such that ... getting, in a matrix, the operation: or

Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions for a fairly wide range of algebra.

For instance, the operation INC and its h'th iteration ADD is then only a problem of powers of P

The operation MUL needs a diagonal Vandermonde vector:

The operation DEXP (= exp(x)-1) needs the matrix of Stirling numbers of the 2nd kind, similarity-scaled by factorials:

and as an exercise we see that if we right-compose this with the INC operation, we get the ordinary EXP operator, for which I give the matrix the name B:

Of course, iterations of EXP then require only powers of the matrix B.

To see that this is really useful, we need a lemma on the uniqueness of power series. That is, in the new matrix notation: if a function is continuous for an (even small) continuous range of the argument x, then the coefficients in A_1 are uniquely determined.

That uniqueness of the coefficients in A_1 is the key that lets us look at the compositions of Carleman matrices alone, without regard to the notation with the dot product by V(x); for instance, we can make use of the analysis of Carleman-matrix decompositions like and can analyze directly, for instance to arrive at the operation LOGP : log(1+x)
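Gottfried's point that a Carleman matrix turns composition of functions into matrix multiplication can be illustrated with small truncated matrices. A sketch in plain Python (the helper names `poly_mul`, `carleman`, `mat_mul` are mine, not from the toolbox); for series with f(0)=0 the truncation loses nothing, so the product of two Carleman matrices reproduces the Carleman matrix of the composed function:

```python
def poly_mul(a, b, n):
    """Multiply two truncated power series (coefficient lists), keeping n terms."""
    c = [0.0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def carleman(f, n):
    """A[i][j] = coefficient of x^i in f(x)^j, so that V(x)*A = V(f(x))
    for the row vector V(x) = [1, x, x^2, ...]."""
    A = [[0.0] * n for _ in range(n)]
    power = [1.0] + [0.0] * (n - 1)          # f(x)^0 = 1
    for j in range(n):
        for i in range(n):
            A[i][j] = power[i]
        power = poly_mul(power, f, n)
    return A

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
f = [0.0, 1.0, 1.0] + [0.0] * (n - 3)        # f(x) = x + x^2
g = [0.0, 1.0, 2.0] + [0.0] * (n - 3)        # g(x) = x + 2x^2
C = mat_mul(carleman(f, n), carleman(g, n))
gof = [C[i][1] for i in range(n)]            # column 1 = coefficients of g(f(x))
# gof -> x + 3x^2 + 4x^3 + 2x^4
```

Column 1 of the product holds the Taylor coefficients of g(f(x)), exactly as in V(x)*A_f*A_g = V(f(x))*A_g = V(g(f(x))); here g(f(x)) = f + 2f² = x + 3x² + 4x³ + 2x⁴.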
Now I relate this to the derivation which I've quoted from marraco's post.

(05/01/2015, 01:57 AM) marraco Wrote: This is a numerical example for base a=e (...) So you get this system of equations (blue on the left, and red on the right): [see the 9x9 system quoted above]

First we see the Pascal matrix P on the lhs in action, then the coefficients of the Abel function in a vector, say A_1. So the left-hand side is

To make things smoother, we first take A as a complete Carleman matrix, expanded from A_1. If we "complete" that left-hand side to discuss this in power series, we have

It is very likely that the author wanted to derive the solution for the equation ; so we would have for the right-hand side and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to the descriptions in the rhs of the quoted matrix formula.

What we can do now depends on the above uniqueness lemma: we can discard the V(x)-reference, just writing and, looking at the second column of B only, we get as shown in the quoted post.

So indeed, that system of equations of the initial post is expressible by and the OP seeks a solution for A.
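The Pascal-matrix machinery underlying both this P*A = A*B equation and the simpler eigen-equation P.[bᵢ] = b.[bᵢ] from the 05/07/2015 post is easy to check numerically. A sketch in Python, assuming the upper-triangular convention of the quoted 9x9 system, P[r][c] = C(c,r), which maps the Taylor coefficients of f(x) to those of f(x+1):

```python
from math import comb, factorial, log

b = 2.0
N = 40                                               # truncation size; terms decay like log(b)^c / c!
v = [log(b) ** c / factorial(c) for c in range(N)]   # Taylor coefficients of b^x

# P[r][c] = C(c, r) sends the coefficients of f(x) to those of f(x+1);
# for f(x) = b^x this must reproduce b^(x+1) = b * b^x, i.e. P.v = b.v.
for r in range(10):
    row = sum(comb(c, r) * v[c] for c in range(N))   # row r of P.v
    assert abs(row - b * v[r]) < 1e-9                # eigen-equation holds
```

The arbitrary constant c noted in that post is visible here as well: scaling v by any factor still satisfies the eigen-equation.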
While I have, at the moment, no solution for A along this way, we can, for instance, note that if A is invertible, then the equation can be put in a Jordan form: which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix - and, having a Jordan solver for finite matrix sizes, one could try whether, on increasing the matrix sizes, the Jordan solutions converge to some limit matrix A.

For the alternative: looking at "regular tetration" and the Schröder function (including recentering the power series around the fixpoint), one gets a simple solution just by the diagonalization formulae for triangular Carleman matrices, which follow the same formal analysis using the "matrix toolbox" and can, for finite size and numerical approximations, nicely be constructed using the matrix features of Pari/GP.

Gottfried Helms, Kassel

