Applying the iteration methods to simpler functions
Tetration Forum (https://math.eretrandre.org/tetrationforum), forum: Mathematical and General Discussion
Thread: https://math.eretrandre.org/tetrationforum/showthread.php?tid=84
Applying the iteration methods to simpler functions - bo198214 - 11/05/2007

We basically have 3 methods to compute f^t for a given analytic function f: the natural Abel function, as a generalization of Andrew's slog; the matrix operator method, as favoured by Gottfried; and the old regular iteration method. While the method of the natural Abel function works only for developments at non-fixed points, the matrix operator method works for developments at fixed points and at non-fixed points, and the regular iteration method works only at fixed points. The matrix operator method and the regular iteration method coincide at fixed points.

Here I want to investigate the application of these methods to the lower operations: addition, multiplication and the power operation. More precisely, what is the outcome of the iteration of b + x, of b*x and of x^b, as a supplement to the already much discussed iteration of b^x?

We would expect the methods to yield the following results. Let us denote "the" Abel function of f by α; it satisfies α(f(x)) = α(x) + 1, is the inverse of the function t |-> f^t(x_0), and is determined up to an additive constant. Vice versa f^t(x) = α^(-1)(t + α(x)).

1. f(x) = b + x: f^t(x) = x + tb, α(x) = x/b
2. f(x) = b*x:   f^t(x) = b^t * x, α(x) = log_b(x)
3. f(x) = x^b:   f^t(x) = x^(b^t), α(x) = log_b(log(x))
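As a quick illustration of case 1 (my own sketch, not code from the thread): build the truncated matrix whose m-th column holds the coefficients of (b + x)^m, subtract the identity, chop the first column and the last row, and solve the resulting square system. The solution matches the expected Abel function α(x) = x/b.

```python
import numpy as np
from math import comb

# Hypothetical illustration (not from the thread): natural Abel method
# for f(x) = b + x.  Expected Abel function: alpha(x) = x/b.
b, N = 3.0, 8

# Truncated matrix F: column m holds the coefficients of (b + x)^m,
# i.e. F[n, m] = binom(m, n) * b**(m - n) for n <= m.
F = np.array([[comb(m, n) * b**(m - n) if n <= m else 0.0
               for m in range(N)] for n in range(N)])

# Column 0 of F - I is zero (alpha is only determined up to a constant),
# so remove the first column and the last row to get a square system.
A = (F - np.eye(N))[:-1, 1:]
rhs = np.zeros(N - 1)
rhs[0] = 1.0                      # encodes alpha(f(x)) - alpha(x) = 1

a = np.linalg.solve(A, rhs)       # coefficients of alpha at x^1 .. x^(N-1)
print(a)                          # ~ [1/b, 0, 0, ...], i.e. alpha(x) = x/b
```

The chopped matrix is upper triangular with nonzero diagonal entries (m)(b), so the system is uniquely solvable for every N.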
Let us start in this post with the investigation of the simplest case 1, f(x) = b + x. This has no fixed points for b != 0, so we apply the natural Abel function method and the matrix operator method. In both cases we need the Carleman matrix, which has in its m-th column the coefficients of the m-th power of f. The m-th power is f(x)^m = (b + x)^m = sum_{n=0}^{m} C(m,n) b^(m-n) x^n. So the N-truncated Carleman matrix is F with entries F_{n,m} = C(m,n) b^(m-n) for n <= m and 0 otherwise. For example with N = 4:

F =
[ 1  b  b^2  b^3  ]
[ 0  1  2b   3b^2 ]
[ 0  0  1    3b   ]
[ 0  0  0    1    ]

The natural Abel method

Let A be the matrix F - I without the first column and without the last row, which has N - 1 rows and columns; this gives for N = 4:

A =
[ b  b^2  b^3  ]
[ 0  2b   3b^2 ]
[ 0  0    3b   ]

To retrieve the natural Abel function α(x) = a_1 x + a_2 x^2 + ... we have to solve the equation system A (a_1, ..., a_{N-1})^T = (1, 0, ..., 0)^T. We can easily backtrace that a_n = 0 for n >= 2 and that a_1 = 1/b. Hence α(x) is x/b plus an arbitrary constant, and we have verified our claim, as this is valid for arbitrarily large N.

The matrix operator method

To compute F^t we can simply apply here F^t = sum_{k>=0} C(t,k) (F - I)^k. The first (not 0th) column of F^t then gives the coefficients of f^t. But we see that the first column of (F - I)^k has all entries 0 for k >= 2. Hence only two terms in the above infinite sum contribute to the first column, namely C(t,0) I and C(t,1) (F - I). The first column of the first one is (0, 1, 0, ...) and of the second one is (b, 0, 0, ...). Hence the coefficient column of f^t is (tb, 1, 0, ...), and so f^t(x) = x + tb, fulfilling our claim too.

Hopefully I will continue sometime with cases 2 and 3.

RE: Applying the iteration methods to simpler functions - Gottfried - 11/06/2007

You may also have a look at my operators article; I discussed the three operations in connection with the matrix method in a few lines on the second-to-last page. See here: http://math.eretrandre.org/tetrationforum/attachment.php?aid=124 Hmm, don't know, it seems there is something broken with the pdf attachments. Here is an external link. [update] Ahhh... and the example with the Carleman matrix is so simple and short that even I got its idea now. [/update] Gottfried

RE: Applying the iteration methods to simpler functions - andydude - 11/06/2007

@Henryk Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?
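Returning to the matrix operator computation for f(x) = b + x above: since F - I is strictly upper triangular (hence nilpotent), the binomial series F^t = sum_k C(t,k)(F - I)^k terminates, and column 1 of F^t yields the coefficients of f^t. This is my own numeric sketch, not code from the thread; `binom_real` is a made-up helper for the generalized binomial coefficient.

```python
import numpy as np
from math import comb

# Illustration (assumed reconstruction, not thread code): matrix operator
# method for f(x) = b + x via the binomial series F^t = sum_k C(t,k)(F-I)^k.
b, N, t = 3.0, 6, 0.5

F = np.array([[comb(m, n) * b**(m - n) if n <= m else 0.0
               for m in range(N)] for n in range(N)])

def binom_real(t, k):
    # generalized binomial coefficient C(t, k) for real t
    out = 1.0
    for i in range(k):
        out *= (t - i) / (i + 1)
    return out

Ft = np.zeros_like(F)
P = np.eye(N)                     # running power (F - I)^k
for k in range(N):                # F - I is strictly upper triangular, so
    Ft += binom_real(t, k) * P    # the binomial series terminates at k = N-1
    P = P @ (F - np.eye(N))

# Column 1 of F^t holds the coefficients of f^t(x):
coeffs = Ft[:, 1]
print(coeffs)                     # ~ [t*b, 1, 0, ...], i.e. f^t(x) = x + t*b
```

Only the k = 0 and k = 1 terms contribute to column 1, exactly as argued in the post.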
Andrew Robbins

RE: Applying the iteration methods to simpler functions - bo198214 - 11/06/2007

Gottfried Wrote: You may also have a look at my operators article; I discussed the three operations in connection with the matrix method in a few lines on the second-to-last page. See here: http://math.eretrandre.org/tetrationforum/attachment.php?aid=124

The link works well. You write in this article:

Quote: The eigensystem of P is degenerate; but it has an exceptionally simple matrix logarithm, by which a general power can then be easily computed when just multiplied with the h-parameter.

There is even a general method that works with every matrix, via the Jordan normal form. However it suffices to use a more relaxed form, where the blocks of the Jordan normal form consist of upper triangular matrices with the eigenvalue on the diagonal (instead of having the eigenvalue on the diagonal and 1 on the diagonal above it, see wikipedia). We can take the t-th power of such an n x n block B with eigenvalue λ by the formula B^t = sum_{k=0}^{n-1} C(t,k) λ^(t-k) (B - λI)^k, which however involves no limits for the entries, since B - λI is nilpotent and the sum is finite. And the t-th power of the whole matrix, which consists of several such blocks on the diagonal, is simply the t-th power of each block on the diagonal. The matrix in the considered case above consisted of just one such block with eigenvalue 1.

andydude Wrote: Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?

Yes, I consider both as regular iteration (though if handled directly they require different treatment), but both coincide with the matrix operator method if developed at the fixed point.

RE: Applying the iteration methods to simpler functions - bo198214 - 11/08/2007

Let us continue with the 2nd case, f(x) = b*x, with the expected iteration f^t(x) = b^t x and the expected Abel function α(x) = log_b(x). This function has one (and only one) fixed point at 0, so here we can directly apply the regular iteration (which we couldn't for our previous function b + x, which had no fixed points) and the matrix operator method.
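The block-power formula above can be sketched in a few lines (my own illustration; the function names `binom_real` and `block_power` are made up for this sketch, and a positive eigenvalue is assumed for fractional powers):

```python
import numpy as np

def binom_real(t, k):
    # generalized binomial coefficient C(t, k) for real t
    out = 1.0
    for i in range(k):
        out *= (t - i) / (i + 1)
    return out

def block_power(B, t):
    """t-th power of an upper triangular block whose whole diagonal is one
    eigenvalue lam, via B^t = sum_k C(t,k) lam^(t-k) (B - lam I)^k.
    The sum is finite because B - lam*I is nilpotent."""
    n = B.shape[0]
    lam = B[0, 0]                 # the common diagonal entry
    Nil = B - lam * np.eye(n)     # nilpotent part
    out = np.zeros_like(B)
    P = np.eye(n)                 # running power Nil^k
    for k in range(n):
        out += binom_real(t, k) * lam**(t - k) * P
        P = P @ Nil
    return out

B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
half = block_power(B, 0.5)        # a square root of the block
print(half @ half)                # ~ B
```

For integer t the formula reproduces the ordinary matrix power, and for fractional t it gives the corresponding real power of the block without any limit process, as described above.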
However for the natural Abel function we must consider a development at a non-fixed point.

Regular iteration

We compute the principal Schroeder function by σ(x) = lim_{n->∞} f^n(x)/b^n = lim_{n->∞} b^n x / b^n = x. The principal Abel function is α(x) = log(σ(x))/log(b) = log_b(x).

Matrix operator method / formal power series iteration

The Bell matrix of f(x) = b*x has in the m-th row the coefficients of the m-th power of f, i.e. f(x)^m = b^m x^m, which means it consists of 0's except on the diagonal, where it has the entries b^m:

D = diag(1, b, b^2, b^3, ...).

This however is already in diagonal form, and the t-th power of D consists of b^(tm) on the diagonal and 0 otherwise. The first row contains the coefficients of the t-th iterate, so f^t(x) = b^t x, according to our assumption.

Natural Abel method

First we have to choose a non-fixed point of development, say x_0 = 1. Then we form the conjugate g(x) = f(x + 1) - 1 = b(x + 1) - 1 = bx + b - 1, knowing that then α_g(x) = α_f(x + 1); we would expect that α_g(x) = log_b(1 + x). For simplification write c = b - 1. The powers of g are g(x)^m = (c + bx)^m = sum_{n=0}^{m} C(m,n) c^(m-n) b^n x^n, hence the Carleman matrix G of g has the entries G_{n,m} = C(m,n) c^(m-n) b^n for n <= m; this is an upper triangular matrix. Truncated to 4x4:

G =
[ 1  c  c^2   c^3   ]
[ 0  b  2bc   3bc^2 ]
[ 0  0  b^2   3b^2c ]
[ 0  0  0     b^3   ]

Subtracting the identity matrix and removing the first column and the last row, we have to solve:

[ c    c^2    c^3   ] [a_1]   [1]
[ b-1  2bc    3bc^2 ] [a_2] = [0]
[ 0    b^2-1  3b^2c ] [a_3]   [0]

For a moment let b = 2, so that c = 1; then the system is

[ 1  1  1  ] [a_1]   [1]
[ 1  4  6  ] [a_2] = [0]
[ 0  3  12 ] [a_3]   [0]

And we can line by line transform the above equation system into a triangular one: first subtracting the first line from the second line gives (0, 3, 5 | -1), and then we subtract this new second line from the third: (0, 0, 7 | 1). This scheme can be arbitrarily continued; for larger N we can then subtract the new third line from the fourth, and so on, because the leading entries coincide each time. Generally this would give us a triangular system. Hm, does the solution of this system converge to the coefficients of log_b(1 + x), more specifically is a_1 -> 1/log(b) for N -> ∞? Does it converge at all? My numerical computations show that it is roughly (up to some precision) the natural logarithm. However it seems not really to converge, or is it just too slow? Perhaps one has to derive it from the formulas, without numerics, in some quiet hours.

RE: Applying the iteration methods to simpler functions - jaydfox - 11/09/2007

I would not fret if it converges slowly.
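The numerical experiment bo198214 describes can be reproduced with a short script (my own sketch, assuming the chopped-system setup described above; as the thread notes, the higher coefficients converge slowly, so only the leading coefficient is checked loosely):

```python
import numpy as np
from math import comb, log

# Sketch (assumed reconstruction): natural Abel method for f(x) = b*x
# developed at x0 = 1, i.e. for g(x) = f(1 + x) - 1 = (b - 1) + b*x.
# Expected: alpha_g(x) = log(1 + x)/log(b).
b, N = 2.0, 16
c = b - 1.0

# Column m holds the coefficients of g(x)^m:
# [x^n] g(x)^m = binom(m, n) * c**(m - n) * b**n.
G = np.array([[comb(m, n) * c**(m - n) * b**n if n <= m else 0.0
               for m in range(N)] for n in range(N)])
A = (G - np.eye(N))[:-1, 1:]      # subtract I, chop first column and last row
rhs = np.zeros(N - 1)
rhs[0] = 1.0
a = np.linalg.solve(A, rhs)

# Compare with the series of log(1+x)/log(b): a_k ~ (-1)^(k+1)/(k*log(b)).
exact = [(-1.0)**(k + 1) / (k * log(b)) for k in range(1, N)]
print(a[0], exact[0])             # exact[0] = 1/ln(2) ~ 1.4427
```

For N = 4 this gives a_1 = 10/7 ≈ 1.4286, already within about 1% of 1/ln 2; the later coefficients lag well behind, consistent with jaydfox's observations.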
For example, the slog solution with (E - I)*f = 1 converges very slowly, with terms being inaccurate by 10% or more after perhaps the first 1/4th or 1/3rd of the series. For example, for the 200x200 solution, only the first 75 or so terms are accurate to within 20%, only 60 or so to within 10%, and only the first 35 or so terms are accurate to within 1%. The first 20 or so are accurate to within 0.1%. And after about the 90th or 100th term, it's essentially garbage. The purpose of those 110 garbage terms is to make sure that the first 90 not-quite-garbage terms still give us a decent solution. For a 400x400 system, those numbers roughly double (a little less, in fact), giving roughly 65-70 terms to within 1%, and about 100 terms to within 10%. For a 600x600 system, we reach 1% inaccuracies already by the 80th term, and 10% by about the 140th term. Since the singularities of the slog are logarithmic to a first approximation, I'd expect a similar result for the natural logarithm.

RE: Applying the iteration methods to simpler functions - bo198214 - 11/09/2007

jaydfox Wrote: I would not fret if it converges slowly.

Me neither; it rather seems very promising, looking for example at the right side (1, -1, 1, -1, ...) of the triangularized system, as if this shall be a hint to the alternating coefficients of log(1 + x) = x - x^2/2 + x^3/3 - ... However, actually calculating the limit is not that easy. Hopefully one of us on the forum will sometime settle it.

RE: Applying the iteration methods to simpler functions - andydude - 11/12/2007

You are confusing Bell matrices and Carleman matrices again. The (generalized) Bell matrix of f has in position (n, m) the coefficient of x^n in f(x)^m, whereas the Carleman matrix is its transpose. All the other stuff is right, though, and interesting. I like your notation, but I'm having trouble following your series notation. I think the example of f(x) = b*x is an interesting one, and I intend on looking into it, but first, for the purposes of illustration, your simplest example is the nicest, for example, addition f(x) = x + 1.
Taking the Bell matrix of f(x) = x + 1:

B =
[ 1  1  1  1 ]
[ 0  1  2  3 ]
[ 0  0  1  3 ]
[ 0  0  0  1 ]

and subtracting the identity matrix gives a noninvertible matrix (its first column is zero), so doing the choppy thing — removing the first column and the last row — gives:

M =
[ 1  1  1 ]
[ 0  2  3 ]
[ 0  0  3 ]

Putting this in the matrix equation gives M (a_1, a_2, a_3)^T = (1, 0, 0)^T, and since the matrix is invertible now, we can multiply both sides by its inverse: (a_1, a_2, a_3)^T = M^(-1) (1, 0, 0)^T = (1, 0, 0)^T. This gives the natural Abel function α(x) = x (choosing the constant term to be zero), which satisfies the equation α(x + 1) = α(x) + 1. I like that example. I know you already talked about it, but I wanted to show a matrix-based way of doing the chopping process. So in general, the matrix equation a = M^(-1) (1, 0, ..., 0)^T would give the coefficients of the natural Abel function of f.

Andrew Robbins

RE: Applying the iteration methods to simpler functions - Gottfried - 11/12/2007

andydude Wrote: [...]

Hmm, nice coincidence here. Did you notice that the fractional entries are just the Bernoulli numbers, resp. binomially weighted Bernoulli numbers? It would be more obvious if your matrix were of bigger size (but I don't know which iterated function that would require). When I was fiddling with the Pascal matrix last year I found that the inverse of the shifted Pascal matrix (where before shifting the diagonal is removed) is just a matrix containing the Bernoulli numbers, as it was found by Faulhaber and J. Bernoulli, giving the coefficients (with a little modification) of the Bernoulli polynomials. I found that it is part of the eigensystem of the *column-signed* Pascal matrix. Perhaps you would like to have a look at my according small treatise at pmatrix. Things converge... Gottfried

RE: Applying the iteration methods to simpler functions - bo198214 - 11/12/2007

andydude Wrote: You are confusing Bell matrices and Carleman matrices again.

No, I consistently swapped them till now! So for everyone the correct usage: the m-th row of the Carleman matrix contains the coefficients of the m-th power of the power series; the m-th column of the Bell matrix contains the coefficients of the m-th power of the power series. The Bell and Carleman matrices are the transposes of each other.

Quote: doing the choppy thing gives:

Thanks for this.
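Andrew's matrix-based chopping for f(x) = x + 1, together with Gottfried's Bernoulli-number observation, can be sketched in one script (my own illustration, not code from the thread): column 0 of the inverse recovers α(x) = x, while column 1 solves α(x+1) - α(x) = x, the Faulhaber-type polynomial x(x-1)/2, whose linear coefficient is the Bernoulli number B_1 = -1/2.

```python
import numpy as np
from math import comb

# Illustration (my own sketch): the chopping process for f(x) = x + 1.
# The Bell matrix of x + 1 is the upper triangular Pascal matrix
# B[n, m] = binom(m, n).
N = 6
B = np.array([[float(comb(m, n)) if n <= m else 0.0
               for m in range(N)] for n in range(N)])
M = (B - np.eye(N))[:-1, 1:]      # "the choppy thing"

Minv = np.linalg.inv(M)

# Column 0 of M^{-1} solves alpha(x+1) - alpha(x) = 1: alpha(x) = x.
print(Minv[:, 0])                 # ~ [1, 0, 0, 0, 0]

# Column 1 solves alpha(x+1) - alpha(x) = x: alpha(x) = x*(x-1)/2,
# whose linear coefficient -1/2 is the Bernoulli number B_1 (Faulhaber).
print(Minv[:, 1])                 # ~ [-1/2, 1/2, 0, 0, 0]
```

The chopped matrix M is upper triangular with diagonal 1, 2, 3, ..., so it is always invertible, which is exactly why the chopping step makes the system solvable.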