Applying the iteration methods to simpler functions
We basically have 3 methods to compute the continuous iterates f^{∘t} of a given analytic function f. There is the natural Abel function, as a generalization of Andrew's slog; then there is the matrix operator method, as favoured by Gottfried; and there is the old regular iteration method. While the method of the natural Abel function works only for developments at non-fixed points, the matrix operator method works for developments at fixed points as well as at non-fixed points, and the regular iteration method works only at fixed points. The matrix operator method and the regular iteration method coincide at fixed points.

Here I want to investigate the application of these methods to the lower operations: addition, multiplication and the power operation. More precisely, what is the outcome of the iteration of b + x, of b·x and of x^b, as a supplement to the already much discussed iteration of b^x?

We would expect the methods to yield the following results. Let us denote "the" Abel function of f by α; it satisfies α(f(x)) = α(x) + 1 and is determined up to an additive constant. Vice versa, f^{∘t}(x) = α^{-1}(t + α(x)).
  1. f(x) = b + x, f^{∘t}(x) = x + b·t, α(x) = x/b.
  2. f(x) = b·x, f^{∘t}(x) = b^t·x, α(x) = log_b(x).
  3. f(x) = x^b, f^{∘t}(x) = x^{b^t}, α(x) = log_b(log(x)).
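These expectations can be checked numerically. The following is a minimal sketch under my reading of the three cases (f(x) = b + x with Abel function x/b, f(x) = b·x with log_b(x), and f(x) = x^b with log_b(log(x)); these identifications are my assumption), verifying the Abel equation α(f(x)) = α(x) + 1 at a sample point for b = 2:

```python
import math

# assumed (f, alpha) pairs for b = 2; the pairing is my reading of the list above
cases = [
    (lambda x: x + 2.0,  lambda x: x / 2.0),                     # addition: b + x
    (lambda x: 2.0 * x,  lambda x: math.log(x, 2.0)),            # multiplication: b*x
    (lambda x: x ** 2.0, lambda x: math.log(math.log(x), 2.0)),  # power: x^b
]
x = 3.0
# alpha(f(x)) - alpha(x) should come out as 1 in each case
diffs = [alpha(f(x)) - alpha(x) for f, alpha in cases]
print(diffs)  # each entry should be (numerically) 1.0
```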

Let us start in this post with the investigation of the simplest case 1, f(x) = b + x.
This has no fixed points for b ≠ 0, so we apply the natural Abel function method and the matrix operator method. In both cases we need the Carleman-matrix, which has in its n-th column the coefficients of the n-th power of f. The n-th power is (b + x)^n = Σ_{m=0}^{n} binomial(n,m)·b^{n-m}·x^m. So the N-truncated Carleman-matrix C has the entries C_{m,n} = binomial(n,m)·b^{n-m}, for 0 ≤ m, n ≤ N.
For example with N = 4:

  [ 1   b   b^2   b^3    b^4  ]
  [ 0   1   2b    3b^2   4b^3 ]
  [ 0   0   1     3b     6b^2 ]
  [ 0   0   0     1      4b   ]
  [ 0   0   0     0      1    ]
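For readers who want to experiment, here is a small sketch that generates such truncated matrices; it assumes case 1 to be f(x) = b + x, so that column n holds the binomial expansion of (b + x)^n (the helper name carleman_addition is mine):

```python
from math import comb

def carleman_addition(b, N):
    # entry [m][n] = coefficient of x^m in (b + x)^n, for 0 <= m, n <= N
    return [[comb(n, m) * b ** (n - m) if m <= n else 0
             for n in range(N + 1)]
            for m in range(N + 1)]

for row in carleman_addition(2, 4):
    print(row)   # first row: [1, 2, 4, 8, 16]
```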

The natural Abel method
Let M be the matrix C - I without the first column and without the last row, which hence has N rows and N columns. This gives for N = 4:

  [ b   b^2   b^3    b^4  ]
  [ 0   2b    3b^2   4b^3 ]
  [ 0   0     3b     6b^2 ]
  [ 0   0     0      4b   ]

To retrieve the natural Abel function α(x) = Σ_{n≥1} a_n·x^n we have to solve the equation system M·(a_1, ..., a_N)^T = (1, 0, ..., 0)^T. We can easily backtrace that a_n = 0 for n ≥ 2 and that a_1 = 1/b. Hence α(x) is x/b plus an arbitrary constant, and we have verified our claim, as this is valid for arbitrarily large N.
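The back-substitution can also be left to a linear solver. This sketch (assuming, as my reading of case 1, f(x) = b + x, and the chop "subtract the identity, drop the last row and the first column") recovers a_1 = 1/b and vanishing higher coefficients:

```python
import numpy as np
from math import comb

def natural_abel_coeffs(b, N):
    # truncated matrix of f(x) = b + x: column n = coefficients of (b + x)^n
    C = np.array([[comb(n, m) * float(b) ** (n - m) if m <= n else 0.0
                   for n in range(N + 1)] for m in range(N + 1)])
    M = (C - np.eye(N + 1))[:-1, 1:]   # subtract I, drop last row / first column
    rhs = np.zeros(N)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)     # coefficients a_1 .. a_N

a = natural_abel_coeffs(2.0, 6)
print(a)   # ~ [0.5, 0, 0, 0, 0, 0], i.e. alpha(x) = x/2 + const
```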

The matrix operator method
To compute f^{∘t} we can simply apply here

  C^t = Σ_{k=0}^{∞} binomial(t,k)·(C - I)^k.

The first (not 0th) column of C^t then contains the coefficients of f^{∘t}. But we see that the first column of (C - I)^k has all entries 0 for k ≥ 2. Hence only two terms in the above infinite sum contribute to the first column: these are binomial(t,0)·I and binomial(t,1)·(C - I). The first column of the first one is (0, 1, 0, 0, ...)^T and of the second one is (b, 0, 0, 0, ...)^T. Hence f^{∘t}(x) = b·t + x, fulfilling our claim too.
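Since C - I is strictly upper triangular (hence nilpotent) for this f, the binomial series for C^t terminates and the whole iterate can be computed numerically. A sketch under the same assumption f(x) = b + x (the helper name iterate_addition is mine):

```python
import numpy as np
from math import comb

def iterate_addition(b, t, N):
    # C^t = sum_k binomial(t,k) (C - I)^k; the sum is finite because
    # C - I is strictly upper triangular, hence nilpotent
    C = np.array([[comb(n, m) * float(b) ** (n - m) if m <= n else 0.0
                   for n in range(N + 1)] for m in range(N + 1)])
    D = C - np.eye(N + 1)
    Ct = np.eye(N + 1)
    term = np.eye(N + 1)
    coeff = 1.0
    for k in range(1, N + 1):
        coeff *= (t - k + 1) / k   # binomial(t, k) by recurrence
        term = term @ D
        Ct += coeff * term
    return Ct[:, 1]                # first column: coefficients of the t-th iterate

c = iterate_addition(2.0, 0.5, 5)
print(c)   # ~ [1.0, 1.0, 0, 0, 0, 0]: the half-iterate of x + 2 is x + 1
```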

Hopefully I will continue sometime with cases 2 and 3.
You may also have a look at my operators-article... I discussed the three operations in connection with the matrix-method in a few lines on the second-to-last page. See here:

hmm. Don't know, seems there is something broken with the pdf-attachments. Here is an external link:

[update] Ahhh,... and the example with the carleman-matrix is so simple and short, that even I got its idea now. [/update]


Attached Files
.pdf   operators.pdf (Size: 87.4 KB / Downloads: 411)
Gottfried Helms, Kassel

Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?

Andrew Robbins
Gottfried Wrote:You may also have a look at my operators-article... I discussed the three operations in connection with the matrix-method in a few lines on the second-to-last page. See here:

hmm. Don't know, seems there is something broken with the pdf-attachments. Here is an external link:
The link works well. You write in this article:

Quote:The eigensystem of P is degenerate; but it has an exceptional simple matrix-logarithm, by which then a general power can be easily computed when just multiplied with the h-parameter.

There is even a general method that works for every matrix, via the Jordan normal form. However, it suffices to use a more relaxed form
where the blocks of the Jordan normal form are upper triangular matrices with the eigenvalue on the diagonal (instead of having the eigenvalue on the diagonal and 1's on the diagonal above it; see wikipedia). We can take the t-th power of such an n x n block B = λ·I + U, with U strictly upper triangular and hence nilpotent, by the formula

  B^t = Σ_{k=0}^{n-1} binomial(t,k)·λ^{t-k}·U^k,

which, however, involves no limits for the entries, since the sum is finite. And the t-th power of the whole matrix, which consists of several such blocks on the diagonal, is simply the t-th power of each block on the diagonal.

The matrix in the considered case above consisted of just one such block, with eigenvalue 1.
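That block-power formula is easy to put into code. A minimal sketch, assuming the block has the form λ·I + U with U strictly upper triangular (the helper name block_power is mine):

```python
import numpy as np

def block_power(B, t):
    # B = lam*I + U, U strictly upper triangular (nilpotent):
    # B^t = sum_{k=0}^{n-1} binomial(t,k) * lam^(t-k) * U^k, a finite sum
    n = B.shape[0]
    lam = B[0, 0]
    U = B - lam * np.eye(n)
    out = np.zeros_like(B)
    term = np.eye(n)
    coeff = 1.0
    for k in range(n):
        out += coeff * lam ** (t - k) * term
        term = term @ U
        coeff *= (t - k) / (k + 1)   # binomial(t, k+1) from binomial(t, k)
    return out

B = np.array([[2.0, 1.0], [0.0, 2.0]])
R = block_power(B, 0.5)
print(R @ R)   # reproduces B: R is a square root of the block
```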

andydude Wrote:Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?
Yes, I consider both as regular iteration (though if handled directly they require different treatment), but both coincide with the matrix operator method if developed at the fixed point.
Let us continue with the 2nd case, f(x) = b·x, with the expected iteration f^{∘t}(x) = b^t·x and the expected Abel function α(x) = log_b(x).

This function has one (and only one) fixed point at 0, so here we can directly apply regular iteration (which we couldn't for our previous function b + x, which had no fixed points) and the matrix operator method. However, for the natural Abel function we must consider a development at a non-fixed point.

Iterative regular iteration
We compute the principal Schroeder function by σ(x) = lim_{n→∞} f^{∘n}(x)/b^n = lim_{n→∞} (b^n·x)/b^n = x.
The principal Abel function is α(x) = log(σ(x))/log(b) = log_b(x).

Matrix operator method / formal power series iteration
The Bell matrix of f(x) = b·x has in its n-th row the coefficients of the n-th power of f, i.e. of (b·x)^n = b^n·x^n, which means it consists of 0's except on the diagonal, where it has the entries b^n.

This, however, is already in diagonal form, and the t-th power of the matrix consists of b^{t·n} on the diagonal and 0 elsewhere. The first row contains the coefficients of the t-th iterate, so f^{∘t}(x) = b^t·x, according to our assumption.
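In code this case is essentially one line, since the matrix is diagonal. A sketch, assuming case 2 to be f(x) = b·x, with b = 2 and t = 1/2:

```python
import numpy as np

b, t, N = 2.0, 0.5, 4
Bell = np.diag([b ** n for n in range(N + 1)])           # matrix of f(x) = b*x
Bell_t = np.diag([b ** (t * n) for n in range(N + 1)])   # its t-th power
print(Bell_t[1])   # coefficients of the half-iterate b^t * x, here sqrt(2)*x
```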

Natural Abel method
First we have to choose a non-fixed point of development, say x_0. Then we form the conjugate g(x) = f(x + x_0) - x_0 = b·x + (b-1)·x_0; knowing that then g^{∘t}(x) = f^{∘t}(x + x_0) - x_0, we would expect that the natural Abel function of g is α(x_0 + x) = log_b(x_0 + x).

For simplification write c = (b-1)·x_0, so that g(x) = b·x + c.
The powers of g are g(x)^n = (b·x + c)^n = Σ_{m=0}^{n} binomial(n,m)·b^m·c^{n-m}·x^m, hence the Carleman matrix G of g has the entries G_{m,n} = binomial(n,m)·b^m·c^{n-m}; this is an upper triangular matrix. Truncated to 4x4:

  [ 1   c    c^2    c^3    ]
  [ 0   b    2bc    3bc^2  ]
  [ 0   0    b^2    3b^2·c ]
  [ 0   0    0      b^3    ]

Subtracting the identity matrix, and with the first column and the last row removed, we have to solve:

  [ c     c^2      c^3    ]   [ a_1 ]   [ 1 ]
  [ b-1   2bc      3bc^2  ] · [ a_2 ] = [ 0 ]
  [ 0     b^2-1    3b^2·c ]   [ a_3 ]   [ 0 ]

For a moment fix a convenient development point. Then we can, line by line, transform the above equation system into a triangular one: first subtracting a suitable multiple of the first line from the second line, then a suitable multiple of the second line from the third. This scheme can be continued arbitrarily; we can next subtract a suitable multiple of the third line from the fourth, and so on.
Generally this would give us a triangular matrix.
Hm, does the solution of this system converge to the expected Abel function? More specifically, do the solved coefficients approach the coefficients of log_b(x_0 + x) for N → ∞? Does it converge at all? My numerical computations show that it is roughly (up to precision) the natural logarithm. However it does not really seem to converge, or is it just too slow?
Perhaps one has to derive it from the formulas, without numerics, in some quiet hours.
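One can at least probe this numerically. The sketch below assumes the setup above with b = 2 and x_0 = 1, i.e. g(x) = 2x + 1 (these parameter choices are mine), solves the chopped system, and compares with the coefficients of log(1 + x)/log(2); on my reading of this discussion, the early coefficients should come out roughly right while the later ones degrade:

```python
import numpy as np
from math import comb, log

def natural_abel_mult(b, N):
    # g(x) = b*x + (b - 1): f(x) = b*x conjugated to the development point x0 = 1
    # matrix: entry [m][n] = coefficient of x^m in g(x)^n
    G = np.array([[comb(n, m) * b ** m * (b - 1.0) ** (n - m) if m <= n else 0.0
                   for n in range(N + 1)] for m in range(N + 1)])
    M = (G - np.eye(N + 1))[:-1, 1:]   # subtract I, chop last row / first column
    rhs = np.zeros(N)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)

a = natural_abel_mult(2.0, 16)
expected = [(-1) ** (k + 1) / (k * log(2.0)) for k in range(1, 5)]
print(a[:4])       # solved coefficients
print(expected)    # coefficients of log(1 + x)/log(2)
```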
I would not fret if it converges slowly.

For example, the slog solution with (E-I)*f=1 converges very slowly, with terms being inaccurate by 10% or more after perhaps the first 1/4th or 1/3rd of the series.

For example, for the 200x200 solution, only the first 75 or so terms are accurate to within 20%, only 60 or so to within 10%. And only the first 35 or so terms are accurate to within 1%. The first 20 or so are accurate to within 0.1%.

And after about the 90th or 100th term, it's essentially garbage. The purpose of those 110 garbage terms is to make sure that the first 90 not-quite-garbage terms still give us a decent solution.

For a 400x400 system, those numbers roughly double (a little less in fact), giving roughly 65-70 terms to within 1%, and about 100 terms to within 10%. For a 600x600 system, we reach 1% inaccuracies already by the 80th term, and 10% by about 140 terms.

Since the singularities for the slog are logarithmic to first approximation, I'd expect a similar result for the natural logarithm.
~ Jay Daniel Fox
jaydfox Wrote:I would not fret if it converges slowly.

Me neither; it rather seems very promising, looking for example at the right side, as if this were a hint of the alternating coefficients of log(1 + x). Wink
However, actually calculating the limit is not that easy. Hopefully one of us on the forum will settle it sometime.
You are confusing Bell matrices and Carleman matrices again. The (generalized) Bell matrix of f contains the coefficients of the n-th power of f in its n-th column,

whereas the Carleman matrix contains them in its n-th row.

All the other stuff is right, though, and interesting. I like your notation, but I'm having trouble following your series notation.

I think the example of f(x) = b·x is an interesting one, and I intend to look into it, but first, for the purposes of illustration, your simplest example is the nicest, for example addition: f(x) = x + 1. Taking the Bell matrix:

and subtracting the identity matrix:

gives a non-invertible matrix, so doing the choppy thing gives:

putting this in the matrix equation gives:

and since the matrix is invertible now, we can multiply both sides by its inverse:

This gives the natural Abel function α(x) = x (choosing the constant term to be zero), which satisfies the equation α(x + 1) = α(x) + 1.
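The chopping itself can be written as multiplication by rectangular "cut" matrices, which may be what is meant here. A sketch for f(x) = x + 1 (the matrices P and Q and the size N are my choices):

```python
import numpy as np
from math import comb

N = 5
# matrix whose column n holds the coefficients of (x + 1)^n,
# with the identity subtracted: a singular system
A = np.array([[float(comb(n, m)) for n in range(N + 1)]
              for m in range(N + 1)]) - np.eye(N + 1)
P = np.eye(N, N + 1)        # rectangular cut: drops the last row
Q = np.eye(N + 1, N, k=-1)  # rectangular cut: drops the first column
M = P @ A @ Q               # the "chopped", now invertible system
a = np.linalg.solve(M, P @ np.eye(N + 1)[:, 0])
print(a)   # ~ [1, 0, 0, 0, 0]: the natural Abel function of x + 1 is x + const
```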

I like that example Smile I know you already talked about it, but I wanted to show a matrix-based way of doing the chopping process.

So in general, the matrix equation would give the coefficients of the natural Abel function of f.

Andrew Robbins
andydude Wrote:

Hmm, a nice coincidence here. Did you notice that the fractional entries are just the Bernoulli numbers, resp. binomially weighted Bernoulli numbers? It would be more obvious if your matrix were of bigger size (though I don't know which iterative function that would require). When I was fiddling with the Pascal matrix last year I found that the inverse of the shifted Pascal matrix (where, before shifting, the diagonal is removed) is just a matrix containing the Bernoulli numbers, as found by Faulhaber and J. Bernoulli, giving the coefficients (with a little modification) of the Bernoulli polynomials. I found that it is part of the eigensystem of the *column-signed* Pascal matrix. Perhaps you would like to have a look at my small treatise on this at p-matrix
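This observation can be checked exactly with rational arithmetic. Assuming the chopped system of f(x) = x + 1 has the entries binomial(j+1, m) (my reading of the previous post), the first row of its inverse solves the classical Bernoulli recurrence; the sketch below computes it by forward substitution:

```python
from fractions import Fraction
from math import comb

# first row r of M^{-1}, where M[m][j] = binomial(j+1, m) - [m == j+1]
# is (assumedly) the chopped system of f(x) = x + 1; the triangular
# system r * M = e_0 unrolls to the classical Bernoulli recurrence
N = 8
r = [Fraction(1)]
for j in range(1, N):
    r.append(-sum(comb(j + 1, m) * r[m] for m in range(j)) / Fraction(j + 1))
print([str(v) for v in r])   # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0']
```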

Things converge...

Gottfried Helms, Kassel
andydude Wrote:You are confusing Bell matrices and Carleman matrices again.
No, I consistently swapped them till now! Tongue

So, for everyone, the correct usage: the n-th row of the Carleman matrix contains the coefficients of the n-th power of the power series; the n-th column of the Bell matrix contains the coefficients of the n-th power of the power series. The Bell and the Carleman matrix are transposes of each other.
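With the convention fixed, here is a small generator; the helper name bell and the sample f(x) = 1 + 2x are mine:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def bell(f_coeffs, N):
    # Bell matrix: column n holds the coefficients of f(x)^n (truncated);
    # the Carleman matrix is its transpose, with the powers in the rows
    B = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        p = np.array([1.0]) if n == 0 else P.polypow(f_coeffs, n)
        B[:min(len(p), N + 1), n] = p[:N + 1]
    return B

B = bell([1.0, 2.0], 4)   # f(x) = 1 + 2x
C = B.T                    # Carleman matrix
print(C[2])                # row 2: coefficients of f(x)^2 = 1 + 4x + 4x^2
```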

Quote:doing the choppy thing gives:

I wanted to show a matrix-based way of doing the chopping process.

Thanks for this.
