Applying the iteration methods to simpler functions
#1
We basically have three methods to compute \( f^{\circ t} \) for a given analytic function \( f \): the natural Abel function, as a generalization of Andrew's slog; the matrix operator method, as favoured by Gottfried; and the old regular iteration method. While the natural Abel function method works only for developments at non-fixed points, the matrix operator method works for developments at fixed points as well as at non-fixed points, and the regular iteration method works only at fixed points. The matrix operator method and the regular iteration method coincide at fixed points.

Here I want to investigate the application of these methods to the lower operations: addition, multiplication and the power operation. More precisely: what is the outcome of iterating \( x+c \), \( xc \) and \( x^c \), as a supplement to the already much-discussed iteration of \( c^x \)?

We would expect the methods to yield the following results. Let us denote "the" Abel function of \( f \) by \( f^\ast \); it is the inverse of the function \( t\mapsto f^{\circ t}(x_0) \) and is determined up to an additive constant. Vice versa, \( f^{\circ t}(x)={f^\ast}^{-1}(f^\ast(x)+t) \). (A quick numerical sanity check of the three claims follows the list.)
  1. \( f(x)=x+c \), \( f^{\circ t}(x)=x+ct \), \( f^\ast(s)=\frac{s}{c} \).
  2. \( f(x)=xc \), \( f^{\circ t}(x)=xc^t \), \( f^\ast(s)=\log_c s \).
  3. \( f(x)=x^c \), \( f^{\circ t}(x)=x^{c^t} \), \( f^\ast(s)=\log_c(\ln(s)) \).
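As a quick sanity check of these three formulas, here is a minimal numerical sketch (the values of \( c \), \( x \), \( s \), \( t \) are arbitrary test choices of mine): all three expected iterates satisfy \( f^{\circ s}(f^{\circ t}(x))=f^{\circ(s+t)}(x) \).

Code:
import math

c, x, s, t = 1.7, 2.3, 0.4, 1.1

it_add = lambda x, t: x + c*t       # case 1: f(x) = x + c
it_mul = lambda x, t: x * c**t      # case 2: f(x) = x*c
it_pow = lambda x, t: x**(c**t)     # case 3: f(x) = x^c

for it in (it_add, it_mul, it_pow):
    assert math.isclose(it(it(x, t), s), it(x, s + t))
print("f^s o f^t = f^(s+t) holds numerically in all three cases")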

Let us start in this post with the investigation of the simplest case 1, \( f(x)=x+c \).
This has no fixed points for \( c\neq 0 \), so we apply the natural Abel function method and the matrix operator method. In both cases we need the Carleman matrix, which has in its \( n \)-th column the coefficients of the \( n \)-th power of \( f \). The \( n \)-th power is \( f(x)^n=(x+c)^n =\sum_{k=0}^n \binom{n}{k} c^{n-k} x^k \), so the \( N \)-truncated Carleman matrix is \( F_{k,n}=\binom{n}{k}c^{n-k} \), \( 0\le k,n\le N \).
For example with \( N=3 \):
\( F=\begin{pmatrix} 1 & c & c^2 & c^3\\ 0 & 1 & 2c & 3c^2\\ 0 & 0 & 1 & 3c\\ 0 & 0 & 0 & 1\end{pmatrix} \).
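For readers who want to experiment, a minimal sketch in Python building this truncated matrix (the helper name carleman_add is my own, and numpy/math are the only dependencies):

Code:
import numpy as np
from math import comb

def carleman_add(c, N):
    """(N+1)x(N+1) matrix of f(x) = x + c with F[k,n] = binom(n,k) * c^(n-k)."""
    return np.array([[comb(n, k) * c**(n - k) if k <= n else 0.0
                      for n in range(N + 1)] for k in range(N + 1)])

print(carleman_add(2.0, 3))   # reproduces the 4x4 example above, with c = 2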

The natural Abel method
Subtract the identity matrix from \( F \), then remove the first column and the last row; this leaves an \( N\times N \) matrix \( G \). For \( N=3 \):
\( G=\begin{pmatrix} c & c^2 & c^3\\ 0 & 2c & 3c^2\\ 0 & 0 & 3c\end{pmatrix} \). To retrieve the natural Abel function we have to solve the equation system \( G (f^{\ast}_1,f^{\ast}_2,\dots,f^{\ast}_N)^T = (1,0,\dots,0)^T \). By back-substitution we easily see that \( f^{\ast}_2=\dots=f^{\ast}_N=0 \) and \( f_1^\ast = \frac{1}{c} \). Hence \( f^\ast(s) = \frac{s}{c} \) plus a constant \( f^\ast_0 \), and we have verified our claim, as this holds for arbitrarily large \( N \).
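The same computation numerically (a sketch reusing the hypothetical carleman_add from above): subtract the identity, chop, solve, and the solution vector is \( (1/c,0,\dots,0) \) as claimed.

Code:
import numpy as np

c, N = 2.0, 8
F = carleman_add(c, N)
G = (F - np.eye(N + 1))[:N, 1:]              # F - I, first column and last row removed
fstar = np.linalg.solve(G, np.eye(N)[:, 0])  # solve G f* = (1, 0, ..., 0)^T
print(np.round(fstar, 12))                   # (0.5, 0, 0, ...) = (1/c, 0, ...)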

The matrix operator method
To compute \( F^t \) we can simply apply here
\( F^t=(F-I+I)^t=\sum_{k=0}^\infty \binom{t}{k}(F-I)^k \). The first (not the 0th) column of \( F^t \) then contains the coefficients of \( f^{\circ t} \). But the first column of \( (F-I)^k \) has all entries 0 for \( k\ge 2 \). Hence only two terms of the above infinite sum contribute to the first column, namely \( \binom{t}{0}(F-I)^0=I \) and \( \binom{t}{1}(F-I)=t(F-I) \). The first column of the former is \( (0,1,0,\dots)^T \) and that of the latter is \( t(c,0,\dots)^T \). Hence \( ({f^{\circ t}}_0,{f^{\circ t}}_1,\dots)=(tc,1,0,\dots) \), and so \( f^{\circ t}(x)=tc+x \), fulfilling our claim too.
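The same computation as code (a sketch; since \( F-I \) is nilpotent for the truncated matrix, the binomial series is a finite sum and can be evaluated exactly):

Code:
import numpy as np

c, N, t = 2.0, 3, 0.5
F = carleman_add(c, N)                   # hypothetical helper from the sketch above
M = F - np.eye(N + 1)                    # nilpotent: M^k = 0 for k > N

Ft = np.zeros_like(F)
term = np.eye(N + 1)                     # holds binom(t,k) * M^k
for k in range(N + 1):
    Ft += term
    term = term @ M * (t - k) / (k + 1)  # binom(t,k+1) = binom(t,k)*(t-k)/(k+1)

print(Ft[:, 1])                          # coefficients of f^t: (t*c, 1, 0, ...)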

Hopefully I will find time to continue with cases 2 and 3.
#2
You may also have a look at my operators article; I discussed the three operations in connection with the matrix method in a few lines on the second-to-last page. See here: http://math.eretrandre.org/tetrationforu...hp?aid=124

Hmm, I don't know; it seems something is broken with the pdf attachments. Here is an external link

[update] Ahhh... and the example with the Carleman matrix is so simple and short that even I got the idea now. [/update]

Gottfried


Attached Files
.pdf   operators.pdf (Size: 87.4 KB)
Gottfried Helms, Kassel
#3
@Henryk

Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?

Andrew Robbins
#4
Gottfried Wrote:You may also have a look at my operators-article... I discussed the three operations in connection with the matrix-method in few lines at the second last page. See here:http://math.eretrandre.org/tetrationforu...hp?aid=124

hmm. Don't know, seems, there is something broken with the pdf-attachments. Here is an external
link
The link works well. You write in this article:

Quote:The eigensystem of P is degenerate; but it has an exceptional simple matrix-logarithm, by which then a general power can be easily computed when just multiplied with the h-parameter.

There is even a general method that works for every matrix, via the Jordan normal form. However, it suffices to use a more relaxed form,
where the blocks of the Jordan normal form are upper triangular matrices with the eigenvalue \( \lambda \) on the diagonal (instead of having the eigenvalue on the diagonal and 1 on the superdiagonal; see Wikipedia). We can take the \( t \)-th power of such a block \( B \) by the formula \( B^t = \lambda^t(\frac{1}{\lambda}B-I+I)^t =\lambda^t \sum_{k=0}^\infty \binom{t}{k}(\frac{1}{\lambda}B-I)^k \). Since \( \frac{1}{\lambda}B-I \) is nilpotent, this sum is actually finite, so no limits are involved for the entries. And the \( t \)-th power of the whole matrix, which consists of several such blocks on the diagonal, is simply the \( t \)-th power of each block on the diagonal.

The matrix in the case considered above consisted of just one such block, with eigenvalue \( \lambda=1 \).
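A minimal sketch of this block-power formula in Python (the triangular test block with constant diagonal \( \lambda=2 \) is an arbitrary example of mine):

Code:
import numpy as np

def block_power(B, lam, t):
    """t-th power of a triangular block with constant eigenvalue lam,
    via B^t = lam^t * sum_k binom(t,k) (B/lam - I)^k, a finite sum."""
    n = B.shape[0]
    M = B / lam - np.eye(n)              # nilpotent: M^k = 0 for k >= n
    out, term = np.zeros_like(B), np.eye(n)
    for k in range(n):
        out += term
        term = term @ M * (t - k) / (k + 1)
    return lam**t * out

B = np.array([[2.0, 1.0, 0.5],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 2.0]])
half = block_power(B, 2.0, 0.5)
print(np.allclose(half @ half, B))       # True: the half power squares back to B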

andydude Wrote:Do you consider parabolic/hyperbolic iteration (and Daniel Geisler's methods) to be part of regular iteration?
Yes, I consider both to be regular iteration (though, when handled directly, they require different treatment), but both coincide with the matrix operator method if developed at the fixed point.
#5
Let us continue with the second case, \( f(x)=xc \), \( c>0 \), \( c\neq 1 \), with the expected iterate \( f^{\circ t}(x)=xc^t \) and the expected Abel function \( f^\ast(s)=\log_c(s) \).

This function has one (and only one) fixed point, at 0, so here we can directly apply regular iteration (which we could not for our previous function \( f(x)=x+c \), which had no fixed points) and the matrix operator method. For the natural Abel function, however, we must consider a development at a non-fixed point.

Iterative regular iteration
We compute the principal Schroeder function by
\( \sigma(x)=\lim_{n\to\infty} \frac{f^{\circ n}(x)}{c^n} = \lim_{n\to\infty} \frac{c^nx}{c^n} = x \).
The principal Abel function is \( \alpha(x)=\log_c(\sigma(x))=\log_c(x) \).
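A trivial numeric illustration of this limit (a sketch; \( c \) and \( x \) are arbitrary choices):

Code:
c, x = 0.5, 3.0
fn = x
for n in range(1, 6):
    fn = c * fn              # f applied n times
    print(n, fn / c**n)      # Schroeder approximants f^n(x)/c^n: all exactly x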

Matrix operator method/formal powerseries iteration
The Bell matrix \( F \) of \( f(x)=xc \) has in its \( i \)-th row the coefficients of the \( i \)-th power of \( f \), i.e. \( f(x)^i=(xc)^i=c^ix^i \), which means it consists of 0's except on the diagonal, where it has the entries \( 1,c^1,c^2,\dots \):
\( F=\begin{pmatrix} 1 & 0 & 0 & \dots \\ 0 & c & 0 & \dots \\ 0 & 0 & c^2 & \dots \\ \vdots & & & \ddots \end{pmatrix} \).

This is already in diagonal form, and the \( t \)-th power of \( F \) has \( 1,c^t, c^{2t}, c^{3t},\dots \) on the diagonal and 0 elsewhere. The first row contains the coefficients of the \( t \)-th iterate, so \( f^{\circ t}(x)=c^t x \), in accordance with our claim.

Natural Abel method
First we have to choose a non-fixed point of development, say \( a \). Then we form the conjugate \( g(x)=f(x+a)-a=(x+a)c-a=xc+a(c-1) \). Knowing that \( g^\ast(x)=f^\ast(x+a) \), we would expect \( g^\ast(x)=\log_c(x+a) \).

For simplification write \( g(x)=xc+d, d=a(c-1) \).
The powers of \( g \) are \( g(x)^n=\sum_{i=0}^n \binom{n}{i} d^{n-i}c^i x^i \), hence the Carleman matrix \( G \) of \( g \) has the entries \( G_{i,n}=\binom{n}{i} d^{n-i}c^i \); this is an upper triangular matrix. Truncated to 4x4:
\( G=\begin{pmatrix}1 & d^1 & d^2 & d^3 \\ 0 & c^1 & 2cd & 3cd^2 \\ 0 & 0 & c^2 & 3c^2d \\ 0 & 0 & 0 & c^3\end{pmatrix} \). Subtracting the identity matrix and then removing the first column and the last row, we have to solve:
\( \begin{pmatrix}d^1 & d^2 & d^3 \\ c^1-1 & 2cd & 3cd^2 \\ 0 & c^2-1 & 3c^2d\end{pmatrix}\begin{pmatrix}g^\ast_1\\ g^\ast_2\\ g^\ast_3\end{pmatrix}=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix} \)

For the moment let the development point be \( a=1 \), so that \( d=c-1 \). We can transform the above equation system line by line into a triangular one, first subtracting the first line from the second:
\( \underbrace{\begin{pmatrix}d^1 & d^2 & d^3 \\ 0 & 2cd-d^2 & 3cd^2-d^3 \\ 0 & c^2-1 & 3c^2d\end{pmatrix}}_{H'}\begin{pmatrix}g^\ast_1\\ g^\ast_2\\ g^\ast_3\end{pmatrix}=\begin{pmatrix}1\\ -1\\ 0\end{pmatrix} \)

Then \( H'_{2,2}=2cd-d^2=2c(c-1)-(c-1)^2=c^2-1 \), and we subtract the second line from the third:

\( \underbrace{\begin{pmatrix}d^1 & d^2 & d^3 \\ 0 & 2cd-d^2 & 3cd^2-d^3 \\ 0 & 0 & 3c^2d-(3cd^2-d^3)\end{pmatrix}}_{H''}\begin{pmatrix}g^\ast_1\\ g^\ast_2\\ g^\ast_3\end{pmatrix}=\begin{pmatrix}1\\ -1\\ 1\end{pmatrix} \)

This scheme can be continued arbitrarily; we can next subtract the third line from the fourth, because \( H''_{3,3}=c^3-1 \), and so on.
Generally this gives a triangular \( N\times N \) matrix
\( H^{(N-1)}_{k,n} = \sum_{i=0}^{k-1} (-1)^{k+i+1}\binom{n}{i} d^{n-i} c^i,\quad k\le n,\quad 1\le k,n\le N \).

Hm, does the solution of this system converge to \( \log_c(1+x) \)? More specifically, is \( \lim_{N\to\infty} g^\ast_k = \frac{-(-1)^k}{k} \) for \( c=e \)? Does it converge at all? My numerical computations show that it is roughly (up to \( 10^{-3} \) precision) the natural logarithm \( \ln(1+x) \). However it does not really seem to converge, or is it just too slow?
Perhaps one has to derive it from the formulas without numerics in some quiet hours.
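Here is a sketch of that numerical experiment (assuming \( c=e \), \( a=1 \), hence \( d=e-1 \); natural_abel is my own helper name), comparing the truncated solutions with the coefficients \( -(-1)^k/k \) of \( \ln(1+x) \):

Code:
import numpy as np
from math import comb, e

def natural_abel(c, d, N):
    # Carleman matrix of g(x) = x*c + d, column convention as in post #1
    G = np.array([[comb(n, i) * d**(n - i) * c**i if i <= n else 0.0
                   for n in range(N + 1)] for i in range(N + 1)])
    A = (G - np.eye(N + 1))[:N, 1:]      # subtract I, chop first column and last row
    return np.linalg.solve(A, np.eye(N)[:, 0])

target = lambda k: -(-1.0)**k / k        # coefficients of log(1+x)
for N in (8, 16, 32):
    g = natural_abel(e, e - 1, N)
    print(N, [abs(g[k - 1] - target(k)) for k in (1, 2, 3)])
    # how do the first coefficients behave as N grows? the post reports ~1e-3 agreement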
#6
I would not fret if it converges slowly.

For example, the slog solution with (E-I)*f=1 converges very slowly, with terms being inaccurate by 10% or more after perhaps the first quarter or third of the series.

For example, for the 200x200 solution, only the first 75 or so terms are accurate to within 20%, only 60 or so to within 10%. And only the first 35 or so terms are accurate to within 1%. The first 20 or so are accurate to within 0.1%.

And after about the 90th or 100th term, it's essentially garbage. The purpose of those 110 garbage terms is to make sure that the first 90 not-quite-garbage terms still give us a decent solution.

For a 400x400 system, those numbers roughly double (a little less in fact), giving roughly 65-70 terms to within 1%, and about 100 terms to within 10%. For a 600x600 system, we reach 1% inaccuracies already by the 80th term, and 10% by about 140 terms.


Since the singularities for the slog are logarithmic to first approximation, I'd expect a similar result for the natural logarithm.
~ Jay Daniel Fox
#7
jaydfox Wrote:I would not fret if it converges slowly.

Me neither; it rather seems very promising, looking for example at the right-hand side \( \pm 1 \), as if this were a hint at the alternating coefficients of \( \log(1+x) \) ;)
However, actually calculating the limit is not that easy. Hopefully one of us on the forum will settle it some time.
#8
You are confusing Bell matrices and Carleman matrices again. The (generalized) Bell matrix of \( f(x) = cx+d \) is
\( \begin{pmatrix}
1 & d & d^2 \\
0 & c & 2cd \\
0 & 0 & c^2
\end{pmatrix} \)
whereas the Carleman matrix is
\( \begin{pmatrix}
1 & 0 & 0 \\
d & c & 0 \\
d^2 & 2cd & c^2
\end{pmatrix} \)
All the other stuff is right, though, and interesting. I like your notation, but I'm having trouble following your series notation.

I think the example of \( f(x) = cx+d \) is an interesting one, and I intend to look into it, but first, for the purposes of illustration, your simplest example is the nicest, namely addition, \( f(x) = x+d \). Taking the Bell matrix:
\( \mathbf{B}_x[x + d] = \begin{pmatrix}
1 & d & d^2 & d^3 \\
0 & 1 & 2d & 3d^2 \\
0 & 0 & 1 & 3d \\
0 & 0 & 0 & 1
\end{pmatrix} \)
and subtracting the identity matrix:
\( \mathbf{B}_x[x + d] - \mathbf{I}= \begin{pmatrix}
0 & d & d^2 & d^3 \\
0 & 0 & 2d & 3d^2 \\
0 & 0 & 0 & 3d \\
0 & 0 & 0 & 0
\end{pmatrix} \)
gives a non-invertible matrix, so doing the choppy thing gives:
\( \mathbf{C}(\mathbf{B}_x[x + d] - \mathbf{I})\mathbf{D}= \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}\begin{pmatrix}
0 & d & d^2 & d^3 \\
0 & 0 & 2d & 3d^2 \\
0 & 0 & 0 & 3d \\
0 & 0 & 0 & 0
\end{pmatrix}\begin{pmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix} = \begin{pmatrix}
d & d^2 & d^3 \\
0 & 2d & 3d^2 \\
0 & 0 & 3d
\end{pmatrix} \)
putting this in the matrix equation gives:
\( \begin{pmatrix}
d & d^2 & d^3 \\
0 & 2d & 3d^2 \\
0 & 0 & 3d
\end{pmatrix}\begin{pmatrix}
g_1 \\
g_2 \\
g_3
\end{pmatrix} = \begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix} \)
and since the matrix is invertible now, we can multiply both sides by its inverse:
\( \begin{pmatrix}
g_1 \\
g_2 \\
g_3
\end{pmatrix} = \begin{pmatrix}
\frac{1}{d} & -\frac{1}{2} & \frac{d}{6} \\
0 & \frac{1}{2d} & -\frac{1}{2} \\
0 & 0 & \frac{1}{3d}
\end{pmatrix}\begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix} = \begin{pmatrix}
1/d \\
0 \\
0
\end{pmatrix} \)
This gives the natural Abel function \( A_f(x) = f^{*}(x) = x/d \) (choosing the constant term to be zero) which satisfies the equation \( \frac{x+d}{d} = \frac{x}{d} + 1 \).

I like that example :) I know you already talked about it, but I wanted to show a matrix-based way of doing the chopping process.

So in general, the matrix equation \( (\mathbf{C}(\mathbf{B}[f] - \mathbf{I})\mathbf{D}) \mathbf{a} = \mathbf{b} \) would give the coefficients \( \mathbf{a} \) of the natural Abel function of f.
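A sketch of this chopping in code (numpy's rectangular identity matrices play the roles of \( \mathbf{C} \) and \( \mathbf{D} \); the example values are mine):

Code:
import numpy as np
from math import comb

d, N = 2.0, 4                            # 4x4 Bell matrix of x + d, as above
B = np.array([[comb(n, k) * d**(n - k) if k <= n else 0.0
               for n in range(N)] for k in range(N)])
C = np.eye(N - 1, N)                     # (N-1) x N: deletes the last row
D = np.eye(N, N - 1, k=-1)               # N x (N-1): deletes the first column
A = C @ (B - np.eye(N)) @ D
a = np.linalg.solve(A, np.eye(N - 1)[:, 0])
print(a)                                 # (1/d, 0, 0): the Abel function x/d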

Andrew Robbins
#9
andydude Wrote:\( \begin{pmatrix}
g_1 \\
g_2 \\
g_3
\end{pmatrix} = \begin{pmatrix}
\frac{1}{d} & -\frac{1}{2} & \frac{d}{6} \\
0 & \frac{1}{2d} & -\frac{1}{2} \\
0 & 0 & \frac{1}{3d}
\end{pmatrix} \)

Hmm, nice coincidence here. Did you notice that the fractional entries are just the Bernoulli numbers, or rather binomially weighted Bernoulli numbers? It would be more obvious if your matrix were of bigger size (but I don't know which iterated function that would require). When I was fiddling with the Pascal matrix last year I found that the inverse of the shifted Pascal matrix (where, before shifting, the diagonal is removed) is just a matrix containing the Bernoulli numbers, as found by Faulhaber and Jacob Bernoulli, giving (with a little modification) the coefficients of the Bernoulli polynomials. I found that it is part of the eigensystem of the *column-signed* Pascal matrix. Perhaps you would like to have a look at my small treatise on this at p-matrix
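A quick numerical check of this observation (a sketch; the Bernoulli convention used is \( B_1=-\tfrac12 \)): the first row of the inverse of the chopped matrix for \( x+d \) appears to be exactly \( B_{n-1}\,d^{n-2} \), \( n=1,2,\dots \)

Code:
import numpy as np
from fractions import Fraction
from math import comb

def bernoulli(m):                        # B_0 .. B_{m-1} via the usual recurrence
    B = [Fraction(1)]
    for n in range(1, m):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1))
    return B

d, N = 2.0, 7
Bell = np.array([[comb(n, k) * d**(n - k) if k <= n else 0.0
                  for n in range(N + 1)] for k in range(N + 1)])
A = (Bell - np.eye(N + 1))[:N, 1:]       # the chopped matrix from post #8
print(np.linalg.inv(A)[0])               # first row of the inverse
print([float(b) * d**(n - 1) for n, b in enumerate(bernoulli(N))])  # B_n * d^(n-1)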

Things converge...

Gottfried
Gottfried Helms, Kassel
#10
andydude Wrote:You are confusing Bell matrices and Carleman matrices again.
No, I consistently swapped them till now! :P

So, for everyone, the correct usage: the \( n \)-th row of the Carleman matrix contains the coefficients of the \( n \)-th power of the power series, and the \( n \)-th column of the Bell matrix contains the coefficients of the \( n \)-th power of the power series. The Bell and Carleman matrices are transposes of each other.
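A minimal sketch making the convention concrete for \( f(x)=cx+d \) (using that the coefficient of \( x^k \) in \( f(x)^n \) is \( \binom{n}{k}c^k d^{n-k} \); the test values are mine):

Code:
import numpy as np
from math import comb

c, d, N = 3.0, 2.0, 3
coeff = lambda n, k: comb(n, k) * c**k * d**(n - k) if k <= n else 0.0  # [x^k] f(x)^n
bell     = np.array([[coeff(n, k) for n in range(N + 1)] for k in range(N + 1)])
carleman = np.array([[coeff(n, k) for k in range(N + 1)] for n in range(N + 1)])
print(np.allclose(carleman, bell.T))     # True: they are transposes of each other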

Quote:doing the choppy thing gives:
\( \mathbf{C}(\mathbf{B}_x[x + d] - \mathbf{I})\mathbf{D}= \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}\begin{pmatrix}
0 & d & d^2 & d^3 \\
0 & 0 & 2d & 3d^2 \\
0 & 0 & 0 & 3d \\
0 & 0 & 0 & 0
\end{pmatrix}\begin{pmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix} = \begin{pmatrix}
d & d^2 & d^3 \\
0 & 2d & 3d^2 \\
0 & 0 & 3d
\end{pmatrix} \)
...
I wanted to show a matrix-based way of doing the chopping process.

Thanks for this.

