Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function, \( M[f] = M[\sigma_f^{-1}] \cdot D \cdot M[\sigma_f] \), the diagonal matrix contains the eigenvalues, i.e. the powers of \( f_1 = f'(0) \), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. This got me thinking: does the eigensystem decomposition (or matrix diagonalization) produce a regular Schroeder function?
It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness. You could certainly find an eigensystem decomposition such that the diagonalizing matrix was the Koch/Bell/Carleman matrix of the principal Schroeder function, but I was wondering how to do so. Turning to Mathematica, I found the nice function Eigensystem[] for breaking down a matrix into its eigenvalues and eigenvectors, so I thought I'd give it a try.
The eigenvectors returned by Eigensystem[] form the columns of the diagonalizing matrix \( P \) in \( PDP^{-1} \), but the Schroeder function matrix satisfies \( S^{-1}DS \), so even though it returns:
\(
\left[
\begin{array}{cccc}
1&0&0&0 \\
0&1& \frac{f_2}{(f_1-1)f_1} & \frac{2f_2^2+(f_1-1)f_1f_3}{(f_1-1)^2f_1^2(f_1+1)} \\
0&0&1& \frac{2f_2}{(f_1-1)f_1} \\
0&0&0&1
\end{array}
\right]
\)
this is actually \( P \), which means \( S = P^{-1} \) is actually:
\(
\left[
\begin{array}{cccc}
1&0&0&0 \\
0&1& \frac{f_2}{(1-f_1)f_1} & \frac{2f_2^2-(f_1-1)f_3}{(f_1-1)^2f_1(f_1+1)} \\
0&0&1& \frac{2f_2}{(1-f_1)f_1} \\
0&0&0&1
\end{array}
\right]
\)
using a degree 3 approximation to the infinite matrices. Using the property that the series coefficients are just the 1st row (as opposed to the 0th row), the corresponding Schroeder function is:
\( \sigma_f(x) = x + x^2 \frac{f_2}{(1-f_1)f_1} + x^3 \frac{2f_2^2-(f_1-1)f_3}{(f_1-1)^2f_1(f_1+1)} + \cdots \)
where f is of the form \( f(x) = \sum_{k=1}^{\infty}f_kx^k \). Now what I find interesting is that the definition of the principal Schroeder function states that \( \sigma_f(x) = \lim_{n\rightarrow\infty}\frac{f^{\circ n}(x)}{f_1^n} \) which implies:
\(
\begin{array}{rl}
\frac{f^{\circ n}(x)}{f_1^n}
& = x + x^2\frac{f_2}{f_1}\sum_{k=0}^{n-1} f_1^k + \cdots\\
& = x + x^2\frac{f_2 (1-f_1^n)}{f_1 (1-f_1)} + \cdots
\end{array}
\)
and in the limit (assuming \( |f_1|<1 \)):
\( \sigma_f(x) = x + x^2 \frac{f_2}{(1-f_1)f_1} + \cdots \)
which is the same Schroeder function that Eigensystem[] returned. From this I can clearly see that the limit definition of the Schroeder function actually makes sense; it didn't make sense to me before now. Fortunately, my understanding of Koch/Bell/Carleman matrices gave me another way to get to the same function.
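The whole chain can be checked numerically. Below is a minimal sketch in Python with NumPy rather than Mathematica; the helper `carleman` and the test function \( f(x) = x/2 + x^2/4 \) (so \( f_1 = 1/2, f_2 = 1/4 \)) are my own choices for illustration. It builds a truncated Koch/Bell/Carleman matrix, diagonalizes it, rescales the eigenvector columns to have a unit diagonal, and compares the 1st row of \( S = P^{-1} \) against both the closed-form coefficient \( f_2/((1-f_1)f_1) \) and the limit \( f^{\circ n}(x)/f_1^n \):

```python
import numpy as np

# Truncated Carleman matrix: M[n, k] = coefficient of x^k in f(x)^n,
# for f given by its series coefficients [f0, f1, f2, ...].
# (Helper name and truncation scheme are my own.)
def carleman(coeffs, size):
    p = np.zeros(size)
    p[:min(size, len(coeffs))] = coeffs[:size]
    M = np.zeros((size, size))
    row = np.zeros(size)
    row[0] = 1.0                           # f(x)^0 = 1
    M[0] = row
    for n in range(1, size):
        new = np.zeros(size)
        for i in range(size):
            for j in range(size - i):      # truncate products at degree size-1
                new[i + j] += row[i] * p[j]
        row = new
        M[n] = row                         # row n holds the series of f(x)^n
    return M

f1, f2 = 0.5, 0.25                         # test function f(x) = x/2 + x^2/4
M = carleman([0.0, f1, f2], 4)

vals, vecs = np.linalg.eig(M)
vals, vecs = vals.real, vecs.real          # triangular matrix: all eigenvalues real
order = np.argsort(-vals)                  # sort eigenvalues as 1, f1, f1^2, f1^3
vals, P = vals[order], vecs[:, order]
P = P / np.diag(P)                         # rescale columns to a unit diagonal
S = np.linalg.inv(P)                       # then P = M[sigma_f^{-1}], S = M[sigma_f]

print(vals)                                # powers of f1: [1, 0.5, 0.25, 0.125]
print(S[1, 2])                             # ~ f2/((1-f1) f1) = 1.0

# Cross-check against the limit definition sigma_f(x) = lim f^{on}(x)/f1^n
x = y = 0.1
for n in range(60):
    y = f1 * y + f2 * y * y
print(y / f1 ** 60)                        # ~ 0.1107, matching the series at x = 0.1
```

The rescaling step is what turns the convention question above into a definite answer: with distinct eigenvalues and a unit diagonal, the eigenvector matrix is pinned down uniquely, and it comes out equal to the Carleman matrix of the inverse principal Schroeder function.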
While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder functions is \( \sigma_f(x) = f_1^{\alpha_f(x)} \), which means the inverse relationship is:
\(
\begin{array}{rl}
\sigma_f(x) & = (f_1)^{\alpha_f(x)} \\
\sigma_f(\alpha_f^{-1}(x)) & = (f_1)^{x} \\
\alpha_f^{-1}(x) & = \sigma_f^{-1}\left((f_1)^{x}\right) \\
\end{array}
\)
and replacing f with \( DE_h(x) = h^x-1 \), we get
\(
\begin{array}{rl}
\alpha_{DE}^{-1}(x)
& = \sigma_{DE}^{-1}\left(\ln(h)^{x}\right) \\
& = \ln(h)^x + (\ln(h)^x)^2 \frac{\ln(h)}{2(\ln(h)-1)} + (\ln(h)^x)^3 \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots \\
& = e^{x\ln(\ln(h))} + e^{2x\ln(\ln(h))} \frac{\ln(h)}{2(\ln(h)-1)} + e^{3x\ln(\ln(h))} \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots
\end{array}
\)
because \( DE_h'(0) = \ln(h) \), and because the matrix P represents the inverse Schroeder function. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!
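This expansion is easy to check numerically via the Abel property \( DE_h(\alpha^{-1}(x)) = \alpha^{-1}(x+1) \). A quick sketch in plain Python (the base \( h = 2 \) and the starting point \( x = 10 \) are arbitrary choices of mine, picked so that \( |\ln h| < 1 \) and \( \ln(h)^x \) is small enough for the degree-3 truncation to be accurate):

```python
import math

h = 2.0
L = math.log(h)                  # DE_h'(0) = ln(h), about 0.693 < 1
c2 = L / (2 * (L - 1))
c3 = L**2 * (L + 2) / (6 * (L - 1)**2 * (L + 1))

def abel_inv(x):
    """Degree-3 truncation of alpha_DE^{-1}(x) = sigma_DE^{-1}(ln(h)^x)."""
    t = L**x                     # = e^{x ln(ln h)}, small for large x when L < 1
    return t + c2 * t**2 + c3 * t**3

u, v = abel_inv(10.0), abel_inv(11.0)
print(h**u - 1.0, v)             # both ~ 0.0174: DE_h(alpha^{-1}(x)) = alpha^{-1}(x+1)
```

Note that, as claimed, no matrix inversion is needed here: the three coefficients are read straight off the P side of the decomposition.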
Andrew Robbins