regular slog
#1
Let us determine the regular super logarithm \( \text{rslog}_b \) of \( b^x \), \( 1<b<\eta=e^{1/e} \), at the lower fixed point \( a \). Regular super logarithm shall mean that it satisfies
(1) \( \text{rslog}_b(1)=0 \)
(2) \( \text{rslog}_b(b^x)=\text{rslog}_b(x)+1 \)
and that
(3) \( \text{rslog}_b^{-1}(\text{rslog}_b(x)+t)=\exp_b^{\circ t}(x) \) where the right side is the regular iteration of \( \exp_b \) at the fixed point \( a \).

Then the formula for the principal Abel function is:

\( \alpha_b(x)=\lim_{n\to\infty} \log_{\ln(a)}(a-\exp_b^{\circ n}(x))-n \)

and that for the regular super logarithm:

\( \text{rslog}_b(x)= \alpha_b(x) - \alpha_b(1) \)
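For concreteness, here is a minimal numerical sketch of these two formulas in Python (my own illustration, not from the original post): for \( b=\sqrt{2} \) the lower fixed point is \( a=2 \), and the iteration count n trades truncation error against cancellation in \( a-\exp_b^{\circ n}(x) \).

```python
import math

def principal_abel(x, b=math.sqrt(2), a=2.0, n=50):
    """alpha_b(x) = lim_{n->oo} log_{ln a}(a - exp_b^n(x)) - n.
    Assumes 1 < b < e^(1/e) and x below the lower fixed point a."""
    s = math.log(a)                  # slope of exp_b at a, here ln(2) < 1
    for _ in range(n):
        x = b ** x                   # exp_b applied n times; x climbs toward a
    return math.log(a - x) / math.log(s) - n

def rslog(x, b=math.sqrt(2), a=2.0):
    """Regular super logarithm, normalized so that rslog(1) = 0."""
    return principal_abel(x, b, a) - principal_abel(1.0, b, a)

print(rslog(1.0))              # ~0 by normalization
print(rslog(math.sqrt(2)))     # ~1, since rslog_b(b^1) = rslog_b(1) + 1
```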

Graph of \( \text{rslog}_{\sqrt{2}} \): [image not preserved]

Proof:

To do this we first compute the regular Schroeder function (note that a Schroeder function is determined only up to a multiplicative constant, and an Abel function only up to an additive constant). A Schroeder function \( \sigma \) of a function \( f \) is a function that satisfies the Schroeder equation

\( \sigma(f(x))=s\sigma(x) \)

We see that we can derive a solution \( \alpha \) of the Abel equation

\( \alpha(f(x))=\alpha(x)+1 \)

by setting \( \alpha(x)=\log_s(\sigma(x)) \).
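Indeed, the Abel equation follows in one line:

\( \alpha(f(x))=\log_s(\sigma(f(x)))=\log_s(s\,\sigma(x))=1+\log_s(\sigma(x))=\alpha(x)+1 \)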

Now there is the so-called principal Schroeder function \( \sigma_f \) of a function \( f \) with fixed point 0 and slope \( s:=f'(0) \), \( 0<s<1 \), given by:

\( \sigma_f(x) = \lim_{n\to\infty} \frac{f^{\circ n}(x)}{s^n} \)

In particular, this function yields the regular iteration at 0, via \( f^{\circ t}(x)=\sigma_f^{-1}(s^t\sigma_f(x)) \).

To transfer the Schroeder equation to the lower fixed point \( a \) of \( \exp_b \) we consider
\( f(x)=a-b^{a-x} \), which has fixed point 0 and the same slope \( s=\exp_b'(a)=\ln(b)\exp_b(a)=\ln(b)\,a=\ln(b^a)=\ln(a)<1 \). Let \( \rho(x)=a-x \); then \( f=\rho\circ\exp_b\circ\rho=\rho^{-1}\circ\exp_b\circ\rho \), because \( \rho \) is an involution.

\( f^{\circ t}(x)=\sigma_f^{-1}(s^t\sigma_f(x)) \).
\( \exp_b^{\circ t}=(\rho\circ f \circ \rho^{-1})^{\circ t}=\rho\circ f^{\circ t}\circ \rho^{-1}=\rho\circ \sigma_f^{-1}\circ \mu_{s^t}\circ \sigma_f\circ \rho^{-1} \), where \( \mu_c \) denotes multiplication by \( c \), i.e. \( \mu_c(x)=cx \).

Hence \( \sigma_f\circ\rho^{-1} \) is the principal Schroeder function of \( \exp_b \) at \( a \).

To get the principal Abel function we take the logarithm to base \( s \):
\( \sigma_f\circ\rho^{-1}(x)=\sigma_f(a-x)=\lim_{n\to\infty} \frac{f^{\circ n}(a-x)}{s^n}=\lim_{n\to\infty} \frac{a-\exp_b^{\circ n}(a-(a-x))}{s^n}=\lim_{n\to\infty} \frac{a-\exp_b^{\circ n}(x)}{s^n} \)
\( \alpha_b(x)=\log_s(\sigma_f\circ\rho^{-1}(x))=\lim_{n\to\infty}\log_s(a-\exp_b^{\circ n}(x))-n \).
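To make this concrete, a numerical sketch (my own illustration) of the regular iterate \( \exp_b^{\circ t}=\rho\circ\sigma_f^{-1}\circ\mu_{s^t}\circ\sigma_f\circ\rho^{-1} \): for large \( m \) we have \( \sigma_f\approx f^{\circ m}/s^m \) and \( \sigma_f^{-1}(w)\approx f^{\circ -m}(s^m w) \) near 0, which collapses the whole conjugation to \( \exp_b^{\circ t}\approx \log_b^{\circ m}\circ\rho\circ\mu_{s^t}\circ\rho\circ\exp_b^{\circ m} \).

```python
import math

def exp_b_t(x, t, b=math.sqrt(2), a=2.0, m=40):
    """Regular iterate exp_b^t(x) at the lower fixed point a (sketch)."""
    s = math.log(a)                  # multiplier s = ln(a) < 1
    y = x
    for _ in range(m):               # push x close to a with exp_b^m
        y = b ** y
    w = a - s ** t * (a - y)         # scale the Schroeder coordinate by s^t
    for _ in range(m):               # pull back out with m logarithms
        w = math.log(w) / math.log(b)
    return w

print(exp_b_t(1.0, 1.0))                   # ~sqrt(2) = exp_b(1)
print(exp_b_t(exp_b_t(1.0, 0.5), 0.5))     # ~sqrt(2): half-iterate twice
```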
#2
For repelling fixed points, we can compute the Abel function of the inverse function, which has attracting fixed points:
\( \beta(f^{-1}(x))=\beta(x)+1 \)
and then it is clear that \( \alpha(x)=-\beta(x) \) is an Abel function of the original function \( f \), because
\( -\beta(f(x))=-\beta(x)+1 \) iff \( \beta(x)=\beta(f(x))+1 \) iff \( \beta(f^{-1}(y))=\beta(y)+1 \).

So for repelling fixed points \( a \) we get the formula:

\( \alpha_{b,a}(x)=\lim_{n\to\infty} n-\log_{1/\log(a)}(a-\log_b^{\circ n}(x)) \)

which works for arbitrary repelling complex fixed points and arbitrary \( b>1 \), as long as we choose a branch of the involved logarithm such that \( \log_b^{\circ n}\to a \).
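This is easy to try numerically; a minimal sketch (my own illustration) for base \( b=e \), whose primary upper fixed point \( a\approx 0.318+1.337i \) attracts the iterated principal-branch logarithm:

```python
import cmath, math

def abel_repelling(x, b=math.e, n=60):
    """alpha_{b,a}(x) = lim n - log_{1/log(a)}(a - log_b^n(x)) at the
    primary upper fixed point a of exp_b (principal branch via cmath)."""
    lb = math.log(b)
    a = 0.5 + 0.5j                       # find a by iterating log_b (attracting)
    for _ in range(200):
        a = cmath.log(a) / lb
    z = complex(x)
    for _ in range(n):                   # log_b^n(x) -> a
        z = cmath.log(z) / lb
    base = 1 / cmath.log(a)              # base 1/log(a) of the outer logarithm
    return n - cmath.log(a - z) / cmath.log(base)

print(abel_repelling(math.exp(0.5)) - abel_repelling(0.5))   # ~1: Abel equation
```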

For computing the regular super logarithm, however, we face a major problem with repelling fixed points: we cannot directly compute \( \alpha_{b,a}({^nb}) \), as \( \log_b^{\circ (n+2)}({^nb})=-\infty \). This presents a problem because for the rslog we have to compute \( \alpha_{b,a}(1) \) to normalize the values.
The good news, however, is that \( \lim_{x\to {^nb}} \alpha_{b,a}(x) \) seems always to exist. So the regular super logarithm is then:

\( \text{rslog}_{b,a}(x)=\alpha_{b,a}(x)-\lim_{\xi\to 1}\alpha_{b,a}(\xi) \) for \( x\neq {^nb},n\in\mathbb{N}_0 \) and
\( \text{rslog}_{b,a}(x)=\lim_{\xi\to x}\alpha_{b,a}(\xi)-\lim_{\xi\to 1}\alpha_{b,a}(\xi) \) otherwise.

Following Jay's idea to add the regular iterations at conjugate fixed points (and my idea to divide by 2 to get an Abel function again), let us consider

\( \alpha_{b,a}^\ast(x)=\frac{\alpha_{b,a}(x)+\alpha_{b,\overline{a}}(x)}{2} \)

where \( a \) is a fixed point in the upper halfplane.

Proposition: \( \alpha_{b,\overline{a}}(x)=\overline{\alpha_{b,a}(x)} \) for \( x\in\mathbb{R} \), \( x\neq {^nb} \). In particular, this implies that \( \alpha_{b,a}^\ast(x)=\Re(\alpha_{b,a}(x))=\Re(\alpha_{b,\overline{a}}(x)) \).

Note that we define \( \alpha_{b,a}^\ast \) only on the real axis, because this is the intersection of the domains of definition of \( \alpha_{b,a} \) (upper halfplane) and \( \alpha_{b,\overline{a}} \) (lower halfplane).

Proof:

The first question that arises is: which branch of the logarithm converges to \( \overline{a} \)? While the usual logarithm is defined to yield imaginary parts \( -\pi<y\le\pi \), for the lower primary fixed point we use the logarithm that yields imaginary parts \( -\pi\le y<\pi \); denote this by \( \log^\ast \). For the non-primary fixed points, \( \log(z)+2\pi i k \) is appropriate for the \( (k+1) \)-th upper fixed point and \( \log^\ast(z)-2\pi i k \) is appropriate for the conjugated fixed point (though \( \log(z)-2\pi i k \) is also fine for \( k>0 \)).


We first verify that \( \log(\overline{z})=\overline{\log^\ast(z)} \) and hence \( \log(\overline{z})+2\pi i k=\overline{\log^\ast(z)-2\pi i k} \).
\( \begin{align*}
\log(\overline{z})&=\log(\overline{x+iy})=\log(x-iy)\\
&=\log(r(\cos(\varphi)-i\sin(\varphi))) && \text{with } -\pi\le \varphi<\pi, \text{ i.e. } -\pi<-\varphi\le\pi\\
&=\ln(r)+\log(\cos(-\varphi)+i\sin(-\varphi))\\
&=\ln(r)+\log(e^{-i\varphi})=\ln(r)-i\varphi=\overline{\ln(r)+i\varphi}=\overline{\log^\ast(x+iy)}=\overline{\log^\ast(z)}
\end{align*} \).

A further consequence is that \( \log_{\overline{c}}(\overline{z})=\overline{\log_c(z)} \).

The rest is then easily established; let \( a \) be the \( (k+1) \)-th fixed point in the upper half plane:
\( \alpha_{b,\overline{a}}(x)=\lim_{n\to\infty} n-\log_{1/\log(\overline{a})}\left(\overline{a}-\left(\log^\ast_b-\tfrac{2\pi i k}{\ln(b)}\right)^{\circ n}(x)\right)=\overline{\lim_{n\to\infty} n-\log_{1/\log(a)}\left(a-\left(\log_b+\tfrac{2\pi i k}{\ln(b)}\right)^{\circ n}(x)\right)}=\overline{\alpha_{b,a}(x)} \).
#3
Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

Andrew Robbins
#4
andydude Wrote: Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

In [1] Szekeres defines \( f \) to be regular (in the case \( 0<f'(0)=a<1 \)) if it has a family of Schroeder iterates \( f^{\circ t}(x)=\sigma^{-1}(a^t\sigma(x)) \) (where \( \sigma \) is a Schroeder function) such that
\( \lim_{x\downarrow 0} \frac{f^{\circ t}(x)}{x}=a^t \)

Such a family of Schroeder iterates is then unique (and we usually call them the regular iterates at the fixed point 0).
Szekeres shows in [1] that \( f \) is regular if the principal Schroeder function
\( \sigma(x)=\lim_{n\to\infty} \frac{f^{\circ n}(x)}{a^n} \)
is used for the Schroeder iterates (given that it exists and satisfies some further conditions such as strict monotonicity, differentiability, etc.).

In the case of analytic functions with asymptotic development at 0, the formal iterates are the regular iterates.

[1] G. Szekeres, Regular iteration of real and complex functions, Acta Math. 100 (1958), 203–258.
#5
andydude Wrote: Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

What function/iterates do you mean?
#6
Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function, \( M[f] = M[\sigma_f^{-1}] \cdot D \cdot M[\sigma_f] \), the diagonal matrix contains the eigenvalues, i.e. the powers of \( f_1 = f'(0) \), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. So this got me thinking whether the eigensystem decomposition (or matrix diagonalization) produces a regular Schroeder function.

It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness. You could, of course, find an eigensystem decomposition such that the diagonalizing matrix was the Koch/Bell/Carleman matrix of the principal Schroeder function, but I was wondering how to do so. Turning to Mathematica, I found the nice function Eigensystem[] for breaking a matrix down into its eigenvalues and eigenvectors, so I thought I'd give it a try.

The eigenvectors returned by Eigensystem[] form the columns of the diagonalizing matrix \( P \) in \( PDP^{-1} \), but the Schroeder function matrix appears as \( S \) in \( S^{-1}DS \), so even though it returns:


\(
\begin{pmatrix}
1&0&0&0 \\
0&1& \frac{f_2}{(f_1-1)f_1} & \frac{2f_2^2+(f_1-1)f_1f_3}{(f_1-1)^2f_1^2(f_1+1)} \\
0&0&1& \frac{2f_2}{(f_1-1)f_1} \\
0&0&0&1
\end{pmatrix}
\)


this is actually \( P \), which means \( S \) is actually:


\(
\begin{pmatrix}
1&0&0&0 \\
0&1& \frac{f_2}{(1-f_1)f_1} & \frac{2f_2^2-(f_1-1)f_3}{(f_1-1)^2f_1(f_1+1)} \\
0&0&1& \frac{2f_2}{(1-f_1)f_1} \\
0&0&0&1
\end{pmatrix}
\)


using a degree-3 approximation to the infinite matrices. Using the property that the series coefficients are just the 1st row (as opposed to the 0th row), the corresponding Schroeder function is:

\( \sigma_f(x) = x + x^2 \frac{f_2}{(1-f_1)f_1} + x^3 \frac{2f_2^2-(f_1-1)f_3}{(f_1-1)^2f_1(f_1+1)} + \cdots \)

where f is of the form \( f(x) = \sum_{k=1}^{\infty}f_kx^k \). Now what I find interesting is that the definition of the principal Schroeder function states that \( \sigma_f(x) = \lim_{n\rightarrow\infty}\frac{f^{\circ n}(x)}{f_1^n} \) which implies:

\(
\begin{align*}
\frac{f^{\circ n}(x)}{f_1^n}
& = x + x^2\frac{f_2}{f_1}\sum_{k=0}^{n-1} f_1^k + \cdots\\
& = x + x^2\frac{f_2 (1-f_1^n)}{f_1 (1-f_1)} + \cdots
\end{align*}
\)

and in the limit (assuming \( |f_1|<1 \)):

\( \sigma_f(x) = x + x^2 \frac{f_2}{(1-f_1)f_1} + \cdots \)

which is the same Schroeder function the Eigensystem[] function returned. From this, I can clearly see that the limit definition of the Schroeder function actually makes sense, because it didn't make sense to me before now. Fortunately, my understanding of Koch/Bell/Carleman matrices allowed me to find another way to get the same function.
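Both the eigenvector route and the limit definition are easy to check numerically; a sketch (my own illustration with made-up coefficients, np.linalg.eig standing in for Eigensystem[], and the matrix built with the k-th column holding the coefficients of \( f^k \)):

```python
import numpy as np
from numpy.polynomial import polynomial as P

f1, f2, f3 = 0.5, 0.1, 0.02            # made-up example coefficients
f = np.array([0.0, f1, f2, f3])         # f(x) = f1*x + f2*x^2 + f3*x^3
K = 6                                   # truncation degree

# Carleman-type truncation: M[m-1, k-1] = coefficient of x^m in f(x)^k
M = np.zeros((K, K))
p = np.array([1.0])
for k in range(1, K + 1):
    p = P.polymul(p, f)[:K + 1]         # next power f^k, truncated
    n = min(len(p) - 1, K)
    M[:n, k - 1] = p[1:n + 1]

# the eigenvector for eigenvalue f1 carries the Schroeder coefficients
w, V = np.linalg.eig(M)
i = np.argmin(abs(w - f1))
sigma = V[:, i] / V[0, i]               # rescale so sigma_1 = 1
print(sigma[1], f2 / ((1 - f1) * f1))   # both ~0.4: matches the series above

# the limit definition sigma_f(x) = lim f^n(x)/f1^n gives the same value
x = y = 0.01
for _ in range(60):
    y = f1 * y + f2 * y**2 + f3 * y**3
print(y / f1**60, x + sigma[1] * x**2 + sigma[2] * x**3)   # agree to ~x^4
```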


While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder function is \( \sigma_f(x) = f_1^{\alpha_f(x)} \), which means the inverse relationship is:
\(
\begin{align*}
\sigma_f(x) & = (f_1)^{\alpha_f(x)} \\
\sigma_f(\alpha_f^{-1}(x)) & = (f_1)^{x} \\
\alpha_f^{-1}(x) & = \sigma_f^{-1}\left((f_1)^{x}\right)
\end{align*}
\)
and replacing f with \( DE_h(x) = h^x-1 \), we get

\(
\begin{align*}
\alpha_{DE}^{-1}(x)
& = \sigma_{DE}^{-1}\left(\ln(h)^{x}\right) \\
& = \ln(h)^x + (\ln(h)^x)^2 \frac{\ln(h)}{2(\ln(h)-1)} + (\ln(h)^x)^3 \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots \\
& = e^{x\ln(\ln(h))} + e^{2x\ln(\ln(h))} \frac{\ln(h)}{2(\ln(h)-1)} + e^{3x\ln(\ln(h))} \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots
\end{align*}
\)

because \( DE'(0) = \ln(h) \), and because the matrix P represents the inverse Schroeder function. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!
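As a quick numerical sanity check of this expansion (my own sketch with \( h=2 \); only the three displayed terms are used, so agreement holds up to the truncation order):

```python
import math

h = 2.0
L = math.log(h)                                 # DE_h'(0) = ln(h)
c2 = L / (2 * (L - 1))
c3 = L**2 * (L + 2) / (6 * (L - 1)**2 * (L + 1))

def abel_inv(x):
    """Truncated alpha_DE^{-1}(x) = sigma_DE^{-1}(ln(h)^x)."""
    u = L ** x
    return u + c2 * u**2 + c3 * u**3

x = 6.0                                         # large x makes ln(h)^x small
print(abel_inv(x + 1), h ** abel_inv(x) - 1)    # ~equal: alpha^{-1}(x+1) = DE_h(alpha^{-1}(x))
```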

Andrew Robbins
#7
andydude Wrote: Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function, \( M[f] = M[\sigma_f^{-1}] \cdot D \cdot M[\sigma_f] \), the diagonal matrix contains the eigenvalues, i.e. the powers of \( f_1 = f'(0) \), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. So this got me thinking whether the eigensystem decomposition (or matrix diagonalization) produces a regular Schroeder function.

Yes, this method is Gottfried's method; I already posted somewhere that in the case of hyperbolic iteration (power series developed at a fixed point), Gottfried's method gives the formal power series iteration, which is the regular iteration.

However, this method is also applicable to developments at non-fixed points (in this case \( D \) no longer consists of powers of \( f_1 \), but it is still a diagonal matrix). I now realize that this method usually depends on the development point. For example, consider \( f(x)=\sqrt{2}^x \) with the fixed points 2 and 4. If we start with the diagonalization at development point 2, we get the regular iteration at 2. If we move the development point continuously to 4 (whose regular iteration is different from the one at 2), the iterates must have changed ...

Quote:
It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness.
Oh, see: The regular Abel function is only determined up to an additive constant and the regular Schroeder function is only determined up to a multiplicative constant. In our case of slog we simply fix one Abel function by the condition \( \text{slog}(1)=0 \).

Quote: While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder function is \( \sigma_f(x) = f_1^{\alpha_f(x)} \), which means the inverse relationship is:
\(
\begin{align*}
\sigma_f(x) & = (f_1)^{\alpha_f(x)} \\
\sigma_f(\alpha_f^{-1}(x)) & = (f_1)^{x} \\
\alpha_f^{-1}(x) & = \sigma_f^{-1}\left((f_1)^{x}\right)
\end{align*}
\)
and replacing f with \( DE_h(x) = h^x-1 \), we get

\(
\begin{align*}
\alpha_{DE}^{-1}(x)
& = \sigma_{DE}^{-1}\left(\ln(h)^{x}\right) \\
& = \ln(h)^x + (\ln(h)^x)^2 \frac{\ln(h)}{2(\ln(h)-1)} + (\ln(h)^x)^3 \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots \\
& = e^{x\ln(\ln(h))} + e^{2x\ln(\ln(h))} \frac{\ln(h)}{2(\ln(h)-1)} + e^{3x\ln(\ln(h))} \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots
\end{align*}
\)

because \( DE'(0) = \ln(h) \), and because the matrix P represents the inverse Schroeder function. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!

Uff, can you just say what \( h \) and \( DE \) are?
#8
Ok. \( h \) is the base of the decremented exponential, and \( DE \) is the decremented exponential function \( x \mapsto h^x-1 \). The reason why I chose the symbol \( h \) is that when you use iterated decremented exponentials to find iterated exponentials, the bases obey the h-root-h relationship: \( \exp_b^{\circ t}(x) = h(DE_h^{\circ t}(x/h - 1)+1) \), as I mentioned here and you mentioned here, where \( b=h^{1/h} \) and thus \( h={}^{\infty}b \), which is also the symbol Galidakis uses for the infinitely iterated exponential.
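For \( t=1 \) this relationship is a one-line check:

\( h\left(DE_h\left(\tfrac{x}{h}-1\right)+1\right)=h\cdot h^{x/h-1}=h^{x/h}=\left(h^{1/h}\right)^x=b^x \)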

Andrew Robbins
#9
Thanks for refreshing my memory :)
Yes, for regular iteration of power series it is indeed easier to compute the inverse of the Abel function, which is however nearly the same as computing the iterates \( f^{\circ t}(x) \). The Abel function also has a singularity at 0.
#10
As we now have the limit formula for the regular slog, it is time to also publish the corresponding power series for the regular slog. Interestingly, the computation is in a certain way similar to the computation of the natural Abel function and Andrew's slog.
This is the good thing about regular tetration: there is a well-developed theory both for direct computation (the limit formula) and for power series computation.

So let us start with the regular Schroeder function of a given power series \( f \) developed at 0 with fixed point 0.

A Schroeder function \( \sigma \) satisfies
\( \sigma(f(x))=c\sigma(x) \) (where \( c=f_1=f'(0)>0 \), \( c\neq 1 \))

Let us write this with the transposed Bell (Carleman-type) matrix, whose \( k \)-th column contains the coefficients of the \( k \)-th power of the function: \( S \) for \( \sigma \) and \( F \) for \( f \). As \( f \) and \( \sigma \) don't have a constant/0th coefficient, the matrices are correspondingly stripped of the 0th row and column. With this convention the coefficient vector of \( \sigma\circ f \) is \( F\vec{\sigma} \), where \( \vec{\sigma}=(\sigma_1,\sigma_2,\dots)^T \) is the first column of \( S \), so the Schroeder equation becomes

\( F\vec{\sigma}=c\vec{\sigma} \)
\( (F-cI)\vec{\sigma}=0 \)

E.g., truncation to 4:
\(
\begin{pmatrix}
c-c & 0 & 0 & 0\\
f_2 & c^2-c & 0 & 0\\
f_3 & {f^2}_3 & c^3-c & 0\\
f_4 & {f^2}_4 & {f^3}_4 & c^4-c
\end{pmatrix}
\begin{pmatrix}
\sigma_1\\\sigma_2\\\sigma_3\\\sigma_4
\end{pmatrix}
=\begin{pmatrix}0\\0\\0\\0\end{pmatrix}
\)

We see that the first row is 0 and can be dropped; this gives freedom up to a multiplicative constant for \( \sigma \) (which is anyway expected for Schroeder functions), and we decide to choose \( \sigma_1=\pm 1 \), depending on whether \( c>1 \) or \( c<1 \) and on which side we approach the fixed point from. This leads to the system with the matrix \( F' \), which is \( F \) with the first row and column removed:

\( F'(\sigma_2,\sigma_3,\dots)^T=\mp(f_2,f_3,\dots)^T \), e.g.

\(
\begin{pmatrix}
c^2-c & 0 & 0\\
{f^2}_3 & c^3-c & 0\\
{f^2}_4 & {f^3}_4 & c^4-c
\end{pmatrix}
\begin{pmatrix}
\sigma_2\\\sigma_3\\\sigma_4
\end{pmatrix}
=-\begin{pmatrix}f_2\\f_3\\f_4\end{pmatrix}
\)

However, we don't need an equation solver for this system: because we chopped off the first row and column (and not the last row and the first column, as in Andrew's slog), the remaining matrix is lower triangular, and we can solve it by hand by forward substitution:
\( \sigma_{k} = \left(\pm f_k + \sum_{i=2}^{k-1} {f^i}_k \sigma_i\right)/\left(c-c^k\right) \).
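A sketch of this forward substitution in Python (my own illustration; poly_mul builds the truncated coefficients \( {f^i}_k \) of the powers of \( f \), and sign is the choice \( \sigma_1=\pm 1 \)):

```python
def poly_mul(p, q, K):
    """Product of two power series given as coefficient lists [c_1, ..., c_K]
    (no constant term), truncated to order K."""
    r = [0.0] * K
    for i, pi in enumerate(p, start=1):
        for j, qj in enumerate(q, start=1):
            if i + j <= K:
                r[i + j - 1] += pi * qj
    return r

def schroeder_coeffs(f, K, sign=1.0):
    """sigma_1..sigma_K of the regular Schroeder function via
    sigma_k = (sign*f_k + sum_{i=2}^{k-1} (f^i)_k sigma_i) / (c - c^k),
    where f = [f_1, ..., f_K] and c = f_1."""
    c = f[0]
    powers = {1: list(f)}               # powers[i][k-1] = (f^i)_k
    for i in range(2, K):
        powers[i] = poly_mul(powers[i - 1], f, K)
    sigma = [sign] + [0.0] * (K - 1)    # sigma[k-1] = sigma_k
    for k in range(2, K + 1):
        acc = sign * f[k - 1]
        for i in range(2, k):
            acc += powers[i][k - 1] * sigma[i - 1]
        sigma[k - 1] = acc / (c - c ** k)
    return sigma

# example: f(x) = 0.5x + 0.1x^2 reproduces sigma_2 = 0.4 from post #6
print(schroeder_coeffs([0.5, 0.1, 0.0], 3))
```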

Also, the solution of this equation system does not depend on the truncation size, as it does with Andrew's slog. But of course this advantage is offset by the need for a fixed point.

So we have a formula for the power series of the regular Schroeder function. The regular Abel function is then just \( \alpha_f(x)=\log_c(\sigma_f(x)) \).

Let us apply this to \( b^x \). First we have to move the fixed point \( a \) to 0 by conjugation with \( \tau_a(x)=x+a \): \( f(x)=b^{x+a}-a=ab^x-a=ae^{x\ln(b)}-a \)
\( f \) has the coefficients:
\( f_k = a\frac{\ln(b)^k}{k!} \), \( f_0=0 \)

\( f^n(x)=a^n \sum_{m=0}^n (-1)^{n-m}\binom{n}{m} e^{x\ln(b)m} \)

It has the coefficients
\( {f^n}_k = a^n \sum_{m=0}^n (-1)^{n-m}\binom{n}{m}\frac{\ln(b)^k m^k}{k!}=a^n\frac{\ln(b)^k}{k!}\sum_{m=0}^n (-1)^{n-m}\binom{n}{m}m^k \)

So
\( \sigma_k = \frac{\ln(b)^k}{k!\left(c-c^k\right)}\left(a + \sum_{n=2}^{k-1} a^n \sigma_n\sum_{m=0}^n (-1)^{n-m}\binom{n}{m}m^k\right) \)

where \( c=f'(0)=a\ln(b)=\ln(b^a)=\ln(a) \)

So the Abel function of \( f=\tau_a^{-1}\circ\exp_b\circ\tau_a \) is \( \alpha_f(x)=\log_{\ln(a)}(\sigma(x)) \), and so the
Abel function of \( b^x \) is \( \alpha(x)=\log_{\ln(a)}(\sigma(x-a)) \).
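Putting the pieces together, here is a sketch for \( b=\sqrt{2} \) (my own illustration; I take \( \sigma_1=-1 \) so that \( \sigma(x-a)>0 \) when approaching \( a \) from below, and the Abel equation serves as the check):

```python
import math
from math import comb, factorial, log

b = math.sqrt(2)
a = 2.0                        # lower fixed point: sqrt(2)^2 = 2
c = log(a)                     # multiplier c = ln(a) = ln(2)
K = 25                         # truncation order

sign = -1.0                    # sigma_1 = -1: we approach a from below (x < a)
sigma = [0.0, sign]            # sigma[k] = sigma_k
for k in range(2, K + 1):
    total = sign * a           # the n = 1 term of the formula above
    for n in range(2, k):
        stirl = sum((-1) ** (n - m) * comb(n, m) * m ** k for m in range(n + 1))
        total += a ** n * sigma[n] * stirl
    sigma.append(log(b) ** k / factorial(k) * total / (c - c ** k))

def abel(x):
    """alpha(x) = log_{ln a}(sigma(x - a)), usable for |x - a| < a."""
    s = sum(sigma[k] * (x - a) ** k for k in range(1, K + 1))
    return log(s) / log(c)

print(abel(b ** 1.0) - abel(1.0))   # ~1: the Abel equation alpha(b^x) = alpha(x) + 1
```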

However, it seems as if the radius of convergence of \( \sigma \) is just \( a \). So you cannot use this formula alone to plot, for example, the regular Abel function of \( \sqrt{2}^x \) on the range \( -1 \) to \( 1.9 \); it does not converge at 0. A numeric comparison with the natural slog will hopefully follow later.