Iterating at fixed points of b^x
#1
Perhaps we can come to a conclusion whether regularly iterating at the different fixed points of \( b^x \) always yields the same solution.
I will start with a post about hyperbolicity of the fixed points of \( e^x \).
#2
Proposition. All fixed points of \( e^x \) are hyperbolic, i.e. \( |\exp'(a)|\neq 0,1 \) for each complex \( a \) with \( e^a=a \).

Clearly \( \exp'(a)=\exp(a)=a \). So we want to show that \( |a|\neq 0,1 \).
We can exclude the case \( |a|=0 \) as this implies \( a=0 \) and we know that 0 is not a fixed point of \( \exp(x) \).

For the other case we set \( a=re^{i\alpha}=r\cos(\alpha)+ir\sin(\alpha) \) and get
\( re^{i\alpha}=e^{r\cos(\alpha)+ir\sin(\alpha)}=e^{r\cos(\alpha)}e^{ir\sin( \alpha )} \) and hence the equation system:
\( \ln ( r )=r\cos(\alpha) \) (1) and \( \alpha=r\sin(\alpha) \) (2).
(Here we may take \( \alpha=\operatorname{Im}(a) \) as the argument of \( a \): since \( a=e^a \), the imaginary part of \( a \) is itself an argument of \( a \), so no \( 2\pi k \) ambiguity enters equation (2).)
We square both equations and add them:
\( \ln ( r)^2+\alpha^2=r^2 \cos(\alpha)^2+ r^2\sin(\alpha)^2=r^2 \)
\( \alpha=\pm\sqrt{r^2-\ln ( r)^2} \).
But beware, this is only a necessary condition on the fixed points. The fixed points lie discretely on the complex plane. Not every point satisfying this equation is a fixed point.

From this condition we can check what happens for \( |a|=r=1 \). For \( r=1 \) we get \( \alpha=\pm 1 \), and neither value satisfies equation (2), since \( \sin(\pm 1)=\pm 0.841\ldots\neq\pm 1 \). So there is no fixed point with \( |a|=1 \).
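As a numerical sanity check (a sketch; the starting point \( 1+i \) and the iteration count are my own arbitrary choices), we can locate the first fixed point of \( \exp \) in the upper half plane by iterating the principal logarithm, for which it is attracting:

```python
import cmath
import math

# The first fixed point of exp in the upper half plane is an attracting
# fixed point of the principal logarithm, so we find it by iteration.
a = 1 + 1j
for _ in range(200):
    a = cmath.log(a)

r, alpha = abs(a), cmath.phase(a)

print(a)                        # roughly 0.3181 + 1.3372i
print(abs(cmath.exp(a) - a))    # essentially 0: a is indeed a fixed point
print(r)                        # roughly 1.374, in particular |a| != 1
# the necessary condition alpha = +-sqrt(r^2 - ln(r)^2):
print(abs(alpha - math.sqrt(r*r - math.log(r)**2)))
```

For this first fixed point the principal argument coincides with \( \operatorname{Im}(a) \), so equation (2) holds without any \( 2\pi k \) correction.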
#3
Now let us take it a bit further for a general real base \( b>1 \).

Proposition. The only real base \( b>1 \) for which the complex function \( b^z \) has a fixed point \( a \) with \( |\exp_b'(a)|=1 \) is \( b=e^{1/e} \) and in this case the only such fixed point is \( e \).

Similarly to the previous post we write a fixed point as \( a=re^{i\alpha} \). The modulus of the derivative there is

\( |\exp_b'(a)|=\ln(b)|\exp_b(a)|=\ln(b)|a|=\ln(b)r=:s \)

and for parabolicity we have to show that \( s=1 \) forces \( b=e^{1/e} \).

and we have the equation system:

\( \ln(b)r\cos(\alpha)=\ln( r) \) (1) and \( \ln(b)r\sin(\alpha)=\alpha \) (2)

Squaring both equations and adding them yields
\( (\ln(b)r)^2=\ln ( r)^2+\alpha^2 \) and hence

\( \alpha=\pm\sqrt{(\ln(b)r)^2-\ln( r)^2} \)

For the case \( s=1 \): \( r=1/\ln(b) \) and \( \ln( r)=-\ln(\ln(b)) \) and so
\( \alpha=\pm\sqrt{1-(\ln(\ln(b)))^2} \).

Now we put this into equation (1); with \( x:=-\ln(\ln(b))=\ln( r) \), equation (1) becomes:

\( \cos(\pm\sqrt{1-x^2})=x \)

We substitute \( y:=\sqrt{1-x^2} \), so \( x=\pm\sqrt{1-y^2} \) and the equation becomes
\( \cos(y)=\pm\sqrt{1-y^2} \). But the functions on both sides are well known. The right side is a circle with radius 1, and on the region \( y=-1..1 \) the left side lies on or above it: \( \cos(y)^2=1-\sin(y)^2\ge 1-y^2 \) with equality only at \( y=0 \), and \( \cos(y)>0 \) there, so the branch \( -\sqrt{1-y^2} \) is never attained. So equality happens exactly at \( y=0 \).

This implies \( \ln( r)=x=\pm 1 \); however only \( x=1 \) satisfies equation (1), since \( \cos(0)=1 \). Then \( b=\exp( \exp( -x))=e^{1/e} \) is the only real base for which \( \exp_b \) has a parabolic fixed point, and this fixed point is given by \( \alpha=0 \) and \( \ln( r)=1 \) and is hence the real number \( e \).
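A two-line numerical check of the proposition (a sketch; it only restates the claimed values):

```python
import math

b = math.exp(1/math.e)     # the claimed base e^(1/e)
a = math.e                 # the claimed fixed point

print(b**a - a)                # ~0: e is a fixed point of b^x
print(math.log(b) * b**a)      # |exp_b'(e)| = ln(b)*b^e = (1/e)*e = 1: parabolic
```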
#4
Proposition. All non-real fixed points \( a \) of \( b^x \), \( b>1 \) are repelling, i.e. \( |\exp_b'(a)|>1 \).

With the previous considerations:
\( a=re^{i\alpha} \)
\( |\exp_b'(a)|=\ln(b)|\exp_b(a)|=\ln(b)|a|=\ln(b)r=:s \)
(1) \( s\cos(\alpha)=\ln( r) \)
(2) \( s\sin(\alpha)=\alpha \)

\( \sin(\alpha)=\frac{\alpha}{s} \).

If \( \alpha>0 \) (\( a \) non-real) is a solution of this equation, then we must have \( s>1 \): for \( 0<s\le 1 \) we would get \( \sin(\alpha)=\alpha/s\ge\alpha>\sin(\alpha) \), a contradiction, since \( \sin(\alpha)<\alpha \) for all \( \alpha>0 \). This is also clear from comparing the graphs of both sides.
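To illustrate with a concrete base (my choice \( b=2 \); iterating an inverse branch is one standard way to reach a non-real fixed point, and is an assumption of this sketch rather than something from the proposition):

```python
import cmath
import math

b = 2.0
lnb = math.log(b)

# A non-real fixed point of b^z is attracting for the inverse branch
# z -> (Log z + 2*pi*i)/ln(b), so we find it by iteration.
a = 1 + 1j
for _ in range(100):
    a = (cmath.log(a) + 2j*cmath.pi) / lnb

print(abs(b**a - a))    # essentially 0: a is a fixed point of 2^z
s = lnb * abs(a)        # s = |exp_b'(a)| = ln(b)|a|
print(s)                # clearly > 1: the fixed point is repelling
```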
#5
I don't know whether I am the first one to realize that regularly iterating \( \exp \) at a complex fixed point yields real coefficients! Moreover they do not depend on the chosen fixed point!

To be more precise:
If we have any fixed point \( a \) of \( \exp \), then the power series \( \tau_{-a}\circ \exp\circ \tau_a \) (where \( \tau_a(x):=x+a \)) is a power series with fixed point 0 and with first coefficient \( \exp'(a)=\exp(a)=a \), particularly the series is non-real. By the previous considerations the fixed point is repelling so we have the standard way of hyperbolic iteration (on the main branch) which yields again a non-real power series. If we however afterwards apply the inverse transformation

\( \exp^{\circ t}:=\tau_a\circ (\tau_a^{-1}\circ \exp\circ \tau_a)^{\circ t}\circ \tau_a^{-1} \)

we get back real coefficients and they do not depend on the fixed point \( a \). Though I cannot prove it yet, this seems to be quite reliable.

And of course those coefficients are equal to those obtained by the matrix operator method, which in turn equals Andrew's solution after transformation.
#6
bo198214 Wrote:I don't know whether I am the first one to realize that regularly iterating \( \exp \) at a complex fixed point yields real coefficients! Moreover they do not depend on the chosen fixed point!

To be more precise:
If we have any fixed point \( a \) of \( \exp \), then the power series \( \tau_{-a}\circ \exp\circ \tau_a \) (where \( \tau_a(x):=x+a \)) is a power series with fixed point 0 and with first coefficient \( \exp'(a)=\exp(a)=a \), particularly the series is non-real. By the previous considerations the fixed point is repelling so we have the standard way of hyperbolic iteration (on the main branch) which yields again a non-real power series. If we however afterwards apply the inverse transformation

\( \exp^{\circ t}:=\tau_a\circ (\tau_a^{-1}\circ \exp\circ \tau_a)^{\circ t}\circ \tau_a^{-1} \)

we get back real coefficients and they do not depend on the fixed point \( a \). Though I cannot prove it yet, this seems to be quite reliable.

And of course those coefficients are equal to those obtained by the matrix operator method, which in turn equals Andrew's solution after transformation.

Somehow you lost me here. I don't doubt what you say; I'm just not sure what you meant. Perhaps my problem is that I'm misunderstanding how to apply your tau function. And I'm not entirely sure what you mean by "power series". Are you talking about the coefficients of a Taylor series expansion, or about a series of successive exponentials (i.e., a series of powers)? I think I've possibly been sloppy in my own usage of the term "power series", which is leading to my present confusion.
~ Jay Daniel Fox
#7
First of all I have to withdraw my assertion. It was based on an error in my computations. And anyway it was too good to be true.

jaydfox Wrote:And I'm not entirely sure what you mean by "power series".

A power series in the usual sense, i.e.
\( f(x)=\sum_{n=0}^\infty f_n x^n \)
(developed at 0).

If \( f \) is analytic then you get the coefficients via
\( f_n = \frac{f^{(n)}(0)}{n!} \).

If you develop \( f \) at the point \( a \) (given that \( a \) lies within the radius of convergence)
\( f(x)=\sum_{n=0}^\infty \alpha_n (x-a)^n \) you can compute the coefficients \( \alpha_n \) via:
\( \alpha_n = (f\circ \tau_a)_n = \sum_{m=n}^\infty \binom{m}{n} f_m a^{m-n} \)
You can verify this by explicitly expanding \( (f\circ \tau_a)(x)=f(x+a)=\sum_{m=0}^\infty f_m (x+a)^m \) and regathering the powers of \( x \).
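A quick check of the re-expansion formula (a sketch; my own choices are \( f=\exp \) developed at 0, the shift \( a=1 \), and truncation of the sum at \( m=60 \), where the terms are far below double precision):

```python
import math

a = 1.0
M = 60                                            # truncation order of the sum
f = [1/math.factorial(m) for m in range(M + 1)]   # exp developed at 0

# alpha_n = sum_{m>=n} C(m,n) f_m a^(m-n): coefficients of exp developed at a
alpha = [sum(math.comb(m, n) * f[m] * a**(m - n) for m in range(n, M + 1))
         for n in range(10)]

# The Taylor coefficients of exp at 1 are e/n!:
for n in range(10):
    print(alpha[n], math.e / math.factorial(n))
```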

By \( \tau_a^{-1}\circ f\circ \tau_a \) you move the fixed point \( a \) to 0. And if you have the fixed point at 0 (which means nothing else than \( f_0=0 \)), then you can do the usual regular iteration. For example if you want to compute \( g=f^{\circ 1/2} \) you simply need to solve the equation system:
\( (g\circ g)_n = f_n \)
Here I showed how to solve this equation for \( f_1>0,f_1\neq 1 \) (hyperbolic case).
If \( f_1>0 \) then there is exactly one solution \( g \) (without involving limits, but via a recurrence formula) to this equation system.
Similarly for each other \( m \) in \( g^{\circ m}=f \).

So the fractional iterates of a (formal) power series are uniquely determined (whether the obtained power series has a positive radius of convergence at all is another question). And this can be continuously extended to real iteration counts. You can also get the real iteration count \( g=f^{\circ t} \) by a finite formula, namely by solving the equation system (by recurrence):
\( f\circ g=g\circ f \) with the starting condition \( g_1={f_1}^t \).
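Here is a small sketch of that recurrence for truncated power series (the toy series \( f(x)=2x+x^2 \), the truncation order, and all names are my own choices). The key point is that \( g_n \) enters \( (f\circ g)_n \) with factor \( f_1 \) and \( (g\circ f)_n \) with factor \( {f_1}^n \), so it can be solved for order by order:

```python
N = 8  # truncation order

def compose(f, g):
    """Coefficients of f(g(x)) up to x^N, assuming f[0] = g[0] = 0."""
    res = [0.0] * (N + 1)
    p = [0.0] * (N + 1)
    p[0] = 1.0                        # p holds g^k, starting with g^0 = 1
    for k in range(N + 1):
        for n in range(N + 1):
            res[n] += f[k] * p[n]
        new = [0.0] * (N + 1)         # p <- p*g, truncated at degree N
        for i in range(N + 1):
            for j in range(N + 1 - i):
                new[i + j] += p[i] * g[j]
        p = new
    return res

def regular_iterate(f, t):
    """g = f^{o t}: solve f o g = g o f with g_1 = f_1**t (hyperbolic f_1)."""
    q = f[1]
    g = [0.0] * (N + 1)
    g[1] = q ** t
    for n in range(2, N + 1):
        # With g_n set to 0, the mismatch d = (f o g)_n - (g o f)_n is
        # exactly what the missing g_n must absorb: the full equation reads
        # q*g_n + (f o g)_n = q**n * g_n + (g o f)_n, so g_n = d/(q**n - q).
        d = compose(f, g)[n] - compose(g, f)[n]
        g[n] = d / (q ** n - q)
    return g

f = [0.0, 2.0, 1.0] + [0.0] * (N - 2)    # f(x) = 2x + x^2
h = regular_iterate(f, 0.5)              # formal half-iterate
print(compose(h, h)[:4])                 # reproduces f up to the truncation
```

For this particular \( f \) one can check by hand that \( h_2=(2-\sqrt{2})/2 \), which the recurrence reproduces.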

In the complex numbers there is more than one regular solution, as \( {f_1}^t \) has a whole set of values.
#8
Perhaps I'm missing the point of the tau function (other than to move the fixed point to 0 for building a power series). Granted, in order to take advantage of the ability (in the immediate vicinity of the fixed point) to continuously iterate exponentiation by using continuously iterated multiplication (vanilla exponentiation), we need some transformation that moves large numbers (e.g., 1) down into the "well".

It seems like you are using tau to move points to the vicinity of the fixed point, then using the inverse of tau to move the fixed point to 0, iterating, moving the fixed point back to its proper location, then moving the transformed point back to its new location. If I'm reading this correctly, I don't think your tau function is the right way to move points towards the fixed point. Preferably, it should be possible with a single function to drive all points towards the fixed point. For base eta, this is done by taking logarithms (above the asymptote) or exponentials (below the asymptote) to move points near e, then performing continuous parabolic iteration, then taking the inverse of the iterated operation to get back to the proper point.

My first try with base e was to use iterated logarithms. Unfortunately, logarithms introduce an undesirable fractal pattern that distorts the spacing between points, because integer tetrates of the base (0, 1, b, b^2, etc.) will be connected out at infinity.

My current hypothesis, based on computing Andrew's slog for complex numbers within the radius of convergence, is that the "proper" way to transform large numbers into infinitesimal numbers (relative to the fixed point) is to use imaginary iterations of exponentiation. This allows us to move zero towards the fixed point, bypassing the singularity of the logarithm. In other words, if +1 iteration is exponentiation, and -1 iteration is a logarithm, then we need a formula (and a name!) for i iteration. Of course, currently the only way I have of generating such a formula is by using the first derivative of a tetration solution. But deriving a formula for performing imaginary iterations from tetration and then using it to derive a tetration solution is bad circular reasoning. At best it proves internal consistency.

I'm wondering, therefore, whether it's possible to derive the imaginary equivalent of something between an exponentiation and a logarithm, not quite either but related to both by being halfway between them and out to the side, so to speak.

If we could independently derive such a formula, and if we could be sure of its uniqueness, then we could use it to derive a solution to tetration. The fixed parabolic point for base eta was easy. The fixed hyperbolic points of bases between 1 and eta were harder, but still relatively easy.

Bases between e^-e and 1 are harder still, but on the whole still easy. Bases between 0 and e^-e are harder yet, and I haven't taken the time to prove numerically what I'm already sure of conceptually.

But for bases greater than eta, where the fixed point is complex... These really are hard. After all the time and thought I've poured into trying to understand continuous exponentiation of base e, and how the fixed point fits in, looking back at all the bases between 0 and eta is like studying calculus and then looking back at algebra.

And so far I'm only even concerning myself with real bases. The next challenge, if and after we solve bases above eta, is to solve the general case for complex bases, including making a determination whether certain bases are strictly unsolvable (no continuously differentiable solution).
~ Jay Daniel Fox
#9
jaydfox Wrote:Perhaps I'm missing the point of the tau function (other than to move the fixed point to 0 for building a power series).
I think so.

Quote:It seems like you are using tau to move points to the vicinity of the fixed point, then using the inverse of tau to move the fixed point to 0, iterating, moving the fixed point back to its proper location, then moving the transformed point back to its new location. If I'm reading this correctly, I don't think your tau function is the right way to move points towards the fixed point.

No, here the coefficients of the power series (i.e., the Taylor development) are the interesting thing, not the values of the function.
We have a unique (up to \( 2\pi i k \)) regular (continuous) iteration (coefficients of the iterated power series) at a fixed point, which can be finitely computed by the coefficients at that fixed point.

So if we want to compute the coefficients of a Taylor expansion of the continuously iterated function at a non-fixed point, we first compute the coefficients at the fixed point (which is a non-finite operation in terms of the original coefficients, because it involves limits), then regularly iterate there, getting new coefficients, and transform these coefficients back to the original point. This is the sense of the formula \( \tau_a\circ (\tau_a^{-1}\circ f\circ \tau_a)^{\circ t}\circ \tau_a^{-1} \).

There are also formulas to directly compute a regular iterate, which seems more in your interest.
This is the real number version: If you have an attracting fixed point at 0, i.e. \( f(0)=0 \) and \( 0<q<1 \) for \( q=f'(0) \), then the regular iterate is:
\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ -n}(q^t f^{\circ n}(x)) \)
In our case (\( f=\exp \)) we have repelling fixed points, so \( f^{-1}=\ln \) has attracting fixed points, and the formula becomes
\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}(q^t f^{\circ -n}(x)) \).
So now our fixed point \( a \) is not at 0. So we consider
\( g:=\tau_a^{-1}\circ f\circ \tau_a \) which has a repelling fixed point at 0.
Then let \( g'(0)=f'(a)=:r \) and use our above formula:
\( g^{\circ t}(x)=\lim_{n\to\infty} g^{\circ n}(r^t g^{\circ -n}(x)) \)
\( (\tau_a^{-1}\circ f^{\circ t}\circ\tau_a)(x)=\lim_{n\to\infty} (\tau_a^{-1}\circ f^{\circ n}\circ \tau_a)\left(r^t (\tau_a^{-1} \circ f^{\circ -n}\circ \tau_a)(x)\right) \)
Composing with \( \tau_a \) on the left and \( \tau_a^{-1} \) on the right gives
\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}\left(\tau_a\left(r^t (\tau_a^{-1}\circ f^{\circ -n})(x)\right)\right)=\lim_{n\to\infty} f^{\circ n}\left(a+r^t(f^{\circ -n}(x)-a)\right) \)

\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}(a(1-r^t) + r^t f^{\circ -n}(x)),\quad r=f'(a) \).
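The final limit formula can be tried directly for \( f=\exp \) at the first complex fixed point (a numerical sketch; the depth \( n=60 \) and the test point are my own choices, and the truncation of the limit is only heuristic):

```python
import cmath

# Find the first fixed point a of exp (attracting for the principal log).
a = 1 + 1j
for _ in range(200):
    a = cmath.log(a)
r = a  # exp'(a) = exp(a) = a

def exp_t(t, x, n=60):
    """f^{o t}(x) ~ f^{o n}(a(1 - r^t) + r^t f^{o -n}(x)) for f = exp."""
    y = x
    for _ in range(n):               # f^{o -n}: iterated principal log
        y = cmath.log(y)
    y = a * (1 - r**t) + r**t * y    # scale the deviation from a by r^t
    for _ in range(n):               # f^{o n}: iterated exp
        y = cmath.exp(y)
    return y

x = 0.5
print(exp_t(1, x), cmath.exp(x))   # t = 1 recovers exp(x)
print(exp_t(0, x))                 # t = 0 recovers x
print(exp_t(0.5, x))               # a "half exponential"; note it is non-real
```

The non-real value at a real argument matches Kneser's observation discussed below.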

For an analytic function \( f \) the limit formula yields again an analytic function, and this function is exactly the function described by the coefficient formula; that is why both are called the regular iteration.
The regular iteration is the only one that has no singularity at the fixed point.

One can also immediately see that the above formula for \( f=\exp \) has singularities at \( \exp^{\circ k}(0) \).
For example Kneser developed the regular solution at the first fixed point \( a \). He also observed the singularities and, of course, that the solution had complex values for real arguments.
That is why he modified his solution with a holomorphic transformation \( \alpha \): \( f_2^{\circ t}=\alpha^{-1}\circ f^{\circ t}\circ \alpha \), so that it yielded real values for real arguments and no singularities on the real line.
Of course his new solution was no longer regular and hence had a singularity at the fixed point \( a \). And so will every real solution at every fixed point, because no regular solution at any fixed point will be real (though strictly this has not been verified yet).
#10
bo198214 Wrote:I dont know whether I am the first one who realizes that regularly iterating \( \exp \) at a complex fixed point yields real coefficients! Moreover they do not depend on the chosen fixed point!

I don't think that is true, if I understand you. Just consider the term
\( \ln(a)^n \) in \( \;^{n}b = a + \ln(a)^n \; (1-a) + \ldots \).

As a side note, consider any two fixed points \( a_j,a_k \) for the same \( b \); then the fixed points commute under exponentiation: \( {a_j}^{a_k}= {a_k}^{a_j} \).
Daniel

