Posts: 1,412
Threads: 91
Joined: Aug 2007
Perhaps we can come to a conclusion whether regularly iterating at the different fixed points of $\exp$ always yields the same solution.
I will start with a post about hyperbolicity of the fixed points of $\exp$.
Posts: 1,412
Threads: 91
Joined: Aug 2007
Proposition. All fixed points of $\exp$ are hyperbolic, i.e. $|\exp'(a)|\neq 0,1$ for each complex $a$ with $\exp(a)=a$.
Clearly $\exp'(a)=\exp(a)=a$. So we want to show that $|a|\neq 0,1$.
We can exclude the case $|a|=0$ as this implies $a=0$ and we know that 0 is not a fixed point of $\exp$.
For the other case we set $a=r\cos(\alpha)+ir\sin(\alpha)$ and get
$r\cos(\alpha)+ir\sin(\alpha)=e^{r\cos(\alpha)}e^{ir\sin(\alpha)}$ and hence the equation system:
$e^{r\cos(\alpha)}\cos(r\sin(\alpha))=r\cos(\alpha)$ (1) and $e^{r\cos(\alpha)}\sin(r\sin(\alpha))=r\sin(\alpha)$ (2).
We square both equations and add them:
$r^2=e^{2r\cos(\alpha)}$, i.e. $r=e^{r\cos(\alpha)}$.
But beware, this is only a necessary condition on the fixed points. The fixed points lie discretely in the complex plane. Not every point satisfying this equation is a fixed point.
But from this condition we can look what happens for $r=|a|=1$. For $r=1$ we get $\cos(\alpha)=0$, i.e. $\alpha=\pm\pi/2$ and $a=\pm i$, and we see that both values of $a$ do not satisfy equation (2). So there is no fixed point with $|a|=1$.
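As a numerical sketch of the above, one can locate a fixed point of $\exp$ by Newton's method and check both the necessary condition $r=e^{r\cos(\alpha)}$ and that $|a|$ is away from 1. The starting guess $0.3+1.3i$ is my own choice, near the known approximate location of the principal fixed point:

```python
import cmath
import math

# Newton's method on exp(a) - a = 0; the starting point 0.3 + 1.3j is an
# assumption, chosen near the principal fixed point of exp.
a = 0.3 + 1.3j
for _ in range(50):
    a -= (cmath.exp(a) - a) / (cmath.exp(a) - 1)

r, alpha = abs(a), cmath.phase(a)
print(a)                                       # roughly 0.318 + 1.337i
print(abs(r - math.exp(r * math.cos(alpha))))  # necessary condition r = e^{r cos(alpha)}, ~0
print(r)                                       # |a| is about 1.374, well away from 1
```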
Posts: 1,412
Threads: 91
Joined: Aug 2007
09/08/2007, 11:36 AM
(This post was last modified: 09/08/2007, 11:43 AM by bo198214.)
Now let us take it a bit further for a general real base $b$.
Proposition. The only real base $b$ for which the complex function $\exp_b$ has a fixed point $a$ with $|\exp_b'(a)|=1$ is $b=e^{1/e}$, and in this case the only such fixed point is $a=e$.
Similarly to the previous post we assume a fixed point given by
$a=r\cos(\alpha)+ir\sin(\alpha)$; then parabolicity means $|\exp_b'(a)|=\ln(b)|a|=\ln(b)r=1$,
and, taking the logarithm of the fixed point equation $b^a=a$, i.e. $a\ln(b)=\ln(r)+i\alpha$, we have the equation system:
$\ln(b)r\cos(\alpha)=\ln(r)$ (1) and $\ln(b)r\sin(\alpha)=\alpha$ (2).
Both equations squared and added yields
$(\ln(b)r)^2=\ln(r)^2+\alpha^2$ and hence $\ln(r)^2+\alpha^2=1$.
For the case $\ln(b)r=1$: $r=1/\ln(b)$ and $\ln(r)=-\ln(\ln(b))$ and so
$\alpha=\pm\sqrt{1-\ln(\ln(b))^2}=\pm\sqrt{1-\ln(r)^2}$.
Now we put this into equation (1), which reads $\cos(\alpha)=\ln(r)$:
We substitute $x=\ln(r)$, giving $\cos\left(\sqrt{1-x^2}\right)=x$, and then $y=\sqrt{1-x^2}$, i.e.
$x=\pm\sqrt{1-y^2}$, so $\cos(y)=\pm\sqrt{1-y^2}$. But the functions on both sides are well known. The right side is a circle with radius 1 and the left side is always above or below the right side in the region $0<|y|\le 1$. So equality happens exactly at $y=0$.
This implies $\ln(r)=x=\pm 1$, however only $x=1$ satisfies equation (1). Then $b=e^{\ln(b)}=e^{1/r}=e^{1/e}$ is the only real base for which $\exp_b$ has a parabolic fixed point, and this fixed point is given by $\alpha=0$ and $r=e$ and is hence the real number $e$.
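A quick numerical check of the conclusion, assuming nothing beyond the standard library:

```python
import math

b = math.exp(1 / math.e)   # the base eta = e^(1/e)
a = math.e                 # the claimed parabolic fixed point

print(b ** a - a)          # fixed-point equation b^a = a, ~0
print(math.log(b) * a)     # multiplier exp_b'(a) = ln(b) * a, equals 1 (parabolic)
```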
Posts: 1,412
Threads: 91
Joined: Aug 2007
Proposition. All non-real fixed points $a$ of $\exp_b$, $b>1$, are repelling, i.e. $|\exp_b'(a)|>1$.
With the previous considerations:
$|\exp_b'(a)|=\ln(b)|\exp_b(a)|=\ln(b)|a|=\ln(b)r=:s$
(1) $s\cos(\alpha)=\ln(r)$
(2) $s\sin(\alpha)=\alpha$, i.e.
$\sin(\alpha)=\frac{\alpha}{s}$.
If $\alpha\neq 0$ ($a$ non-real) is a solution of the above equation then must $s>1$, which is clear from comparing the graphs of both sides: since $\sin(\alpha)<\alpha$ for $\alpha>0$, the line $\alpha/s$ can only meet $\sin(\alpha)$ at some $\alpha\neq 0$ if its slope $1/s$ is less than 1.
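For a concrete instance take the base $b=e$ (so $s=|a|$): one can verify numerically that the principal fixed point satisfies $\sin(\alpha)=\alpha/s$ with $s>1$. The Newton starting point is again my own choice:

```python
import cmath
import math

b = math.e
a = 0.3 + 1.3j     # starting guess near the principal fixed point (an assumption)
for _ in range(50):
    a -= (cmath.exp(a) - a) / (cmath.exp(a) - 1)

r, alpha = abs(a), cmath.phase(a)
s = math.log(b) * r                      # s = ln(b) * r; here ln(e) = 1, so s = r

print(s)                                 # about 1.374 > 1: the fixed point is repelling
print(abs(math.sin(alpha) - alpha / s))  # equation (2): sin(alpha) = alpha/s, ~0
```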
Posts: 1,412
Threads: 91
Joined: Aug 2007
09/09/2007, 12:50 PM
(This post was last modified: 09/09/2007, 12:56 PM by bo198214.)
I don't know whether I am the first one who realizes that regularly iterating $\exp$ at a complex fixed point yields real coefficients! Moreover they do not depend on the chosen fixed point!
To be more precise:
If we have any fixed point $a$ of $\exp$, then $f:=\tau_a^{-1}\circ\exp\circ\tau_a$ (where $\tau_a(x):=x+a$) is a power series with fixed point 0 and with first coefficient $f_1=\exp'(a)=\exp(a)=a$; in particular the series is non-real. By the previous considerations the fixed point is repelling, so we have the standard way of hyperbolic iteration (on the main branch), which yields again a non-real power series. If we however afterwards apply the inverse transformation $\tau_a\circ f^{\circ t}\circ\tau_a^{-1}$,
we get back real coefficients and they do not depend on the fixed point $a$. Though I can not prove it yet, this seems to be quite reliable.
And of course those coefficients are equal to those obtained by the matrix operator method, which in turn equals Andrew's solution after transformation.
Posts: 440
Threads: 31
Joined: Aug 2007
Posts: 1,412
Threads: 91
Joined: Aug 2007
09/11/2007, 06:16 PM
(This post was last modified: 09/11/2007, 06:17 PM by bo198214.)
First of all I have to withdraw my assertion. It was based on an error in my computations. And anyway it was too good to be true.
jaydfox Wrote: And I'm not entirely sure what you mean by "power series".
A power series in the usual sense, i.e.
$f(x)=\sum_{n=0}^\infty f_n x^n$
(developed at 0).
If $f$ is analytic then you get the coefficients via
$f_n=\frac{f^{(n)}(0)}{n!}$.
If you develop $f$ at the point $a$ (given that $a$ is in the radius of convergence)
$f(x)=\sum_{n=0}^\infty \alpha_n (x-a)^n$, you can compute the coefficients $\alpha_n$ via:
$\alpha_n = \sum_{m=n}^\infty \binom{m}{n} f_m a^{m-n}$.
You can verify this by explicitly expanding $g(x)=f(x+a)=\sum_{n=0}^\infty f_n (x+a)^n$ and regathering the powers of $x$.
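A small sketch of this re-expansion formula for $f=\exp$ (so $f_m=1/m!$), where the re-expanded coefficients are known in closed form to be $e^a/n!$. The truncation order $N$ is my own choice, since the true sum is infinite:

```python
import math

N = 40                  # truncation order (an assumption; the true sum is infinite)
a = 0.5                 # re-expansion point
f = [1 / math.factorial(m) for m in range(N)]   # coefficients of exp at 0

# alpha_n = sum_{m=n}^{N-1} C(m, n) * f_m * a^(m-n)
alpha = [sum(math.comb(m, n) * f[m] * a ** (m - n) for m in range(n, N))
         for n in range(10)]

# for exp the re-expanded coefficients must equal e^a / n!
for n in range(10):
    print(alpha[n] - math.exp(a) / math.factorial(n))   # ~0 for each n
```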
By $\tau_a^{-1}\circ f\circ\tau_a$ you move the fixed point $a$ to 0. And if you have the fixed point at 0 (which means nothing else than $f_0=0$), then you can do the usual regular iteration. For example if you want to compute $g=f^{\circ 1/2}$ you simply need to solve the equation system $(g\circ g)_n=f_n$.
Here I showed how to solve this equation for $g$ (hyperbolic case).
If $f_1\neq 0,1$ then there is exactly one solution $g$ with $g_1=\sqrt{f_1}$ (without involving limits but via a recurrence formula) to this equation system.
Similarly for each other $t$ in $f^{\circ t}$.
So the fractional iterations of a (formal) power series are uniquely determined (whether the obtained power series has a radius of convergence at all is another question). And this can be continuously extended to real iteration counts. You can also get the real iteration count $t$ by a finite formula, namely by solving the equation system (by recurrence):
$f^{\circ t}\circ f=f\circ f^{\circ t}$ with the starting condition $(f^{\circ t})_1=f_1^t$.
In the complex numbers there is more than one regular solution, as $f_1^t$ has a whole set of values (one for each branch of the logarithm).
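The coefficient recurrence for the half-iterate can be sketched as follows. The example series $f(x)=2x+x^2$ and the truncation order are my own choices for illustration, and $g_1=\sqrt{f_1}$ picks the principal root:

```python
from math import sqrt

N = 8  # truncation order (an assumption; the recurrence works to any order)

def mul(p, q):
    # product of two truncated power series
    r = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            r[i + j] += p[i] * q[j]
    return r

def compose(g, h):
    # coefficients of g(h(x)) up to order N, assuming h[0] = 0
    res = [0.0] * (N + 1)
    hp = [0.0] * (N + 1)
    hp[0] = 1.0                  # h^0
    for k in range(1, N + 1):
        hp = mul(hp, h)          # h^k, lowest order term is x^k
        for n in range(N + 1):
            res[n] += g[k] * hp[n]
    return res

def half_iterate(f):
    # solve g(g(x)) = f(x) coefficient by coefficient (hyperbolic case f[1] != 0, 1)
    g = [0.0] * (N + 1)
    g[1] = sqrt(f[1])            # principal choice of g_1
    for n in range(2, N + 1):
        rest = compose(g, g)[n]  # all terms of (g∘g)_n not involving g[n]
        g[n] = (f[n] - rest) / (g[1] + g[1] ** n)
    return g

f = [0.0, 2.0, 1.0] + [0.0] * (N - 2)   # f(x) = 2x + x^2
g = half_iterate(f)
gg = compose(g, g)
print(max(abs(gg[n] - f[n]) for n in range(N + 1)))   # ~0: g∘g = f up to order N
```

In each step $g_n$ occurs linearly with factor $g_1+g_1^n$, which is nonzero in the hyperbolic case; that is why the solution is finite and unique once $g_1$ is chosen.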
Posts: 440
Threads: 31
Joined: Aug 2007
Perhaps I'm missing the point of the tau function (other than to move the fixed point to 0 for building a power series). Granted, in order to take advantage of the ability (in the immediate vicinity of the fixed point) to continuously iterate exponentiations by using continuously iterated multiplication (vanilla exponentiation), we need some transformation that moves large numbers (e.g., 1) down into the "well".
It seems like you are using tau to move points to the vicinity of the fixed point, then using the inverse of tau to move the fixed point to 0, iterating, moving the fixed point back to its proper location, then moving the transformed point back to its new location. If I'm reading this correctly, I don't think your tau function is the right way to move points towards the fixed point. Preferably, it should be possible with a single function to drive all points towards the fixed point. For base eta, this is done by taking logarithms (above the asymptote) or exponentials (below the asymptote) to move points near e, then performing continuous parabolic iteration, then taking the inverse of the iterated operation to get back to the proper point.
My first try with base e was to use iterated logarithms. Unfortunately, logarithms introduce an undesirable fractal pattern that distorts the spacing between points, because integer tetrates of the base (0, 1, b, b^b, etc.) will be connected out at infinity.
My current hypothesis, based on computing Andrew's slog for complex numbers within the radius of convergence, is that the "proper" way to transform large numbers into infinitesimal numbers (relative to the fixed point) is to use imaginary iterations of exponentiation. This allows us to move zero towards the fixed point, bypassing the singularity of the logarithm. In other words, if +1 iteration is exponentiation, and -1 iteration is a logarithm, then we need a formula (and a name!) for i iteration. Of course, currently the only way I have of generating such a formula is by using the first derivative of a tetration solution. But deriving a formula for performing imaginary iterations from tetration and then using it to derive a tetration solution is bad circular reasoning. At best it proves internal consistency.
I'm wondering, therefore, whether it's possible to derive the imaginary equivalent of something between an exponentiation and a logarithm, not quite either but related to both by being halfway between them and out to the side, so to speak.
If we could independently derive such a formula, and if we could be sure of its uniqueness, then we could use it to derive a solution to tetration. The fixed parabolic point for base eta was easy. The fixed hyperbolic points of bases between 1 and eta were harder, but still relatively easy.
Bases between e^-e and 1 are harder still, but on the whole still easy. Bases between 0 and e^-e are harder yet, and I haven't taken the time to prove numerically what I'm already sure of conceptually.
But for bases greater than eta, where the fixed point is complex... These really are hard. After all the time and thought I've poured into trying to understand continuous exponentiation of base e, and how the fixed point fits in, looking back at all the bases between 0 and eta is like studying calculus and then looking back at algebra.
And so far I'm only even concerning myself with real bases. The next challenge, if and after we solve bases above eta, is to solve the general case for complex bases, including making a determination whether certain bases are strictly unsolvable (no continuously differentiable solution).
~ Jay Daniel Fox
Posts: 1,412
Threads: 91
Joined: Aug 2007
09/12/2007, 09:54 AM
(This post was last modified: 09/12/2007, 09:55 AM by bo198214.)
jaydfox Wrote: Perhaps I'm missing the point of the tau function (other than to move the fixed point to 0 for building a power series).
I think so.
Quote:It seems like you are using tau to move points to the vicinity of the fixed point, then using the inverse of tau to move the fixed point to 0, iterating, moving the fixed point back to its proper location, then moving the transformed point back to its new location. If I'm reading this correctly, I don't think your tau function is the right way to move points towards the fixed point.
No, here the coefficients of the power series (i.e. the Taylor development) are the interesting thing, not the values of the function.
We have a unique (up to the choice of the branch of $f_1^t$) regular (continuous) iteration (coefficients of the iterated power series) at a fixed point, which can be finitely computed from the coefficients at that fixed point.
So if we want to compute the coefficients of a Taylor expansion of the continuously iterated function at a non-fixed point, we first compute the coefficients at the fixed point (which is a non-finite operation in terms of the original coefficients, because it involves limits), then regularly iterate there, getting new coefficients, and transform these coefficients back to the original point. This is the sense of the formula $\tau_a\circ f^{\circ t}\circ\tau_a^{-1}$.
There are also formulas to directly compute a regular iterate, which seems more in your interest.
This is the real number version: If you have an attracting fixed point at 0, i.e. $f(0)=0$ and $0<|q|<1$ for $q:=f'(0)$, then the regular iterate is:
$f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ -n}(q^t f^{\circ n}(x))$
In our case ($f=\exp_b$) we have repelling fixed points, so $f^{\circ -1}$ has attracting fixed points, and then the formula is
$f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}(q^t f^{\circ -n}(x))$.
So now our fixed point $a$ is not at 0. So we consider
$g:=\tau_a^{-1}\circ f\circ\tau_a$, which has a repelling fixed point at 0.
Then let $q:=g'(0)=f'(a)=:r$ and use our above formula:
$g^{\circ t}(x)=\lim_{n\to\infty} g^{\circ n}(r^t g^{\circ -n}(x))$
$f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}(a(1-r^t) + r^t f^{\circ -n}(x))$, $r=f'(a)$.
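The limit formula can be sketched for a base with real fixed points, $b=\sqrt 2$, at its repelling fixed point $a=4$. The choice of base, the iteration depth $n$, and the sample point $x=5$ are my own assumptions for illustration:

```python
import math

b = math.sqrt(2)       # sample base: real fixed points 2 (attracting) and 4 (repelling)
a = 4.0                # b^4 = 4
r = math.log(b) * a    # multiplier f'(a) = ln(b) * b^a = 2 ln 2 > 1, so repelling

def f(x):
    return b ** x

def finv(x):
    return math.log(x) / math.log(b)

def iterate(t, x, n=40):
    # f^t(x) = lim_n f^n( a(1 - r^t) + r^t * f^(-n)(x) )
    y = x
    for _ in range(n):
        y = finv(y)                       # f^(-n)(x); a = 4 is attracting for finv
    y = a * (1 - r ** t) + r ** t * y     # linearized multiplier step at a
    for _ in range(n):
        y = f(y)                          # push back out with f^n
    return y

h = iterate(0.5, 5.0)   # half-iterate evaluated at x = 5
print(iterate(0.5, h))  # applying the half-iterate twice should recover f(5)
print(f(5.0))           # sqrt(2)^5, about 5.657
```

Note that $n$ cannot be made arbitrarily large in floating point: once $f^{\circ -n}(x)$ is within machine precision of $a$, the repelling forward iteration only amplifies rounding noise.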
For an analytic function $f$ the limit formula yields again an analytic function, and this function is exactly the function described by the coefficient formula; that is why both are called regular iteration.
The regular iteration is the only one that has no singularity at the fixed point.
One can also immediately see that the above formula for $f=\exp_b$ has singularities at the integer iterates $\exp_b^{\circ n}(0)$, i.e. at $0,1,b,b^b,\dots$, coming from the iterated logarithm $f^{\circ -n}$.
For example Kneser developed the regular solution at the first fixed point $a$ of $\exp$. He also observed the singularities and of course that the solution had complex values for real arguments.
That is why he modified his solution with a holomorphic transformation $\rho$, so that it yielded real values for real arguments and no singularities on the real line.
Of course his new solution was no longer regular and hence had a singularity at the fixed point $a$. And so will every real solution at every fixed point, because no regular solution at some fixed point will be real (though strictly this has not been verified yet).
Posts: 124
Threads: 40
Joined: Aug 2007
bo198214 Wrote: I don't know whether I am the first one who realizes that regularly iterating $\exp$ at a complex fixed point yields real coefficients! Moreover they do not depend on the chosen fixed point!
I don't think that is true, if I understand you. Just consider the term
$a^t(1-a)$ in $f^{\circ t}(1)=a+a^t\,(1-a)+\ldots$: for a non-real fixed point $a$ this is non-real.
As a side note, consider any two fixed points $a_1,a_2$ for the same base $b$; then the fixed points commute under exponentiation: $a_1^{a_2}=(b^{a_1})^{a_2}=b^{a_1 a_2}=(b^{a_2})^{a_1}=a_2^{a_1}$.
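A one-line numerical check with the two real fixed points of the example base $b=\sqrt 2$ (my own choice of base):

```python
b = 2 ** 0.5           # sqrt(2): b^2 = 2 and b^4 = 4, so 2 and 4 are fixed points
a1, a2 = 2, 4
print(a1 ** a2, a2 ** a1)   # 16 16: the fixed points commute under exponentiation
```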