Pictures of the Chi-Star
#1
Back in 2011, I made some complex plane graphs of the inverse Schröder function superimposed with Kneser's Chi-star function, but it turns out I never posted them online, since I skipped right to the complex valued superfunction instead: http://math.eretrandre.org/tetrationforu...93#pid6193 Working directly with the Schröder and Chi-star functions is probably more mathematically accessible, so I thought I would put together a post with some pretty pictures of Kneser's Chi-star function.

Let's start with the \( \Psi \) (or Schröder) function for exp(z), developed at the complex fixed point \( L\approx 0.318132 + 1.33724i \), where exp(L)=L.  The multiplier \( \lambda \) at the fixed point is also L, since \( \exp(L+\delta)\approx L+L\cdot\delta \Rightarrow\;\lambda=L \).
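As a rough illustration (a Python sketch of my own, not the pari-gp routines used elsewhere in this thread), the fixed point can be computed by iterating the principal-branch logarithm, which is attracting at L:

```python
import cmath

# The principal-branch log is attracting at L (|1/L| < 1), so iterating it
# from any seed in the upper half plane converges to the fixed point of exp.
z = 0.5 + 1.0j
for _ in range(200):
    z = cmath.log(z)

L = z                    # L ~= 0.3181315052 + 1.3372357014i; the multiplier lambda equals L
print(cmath.exp(L) - L)  # ~0: exp(L) = L
```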

The defining equation for the Schröder function is \( \Psi(e^z) = \lambda\cdot \Psi(z) \).  Remarkably, this function maps iterated exponentiation base e to multiplication by \( \lambda \).  Of course, \( \Psi(z) \) turns out to have a really complicated singularity at z=0.

Kneser's Chi-star \( \chi \) is the image of the real number line under the Schröder function: \( \chi=\Psi(\mathbb{R}) \), the Schröder function applied to the real number line.  It is also probably the most natural first step in generating analytic real valued Tetration.  And it is a very pretty function.  Writing \( \chi(x)=\Psi(x) \) for real x, the definition of \( \Psi \) gives \( \chi(e^x)=\lambda\cdot\chi(x) \).
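To trace the star numerically, one way (a crude sketch of my own, assuming the fixed point L from above; not the thread's actual code) is to pull a real point toward the fixed point with iterated principal logarithms, use \( \Psi(w)\approx w-L \) near the fixed point, and undo the iterations with the multiplier:

```python
import cmath

L = 0.3181315052047641 + 1.3372357014306895j   # fixed point of exp; lambda = L

def schroder(z, iters=40):
    """Approximate Schroder function Psi(z) for exp base e.

    Uses Psi(z) = lambda^n * Psi(log^(n)(z)) together with Psi(w) ~ w - L
    once log^(n)(z) is close to the fixed point.  Fails exactly at
    z = 0, 1, e, e^e, ... where an iterated log hits zero; accuracy here
    is only ~1e-5, which is fine for plotting.
    """
    w = complex(z)
    for _ in range(iters):
        w = cmath.log(w)          # principal branch -> converges toward L
    return L ** iters * (w - L)

# A few points on Kneser's Chi-star: the image of the real line under Psi.
chi_star = [schroder(x) for x in (-2.0, -0.5, 0.5, 2.0, 10.0, 1000.0)]
```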

The first pretty picture I'm posting is the inverse Schröder function \( \Psi^{-1}(z) \), covering roughly \( \pm 30 \) on the real axis and \( \pm 20 \) on the imaginary axis, superimposed with Kneser's Chi-star \( \chi \).  Of course, it's difficult to grasp the details of this pretty picture, so let's start with the definition.  Normally, one generates the Taylor series coefficients of the \( \Psi^{-1}(z) \) function iteratively; I usually work with numerical values of the coefficients, but for reference I include the closed form for the \( z^2 \) coefficient below.  \( \Psi^{-1} \) is an entire function.

 \( \Psi^{-1}(\lambda z)=\exp(\Psi^{-1}(z))\;\;\;\Psi^{-1}(z)\approx \lambda+z+\frac{0.5z^2}{\lambda-1}+O(z^3)\;\;\; \)  This is the function in the complex plane graph picture below.
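Here is a minimal sketch of that iterative coefficient computation (Python, my own illustration rather than the pari-gp code): expand the exponential of the truncated series and match powers of z in \( \Psi^{-1}(\lambda z)=\exp(\Psi^{-1}(z)) \), solving for one coefficient per order.

```python
import cmath

L = 0.3181315052047641 + 1.3372357014306895j   # fixed point of exp; lambda = L
N = 44                                          # 44 terms, as used later in the thread

def series_exp(s):
    """exp of a truncated power series s (list of coefficients), truncated to the same order."""
    e = [cmath.exp(s[0])] + [0j] * (len(s) - 1)
    for k in range(1, len(s)):
        # from E' = S'*E :  k*e_k = sum_{j=1..k} j*s_j*e_{k-j}
        e[k] = sum(j * s[j] * e[k - j] for j in range(1, k + 1)) / k
    return e

# Solve Psi^{-1}(lambda*z) = exp(Psi^{-1}(z)) order by order, normalized so a0 = L, a1 = 1.
a = [L, 1 + 0j]
for n in range(2, N + 1):
    c = series_exp(a + [0j])[n]     # z^n coefficient of the right-hand side with a_n set to 0
    a.append(c / (L ** n - L))      # a_n * (lambda^n - lambda) = c_n
print(a[2], 0.5 / (L - 1))          # matches the closed-form z^2 coefficient above
```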

[attached image]

Here is another image that serves as a "key" to the yellow segment superimposed on the picture above.  The point where the green curve meets the red curve is approximately zero.  Actually, the green segment ends at approximately \(-10^{-78}\), the red segment starts at \(+10^{-78}\), and the red segment then continues until approximately \(1-10^{-78}\).  Of course, there is a singularity at 0, so we can't extend the picture all the way to exactly zero!  There is also a singularity at 1, and a singularity at e, and a singularity at \( e^{e} \), and a singularity at \( e^{e^e} \).... The Chi-star contour in the images covers roughly \(-\infty\) to Tet(6).  Each time you iterate exp(x), you jump to a new curved segment that is L times larger than the segment containing x.  Below, I show eight segments of the Chi-star.

[attached image]

If you could figure out how to map the various segments of the Chi-star back to the real axis of the iterated Tetration function, then you would have a mathematical way to generate Tetration.  The next step in that process is to generate a superfunction for exp base e by taking \( \Psi^{-1}(\lambda^z) \), but this superfunction is not real valued at the real axis, due to the singularities at \( \Psi(\exp^{\circ n} (0)) \).  But back to the Chi-star itself.  Let's look at the singularity near zero.  Can you guess where the singularity would extend to if we got even closer to zero?  Here, I extended the section near zero to \( \pm 1/\text{Tet}(5.5) \).  Notice that the red and green segments continue almost exactly together until, perhaps surprisingly, they run almost coincident with the Tet(4..5) segment.  But then they make a soft u-turn near 1/Tet(5), and then join up with the Tet(5..6) segment!
[attached image]

Black in these complex plane graphs corresponds to zero, so wherever the part of the contour near zero goes, the graph of \( \Psi^{-1} \) is "black" there, since \( \Psi^{-1} \) maps those points back to values near zero.  Eventually, you get the checkerboard pattern where the black and the white are right next to each other, but the soft turnaround occurs where the function is still black.  The full image, with the Chi-star singularity extended as far as it can go, is too busy to see the "star" in the Chi-star, so I included the lower-right corner from the first image, with the soft turnaround near 1/Tet(5), where the function is near a singularity for Tet(5): \( \ln(\ln(\frac{1}{\text{Tet}(5)}))=\text{Tet}(3)+\pi i \).  This compares with Tet(3) ≈ 3814279, where \( \Psi(\text{Tet}(3)) \) is a true singularity but \( \Psi(\text{Tet}(3)+\pi i) \) is only near the singularity.  So we get this turnaround, while the true singularity continues on.
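Spelling that identity out: \( \frac{1}{\text{Tet}(5)}=e^{-\text{Tet}(4)} \), so \( \ln\big(\frac{1}{\text{Tet}(5)}\big)=-\text{Tet}(4) \), and \( \ln\big(-\text{Tet}(4)\big)=\ln(\text{Tet}(4))+\pi i=\text{Tet}(3)+\pi i \).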
[attached image]
- Sheldon
#2
Very pretty pictures.

I thought I might add an interesting note about how to calculate this complex superfunction. I recently ran into this formula while working on difference equations, and it turned out to be equivalent to solving for superfunctions. (Funny how the brain works: when working on two entirely different problems, they happen to be intensely related.)

Define the sequence \( \zeta_n = \Psi^{-1}(e^{-n} \Psi(x_0)) \) where \( x_0 \) is fixed and is arbitrary, so long as \( \log^{\circ n}(x_0) \to L \). We can write an entire super function of exponentiation as

\( F(z)=\frac{1}{\Gamma(Lz)} (\sum_{n=0}^\infty \zeta_n \frac{(-1)^n}{n!(n+Lz)} + \int_1^\infty (\sum_{n=0}^\infty \zeta_n \frac{(-t)^n}{n!})t^{Lz-1}\,dt) \)

This expression converges FOR ALL complex values, which is REAL nice.
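Roughly where the \( \Gamma \) and the integral come from (just a sketch, glossing over the growth conditions Ramanujan's master theorem puts on the \( \zeta_n \)): integrating the \( [0,1] \) piece termwise shows the bracketed expression is the Mellin transform \( \int_0^\infty t^{Lz-1}\big(\sum_{n=0}^\infty \zeta_n \frac{(-t)^n}{n!}\big)dt \), and Ramanujan's master theorem says \( \frac{1}{\Gamma(s)}\int_0^\infty t^{s-1}\sum_{n=0}^\infty \zeta_n\frac{(-t)^n}{n!}\,dt=\zeta_{-s} \) for the natural interpolation \( \zeta_w=\Psi^{-1}(e^{-w}\Psi(x_0)) \).  With \( s=Lz \) this gives \( F(z)=\Psi^{-1}(e^{Lz}\Psi(x_0)) \), the local form mentioned in the edit below.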

Naturally

\( e^{F(z)} = F(z+1) \),

but

\( F(z) \neq 0 \), and is not real valued on the real line.


I have likewise been wondering if this can somehow be morphed into a solution to tetration, but no luck so far. I'm still a little lost about how Kneser does it, but even from how it looks, it doesn't seem like it'll apply in this scenario.



EDIT:

I thought I'd add that it's periodic too.

Since this super function looks locally like \( F(z) = \Psi^{-1}(e^{Lz}\Psi(x_0)) \), it follows that \( F(z+\frac{2\pi i}{L}) = F(z) \)
#3
(05/28/2017, 08:46 PM)JmsNxn Wrote: Very pretty pictures.
...
Define the sequence \( \zeta_n = \Psi^{-1}(e^{-n} \Psi(x_0)) \) where \( x_0 \) is fixed and is arbitrary, so long as \( \log^{\circ n}(x_0) \to L \). We can write an entire super function of exponentiation as

\( F(z)=\frac{1}{\Gamma(Lz)} (\sum_{n=0}^\infty \zeta_n \frac{(-1)^n}{n!(n+Lz)} + \int_1^\infty (\sum_{n=0}^\infty \zeta_n \frac{(-t)^n}{n!})t^{Lz-1}\,dt) \)

This expression converges FOR ALL complex values, which is REAL nice.
....

There is a lot more to say about the pretty picture ... I am intrigued by the large islands of "black", which are where the \( \Psi^{-1}(z) \) function takes on values near zero.  And then those nearly-zero black islands lead to islands of various shades of red, corresponding to values near 1, e, e^e ..., until chaos takes over and the function becomes a checkerboard of black and white, with arbitrarily large values and arbitrarily small values arbitrarily close to one another.  And then there is the incredibly complex singularity at \( \Psi(0) \) ....

It is interesting how  \( \Gamma \) and the integral got into your equation.  I should know just enough about the Gamma function to follow its derivation.  

As a practical matter, I have settled on using the \( \Psi^{-1}(z) \) formal Taylor series expansion, which works well, and is quick and easy to generate in a pari-gp program.  Not that math need be practical.

For the inverse Schröder function, we are free to iterate \( z\mapsto \frac{z}{\lambda} \) as many times as we like before using the series to evaluate \( \Psi^{-1}(z)=\exp^{\circ n}\Psi^{-1}(z\lambda^{-n}) \).  Then the superfunction is \( F(y)=\Psi^{-1}(\lambda^y) \)

For a 44 term expansion, you get nearly 28 decimal digits of accuracy if |z|<1.
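As a concrete illustration of that evaluation trick (a double-precision Python sketch building on the coefficient code sketched earlier in the thread; the real pari-gp code works at much higher precision):

```python
import cmath

# assumes L and the coefficient list a = [a_0, ..., a_N] from the earlier sketch

def inv_schroder(z):
    """Psi^{-1}(z) = exp^(n)( Psi^{-1}(z / lambda^n) ), with n chosen so |z/lambda^n| is small."""
    w, n = complex(z), 0
    while abs(w) > 0.1:          # pull the argument inside the series' sweet spot
        w /= L
        n += 1
    val = sum(c * w ** k for k, c in enumerate(a))
    for _ in range(n):           # then push back out with n exponentials
        val = cmath.exp(val)
    return val

def superfunction(y):
    """F(y) = Psi^{-1}(lambda^y): satisfies F(y+1) = exp(F(y)), but is not real on the real line."""
    return inv_schroder(L ** y)

print(cmath.exp(superfunction(-1.5)) - superfunction(-0.5))   # ~0
```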

As far as the next step, Kneser's algorithm: that appears to be much harder for folks to understand, so I thought I would focus on the basics of the \( \Psi \) function for a while.  I think it would help to rewrite all of the relevant equations in terms of the more accessible \( \Psi \) Schröder and inverse Schröder functions, and to limit using the intermediate superfunction equation as much as possible.
- Sheldon
#4
It's unfortunate that I don't have a faster-converging expression that would obsolesce the more mechanical ''keep applying \( \exp \) until we're in a big enough region.'' Sadly, every expression I come up with is either a slowly converging Newton series or an even slower converging modified Mellin transform. I just thought it'd be nice to have a single equation that gives the superfunction and is slightly easier to construct, something that seems to be lacking in the literature.


The derivation is actually pretty straightforward; however, to get there requires the paper I'm rewriting (though it's never mentioned in that paper). It's all due to Ramanujan's master theorem, so I can hardly take much credit.
#5
(05/28/2017, 08:46 PM)JmsNxn Wrote: Define the sequence \( \zeta_n = \Psi^{-1}(e^{-n} \Psi(x_0)) \) where \( x_0 \) is fixed and is arbitrary, so long as \( \log^{\circ n}(x_0) \to L \). We can write an entire super function of exponentiation as

\( F(z)=\frac{1}{\Gamma(Lz)} (\sum_{n=0}^\infty \zeta_n \frac{(-1)^n}{n!(n+Lz)} + \int_1^\infty (\sum_{n=0}^\infty \zeta_n \frac{(-t)^n}{n!})t^{Lz-1}\,dt) \)
...
\( e^{F(z)} = F(z+1) \), but \( F(z) \neq 0 \), and is not real valued on the real line...  I'm still a little lost about how Kneser does it, but even from how it looks, it doesn't seem like it'll apply in this scenario.
...
this super function looks locally like \( F(z) = \Psi^{-1}(e^{Lz}\Psi(x_0)) \) ...

Then one can also take the base-\( \lambda \) logarithm of \( \Psi \) to generate the \( \alpha \) Abel function.  Note that since exp(L)=L and L lies in the principal strip, \( \ln(\lambda)=\lambda \), so dividing by \( \lambda \) is the same as dividing by \( \ln(\lambda) \), and \( \lambda^z=e^{\lambda z} \):
\( \alpha(z)=\frac{\ln(\Psi(z))}{\lambda};\;\;\;\alpha(e^z)=\frac{\ln(\lambda\Psi(z))}{\lambda}=\alpha(z)+\frac{\ln(\lambda)}{\lambda}=\alpha(z)+1 \)
And your \( F(z) \) function is the complex valued superfunction: \( \alpha^{-1}(z)=\Psi^{-1}(e^{\lambda z})=\Psi^{-1}(\lambda^z)\; \)
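In code, continuing the earlier sketches (this assumes the schroder() and inv_schroder() illustrations from above; again just a sketch, and the log's branch cuts need more care than shown here):

```python
import cmath

L = 0.3181315052047641 + 1.3372357014306895j   # fixed point of exp; lambda = L = ln(lambda)
# assumes schroder(z) and inv_schroder(z) from the earlier sketches

def abel(z):
    """alpha(z) = ln(Psi(z)) / lambda, so that alpha(exp(z)) = alpha(z) + 1."""
    return cmath.log(schroder(z)) / L

def inv_abel(z):
    """alpha^{-1}(z) = Psi^{-1}(e^{lambda*z}) = Psi^{-1}(lambda^z): the complex-valued superfunction."""
    return inv_schroder(cmath.exp(L * z))

print(inv_abel(1 + 0.5j) - cmath.exp(inv_abel(0.5j)))   # ~0: alpha^{-1}(z+1) = exp(alpha^{-1}(z))
```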

Then \( \alpha \) applied to the real number line is the un-spiraled \( \chi \) function, and one could superimpose it on the complex valued superfunction.   These two pictures are exactly analogous to the earlier pictures in the post.  Below is the complex valued superfunction \( \Psi^{-1}(e^{\lambda z}) \), from -3 to +6 on the real axis and -3 to +2 on the imaginary axis.  The green area comes from the \( \Psi^{-1}(z)\approx L \) region, where the argument of \( \Psi^{-1} \) is near 0.  The yellow highlight is the real axis from approximately \( \text{Tet}(-2) \) to \( \text{Tet}(6) \), with a gap of about \( 10^{-78} \) near the singularity at zero and corresponding gaps at the other integer values.
[attached image]

And here is the key showing what real numbers the yellow highlighter refers to.  
[attached image]

So now we want to map the yellow region (the colored sections in the 2nd picture) to the Tetration real axis between -2 and +6.  The yellow region can be extended infinitely in both directions.  And then we want to map everything "above" the yellow region to the upper half of the complex plane, while keeping the definition \( \text{Tet}(z+1)=\exp(\text{Tet}(z)) \).  And that's what Kneser's construction does.  There are a lot more details, like how does Kneser generate such a mapping?  But your \( F(z)=\alpha^{-1}(z) \) function is the intermediate step on the way to real valued Tetration.

In Jay's-description  and in Sheldon's-2011-post, Jay and I both added a lot of details about the (repeating) singularity.  But focusing on the complicated singularity comes at the expense of making the crucial repeating pattern harder for the reader to see.  You might also want to see  Henryk's-description
- Sheldon
#6
The part that was miraculous was mapping the region to the unit disk so that the transfer map \( z \mapsto z+1 \) gets sent to \( z \mapsto \tau(z) \) where \( \tau \) is an automorphism of the unit disk without fixed points. That to me was the genius of the method. I'm still a little unclear on how he does it, but I'm starting to see the general picture.
#7
(05/31/2017, 05:38 PM)JmsNxn Wrote: The part that was miraculous was mapping the region to the unit disk so that the transfer map \( z \mapsto z+1 \) gets sent to \( z \mapsto \tau(z) \) where \( \tau \) is an automorphism of the unit disk without fixed points. That to me was the genius of the method. I'm still a little unclear on how he does it, but I'm starting to see the general picture.

I'm going to describe the Riemann mapping as best as I understand it even though my understanding of how Kneser uses the RiemannMapping region is not complete.  You take the \( \alpha \) Abel function of the real number line to get the repeating yellow contour from the previous post.
\( \alpha(\Re);\;\;\;\alpha(z)=\frac{\ln(\Psi(z))}{\lambda} \)

And then, to get Kneser's RiemannMapping region which is mapped to the unit circle, you need to multiply by \( 2\pi i \), so now it's \( 2\pi i \) periodic instead of unit periodic.  Finally, you take the exponential, so all of the unit sub-segments are now mapped on top of each other.  This encloses an infinite region which includes the z=0 center, corresponding to \( i\infty \).   And then you take the RiemannMapping of that region ...  If you accept all of that, then you have:

\( U(z)=\text{RiemannMapping}(e^{2\pi i \alpha(\Re)})\;\;\; \) It is valid to work with the line segment from 0 to 1 instead of the real number line.
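Just to make that region concrete (a sketch reusing the approximate abel() from above; the Riemann mapping of the region is the genuinely hard step and is not attempted here):

```python
import cmath, math

# assumes abel(z) from the earlier sketch

def region_boundary(x):
    """Boundary point exp(2*pi*i*alpha(x)) of Kneser's RiemannMapping region, for real x.
    Undefined exactly at x = 0, 1, e, e^e, ... where alpha has singularities."""
    return cmath.exp(2j * math.pi * abel(complex(x)))

# Sample one unit sub-segment of the real line, 0 < x < 1; the exponential wraps
# every other sub-segment onto the same curve.
boundary = [region_boundary(0.001 + 0.998 * k / 200) for k in range(201)]
```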

I'm not sure if that is the correct Riemann mapping terminology, but \( U(z) \) represents the RiemannMapping unit circle function, with the requirement that \( U(0)=0 \), and that \( U(1) \) is the singularity; the rest of the unit circle is analytic.  My next step is to generate the \( z+\theta(z) \) function from the RiemannMapping.  You have to take the ln and divide by \( 2\pi i \) to get the repeating yellow region mapped to the real axis.

\( z+\theta(z)=\frac{\ln(U(e^{2\pi i z}))}{2\pi i } \)
\( \lim_{z \to i\infty}\theta(z)=k\;\; \) where k is a constant
\( \alpha^{-1}(z+\theta(z))\;\;\; \) Kneser's Tetration function, which is real valued at the real axis...

Finally, here is the picture of the Riemann mapping region: \( e^{2\pi i \alpha(\Re)} \) 
[attached image]
- Sheldon
#8
That's even more enlightening! That's a great intuitive way of interpreting it. That's the best way I can interpret it so far, I just get the intuition of the matter. I definitely couldn't reproduce the proof, or teach a class on the proof, but I'm really starting to get how it works. It's so inventive it's mindblowing.

The more I think about how Kneser does the impossible with this, the more I imagine there's some ridiculously clever way of producing \( ^ze \) from \( ^z\eta \). Much like the base change formula, but I imagine it would have to be less direct. I have this sneaking hunch that \( ^z\eta \) is the key to a nice tetration for all bases, or at least for real bases \( b>1 \). The bounded case is just too simple and unbelievably well behaved; it has to be a gateway, or must shed some light on the unbounded case. Especially \( \eta \), because \( ^z\eta \) is holomorphic on the exact same domain on which a decent \( ^ze \) extension would be holomorphic, namely \( \mathbb{C}\setminus(-\infty,-2] \). There must be some sequence of steps which produces \( ^ze \) from \( ^z\eta \). That's why I even study the bounded hyper-operators: they have to produce the unbounded case in some ingenious manner. At least, that's the goal, lol.

Thanks a lot for these descriptions, Sheldon. That's why I've come here since high school. Such a nice, "no question is a bad question" forum. I can't believe how much more rigorous and well-founded my arguments about hyper-operators have become since I've come here.  Too bad it's been less active lately.


Plus it's unreal that that tiny region becomes the half plane and it preserves the composition by \( e^z \) and it solves tetration. Who on god's earth could come up with that?
#9
(06/01/2017, 07:29 AM)JmsNxn Wrote: That's even more enlightening! That's a great intuitive way of interpreting it. That's the best way I can interpret it so far, I just get the intuition of the matter. I definitely couldn't reproduce the proof, or teach a class on the proof, but I'm really starting to get how it works. It's so inventive it's mindblowing.
.....
Thanks a lot for these descriptions, Sheldon. That's why I've come here since high school. Such a nice, "no question is a bad question" forum. I can't believe how much more rigorous and well-founded my arguments about hyper-operators have become since I've come here.  Too bad it's been less active lately.

Plus it's unreal that that tiny region becomes the half plane and it preserves the composition by \( e^z \) and it solves tetration. Who on god's earth could come up with that?

Thanks a lot for your comments James!

My gut says very few folks understand Kneser's real valued Tetration construction.  Edit: it turns out I still only partially understand Kneser too!  When I started this post, I wasn't expecting to try to explain Kneser at all.  I just figured I would post some pretty pictures of the Chi-star contour and the inverse Schröder function.  The Schröder function and its inverse seem like a "safe", mathematically rigorous picture, and then the reader could imagine Kneser somehow finding a way to unspiral the Chi-star to generate real valued Tetration.  Maybe I made Kneser and real valued Tetration base e a little bit more accessible.

Then the obvious next step is to explain \( z+\theta(z) \), where \( \theta(z) \) is the 1-cyclic function I originally mistakenly thought was Kneser's Riemann mapping.  We start with the definition of \( \theta(z) \) in terms of the RiemannMapping unit circle function.  The \( z+\theta(z) \) approach is mathematically equivalent to Kneser's approach, but it is different.

\( z+\theta(z)=\frac{\ln(U(e^{2\pi i z}))}{2\pi i } \)

Remember that U(0)=0, and it is analytic with a radius of convergence of 1.
\( U(z)=a_1 z+a_2 z^2 + a_3 z^3 ... = \sum_{n=1}^{\infty}a_n z^n \)

The equation for \( z+\theta(z) \) starts by taking the logarithm and dividing by \( 2\pi i \), so let's rearrange terms a little before substituting \( z \mapsto e^{2\pi i z} \):

\( \frac{\ln(U(z))}{2\pi i}=\frac{1}{2\pi i}\ln\Big(z\cdot a_1\cdot(1+\sum_{n=1}^{\infty}\frac{a_{n+1}z^n}{a_1})\Big)=\frac{1}{2\pi i}\cdot\Big(\ln(z)+\ln(a_1)+\ln\Big(1+\sum_{n=1}^{\infty}\frac{a_{n+1}z^n}{a_1}\Big)\Big) \)


Since \( \ln(1+x) \) has a simple formal power series, the right-hand term also has a formal power series \( b_1, b_2, \ldots \) with a radius of convergence of 1; \( b_0 \) is the constant term.  I calculated \( b_1 \) and \( b_2 \), the first couple of terms of the formal power series.
\( \frac{\ln(U(z))}{2\pi i}=\frac{\ln(z)}{2\pi i}+\sum_{n=0}^{\infty}b_n z^n\;\;\;b_0=\frac{\ln(a_1)}{2\pi i}\;\;\;b_1=\frac{a_2}{2\pi i a_1}\;\;\;b_2=\frac{1}{2\pi i}\Big(\frac{a_3}{a_1}-\frac{a_2^2}{2a_1^2}\Big)... \)

If we substitute \( z\mapsto e^{2\pi i z} \) into this equation for \( z+\theta(z) \), the 1-cyclic theta mapping is immediately obvious:
\( \frac{\ln(U(e^{2\pi i z}))}{2\pi i}=z+\theta(z);\;\;\;\theta(z)=\sum_{n=0}^{\infty}b_n e^{2n\pi i z} \)

This is a valid alternative way of viewing Kneser's Tetration construction, where \( \theta(z) \) is a 1-cyclic function which converges to a constant as \( \Im(z)\to\infty \), has a singularity at integer values of z, but is otherwise analytic in the upper half of the complex plane.  So what my kneser.gp pari-gp program does is iteratively calculate \( \theta(z) \), which is computationally much, much easier than calculating a Riemann mapping.

\( \text{Tet}(z)=\alpha^{-1}(z+\theta(z))\;\;\; \) Kneser's Tetration in terms of the complex valued inverse Abel superfunction and theta
\( \text{Tet}(z)=\alpha^{-1}(z+\theta(z))=\Psi^{-1}(\lambda^{z+\theta(z)})\;\;\; \) Kneser's Tetration in terms of the inverse Schröder and the 1-cyclic theta mapping.  
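As one last sketch tying these formulas together (Python; the \( a_n \) here are hypothetical Taylor coefficients of the Riemann mapping U, which this sketch does NOT compute — finding them, or equivalently iterating for \( \theta \) as kneser.gp does, is the hard part):

```python
import cmath, math

L = 0.3181315052047641 + 1.3372357014306895j   # fixed point of exp; lambda = L
# assumes inv_schroder(z) from the earlier sketch
# u = [a_1, a_2, a_3, ...]: hypothetical Taylor coefficients of the Riemann mapping U(z)

def theta_coeffs(u, N):
    """b_0..b_N, where ln(U(z))/(2 pi i) = ln(z)/(2 pi i) + sum_n b_n z^n."""
    # series log of 1 + sum_{n>=1} (a_{n+1}/a_1) z^n, via (ln S)' = S'/S
    s = [1 + 0j] + [u[n] / u[0] for n in range(1, min(N + 1, len(u)))]
    s += [0j] * (N + 1 - len(s))
    v = [0j] * (N + 1)
    for k in range(1, N + 1):
        v[k] = s[k] - sum(j * v[j] * s[k - j] for j in range(1, k)) / k
    return [cmath.log(u[0]) / (2j * math.pi)] + [v[k] / (2j * math.pi) for k in range(1, N + 1)]

def theta(z, b):
    """1-cyclic theta(z) = sum_n b_n exp(2 pi i n z); tends to b_0 as Im(z) -> +infinity."""
    return sum(bn * cmath.exp(2j * math.pi * n * z) for n, bn in enumerate(b))

def tet(z, b):
    """Tet(z) = Psi^{-1}(lambda^(z + theta(z))) -- only as accurate as the supplied b_n,
    and double precision limits it to modest arguments."""
    w = z + theta(z, b)
    return inv_schroder(cmath.exp(L * w))
```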
And here is one last complex plane graphing image, of Kneser's Tetration base e, from -3 to +12 on the real axis, and +/-3 on the imaginary axis.
[attached image]
- Sheldon
#10
James,

I had to edit my posts to remove any references to Kneser's \( \tau \) function.  I can get as far as Kneser's RiemannMapping region, which exactly matches Jay's post.  And, if you reread my edited posts,  I showed that you can get \( z+\theta(z) \) from the RiemannMapping \( U(z) \) region as follows:
\( z+\theta(z)=\frac{\ln(U(e^{2\pi i z}))}{2\pi i} \)
 
Then you can use the complex valued inverse Abel function to get Tetration as follows:
\( \text{Tet}(z)=\alpha^{-1}(z+\theta(z)) \)

But I don't understand Kneser's \( \tau(z) \) function, which does not seem to be \( z+\theta(z) \).   Kneser is using the RiemannMapping \( U(z) \) result in a different way than I am.  Also, Kneser finishes by constructing the real valued slog.... This thread is still good and the pictures are really cool, but I am discouraged that after all these years I still don't understand Kneser as well as I would like.  I'm sure that in time I will understand more, or perhaps someone can step in and further enlighten me.

I think maybe I got it, but I will need to reread Henryk's post a few more times.  The only thing I can figure that makes any sense at all is:
\( \text{Tet}^{-1}(z)=\text{slog}(z)=\tau(\alpha(z))=\tau\Big(\frac{\ln(\Psi(z))}{\lambda}\Big)\;\;\; \)Kneser's equation for the inverse of Tetration in terms of tau
\( \tau^{-1}(z)=\frac{\ln(U(e^{2\pi i z}))}{2\pi i}=z+\theta(z)\;\;\; \) This shows the inverse of tau in terms of my z+theta(z)
But then \( \tau \) is the end result of the inverse of the RiemannMapping, which I totally don't get from Henryk's post, and it still confuses me....

\( \tau \) can also be expressed as a different 1-cyclic mapping, \( \tau(z)=z+\theta_\alpha(z)\;\;\; \) I'm not sure this matters much, though.
- Sheldon

