[split] Understanding Kneser Riemann method
#1
For the inspiration behind this thread, see this thread

@sheldonison
(01/13/2016, 01:37 PM)sheldonison Wrote: \( \text{tet}_b(z) =\;^z b= \exp_b^z(0) \)
just as a point of order, I think you meant 1 instead of 0.

I've tried to understand your Pari/GP scripts, but I think my fundamental issue is with the Kneser construction / Riemann mapping thing. So here is my understanding so far. Whatever the Kneser construction is, it seems to produce results consistent with regular iteration. Regular iteration produces power series with complex coefficients for \( b > \eta \), because the fixed points are complex for these bases. That makes sense to me. Regular iteration produces power series with real coefficients for \( \exp(-e) \le b \le \eta \), because the fixed points are real for these bases, or for all bases in the closed region of convergence of the infinitely iterated exponential. I would also expect tetration for "period 3" bases (approximately all complex bases with negative real part) to produce power series with complex coefficients, not only because the fixed points are complex, but also because of homotopy considerations: the orbits of 3 points that converge to a 3-cycle would require pushing any "lines" into a "round thing" (not sure if that's rigorous, but it makes sense to me).

So in this context, the Riemann mapping step is a method to find a function that somehow maps power series with complex coefficients to power series with real coefficients. The value of such a construction is that it allows us to compare regular iteration and intuitive iteration for \( b > \eta \). But there are too many unknowns for me: what are the properties of this Riemann mapping? how do we find it? what is the result? is it analytic? wouldn't this just be equivalent to

\( f^{-1}(x) = \text{tet}_b^{Reg}(\text{slog}_b^{Int}(x)) \)

and if this is the method for calculating the Riemann mapping, then we can't expect to learn anything about the two methods of iteration. Perhaps I should revisit this when I'm less confused.
#2
(01/13/2016, 08:24 AM)andydude Wrote: @marraco, @tommy
That certainly is a cool equation, even if it is easily provable.

@everyone
Also, I think I can express my earlier comment in different words now. Tetration is defined as the 1-initialized superfunction of exponentials. The functions discussed earlier are 3-initialized and 5-initialized, which makes them not tetration, by definition. However, if there is an analytic continuation of the 1-initialized superfunction that overlaps with the 3-initialized superfunction, AND if on the overlap f(0) = 3, then they can be considered branches of the same function. But until that is proven, I don't think it's accurate to say that they're all "tetration". They are, however, iterated exponentials in the sense that they extend \( \exp_b^n(3) \) to non-integer n. And so I would probably write these functions as \( \exp_b^n(1),\, \exp_b^n(3),\, \exp_b^n(5) \) instead of saying that \( {}^{n}b \) is a multivalued function that returns all three.

That makes sense, but on the other hand, we need to solve equations like \( {}^{x}\sqrt{2}=3 \) (3>2, the asymptotic limit).
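
As a quick numerical illustration of that asymptotic limit (a minimal GP sketch, not taken from any of the scripts discussed in this thread): iterating \( b^z \) with \( b=\sqrt{2} \) starting from 1 climbs toward 2, so the value 3 is never reached along the real axis.
Code:
\\ the tower ^n sqrt(2) converges to 2 from below as n grows
b = sqrt(2);
t = 1.0;
for(n = 1, 60, t = b^t);
t   \\ ~ 2 (but always < 2), so 3 is never reached at real heights
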
Similarly, when we solve \( e^{x}=-1 \) (-1<0, the asymptotic limit), we do not say that it is a function different from exponentiation. We just extend the domain to complex numbers.

But this equation \( {}^{x}\sqrt{2}=3 \) has no real solutions, unless we consider the pair \( (x,\,{}^0a) \) a new kind of number.

If we use a new kind of number, then, for the main branch, \( {}^0a=(1,1) \); no longer a real number.
I have the result, but I do not yet know how to get it.
#3
(01/13/2016, 04:01 PM)andydude Wrote: @sheldonison
(01/13/2016, 01:37 PM)sheldonison Wrote: \( \text{tet}_b(z) =\;^z b= \exp_b^z(0) \)
just as a point of order, I think you meant 1 instead of 0.

I've tried to understand your Pari/GP scripts, but I think my fundamental issue is with the Kneser construction / Riemann mapping thing. So here is my understanding so far.....

The value of such a construction is that it allows us to compare regular iteration and intuitive iteration for \( b > \eta \). But there are too many unknowns for me: what are the properties of this Riemann mapping? how do we find it? what is the result? is it analytic? wouldn't this just be equivalent to

\( f^{-1}(x) = \text{tet}_b^{Reg}(\text{slog}_b^{Int}(x)) \)

and if this is the method for calculating the Riemann mapping, then we can't expect to learn anything about the two methods of iteration. Perhaps I should revisit this when I'm less confused.

Your equation is pretty close. I'm going to re-phrase it in the terms I prefer, using a theta(z) mapping. I'm not surprised about the confusion. It would be nice to encapsulate the Kneser mappings into something as compact as possible. I think the equation linking your f(x) with my theta equation is: \( f(z)=z+\theta(z) \)

Let's say we have the Schroeder function and its inverse, \( S(z),\;S^{-1}(z) \), which have corresponding Abel and super functions, \( \alpha(z)=\log_L(S(z)),\;\;\alpha^{-1}(z)=S^{-1}(L^z) \).

I think that's what you mean by regular iteration. This Abel function is complex valued for bases \( b>\eta \). Also, there are actually two fixed points, which are complex conjugates of each other.
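
Here is a minimal GP sketch of that regular iteration for base e (illustrative only, not part of fatou.gp; the names L, lambda and F are arbitrary). It builds the regular superfunction from the primary fixed point via the limit \( F(z)=\lim_{n\to\infty}\exp^{[n]}(L+\lambda^{z-n}) \), and its values are complex even at real arguments:
Code:
\\ primary fixed point L = exp(L) in the upper half plane, found by iterating log
L = 0.5 + I; for(k = 1, 100, L = log(L));
lambda = L;   \\ multiplier exp'(L) = exp(L) = L for base e
\\ regular superfunction: F(z) = lim_n exp^[n]( L + lambda^(z-n) ), truncated at n steps
F(z, n = 50) = {
    my(w = L + lambda^(z-n));
    for(k = 1, n, w = exp(w));
    return(w);
}
F(0)               \\ complex even for real z: regular iteration is not real valued here
F(1) - exp(F(0))   \\ ~ 0 up to the truncated limit, checking F(z+1) = exp(F(z))
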

Now, here's the interesting thing. Start with a real valued slog(z) function that meets the uniqueness criteria. We can generate that slog as a function of the \( \alpha(z) \) above as follows:
\( \theta(z)=\text{slog}(\alpha^{-1}(z))-z \), where \( \theta(z) \) is a 1-cyclic function: \( \theta(z+1)=\theta(z) \)
\( \text{slog}(z) = \alpha(z) + \theta(\alpha(z)) \), the real valued slog(z) in terms of the Schroeder function and theta(z)
(The second line follows from the first by substituting \( z=\alpha(w) \): then \( \theta(\alpha(w))=\text{slog}(w)-\alpha(w) \), i.e. \( \text{slog}(w)=\alpha(w)+\theta(\alpha(w)) \).)

I'm writing these equations in terms of the slog, since my latest program, fatou.gp, calculates the slog. The uniqueness criterion, equivalent to Kneser's, is that in the upper half of the complex plane theta(z) has a very special property: as \( \Im(z) \) approaches \( +\infty \), theta(z) approaches a constant. Since theta(z) is a 1-cyclic function, this tells you that:
\( \theta(z) = \sum_{n=0}^{\infty} a_n \cdot \exp(2n\pi i z) \); notice the absence of negative terms as compared with the general 1-cyclic series \( \sum_{n=-\infty}^{\infty} a_n \cdot \exp(2n\pi i z) \)
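
One illustrative way to estimate those coefficients (not necessarily how fatou.gp does it, and assuming implementations of slog() and the regular superfunction, called alphainv() below, are already available) is to sample theta(z) along a horizontal line in the upper half plane and average against \( \exp(-2n\pi i z) \):
Code:
\\ illustrative sketch only; slog() and alphainv() are assumed to exist
theta(z) = slog(alphainv(z)) - z;   \\ the 1-cyclic mapping defined above
\\ a_n = integral over one period of theta(z)*exp(-2*n*Pi*I*z), taken on a line Im(z) > 0;
\\ approximate it by averaging M equally spaced samples (aliasing is damped since Im(z0) > 0)
acoeff(n, z0 = 2*I, M = 64) = {
    my(s = 0, z);
    for(k = 0, M-1,
        z = z0 + k/M;
        s += theta(z) * exp(-2*n*Pi*I*z)
    );
    return(s/M);
}
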

So, what my latest fatou.gp program does is compute a pair of \( \theta(z) \) mappings for the two fixed points, in the upper and lower halves of the complex plane, in addition to iterating and calculating an approximation for the real valued slog(z) Taylor series. This is equivalent to Kneser's construction; Kneser never talked about 1-cyclic functions much, but his equations and his Riemann mapping can be equivalently expressed in terms of 1-cyclic mappings, as I'm doing here.

So now this tells you that as \( \alpha(z) \) approaches \( +i\infty \), Kneser's slog approaches \( \alpha(z)+a_0 \), where \( a_0 \) is the constant term of the \( \theta(z) \) series. Of course, \( \alpha(z) \) approaches \( +i\infty \) as z gets closer to the fixed point L. Perhaps I will post more later; hope this helps.
- Sheldon
#4
(01/13/2016, 05:36 PM)sheldonison Wrote: I'm writing these equations in terms of the slog, since my latest program, fatou.gp, calculates the slog. The uniqueness criterion, equivalent to Kneser's, is that in the upper half of the complex plane theta(z) has a very special property: as \( \Im(z) \) approaches \( +\infty \), theta(z) approaches a constant. Since theta(z) is a 1-cyclic function, this tells you that:
\( \theta(z) = \sum_{n=0}^{\infty} a_n \cdot \exp(2n\pi i z) \); notice the absence of negative terms as compared with the general 1-cyclic series \( \sum_{n=-\infty}^{\infty} a_n \cdot \exp(2n\pi i z) \)

Ok, so this looks like a Fourier series with unknown coefficients. How do you compute the coefficients \( a_n \)? Maybe it's obvious, but I don't know much about Fourier series.
#5
(01/13/2016, 05:36 PM)sheldonison Wrote: Lets say we have the Schroeder function, and its inverse \( S(z)\;\;S^{-1}(z) \) which have corresponding Abel and super functions, \( \alpha(z)=\log_L(S(z))\;\;\alpha^{-1}(z)=S^{-1}(L_0+L^z)\;\; \) Notice that for b=e, \( \exp(z)\; L_0=L\;\; \) but this is not the case for other bases.

So
\( \alpha^{-1}(z)=S^{-1}(L_0+L^z) \)
\( S(\alpha^{-1}(z))=L_0+L^z \)
\( S(z)=L_0+L^{\alpha(z)} \)
\( S(z)=L_0+S(z) \)
\( S(z) - S(z) = L_0 = 0 \)
which means \( L_0 = L = 0 \)?
I don't understand.
#6
(01/13/2016, 01:37 PM)sheldonison Wrote: The two pari-gp programs agree with each other. And they both agree that when you rotate 180 degrees around eta, the function you get is no longer real valued at the real axis!

What exactly do you mean by "rotate"? Do you mean if you start with a base \( b > \eta \) and vary the base towards 1 that the Riemann mapping function turns the already real solution into a complex solution?
#7
(01/13/2016, 09:21 PM)andydude Wrote:
(01/13/2016, 05:36 PM)sheldonison Wrote: Lets say we have the Schroeder function, and its inverse \( S(z)\;\;S^{-1}(z) \) which have corresponding Abel and super functions, \( \alpha(z)=\log_L(S(z))\;\;\alpha^{-1}(z)=S^{-1}(L_0+L^z)\;\; \) Notice that for b=e, \( \exp(z)\; L_0=L\;\; \) but this is not the case for other bases.

So
\( \alpha^{-1}(z)=S^{-1}(L_0+L^z) \)
\( S(\alpha^{-1}(z))=L_0+L^z \)
\( S(z)=L_0+L^{\alpha(z)} \)
\( S(z)=L_0+S(z) \)
\( S(z) - S(z) = L_0 = 0 \)
which means \( L_0 = L = 0 \)?
I don't understand.
\( \alpha^{-1}(z)=S^{-1}(L^z) \)

That was a typo; I was working off the top of my head. Anyway, the inverse of the Schroeder function applied to \( L^z \) is the superfunction. For base e, the expansion is around the fixed point, so \( S^{-1}(0)=L \). The formal Schroeder equation is what you use, at the fixed point of the exponential for base b.
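
Here is a minimal numerical sketch of that Schroeder function for base e (illustrative only; L, lambda and S are just local names): the Koenigs limit at the fixed point, computed by backward iteration with the log branch that converges to L, satisfies \( S(e^z)=\lambda\,S(z) \).
Code:
\\ illustrative sketch, base e: Schroeder (Koenigs) function at the fixed point L = exp(L)
L = 0.5 + I; for(k = 1, 100, L = log(L));   \\ primary fixed point, ~ 0.318 + 1.337*I
lambda = L;                                  \\ multiplier exp'(L) = L for base e
\\ S(z) = lim_n lambda^n * (log^[n](z) - L), using the log branch converging to L
S(z, n = 40) = {
    my(w = z);
    for(k = 1, n, w = log(w));
    return(lambda^n * (w - L));
}
S(exp(1+I)) / S(1+I)   \\ ~ lambda, a numerical check of S(exp(z)) = lambda*S(z)
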

- Sheldon
#8
(01/13/2016, 09:29 PM)andydude Wrote:
(01/13/2016, 01:37 PM)sheldonison Wrote: The two pari-gp programs agree with each other. And they both agree that when you rotate 180 degrees around eta, the function you get is no longer real valued at the real axis!

What exactly do you mean by "rotate"? Do you mean if you start with a base \( b > \eta \) and vary the base towards 1 that the Riemann mapping function turns the already real solution into a complex solution?

Start with \( b=\eta+0.2 \)
Then rotate slowly
\( b=\eta+0.2\cdot\exp(0.25 \pi i) ...\;\; b=\eta+0.2\cdot\exp(0.50 \pi i) ...\;\; \eta+0.2\cdot\exp(0.75 \pi i) ... \;\; \eta+0.2\cdot\exp(1.0 \pi i)=\eta-0.2\approx 1.245 \)

At each step along the way, generate an analytic Kneser type mapping using one of the fixed points in the upper half of the complex plane, and the other fixed point in the lower half of the complex plane. The resulting Kneser mapping is no longer real valued at the real axis. The imaginary offset is much more visible in the slog between the fixed points, which varies by about 2*10^-8. The sexp(z) imaginary component is around 10^-14.
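
To make the path concrete, here are the bases that rotation steps through (a small GP snippet; the Kneser/theta mapping at each base would come from the complex-base routines, e.g. tetcomplex.gp, whose calls are not shown here):
Code:
\\ the rotation path around eta = e^(1/e), in quarter-turn steps
eta = exp(exp(-1));
bases = vector(5, k, eta + 0.2*exp((k-1)*0.25*Pi*I));
bases   \\ the last entry is eta - 0.2 ~= 1.245, after the full 180 degree rotation
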

Update: better yet, look at the beautiful sequence of complex base tetration in my tetcomplex.gp post#4 and post#15, http://math.eretrandre.org/tetrationforu...hp?tid=729

Ok, there is a natural desire to work with b=sqrt(2). So let's load up fatou.gp and up the precision to 67 decimal digits. The two fixed points are 2 and 4.
Code:
\r fatou.gp
\p 67
sexpinit(sqrt(2))  /* the default limit of 50 iterations also limits precision to 10^-46 */
/* if desired, to get around the 50 iteration count, use loop(log(log(sqrt(2)))+1,70) */
slog(-2) /* these 4 points<2 are near the real axis, but Im(z) isn't exactly zero */
slog(0)
slog(0.5) /* the imaginary jitter is around 10^-48 */
slog(1.9)
slog(2.1) /* these 3 points between 2 & 4 have Im(z)~=-8.57i, but it's not exact */
slog(3)  /* the imaginary jitter is around 10^-25 */
slog(3.9)
slog(4.1) /* these 3 points>4 have Im(z)~=-18.2i, but it's not exact either */
slog(6) /* the imaginary jitter is around 10^-50 */
slog(8)
- Sheldon

