# Tetration Forum

Full Version: Cyclic complex functions and uniqueness
I had an idea about cyclic functions, in the sense of a function f such that f(x+a) - f(x) = a for some period a. For the purposes of this discussion, I'll assume that a = 1.

I'm going to quote a post from a couple months ago:
http://math.eretrandre.org/tetrationforu...php?tid=19
bo198214 Wrote:Let $F(x)=b^x$ for some base $b$.
Then we demand that any tetration $f(x)={}^x b$ is a solution of the Abel equation
$f(x+1)=F(f(x))$ and $f(1)=b$.

Such a solution $f$ (even if analytic and strictly increasing) is generally not unique because for example the solution $g(x):=f(x+\frac{1}{2\pi}\sin(2\pi x))$ is also an analytic strictly increasing solution, by
$g(x+1)=f(x+1+\frac{1}{2\pi}\sin(2\pi + 2\pi x))=F(f(x+\frac{1}{2\pi}\sin(2\pi x)))=F(g(x))$ and
$g'(x)=f'(x+\frac{1}{2\pi}\sin(2\pi x))(1+\frac{1}{2\pi}\cos(2\pi x)2\pi)=\underbrace{f'(x+\frac{1}{2\pi}\sin(2\pi x))}_{>0}\underbrace{(1+\cos(2\pi x))}_{\ge 0}>0$
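As a side check (my own illustration, not part of the quoted post), the 1-cyclic shift $h(x) = x + \frac{1}{2\pi}\sin(2\pi x)$ used above can be verified numerically: it satisfies $h(x+1) = h(x)+1$ and $h'(x) \ge 0$, which is exactly what makes $g = f \circ h$ another increasing solution.

```python
import math

def h(x):
    # the 1-cyclic shift used in the quoted construction
    return x + math.sin(2 * math.pi * x) / (2 * math.pi)

def h_prime(x):
    # h'(x) = 1 + cos(2*pi*x), which is >= 0 everywhere
    return 1 + math.cos(2 * math.pi * x)

xs = [k / 100 for k in range(-300, 300)]
# h(x + 1) = h(x) + 1, so g(x) = f(h(x)) inherits f(x+1) = F(f(x))
assert all(abs(h(x + 1) - (h(x) + 1)) < 1e-9 for x in xs)
# h is non-decreasing, so g stays increasing wherever f is
assert all(h_prime(x) >= -1e-12 for x in xs)
print("h(x+1) = h(x) + 1 and h'(x) >= 0 hold on the sample grid")
```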

If we're only concerned about a real function, then this issue of a cyclic shift of the input to the sexp function is an important one.

However, what about for a complex function? The sin and cos functions grow exponentially in absolute value as we move away from the real line (the growth is dictated by sinh and cosh, in fact). More importantly, the magnitude of the difference between the "crest" and the "trough" increases exponentially.

And as far as I can remember, any periodic real function can be decomposed into a Fourier series of sin and cos functions. Therefore, no matter how small the shift in each cycle, if we get far enough off the real line, the shift will be quite large over a full cycle.
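A numerical illustration of this growth (my own sketch): the peak of $|\sin(2\pi z)|$ over one period in the real direction equals $\cosh(2\pi y)$, where $y$ is the imaginary part, so it grows exponentially off the real line.

```python
import cmath
import math

# max of |sin(2*pi*(x + i*y))| over one period in x equals cosh(2*pi*y):
# |sin(a + i*b)|^2 = sin(a)^2 + sinh(b)^2, maximized where sin(a) = +/-1
peaks = {}
for y in [0.0, 1.0, 2.0, 3.0]:
    peaks[y] = max(abs(cmath.sin(2 * math.pi * (x / 100 + 1j * y)))
                   for x in range(100))
    print(f"y = {y}: max |sin(2*pi*z)| = {peaks[y]:.2f} "
          f"(cosh(2*pi*y) = {math.cosh(2 * math.pi * y):.2f})")
```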

Okay, this much I think is correct, but please let me know if I'm wrong.

Pushing forward (assuming I was correct), if we had a general idea of how the function should act for inputs with large imaginary part, then it would be easy to weed out solutions that had even very small cyclic shifts.

For the slog base e, I have just such a general idea. In the vicinity of the primary fixed point, it should behave like a logarithm with a complex base. The upper fixed point is approached as the imaginary part of the slog goes to positive infinity. The lower fixed point is approached as the imaginary part of the slog goes to negative infinity.

Luckily, this general idea is so simple that I think it is sufficient to give us the uniqueness criterion we've been looking for! For example, I think that my solution (using my change of base from base eta) will give us a very erratic slog in the vicinity of the fixed points.

I have yet to test this hypothesis with my solution, but I hope to show this in the next couple weeks. It'll be tricky, because I'll need to calculate a lot of points with very high precision to be able to get a useful power series expansion. I need a power series expansion because my change of base formula only works for real tetrational "exponents".

Moreover, I think this uniqueness criterion will set apart Andrew's slog as "the" correct solution, at least for base e.
To illustrate, consider a sexp solution $S(z)$ and its inverse $S^{\small -1}(z)$. Let's say that, in the vicinity of the upper fixed point $c_0$, $S^{\small -1}(z) \approx \log_{c_0}\left(z-c_0\right)$.

Now let's say we have an alternate solution $R(z)$ and its inverse $R^{\small -1}(z)$. Let's say that $R(z)=S\left(z+\frac{\sin(2\pi z)}{200\pi}\right)$.

This shift is a pretty small one. After all:

$
\begin{eqnarray}
S^{\small -1}\left(S(z)\right) & = & z \\
D_z \left[S^{\small -1}\left(S(z)\right)\right] & = & D_z \left[z\right] \\
& = & 1 \\
S^{\small -1}\left(R(z)\right) & = & z+\frac{\sin(2\pi z)}{200\pi} \\
D_z \left[S^{\small -1}\left(R(z)\right)\right]
& = & D_z \left[z+\frac{\sin(2\pi z)}{200\pi}\right] \\
& = & 1 + \frac{\cos(2\pi z)}{100}
\end{eqnarray}
$

As you can see, for real z the shift is small, and the derivative stays close to 1: it oscillates between 0.99 and 1.01.

But what happens to $S^{\small -1}\left(R(z)\right)$ as we approach z values with large imaginary part? Well, to get an idea, let's look at the derivative for a z value with a relatively small imaginary part.

$
\begin{eqnarray}
S^{\small -1}\left(R(2+2i)\right) & = & (2+2i)+\frac{\sin\left(2\pi (2 + 2i)\right)}{200\pi} \\
& \approx & 2 + 230.19i \\
\left. D_z \left[S^{\small -1}\left(R(z)\right)\right] \right|_{z=2+2i}
& = & 1 + \frac{\cos\left(2\pi (2 + 2i)\right)}{100} \\
& \approx & 1434.76
\end{eqnarray}
$

In fact, here's a chart to give an idea of how big these oscillations can get:

Code:
|      z         |     S^-1(R(z))       |      D_z S^-1(R(z))
+----------------+----------------------+----------------------
| 2.00           |     2.00             |      1.01
| 2.25           |     2.25             |      1.00
| 2.50           |     2.50             |      0.99
| 2.75           |     2.75             |      1.00
| 3.00           |     3.00             |      1.01
| 3.25           |     3.25             |      1.00
| 3.50           |     3.50             |      0.99
| 3.75           |     3.75             |      1.00
| 4.00           |     4.00             |      1.01
|                |                      |
| 2.00 + 1.00 i  |     2.00 +   1.43 i  |      3.68
| 2.25 + 1.00 i  |     2.68 +   1.00 i  |      1.00 -    2.68 i
| 2.50 + 1.00 i  |     2.50 +   0.57 i  |     -1.68
| 2.75 + 1.00 i  |     2.32 +   1.00 i  |      1.00 +    2.68 i
| 3.00 + 1.00 i  |     3.00 +   1.43 i  |      3.68
| 3.25 + 1.00 i  |     3.68 +   1.00 i  |      1.00 -    2.68 i
| 3.50 + 1.00 i  |     3.50 +   0.57 i  |     -1.68
| 3.75 + 1.00 i  |     3.32 +   1.00 i  |      1.00 +    2.68 i
| 4.00 + 1.00 i  |     4.00 +   1.43 i  |      3.68
|                |                      |
| 2.00 + 2.00 i  |     2.00 + 230.19 i  |   1434.76
| 2.25 + 2.00 i  |   230.44 +   2.00 i  |      1.00 - 1433.76 i
| 2.50 + 2.00 i  |     2.50 - 226.19 i  |  -1432.76
| 2.75 + 2.00 i  |  -225.44 +   2.00 i  |      1.00 + 1433.76 i
| 3.00 + 2.00 i  |     3.00 + 230.19 i  |   1434.76
| 3.25 + 2.00 i  |   231.44 +   2.00 i  |      1.00 - 1433.76 i
| 3.50 + 2.00 i  |     3.50 - 226.19 i  |  -1432.76
| 3.75 + 2.00 i  |  -224.44 +   2.00 i  |      1.00 + 1433.76 i
| 4.00 + 2.00 i  |     4.00 + 230.19 i  |   1434.76
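The derivative column can be checked directly, since $D_z\left[S^{\small -1}(R(z))\right] = 1 + \frac{\cos(2\pi z)}{100}$ regardless of the particular $S$; a quick check of my own:

```python
import cmath

def shift_derivative(z):
    # D_z [ z + sin(2*pi*z)/(200*pi) ] = 1 + cos(2*pi*z)/100
    return 1 + cmath.cos(2 * cmath.pi * z) / 100

# sample points from the table above
for z in [2.0, 2.5, 2 + 1j, 2.5 + 1j, 2 + 2j, 2.5 + 2j]:
    d = shift_derivative(z)
    print(f"z = {z}: D = {d.real:.2f} + {d.imag:.2f}i")
```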
jaydfox Wrote:To illustrate, consider a sexp solution $S(z)$ and its inverse $S^{\small -1}(z)$. Let's say that, in the vicinity of the upper fixed point $c_0$, $S^{\small -1}(z) \approx \log_{c_0}\left(z-c_0\right)$.

Now let's say we have an alternate solution $R(z)$ and its inverse $R^{\small -1}(z)$. Let's say that $R(z)=S\left(z+\frac{\sin(2\pi z)}{200\pi}\right)$.

I forgot to get back to my point with the logarithm.

To simplify my calculations, I'm going to reverse the relationship of R and S, though I suspect the difference is nearly trivial:

$
\begin{eqnarray}
S^{\small -1}(z) & \approx & \log_{c_0}\left(z-c_0\right) \quad \text{for } \left|z-c_0\right| < \delta \\
S(z) & = & R\left(z+\frac{\sin(2\pi z)}{200\pi}\right) \\
R^{\small -1}\left(S(z)\right) & = & z+\frac{\sin(2\pi z)}{200\pi}
\end{eqnarray}
$

Bear in mind, the logarithmic approximation was in the immediate vicinity of the upper primary fixed point. Here, I've specified that z is within some arbitrarily small disk centered at the fixed point.

Then:
$
\begin{eqnarray}
R^{\small -1}(z)
& = & R^{\small -1}\left(S\left(S^{\small -1}(z)\right)\right) \\
& = & R^{\small -1}\left(S(w)\right) \text{ with } w = S^{\small -1}(z) \\
& = & w+\frac{\sin(2\pi w)}{200\pi} \\
& = & S^{\small -1}(z)+\frac{\sin\left(2\pi S^{\small -1}(z)\right)}{200\pi} \\
& \approx & \log_{c_0}\left(z-c_0\right) + \frac{\sin\left(2\pi \log_{c_0}\left(z-c_0\right)\right)}{200\pi}
\end{eqnarray}
$

Given that $\log_{c_0}\left(z-c_0\right)$ will have a large imaginary part near the fixed point, the sine of the logarithm will have the large oscillations I previously described.
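To make this concrete (a numerical aside of my own): the upper primary fixed point $c_0$ of $e^z$ can be obtained by iterating the principal logarithm, and $\log_{c_0}(z-c_0)$ indeed acquires an imaginary part that grows without bound as $z \rightarrow c_0$.

```python
import cmath

# upper primary fixed point of exp: c0 = exp(c0); the principal
# logarithm is a contraction near c0, so plain iteration converges
c0 = 1j
for _ in range(200):
    c0 = cmath.log(c0)
print(f"c0 = {c0.real:.6f} + {c0.imag:.6f}i")
assert abs(cmath.exp(c0) - c0) < 1e-12

# log_c0(z - c0) gets a large imaginary part as z -> c0
for eps in [1e-2, 1e-4, 1e-8]:
    w = cmath.log(eps) / cmath.log(c0)  # log_c0(eps), i.e. z = c0 + eps
    print(f"|z - c0| = {eps:g}: Im log_c0(z - c0) = {w.imag:.3f}")
```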
jaydfox Wrote:...
Such a solution $f$ (even if analytic and strictly increasing) is generally not unique because for example the solution $g(x):=f(x+\frac{1}{2\pi}\sin(2\pi x))$ is also an analytic strictly increasing solution, by
$g(x+1)=f(x+1+\frac{1}{2\pi}\sin(2\pi + 2\pi x))=F(f(x+\frac{1}{2\pi}\sin(2\pi x)))=F(g(x))$ and
$g'(x)=f'(x+\frac{1}{2\pi}\sin(2\pi x))(1+\frac{1}{2\pi}\cos(2\pi x)2\pi)=\underbrace{f'(x+\frac{1}{2\pi}\sin(2\pi x))}_{>0}\underbrace{(1+\cos(2\pi x))}_{\ge 0}>0$

If we're only concerned about a real function, then this issue of a cyclic shift of the input to the sexp function is an important one.

However, what about for a complex function? The sin and cos functions grow exponentially (in absolute value) as we move away from the real line (growth is dictated by sinh and cosh, in fact). More importantly, the magnitude of the difference between the "crest" and "trough" is increasing exponentially.
...

In the case of the complex domain, the uniqueness of tetration $F$ seems to be provided by an axiom about the asymptotic behavior of $F(z)$ as $\Im(z) \rightarrow +\infty$:

$F( z )= L + {\mathcal O}\Big( \exp( L z ) \Big)$

where $L$ is an eigenvalue of the logarithm. See details (and pics) at
http://www.ils.uec.ac.jp/~dima/PAPERS/2008analuxp.pdf

However, there is a continuum of other tetrations that grow in the direction of the imaginary axis.

P.S. Henryk Trappmann had invited me here. I hope, his message
was not a trap.
Hi -
I just read your very interesting article. However, I have one remark.
In Eq. 1.3) you define

$\hspace{24} \exp_a^z(t)= \exp_a(\exp_a^{^{z-1}}(t))$

But for the case $z \rightarrow \infty$ we never have a starting condition.
Such a starting condition would be allowed by the alternative definition

$\hspace{24} \exp_a^z(t)=\exp_a^{^{z-1}}(\exp_a(t))$

So I think, it is better to define it this way.
Quote:P.S. Henryk Trappmann had invited me here. I hope, his message
was not a trap.
(I don't think it was a trap, man.)
Kouznetsov Wrote:In the case of the complex domain, the uniqueness of tetration $F$ seems to be provided by an axiom about the asymptotic behavior of $F(z)$ as $\Im(z) \rightarrow +\infty$:

So that would imply that any holomorphic function periodic on the real axis goes to infinity on the imaginary axis. Is that true? (My knowledge about complex analysis is not that exhaustive.)
Not a trap. We really are interested in tetration. Also, I wanted to point out that I started a thread about your methods, so we can continue talking about it there, too.

Andrew Robbins

Moderator: Adapted the thread id to the changed thread id
bo198214 Wrote:
Kouznetsov Wrote:In the case of the complex domain, the uniqueness of tetration $F$ seems to be provided by an axiom about the asymptotic behavior of $F(z)$ as $\Im(z) \rightarrow +\infty$:
So that would imply that any holomorphic function periodic on the real axis goes to infinity on the imaginary axis. Is that true? (My knowledge about complex analysis is not that exhaustive.)
Not so easy: the function can be doubly periodic. However, the double periodicity may also be useful; if it breaks the postulate about the asymptotic behavior of tetration, then it can be used to prove the uniqueness of the solution.
You may expand the function into a Fourier series and analyse the behavior at ${\rm i}\infty$. For many functions the series diverges, which indicates singularities. Those that converge may show exponential growth and be "scratched", in the sense that on the plot, within an interval of unit length, there are so many equilines that the plotter fails to draw them.
Tetration $F(z)$ is not an entire function, because it has singularities and a cut at $z\le -2$; in the paper mentioned, I write "indicates" instead of "gives the proof". In order to apply the expansion to the tetration, you may use the asymptotics $F(z)=L + {\mathcal O}\Big( \exp( L z ) \Big)$. You may try to get the next term in the expansion; it seems that $F(z)=L + \exp( L z +Q) + {\mathcal O}\Big(\exp(2 L z ) \Big)$, where $Q\approx 1.10214+1.54047{\rm i}$. The truncated asymptotic is an entire function.
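This next term can be sanity-checked numerically (my own sketch, using the constants quoted above): since $e^L = L$, the truncated asymptotic $G(z) = L + \exp(Lz + Q)$ satisfies the superfunction equation $G(z+1) = \exp(G(z))$ up to the stated ${\mathcal O}\big(\exp(2Lz)\big)$ error, which shrinks rapidly as $\Im(z)$ grows.

```python
import cmath

L = 0.3181315052047641 + 1.3372357014306895j  # fixed point: exp(L) = L
Q = 1.10214 + 1.54047j                        # constant quoted above

def G(z):
    # truncated asymptotic of tetration, valid for large Im(z)
    return L + cmath.exp(L * z + Q)

# residual of G(z+1) = exp(G(z)) should decay as Im(z) grows,
# because |exp(L*z)| = exp(Re(L)*Re(z) - Im(L)*Im(z))
residuals = {}
for y in [3, 5, 7]:
    z = 0.5 + 1j * y
    residuals[y] = abs(G(z + 1) - cmath.exp(G(z)))
    print(f"Im z = {y}: |G(z+1) - exp(G(z))| = {residuals[y]:.3e}")
```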

Gottfried Wrote:Hi -
I just read your very interesting article. However, I have one remark.
In Eq. 1.3) you define

(1)$\hspace{24} \exp_a^z(t)= \exp_a(\exp_a^{^{z-1}}(t))$

But for the case $z \rightarrow \infty$ we never have a starting condition.
Such a starting condition would be allowed by the alternative definition

(2)$\hspace{24} \exp_a^z(t)=\exp_a^{^{z-1}}(\exp_a(t))$

So I think, it is better to define it this way.
Gottfried, may I postulate that $\exp^0(t)=t$ and $\exp^{-1}(t)=0$? In this case, will there be any difference between definitions (1) and (2)?

(P.S. Does the number of replies I should answer grow as the Ackermann function of time, or just exponentially?)
Kouznetsov Wrote:Gottfried, may I postulate that $\exp^0(t)=t$ and $\exp^{-1}(t)=0$? In this case, will there be any difference between definitions (1) and (2)?

Hmm, the difference should only appear in the limit discussion. I don't know how I should carry out a limit-to-infinity discussion if the first value to be computed already depends on the limit case itself.

The difference for the finite case, however, is only one of notation, I think.

From an implementation point of view, (1) needs a stack construct, which stacks up to the top, evaluates there, and then returns with updated values (for the infinite case the stack would need to be infinite and would never give partial evaluations); (2) could be implemented as a loop, beginning at the top exponent t and "appending bases" at each step. Partial evaluations then capture the different(?) effects of different starting values/top exponents (and for the limit case one could check whether the partial evaluations converge or diverge).
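A small sketch of this point (my own, with hypothetical helper names, integer heights only): definition (1) maps naturally to recursion (a stack), definition (2) to a loop that yields a partial evaluation at every step; for finite integer heights the two agree.

```python
import math

def tower_recursive(a, n, t):
    # definition (1): exp_a^n(t) = exp_a(exp_a^{n-1}(t));
    # recursion unwinds from the top, like a stack
    if n == 0:
        return t
    return a ** tower_recursive(a, n - 1, t)

def tower_loop(a, n, t):
    # definition (2): exp_a^n(t) = exp_a^{n-1}(exp_a(t));
    # a loop starting at the top exponent, giving a partial
    # evaluation at every step
    x = t
    for _ in range(n):
        x = a ** x
    return x

# for finite integer heights the two orders agree
a, t = math.sqrt(2), 1.0
for n in range(6):
    assert abs(tower_recursive(a, n, t) - tower_loop(a, n, t)) < 1e-12

# for base sqrt(2) the partial evaluations of (2) converge toward 2
print(tower_loop(a, 100, 1.0))
```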