# Tetration Forum

Full Version: Conjectures
I have several open questions for which I have supporting evidence but no proof. I'm lumping these together because they don't seem to fit anywhere else. I've selected four such open questions that I think would be nice to settle:
• Exponential factorial at 0
• Superlog derivative at 0
• Superlog derivative at 0 and E-tetra-I
• Superlog equilibrium

Exponential Factorial
A common way to define the exponential factorial is EF(0) = 0 and $EF(x) = x^{EF(x-1)}$. However, if you instead take EF(1) = 1 and solve for EF(x) by assuming it is a $C^n$ function (solving the system formed by differentiating repeatedly at x = 1), you get $EF(0) \approx 0.577$.
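For integer arguments the recursion is easy to evaluate directly. A minimal sketch (the function name `ef` is mine, not from the thread); note that the two base cases EF(0) = 0 and EF(1) = 1 agree on the integers, since $1^0 = 1$, so the conjecture is really about the analytic continuation between them:

```python
def ef(n):
    # Exponential factorial on nonnegative integers:
    # EF(0) = 0, EF(x) = x^EF(x-1)
    val = 0
    for k in range(1, n + 1):
        val = k ** val
    return val

print([ef(n) for n in range(5)])  # [0, 1, 2, 9, 262144]
```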

My conjecture is that $EF(0) = \gamma$, the Euler-Mascheroni constant.

Super-logarithm Derivative
From my approximations, $\text{slog}_e'(0) \approx 0.916$, and ${}^{i}{e} \approx 0.786 + i 0.916$.

My first conjecture is that $\text{Im}({}^{i}{e}) = \text{slog}_e'(0)$, and my second conjecture is that $\text{slog}_e'(0) = 0.915965594177\cdots$ otherwise known as Catalan's constant.
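For comparison, Catalan's constant can be checked against its defining alternating series; this snippet is just a numeric sanity check for the quoted digits, not part of the original post:

```python
# Catalan's constant G = sum_{k>=0} (-1)^k / (2k+1)^2.
# The alternating-series remainder bounds the error by the
# first omitted term, about 6e-12 for 200,000 terms.
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(200_000))
print(G)  # 0.9159655941...
```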

Super-logarithm Equilibrium
My experiments with the super-logarithm suggest that there is a line along which the real part of the complex-valued superlog does not depend on the imaginary part of the input. In other words, my conjecture is that there exist functions f(b) and g(a, b) such that:

$\text{slog}_b(f(b) + i a) = \text{slog}_b(f(b)) + i g(a, b)$

Other Open Questions
• What is the boundary of period 2 (or period 3) behavior in b^x?
• Is there a recurrence equation for the coefficients for super-roots?
• Is there a Nelson-like continued fraction that makes a continuous super-log?

Is there an obvious answer to any of these? Do the values depend on which method one uses? Is there any way of knowing?

Andrew Robbins
Phew, lots of questions. But regarding the dependence on the specific slog: there surely is one for the slog derivative (which is quite clear), and also for the equilibrium:

For example, if we consider the just-discussed regular slog at the lower real fixed point $a$,
$\alpha_b(x)=\log_{\ln(a)}\left(\lim_{n\to\infty} \frac{\exp_b^{\circ n}(x)-a}{\ln(a)^n}\right)$, $\text{rslog}_b(x)=\alpha_b(x)-\alpha_b(1)=\log_{\ln(a)}\left(\lim_{n\to\infty}\frac{\exp_b^{\circ n}(x)-a}{\exp_b^{\circ n}(1)-a}\right)$
then the real part does depend on the imaginary part of the argument.

For example for $b=\sqrt{2},a=2$:
$\text{rslog}_b(1)=0$ and $\text{rslog}_b(1+\frac{I}{2})=-0.2625038052+0.7542335672*I$
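These values can be reproduced numerically from the Koenigs limit for the regular slog. A rough sketch (the helper name and iteration count are my choices; the fixed point must be subtracted before taking the limit, and the iteration count trades convergence against floating-point underflow, since both shrink like $\ln(a)^n$):

```python
import cmath, math

# Regular slog at the lower fixed point a of b^x:
# rslog_b(x) = log_{ln a}( lim (exp_b^n(x) - a) / (exp_b^n(1) - a) )
def rslog(b, a, x, n=40):
    z, w = complex(x), complex(1.0)
    for _ in range(n):        # iterate exp_b on x and on 1 in lockstep
        z = b ** z
        w = b ** w
    return cmath.log((z - a) / (w - a)) / math.log(math.log(a))

b, a = math.sqrt(2), 2.0
print(rslog(b, a, 1))          # ~0 (exactly, by construction)
print(rslog(b, a, 1 + 0.5j))   # ~ -0.2625 + 0.7542j
```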
andydude Wrote:Super-logarithm Derivative
From my approximations, $\text{slog}_e'(0) \approx 0.916$, and ${}^{i}{e} \approx 0.786 + i 0.916$.

My first conjecture is that $\text{Im}({}^{i}{e}) = \text{slog}_e'(0)$, and my second conjecture is that $\text{slog}_e'(0) = 0.915965594177\cdots$ otherwise known as Catalan's constant.
In computing the solutions (for base e) to moderately large systems with your proposed slog, I think the first derivative is closer to 0.9159460564995..., which puts Catalan's constant out of the running (or puts your slog out of the running?).

Quote:Super-logarithm Equilibrium
My experiments with the super-logarithm suggest that there is a line along which the real part of the complex-valued superlog does not depend on the imaginary part of the input. In other words, my conjecture is that there exist functions f(b) and g(a, b) such that:

$\text{slog}_b(f(b) + i a) = \text{slog}_b(f(b)) + i g(a, b)$
Could you provide a bit more info on this one? Again using calculations based on your solution with base e, I've seen a line near z=0.5 (somewhere between 0.45 and 0.5), running almost a unit's length "up" and "down" in the imaginary direction, for which the slog has nearly fixed real part. But it's not quite exact, and as z approaches either primary fixed singularity, the slog eventually gets "sucked in". But maybe you had something else in mind?
Yes, that's the line I'm talking about. I think it only works for $b>\eta$, and yes, for base e, I think f(b) was pretty close to 0.5, if I remember correctly.

Andrew Robbins
Sorry, I just read the "more info" part. I developed two approximations to f(b), based on numerical data from finding when $\text{slog}(f(b) + I) = \text{slog}(f(b))$; they're basically the same except for accuracy. The less accurate one is:
$f(x) \approx \exp{(2-x)}$
and the more accurate approximation to f(b) is:
$f(x) \approx \exp\left(2-x + 2(G(4) - G(x))\right)$
where
$G(x) = \left(\frac{ex}{e-1} - 1\right)^{\frac{e-1}{e(x-1)+1}}$
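Both approximations are straightforward to evaluate. A sketch (assuming the garbled brackets in the second formula enclose the exponent $2-x+2(G(4)-G(x))$); note that the correction term vanishes at x = 4 by construction, where the two formulas agree:

```python
import math

e = math.e

# G(x) as given in the post; 2*(G(4) - G(x)) vanishes at x = 4.
def G(x):
    return (e * x / (e - 1) - 1) ** ((e - 1) / (e * (x - 1) + 1))

def f_rough(x):        # f(x) ~ exp(2 - x)
    return math.exp(2 - x)

def f_better(x):       # f(x) ~ exp(2 - x + 2*(G(4) - G(x)))
    return math.exp(2 - x + 2 * (G(4) - G(x)))

print(f_rough(e))      # ~0.488, consistent with f(e) "pretty close to 0.5"
print(f_better(e))
```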

Andrew Robbins
jaydfox Wrote:
andydude Wrote:Super-logarithm Derivative
From my approximations, $\text{slog}_e'(0) \approx 0.916$, and ${}^{i}{e} \approx 0.786 + i 0.916$.

My first conjecture is that $\text{Im}({}^{i}{e}) = \text{slog}_e'(0)$, and my second conjecture is that $\text{slog}_e'(0) = 0.915965594177\cdots$ otherwise known as Catalan's constant.
In computing the solutions (for base e) to moderately large systems with your proposed slog, I think the first derivative is closer to 0.9159460564995..., which puts Catalan's constant out of the running (or puts your slog out of the running?).
Following up on the $\Im({}^{i}{e}) = \mathrm{slog}_e'(0)$ conjecture, my initial estimate puts $\Im({}^{i}{e})$ closer to 0.9163, give or take. It would have been very cool if it had been true.

PS: Which looks better, $\text{Im}(z)$ or $\Im(z)$?
jaydfox Wrote:Following up on the $\Im({}^{i}{e}) = \mathrm{slog}_e'(0)$ conjecture, my initial estimate puts $\Im({}^{i}{e})$ closer to 0.9163, give or take. It would have been very cool if it had been true.

For reference, truncated to double precision (about 15 decimal places), ${}^{i}{e}$ is:

0.7856963885801098+0.916302621081289*I

This was calculated as follows:
1. I started with a 1200-term accelerated solution for Andrew's slog, base e, solved at 3548 bits of precision, then truncated to 2048 bits (of which about 1928 bits are valid, give or take).
2. I found the "residue" by subtracting the power series for $\log_{c_k}\left(z-{c_k}\right)+\log_{\overline{c_k}}\left(z-\overline{c_k}\right)$, where $c_k$ is the primary fixed point, 0.3181315+1.3372357*I.
3. I recentered the residue from z=0 to z=1 (using PARI, but can be done with a Pascal matrix as well).
4. I added (to the residue) the power series for $\log_{c_k}\left(z-{c_k}+1\right)+\log_{\overline{c_k}}\left(z-\overline{c_k}+1\right)$, thus recovering the slog, recentered with a minimum possible loss of precision.
5. I reverted the (just derived) power series of the slog at z=1 to get the sexp at z=0, keeping only the first 600 terms for further evaluations. The terms become noticeably inaccurate after about 250 (i.e., they stop converging on the power series of $\ln\left(\ln(x+3)\right)$), though the inaccuracy of each additional term is less than the additional precision it provides, up to about 600 terms, give or take, depending on the distance from the origin at which you evaluate the power series. By this I mean, in particular, that sexp(1) and sexp(-1) continue to converge on e and 0, respectively. Take that with a grain of salt: I wouldn't use more than about the first 250-400 terms thus derived for high-precision calculations, depending on your needs.
6. Finally, I evaluated the power series at i, giving an answer that is probably not accurate to more than 25 digits, though probably accurate to at least 15 digits, but I have no easy method of validating that claim at the moment.
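Step 3 above (recentering a power series) can be sketched with a Pascal (binomial) matrix; this toy version works on a short truncation. Exact recentering needs all the dropped terms, so on a truncated series it is only an approximation near the new center:

```python
from math import comb

# Recenter f(z) = sum_k a_k z^k from z = 0 to z = 1:
# f(1 + w) = sum_j w^j * sum_{k >= j} a_k * C(k, j)
def recenter_at_one(coeffs):
    n = len(coeffs)
    return [sum(coeffs[k] * comb(k, j) for k in range(j, n))
            for j in range(n)]

# Example: f(z) = 1 + 2z + 3z^2  ->  f(1 + w) = 6 + 8w + 3w^2
print(recenter_at_one([1, 2, 3]))  # [6, 8, 3]
```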

Edit: For reference, I have attached a SAGE object, which represents the power series I derived as described above, truncated to 256 bits of precision (it's probably only good for about 80-100 bits anyway). Simply load the series into a variable, e.g., sexp, and then you can evaluate as simply as typing sexp(1) and pressing enter:

Code:
Flt256 = RealField(256)
C256 = ComplexField(256)
I = C256.0
sexp = load('sexp_600_pseries.sobj')
sexp(-1)
sexp(0)
sexp(1) - Flt256(e)
sexp(I)

As for results, you should get, respectively, 0, 1, 0, and the complex value that I described above.
Oh well, so much for numerical coincidences...

Andrew Robbins
I may have found a disproof of my exponential factorial conjecture. The exponential factorial satisfies $EF(1) = 1$ and $EF(x) = x^{EF(x-1)}$, whereas the inverse function satisfies $EF^{-1}(1) = 1$ and $EF^{-1}((EF^{-1}(x)+1)^{x}) = EF^{-1}(x)+1$, which is an Abel-like functional equation (but not quite). Now if we plug x = 0 into this equation, we get $EF^{-1}(1) = 1 = EF^{-1}(0)+1$, which would indicate that $EF^{-1}(0) = 0$, which in turn indicates that $EF(0) = 0$, thus disproving my conjecture.

Although this is very convincing, I'm still not convinced, since it assumes the function is invertible. I'm not sure, maybe this is proof enough. Another reason I don't think this proves it is that it assumes certain properties of $EF^{-1}(0)$ from the beginning.

If my conjecture is correct, then $EF(-1) = 0$ and $EF(0) = \gamma$, so the inverse would satisfy $EF^{-1}(0) = -1$ and $EF^{-1}(\gamma) = 0$. This would mean the above expression with x = 0 becomes $\lim_{x\rightarrow 0} EF^{-1}((EF^{-1}(x)+1)^{x}) = EF^{-1}(\gamma) = 0 = EF^{-1}(0) + 1$, which is also true. Sadly, I'm not sure which of these to believe, but if it is a matter of choice, I would choose the latter, since it is so much more interesting.

Andrew Robbins