Exact and Unique solution for base e^(1/e)
#1
For the rest of this post, assume \( x\ =\ e^{\frac {\tiny 1}{e}} \).

This is mainly a theoretical solution. It'll be slow to converge without using helper functions to speed up certain processes. I'll cover some of the helper functions I've come up with, but in a later post.

For an arbitrarily large integer n, we can find the tetration, calling it z:
\( \begin{eqnarray}
z
& = & ^{\small n} x \\
& = & e-\epsilon,\ \text{where}\ \epsilon\ >\ 0 \\
& = & e(1-\delta),\ \text{where}\ \delta\ =\ \frac{\epsilon}{e}\ >\ 0
\end{eqnarray} \)

It will be much easier to understand the following if we factor out the e.

Let's find the first logarithm. Remember the formula for logarithms in bases other than e:

\( \begin{eqnarray}
log_x(z)
& = & \frac{ln(z)}{ln(x)} \\
& = & \frac{ln(z)}{1/e} \\
& = & e\ \times\ ln(z)
\end{eqnarray} \)
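As a quick numerical sanity check (a sketch; the variable names are mine, not from the post), the identity \( log_x(z) = e \times ln(z) \) for \( x = e^{1/e} \) can be verified directly:

```python
from math import e, log

x = e ** (1 / e)   # the base under discussion
z = 2.5            # an arbitrary test value

# change of base: log_x(z) = ln(z)/ln(x), and ln(x) = 1/e
lhs = log(z) / log(x)
rhs = e * log(z)
print(abs(lhs - rhs))  # difference is at floating-point noise level
```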

Now, with \( z = e(1-\delta) \), let's see what we know about its logarithm and its second iterated logarithm:

\( \begin{eqnarray}
log_x(z)
& = & e\ \times\ ln\left(e(1\ -\ {\normalsize \delta})\right) \\
& = & e \left(1+ln(1\ -\ {\normalsize \delta})\right) \\
& = & e \left(1\ -\ {\normalsize \delta} -\ \frac{\delta^2}{2}\ -\ \frac{\delta^3}{3}\ -\ \dots\right)\ \text{using the series expansion of ln(1+t)}\end{eqnarray} \)

\(
\begin{eqnarray}
log_x\left(log_x(z)\right)
& = & e\ \times\ ln\left(e(1\ -\ {\normalsize \delta}\ -\ \frac{\delta^2}{2}\ -\ \frac{\delta^3}{3}\ -\ \dots)\right) \\
& = & e\left(1+ln(1\ -\ {\normalsize \delta}\ -\ \frac{\delta^2}{2}\ -\ \frac{\delta^3}{3}\ -\ \dots)\right) \\
& = & e\left(1\ -\ {\normalsize \delta}\ -\ 2\frac{\delta^2}{2}\ -\ 3.5\frac{\delta^3}{3}\ -\ \dots\right)\end{eqnarray}
\)

Note that as n grows arbitrarily large, on the interval y in [n-2, n], x^^y must be almost linear, because we have three points that are almost collinear:
\( \begin{eqnarray}
^{\small n-2} x & = & e\left(1\ -\ {\normalsize \delta}\ -\ {\Large (2)}\frac{\delta^2}{2}\ -\ {\Large (3.5)}\frac{\delta^3}{3}\ -\ \dots\right)\\
^{\small n-1} x & = & e\left(1\ -\ {\normalsize \delta}\ -\ {\Large (1)}\frac{\delta^2}{2}\ -\ {\Large (1)}\frac{\delta^3}{3}\ -\ \dots\right)\\
^{\small n-0} x & = & e\left(1\ -\ {\normalsize \delta}\ -\ {\Large (0)}\frac{\delta^2}{2}\ -\ {\Large (0)}\frac{\delta^3}{3}\ -\ \dots\right)
\end{eqnarray}
\)
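To illustrate numerically (a sketch; `tetrate` is my own helper name for plain integer tetration), here's a check that the consecutive differences of the three points above agree ever more closely as n grows, which is exactly the near-collinearity claimed:

```python
from math import e

x = e ** (1 / e)

def tetrate(base, n):
    """Integer tetration: base^^n (right-associated tower of height n)."""
    r = base
    for _ in range(n - 1):
        r = base ** r
    return r

n = 200
p0, p1, p2 = tetrate(x, n - 2), tetrate(x, n - 1), tetrate(x, n)
# Collinearity of <n-2,p0>, <n-1,p1>, <n,p2> means the consecutive
# differences agree; their ratio tends to 1 as n grows (delta -> 0).
ratio = (p2 - p1) / (p1 - p0)
print(ratio)
```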

If you can't tell at a glance that the interpolating function must be linear in the limit as epsilon goes to 0, let me know. However, I assume that at this point, I've made my case: the interpolating function is linear, and since we're using tetration to integer hyperpowers to find the endpoints of our interval, we can solve exactly. We then use iterated logarithms (again with an integer iteration count, so we can solve exactly) to come back down to the interval of interest.

It's important that everybody can agree that this solution is both exact and unique. The uniqueness of this solution can then be used to solve other bases, with some hope that those solutions will be unique as well. In other words, other solutions might meet the iterated exponential property and be infinitely differentiable, but they would be off by some cyclic factor in the tetrational exponent.

Given that I only have a bachelor's degree in computer science, with no formal graduate math study, I can only assume this exact solution must be known.

However, I haven't found it in my cursory reading of what's available on the internet, so apparently this solution, assuming it's already known, is buried in the literature. Perhaps it's considered too technical for us mere mortals. Anyway, can someone please point me to where this has been independently derived? Thanks.

(There is, of course, a chance that my solution is not unique, which is to say, that I've overlooked some small detail. But I really don't see that I could have missed anything, unless my series expansion for the second logarithm is wrong. However, I've triple-checked it.)

For background reference to how I derived this solution, see my sci.math.research posts here:
http://groups.google.com/group/sci.math....dcdd1fe33d
#2
I cannot follow you in this post.
Also your linear construction on sci.math.research is not completely clear to me.
Also, if you say "unique," then you should specify under what conditions.

Specifically, I don't know what you mean by the function "frac" in your post on sci.math.research.

Let me gather what I got so far. If we have the conditions
\( {}^1 b = b \) and \( {}^{x+1}b=b^{{}^x b} \) (1)

then to define \( g(x)={}^{x}b \) it suffices to define an initial function f on the interval \( (0,1] \) and
then derive the value on an interval \( (n,n+1] \) via \( g(x)=\exp_b^{\circ n}(f(x-n)) \) (where \( \exp_b^{\circ n} \) is the n-times iterated \( \exp_b(x)=b^x \)).
For intervals \( (-n,-n+1] \) left of \( (0,1] \) one has similarly \( g(x)=\log_b^{\circ n}(f(x+n)) \).
If \( g(x) \) is continuous at 1 then
\( g(1)=b^{g(0)} \), \( g(1)=b^{b^{g(-1)}} \) and \( g(1)=b^{b^{b^{g(-2)}}} \) from which \( {}^0b=1 \), \( {}^{-1}b=0 \), \( {}^{-2}b=-\infty \) follows.

So it makes sense to define g on \( (-2,\infty) \).

Then somehow you wanted to define the initial function \( f : (0,1]\to(1,b] \) to be linear, and from there on my understanding stopped.
In particular, I didn't see the problem you mentioned with \( b=e^e,10,1.5 \).

Edit: Now it seems to me that \( \text{frac}(x)=1+(b-1)*x \) is the initial linear function that maps \( (0,1]\to (1,b] \). Ok, but if I graph the resulting function g(x), I see angles at x=0 and x=1, for both \( b=e \) and \( b=e^{1/e} \).
#3
bo198214 Wrote:Also your linear construction on sci.math.research is not completely clear to me.
...

Specifically, I don't know what you mean by the function "frac" in your post on sci.math.research.
Frac(x) just means the fractional part of x, defined by frac(x) = x - floor(x), where floor(x) is the greatest integer less than or equal to x. So frac(3.76) = 0.76, and frac(-0.63) = 0.37.

Quote:Let me gather what I got so far. If we have the conditions
\( {}^1 b = b \) and \( {}^{x+1}b=b^{{}^x b} \) (1)

then to define \( g(x)={}^{x}b \) it suffices to define an initial function f on the interval \( (0,1] \) and
then derive the value on an interval \( (n,n+1] \) via \( g(x)=\exp_b^{\circ n}(f(x-n)) \) (where \( \exp_b^{\circ n} \) is the n-times iterated \( \exp_b(x)=b^x \)).
For intervals \( (-n,-n+1] \) left of \( (0,1] \) one has similarly \( g(x)=\log_b^{\circ n}(f(x+n)) \).
If \( g(x) \) is continuous at 1 then
\( g(1)=b^{g(0)} \), \( g(1)=b^{b^{g(-1)}} \) and \( g(1)=b^{b^{b^{g(-2)}}} \) from which \( {}^0b=1 \), \( {}^{-1}b=0 \), \( {}^{-2}b=-\infty \) follows.

So it makes sense to define g on \( (-2,\infty) \).
Correct so far.

Quote:Then somehow you wanted to define the initial function \( f : (0,1]\to(1,b] \) to be linear, and from there on my understanding stopped.
In particular, I didn't see the problem you mentioned with \( b=e^e,10,1.5 \).
Actually, in most implementations I've seen that use linear interpolation, the initial function is typically defined as linear on (-1, 0], with endpoints of 0 and 1, respectively. Then you perform a logarithm to get (-2, -1], and iterated exponents to get intervals (0, 1], (1, 2], etc.

Quote:Edit: Now it seems to me that \( \text{frac}(x)=1+(b-1)*x \) is the initial linear function that maps \( (0,1]\to (1,b] \). Ok, but if I graph the resulting function g(x), I see angles at x=0 and x=1, for both \( b=e \) and \( b=e^{1/e} \).

Not sure I follow. Run the numbers again, with the frac(x) function I described above, and see if the angles you describe go away. But remember, for base e, you'll be using (-1, 0] as the interval, so linear interpolation will go through points like <-0.9, 0.1>, <-0.35, 0.65>, etc. For base 2, you'll be using an interval of (-0.5156131..., 0.4843868...], as shown in my sci.math.research post. (Note: I left out the minus sign on -0.5156..., but I hope it's obvious that it should be there).

On that interval, you'll be interpolating between log_2(e) and log_2(log_2(e)) as your y values, so your function should get the following points:

<-0.515613137, 0.528766373>
<-0.4, 0.634428533>
<-0.2, 0.817214266>
<0.3, 1.2741786>
<0.484386863, 1.442695041>
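These points can be reproduced in a few lines (a sketch; `lerp` is my own helper name, with the endpoint y-values log_2(log_2(e)) and log_2(e) as described above):

```python
from math import e, log2

lo_x, hi_x = -0.515613137, 0.484386863   # the critical interval for base 2
lo_y, hi_y = log2(log2(e)), log2(e)      # y-values at the interval endpoints

def lerp(t):
    """Linear interpolation across the critical interval."""
    return lo_y + (t - lo_x) * (hi_y - lo_y) / (hi_x - lo_x)

for t in (-0.4, -0.2, 0.3):
    print(t, lerp(t))   # matches the tabulated points above
```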

I'll probably be away until early next week. When I return, I plan to go back in and provide numerical results to demonstrate some of the formulae I used. I'll also provide graphs where appropriate.

By the way, in the seventh post in that sci.math.research thread, I describe my first attempt at an exact solution. It was a glorious failure, which is to say, it's infinitely differentiable, satisfies the iterated exponential property, and it's extremely "wrong". I posted a graph where you can see that the second derivative is very lumpy, whereas I would hope it to be far smoother. In fact, the second derivative of the logarithm of the first derivative (the yellow curve in the graph) shows just how wavy the function can get.

I bring this up, because that solution is now attached to the discussion I started in the first few posts of that thread, which I still consider very insightful. I don't want to get the two ideas mixed up, since the one seems valid, the other quite "wrong", as I'm willing to admit.
#4
Ok, I now understand your definition.
You take as initial function \( f : (-1,0]\to(0,1] \) the linear function \( f(x)=x+1 \), which just maps -1 to 0 and 0 to 1.
Hence the piecewise defined resulting function g defined by
\( g(x)=
\begin{cases}
f(x) &\text{for} &x\in (-1,0]\\
\exp_b(g(x-1)) &\text{for} &x>0\\
\log_b(f(x+1)) &\text{for} &x\in(-2,-1]
\end{cases}
\)
is continuous. And if we look at the first derivative at 0 from the left:
\( g'(-0)=(x+1)'|_{x=0}=1 \) and from the right:
\( g'(+0)=\left(\exp_b(f(x-1))\right)'|_{x=0}=\exp_b'(0)=\exp_b(0)\log(b)=\log(b) \)

then we realize that both derivatives are equal exactly for \( b=e \). By the recurrence, the same applies at all the other joins of the piecewise-defined function, and hence g is (once) continuously differentiable for \( b=e \) (but not twice continuously differentiable).
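Here is a small numerical sketch of this (the helper names are mine): the piecewise g with linear initial function f(x) = x+1 on (-1, 0], with one-sided difference quotients at x = 0 showing the derivatives match at b = e and produce a corner for other bases:

```python
from math import e, log

def make_g(b):
    """Piecewise tetration g for base b, with f(x) = x + 1 on (-1, 0]."""
    def g(x):
        if -1 < x <= 0:
            return x + 1                  # the linear initial function f
        if x > 0:
            return b ** g(x - 1)          # exp_b of the previous interval
        return log(x + 2) / log(b)        # log_b(f(x+1)) on (-2, -1]
    return g

h = 1e-6
for b in (e, 2.0):
    g = make_g(b)
    left = (g(0.0) - g(-h)) / h           # slope from the left: 1
    right = (g(h) - g(0.0)) / h           # slope from the right: ln(b)
    print(b, left, right)
```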

Ok, step by step we continue. Now we want to have g also differentiable for other bases b. It looks as if you change the initial interval for this purpose (though keeping the initial function f linear) in a way that provides for differentiability of g.
#5
bo198214 Wrote:Ok, step by step we continue. Now we want to have g also differentiable for other bases b. It looks as if you change the initial interval for this purpose (though keeping the initial function f linear) in a way that provides for differentiability of g.
It came to me while I was traveling (flew from Dallas to Fresno). For a base b > eta, there's a very good reason that the "critical interval" (v-2, v-1] should be defined such that \( ^v {\Large b} = e \):

\(
\begin{eqnarray}
g(v-1) & = & {\Large b}^{g(v-2)} \\
g'(v-1) & = & ln(b) \left({\Large b}^{g(v-2)}\right) g'(v-2) \\
g'(v-1) & = & ln(b) \left( g(v-1) \right) g'(v-2)\end{eqnarray}
\)

Notice that when \( g(v-1)\ =\ log_b(e),\text{ then }g'(v-1)\ =\ ln(b) \left(log_b(e)\right) g'(v-2) = 1 \times g'(v-2) \), no matter what the function g(x) might look like. This is why I consider this particular unit interval the "critical" interval. Somewhere in that interval, there must be an inflection point, unless the function is truly linear there, which can't happen for b > eta. This means that the first derivative has a local minimum in this interval, and intuitively, we can claim this to be a global minimum for the first derivative. This in turn implies that the second derivative has a zero in this interval. It should also be clear that this will be the only zero of the second derivative.

Now, for bases that are integer superroots of e, the critical interval will have well-defined endpoints. For example, for base 1.601075..., the third superroot of e, the endpoints are 1 and 2, exactly. However, for base 2, we know the value of the function at v-2 and v-1, but we don't know what v is. It might be 1.484386863..., but it could be 1.47 or 1.50. Without further investigation, a claim one way or the other would only be a guess at this point.
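For what it's worth, the third superroot of e is easy to check numerically (a sketch; `tower` and `superroot` are hypothetical helper names, using simple bisection, valid since the tower is increasing in the base for bases above 1):

```python
from math import e

def tower(x, n):
    """Height-n power tower x^x^...^x, evaluated from the top down."""
    r = x
    for _ in range(n - 1):
        r = x ** r
    return r

def superroot(n, target=e, lo=1.0, hi=2.0):
    """Bisect for the n-th superroot: the x with tower(x, n) = target."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if tower(mid, n) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x3 = superroot(3)
print(x3)   # ≈ 1.601075..., and tower(x3, 3) ≈ e
```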
#6
Actually, I have an unpublished solution to base-\( e^{1/e} \) tetration as well, and I was saving it for publication, but I suppose I wouldn't mind explaining it here. Let \( DE_b(x) = b^x - 1 \) (I call these decremented exponentials). Then look at the iterates of DE with the -1 written first, instead of last as it is usually written, evaluated at \( \frac{x}{b} - 1 \):
  • \( DE_b^{[1]}(\frac{x}{b}-1) = -1 + b^{\frac{x}{b}-1} = -1 + b^{-1}\left(b^{1/b}\right)^x \)
  • \( DE_b^{[2]}(\frac{x}{b}-1) = -1 + b^{-1 + b^{\frac{x}{b}-1}} = -1 + b^{-1}\left(b^{1/b}\right)^{\left(b^{1/b}\right)^x} \)
  • \( DE_b^{[3]}(\frac{x}{b}-1) = -1 + b^{-1 + b^{-1 + b^{\frac{x}{b}-1}}} = -1 + b^{-1}\left(b^{1/b}\right)^{\left(b^{1/b}\right)^{\left(b^{1/b}\right)^x}} \)

And let \( \exp_b^{[n]}(x) = b \uparrow (\cdots \uparrow (b \uparrow x)) \) (with n copies of b) represent iterated exponentials. We can then represent the relationship between these two functions as a simple linear equation:

Theorem
\(
\exp_{(b^{1/b})}^{[n]}(x) = b\left(1 + DE_b^{[n]}\left(\frac{x}{b} - 1\right)\right)
\)

Proof
We already have several base cases above; the rest is proved by induction.
First, assume that \( DE_b^{[n]}(\frac{x}{b}-1) = -1 + b^{-1} \exp_{(b^{1/b})}^{[n]}(x) \). It then follows that:

\(
\begin{array}{rl}
DE_b^{[n+1]}(\frac{x}{b}-1)
& = DE_b[-1 + b^{-1} \exp_{(b^{1/b})}^{[n]}(x)] \\
& = -1 + b^{[-1 + b^{-1} \exp_{(b^{1/b})}^{[n]}(x)]} \\
& = -1 + b^{-1} b^{[b^{-1} \exp_{(b^{1/b})}^{[n]}(x)]} \\
& = -1 + b^{-1} (b^{1/b})^{[\exp_{(b^{1/b})}^{[n]}(x)]} \\
& = -1 + b^{-1} {\exp_{(b^{1/b})}^{[n+1]}(x)}
\end{array}
\)
which completes the induction; rearranging gives the theorem.
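The affine relationship can be checked numerically in the conjugacy form \( \exp_{(b^{1/b})}^{[n]}(x) = b(1 + DE_b^{[n]}(\frac{x}{b}-1)) \), which is the version that verifies exactly in floating point (a sketch; the helper names are mine):

```python
def DE(b, x):
    """Decremented exponential: b^x - 1."""
    return b ** x - 1

def iterate(f, n, x):
    """n-fold composition of f applied to x."""
    for _ in range(n):
        x = f(x)
    return x

b = 2.0
c = b ** (1 / b)
for n in range(1, 6):
    for x0 in (0.3, 1.0, 2.5):
        lhs = iterate(lambda t: c ** t, n, x0)
        rhs = b * (1 + iterate(lambda t: DE(b, t), n, x0 / b - 1))
        assert abs(lhs - rhs) < 1e-9
print("affine conjugacy checked for b = 2")
```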

Conclusion

Now that that's proven, anything that's true about one function should be true about the other function, since the relationship is linear. However, Trappmann said that \( e^x - 1 \) has a continuous iterate that fails to converge for non-integers. This could be disastrous for tetration (another reason I didn't want to post this). But it also means that since \( b^{1/b} = e^{1/e} \) holds only for b = e, base-\( e^{1/e} \) tetration is uniquely defined, even if its series doesn't converge.

Andrew Robbins
#7
andydude Wrote:Now that that's proven, anything that's true about one function should be true about the other function, since the relationship is linear. However, Trappmann said that \( e^x - 1 \) has a continuous iterate that fails to converge for non-integers. This could be disastrous for tetration (another reason I didn't want to post this). But it also means that since \( b^{1/b} = e^{1/e} \) holds only for b = e, base-\( e^{1/e} \) tetration is uniquely defined, even if its series doesn't converge.

Andrew Robbins

Hey, can you point me to where Trappmann said that \( e^x - 1 \) fails to converge for non-integers? (Or did you mean non-integer x?) In my own limited experimentation, it looks like it should converge quite nicely, at least within a small enough radius, limiting ourselves to the reals (I haven't tried complex numbers yet). Integer iteration counts should extend the function out to the rest of the reals.

But I'm only looking at the first 15 or so terms of the series. It seems pretty well behaved and very well-defined, but maybe I'm missing something?

Edit: Never mind, I found it here:
http://math.eretrandre.org/tetrationforu...d=28#pid28

I'll take a look at his reference, if I can get a hold of it.
#8
Quote:But I'm only looking at the first 15 or so terms of the series. It seems pretty well behaved and very well-defined, but maybe I'm missing something?

What exactly did you try?
Did you experiment with the iterations of \( e^x-1 \)? *wondering*

My assertion was that for \( f(x)=e^x-1 \) the convergence radius of the unique formal power series \( f^{\circ t}(x) \) (for example, given by the double binomial formula) is 0 for every non-integer \( t \). For natural numbers \( t \) it is clear that the radius of convergence is infinity. And for negative integers \( t=-n \): note that \( f^{\circ -1}(x)=\log(x+1) \) is defined on \( (-1,\infty) \) and has convergence radius 1 when developed at 0; more generally, the convergence radius of \( f^{\circ -n} \) is \( -f^{\circ n-1}(-1) \).
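The shrinking radii \( -f^{\circ n-1}(-1) \) are easy to tabulate (a sketch, assuming \( f(x)=e^x-1 \) as above):

```python
from math import exp

def f(x):
    return exp(x) - 1

radii = []
x = -1.0
for n in range(1, 7):
    radii.append(-x)      # radius of convergence of f^{o -n} developed at 0
    x = f(x)
print(radii)              # 1.0, then 1 - 1/e = 0.632..., shrinking toward 0
```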
#9
No, actually, I think the claim is that \( DE_b^{[t]}(x) \) fails to converge for non-integer t. I'm not convinced of this, but apparently Jabotinsky uses L-functions and L-sequences to prove this. I need to read the references too. By the way I've found both of Jabotinsky's papers on L-stuff on JSTOR. So jaydfox, if you live near a University with JSTOR access, you might want to pay them a visit. Smile

Andrew Robbins
#10
andydude Wrote:So jaydfox, if you live near a University with JSTOR access, you might want to pay them a visit. Smile

Or go to the plain old physical library ...

