Hi -
this concerns the remark of Henryk in the "Bummer"-thread.
Short review, paraphrasing Henryk:
According to D. Asimov in the 1990s, the results of tetration, when expressed as a power-series function, depend on the fixpoint used; for instance, for base b = sqrt(2) the results of fractional iteration via the fixpoints t_1 = 2 and t_2 = 4 (where b = t^(1/t)) differ. This difference may have been overlooked because it is small, on the order of about 1e-24.
I assume this is a very important problem; it also seems difficult to demonstrate by computation (at least using truncated power series of a few dozen terms).
Anyway, in the last days I have tried to get a better grip on the problem - please share your thoughts on whether this may be a fruitful approach, or whether it will likely be useless and lead nowhere...
-----------------------------------------------------------
I turn the view around: instead of determining the fixpoint t from the base b, I determine the base b from a selected fixpoint t - more precisely, from a uniquely selected complex parameter u, then its exponential t, and then the base b.
This gives, first:
* select a complex u
* define t = exp(u)
* define b = exp(u/t) = exp(u/exp(u))
with the required consequence that ...^b^b^b^t = b^b^b^t = b^b^t = b^t = t
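These three steps can be checked numerically; a minimal sketch in Python (the choice u = log 2 is just an example value of mine):

```python
import cmath

u = cmath.log(2)        # select a complex u (here: the principal log of 2)
t = cmath.exp(u)        # t = exp(u) = 2
b = cmath.exp(u / t)    # b = exp(u/t) = 2^(1/2) = sqrt(2)

# t is then a fixpoint of x -> b^x, i.e. b^t = t:
print(abs(b**t - t))    # ~0 (up to rounding)
```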
Now we may also change our wording: instead of saying "t is a fixpoint of b", say "b is a N_eutral E_lement for t (= exp(u)) under the operation of T_etration" ("NET" for short).
In multiplication there is, for each parameter t, one neutral element b = 1, which is the same for all t: ...*b*b*t = b*b*t = b*t = t. (Here b can be seen as b = t*(1/t), where in tetration we have b = t^(1/t) - but, well, that's numerology so far, without obvious further use.)
We also know that,
1) unlike in multiplication, each t has an individual NET b,
and also
2) different t may have the same b.
This requires more notation. Since
\( \hspace{24} b = exp(u/exp(u)) \)
we should use an index u and thus write b_u; and since moreover the exp(x) function takes the same value for the different x_k = x + k*2 Pi i, we should fully index the NETs as
\( \hspace{24}
u_k = u_0 + k*2 \pi i \\ \hspace{24}
{b_{ u_{k}}} = exp( \frac{u_0 + k*2 \pi i}{exp(u_0 + k*2 \pi i)} )
\)
or, since all t_k=exp(u_0 + k*2 Pi i) are equal, we may write
\( \hspace{24}
{b}_{u_{k}} = exp( \frac{u_0}{t}+ \frac{k}{t} *(2 \pi i) )
\)
One property is immediately obvious:
if t is rational (or rather, u_0 is the principal log of a rational), then, considering different k_j = {k_1, k_2, k_3, ...}:
* if k_j/t is an integer, we have periodicity over the different k_1, k_2, ...
\( \hspace{24}
{b_{u_{k_{j}}}} = exp( \frac{u_0}{t}+ \frac{k_j}{t}*(2 \pi i) ) = exp(\frac{u_0}{t})
\)
resulting in the same NET for all u_{k_j}.
Thus: while the map u -> exp(u) is periodic over the u_0 + k*2 Pi i, the map u -> b = exp(u*exp(-u)) is not periodic, except when u is the log of a rational number t.
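A small numeric illustration (my own example with u_0 = log 2, so t = 2): all t_k coincide, but the NETs b_{u_k} differ, and b repeats exactly when k/t is an integer:

```python
import cmath

u0 = cmath.log(2)

# all u_k = u_0 + k*2*pi*i have the same exponential t = 2 ...
t1 = cmath.exp(u0 + 2j * cmath.pi)
print(abs(t1 - 2))           # ~0

# ... but the NETs b_{u_k} = exp(u_k / exp(u_k)) depend on k:
def b(k):
    u = u0 + 2j * cmath.pi * k
    return cmath.exp(u / cmath.exp(u))

print(b(0))                  # ~ sqrt(2)
print(b(1))                  # ~ -sqrt(2): k/t = 1/2 is not an integer
print(abs(b(2) - b(0)))      # ~0:         k/t = 2/2 = 1 is an integer
```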
I've drawn graphs of these mappings, and the latter (u -> b) gives a smoother image than that of t -> b, which we see more often in tetration discussions (however, my graphing capabilities are poor due to the lack of a powerful commercial CAS).
Also, we are now in the position that for the tetration function T_b°h(x), rewritten as
T_{u_k}°h(x) = exp(u_k/t * x) (for h = 1)
we have the whole complex plane as domain for u_k (since the denominator in exp(u_k/exp(u_k)) cannot be zero). The domain for b is then the whole complex plane, punctured at the origin by the very initial functional definition (so b = 0 need not be considered other than as a limit, for instance).
I don't know whether this change of focus helps in any way to get deeper into the inconsistency problem; I've found no convincing benefit from it so far.
---------------------------------------------------------------
Another current approach of mine, to find any idea about this, is to (re-)consider the power series of
\( \hspace{24} f_0(u) = exp(u*exp(-u)) \)
first. This is relatively simple; the first terms look like
\( \hspace{24} f_0(u) = 1 + u - 1/2*u^2 - 1/3*u^3 + 3/8*u^4 - 1/30*u^5 - 19/144*u^6 + 23/280*u^7 + O(u^8) \)
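These coefficients can be reproduced with exact rational arithmetic, by composing the exponential series with the series of u*exp(-u); a self-contained sketch:

```python
from fractions import Fraction
from math import factorial

N = 8  # number of coefficients to compute

# series of u*exp(-u): coefficient of u^n is (-1)^(n-1)/(n-1)! for n >= 1
g = [Fraction(0)] + [Fraction((-1) ** (n - 1), factorial(n - 1)) for n in range(1, N)]

def mul(a, b):
    """Truncated Cauchy product of two coefficient lists."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# f0 = exp(g) = sum_m g^m / m!; g has no constant term, so each order is a finite sum
f0 = [Fraction(0)] * N
gpow = [Fraction(1)] + [Fraction(0)] * (N - 1)  # g^0
for m in range(N):
    for i in range(N):
        f0[i] += gpow[i] / factorial(m)
    gpow = mul(gpow, g)

print([str(c) for c in f0])
# ['1', '1', '-1/2', '-1/3', '3/8', '-1/30', '-19/144', '23/280']
```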
The associated matrix operator is an infinite square array; its top-left corner looks like
\( \hspace{24}
M_0 =
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 4 & 5 \\
0 & -1/2 & 0 & 3/2 & 4 & 15/2 \\
0 & -1/3 & -5/3 & -3 & -10/3 & -5/3 \\
0 & 3/8 & 1/3 & -13/8 & -6 & -295/24 \\
0 & -1/30 & 61/60 & 29/10 & 101/30 & -5/6
\end{array}
\)
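Since, per the relation V(u)~ * M0 = V(b)~ used further below, column j of this array holds the coefficients of f_0(u)^j, single rows of the display can be re-derived; a sketch that rebuilds the row of u^3-coefficients:

```python
from fractions import Fraction

N = 6
# coefficients of f0(u) = exp(u*exp(-u)), taken from the series above
f0 = [Fraction(s) for s in ('1', '1', '-1/2', '-1/3', '3/8', '-1/30')]

def mul(a, b):
    """Truncated Cauchy product."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# column j of M0 holds the coefficients of f0(u)^j
cols, p = [], [Fraction(1)] + [Fraction(0)] * (N - 1)
for j in range(N):
    cols.append(p[:])
    p = mul(p, f0)

# transpose, so that row i collects the u^i-coefficients across all columns
M0 = [[cols[j][i] for j in range(N)] for i in range(N)]
print([str(c) for c in M0[3]])
# ['0', '-1/3', '-5/3', '-3', '-10/3', '-5/3']
```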
The entries of the second column are the coefficients of the power series for f_0(u).
Since f_0(0) ≠ 0, we cannot easily iterate that operator, nor invert it to get a power series for the inverse function
\( \hspace{24} u = {f_0}^{\circ -1}(b) = x + y b + z b^2 + ... \)
But M0 can be decomposed into two triangular matrices. Calling the above operator M0, we have
Code:
M0 = M * P~
where P~ is the upper-triangular Pascal matrix and M is then triangular.
From the construction of M0 we have, in matrix notation, formally
Code:
V(u)~ * M0 = V(b)~
(omitting considerations of convergence here).
But since P~ performs just a shift by one unit on the power series involved, we can write
Code:
V(u)~ * M0 = V(b)~
V(u)~ * M * P~ = V(b)~
V(u)~ * M = V(b)~ * P~^-1
V(u)~ * M = V(b-1)~
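The step from V(b)~ to V(b-1)~ relies on P~ acting as a unit shift: V(x)~ * P~ = V(x+1)~ holds exactly in every truncation, because the sum over C(j,i)*x^i gives (1+x)^j. A quick check:

```python
from math import comb

N = 8
V = lambda z: [z**i for i in range(N)]

# upper-triangular Pascal matrix P~ with entries C(j, i)
P = [[comb(j, i) for j in range(N)] for i in range(N)]

def vecmat(v, A):
    """Row vector times matrix."""
    return [sum(v[i] * A[i][j] for i in range(N)) for j in range(N)]

x = 0.3  # arbitrary test value
err = max(abs(a - b) for a, b in zip(vecmat(V(x), P), V(x + 1)))
print(err)   # ~0: V(x)~ * P~ = V(x+1)~
```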
M begins with
\( \hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & -1/2 & 1 & . & . & . \\
0 & -1/3 & -1 & 1 & . & . \\
0 & 3/8 & -5/12 & -3/2 & 1 & . \\
0 & -1/30 & 13/12 & -1/4 & -2 & 1
\end{array}
\)
and the second column of M now provides the coefficients of the function
\( \hspace{24}
f(u) = u - 1/2*u^2 - 1/3*u^3 + 3/8*u^4 - 1/30*u^5 - 19/144*u^6 + 23/280*u^7 + O(u^8)
\)
Also, M is triangular and thus invertible, and its inverse provides the inverse operation on the power series.
The top-left corner of that inverse looks like
\( \hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & 1/2 & 1 & . & . & . \\
0 & 5/6 & 1 & 1 & . & . \\
0 & 13/12 & 23/12 & 3/2 & 1 & . \\
0 & 28/15 & 3 & 13/4 & 2 & 1
\end{array}
\)
giving the coefficients of the inverse function of f(u) = b-1 via
Code:
V(u)~ * M = V(b-1)~
V(u)~ = V(b-1)~ * M^-1
resulting in
\( \hspace{24}
g(x) := x + 1/2*x^2 + 5/6*x^3 + 13/12*x^4 + 28/15*x^5 + 187/60*x^6 + 1781/315*x^7 + O(x^8)
\)
and then
\( \hspace{24}
u = g(b-1)
\)
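The coefficients of g can also be re-derived by order-by-order reversion of the series f(u) = f_0(u) - 1, again with exact rationals; a minimal self-contained sketch:

```python
from fractions import Fraction

N = 8
# coefficients of f(u) = f0(u) - 1 (the second column of M above)
f = [Fraction(s) for s in ('0', '1', '-1/2', '-1/3', '3/8', '-1/30', '-19/144', '23/280')]

def mul(a, b):
    """Truncated Cauchy product."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def compose(outer, inner):
    """outer(inner(x)) as truncated series; inner must have no constant term."""
    res = [Fraction(0)] * N
    p = [Fraction(1)] + [Fraction(0)] * (N - 1)  # inner^0
    for k in range(N):
        for i in range(N):
            res[i] += outer[k] * p[i]
        p = mul(p, inner)
    return res

# series reversion: determine g order by order so that g(f(x)) = x
g = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)
for n in range(2, N):
    g[n] = -compose(g, f)[n]

print([str(c) for c in g[:6]])   # ['0', '1', '1/2', '5/6', '13/12', '28/15']
```

The correction at each order works because f has leading coefficient 1, so adding a term g_n*x^n changes the composition at order n by exactly g_n.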
In principle this power series allows one to compute u_0 from a given b by evaluating g(b-1).
Since this is in principle a variant of the h() function, I tried to relate its coefficients to those of h(), but have not succeeded yet.
Also, it may be of little use, since its radius of convergence allows only the usual small range 1/e^e < b < e^(1/e), and it converges slowly when b deviates from 1.
(On the other hand, it contains my birthday in its 4th term, so it's at least a function with a personal flair...)
---------------------
So, I have nothing breathtakingly new here; this is just an attempt to find some useful path toward encircling, and possibly resolving, the "bummer" problem.
Gottfried
Gottfried Helms, Kassel