The inconsistency depending on fixpoint-selection
#1
Hi -

this concerns the remark of Henryk in the "Bummer"-thread.

Short review, paraphrasing Henryk:
According to D. Asimov in the 1990s, the results of tetration, when expressed as a power-series function, depend on the fixpoint used; for instance, for base b=sqrt(2) the results for fractional iteration using the fixpoints t_1=2 and t_2=4, where b=t^(1/t), are different. This difference may have been overlooked because it is small, of the order of about 1e-24.


I assume this is a very important problem, and it seems difficult to settle by computation (at least using truncated power series of a few dozen terms).
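As a side note, here is a minimal numerical cross-check, not a proof: a sketch of my own that avoids truncated power series and instead uses the standard limit formulas of regular iteration at an attracting/repelling fixpoint; the function names and the sample point x=3 are arbitrary choices.

Code:
# Sketch only: compare the regular half-iterates of f(x) = sqrt(2)^x computed
# at the two fixpoints t=2 (attracting) and t=4 (repelling).
from mpmath import mp, mpf, exp, log, sqrt

mp.dps = 80                       # the expected discrepancy is tiny, so use many digits

b    = sqrt(2)
lnb  = log(b)
f    = lambda x: exp(x * lnb)     # f(x) = b^x
finv = lambda x: log(x) / lnb     # f^(-1)(x) = log_b(x)

def half_lower(x, n=250):
    """Regular half-iterate at the attracting fixpoint t=2 (multiplier log 2):
       f^0.5(x) ~ f^(-n)( 2 + sqrt(log 2)*(f^n(x) - 2) ) for large n."""
    lam = log(2)
    y = x
    for _ in range(n): y = f(y)       # drive x toward the fixpoint 2
    y = 2 + sqrt(lam) * (y - 2)       # half-step in the linearizing coordinate
    for _ in range(n): y = finv(y)    # and back out
    return y

def half_upper(x, n=300):
    """Regular half-iterate at the repelling fixpoint t=4 (multiplier log 4):
       f^0.5(x) ~ f^n( 4 + sqrt(log 4)*(f^(-n)(x) - 4) ) for large n."""
    mu = log(4)
    y = x
    for _ in range(n): y = finv(y)    # f^(-1) attracts toward 4
    y = 4 + sqrt(mu) * (y - 4)
    for _ in range(n): y = f(y)
    return y

x = mpf(3)
h2, h4 = half_lower(x), half_upper(x)
print(h2)
print(h4)
print(h2 - h4)    # if the fixpoint-dependence is real, it should show up near 1e-24

Of course, agreement or disagreement at that level proves nothing by itself; it is only meant as an independent numerical probe.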

Anyway, in the last days I have tried to get a better grip on the problem - please share your thoughts on whether this may be a fruitful approach, or whether it will likely be useless and lead nowhere...

-----------------------------------------------------------

I turn the view around: instead of determining the fixpoint t from the base b, I determine the base b from a selected fixpoint t - more precisely, from a uniquely selected complex parameter u, then its exponential t, and then the base b.

This means, first:
* select a complex u
* define t = exp(u)
* define b = exp(u/t) = exp(u/exp(u))

with the required consequence that ...^b^b^b^t = b^b^b^t = b^b^t = b^t = t
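A two-line numeric sanity check of this recipe (my own sketch; the sample value of u is arbitrary, and for u with small imaginary part no branch issues of the principal logarithm arise):

Code:
import cmath

u = 0.3 + 0.2j               # any complex u (kept close to the real axis here)
t = cmath.exp(u)             # t = exp(u)
b = cmath.exp(u / t)         # b = exp(u/t) = exp(u/exp(u))

print(b ** t)                # equals t up to rounding, since b^t = exp(t*(u/t)) = exp(u)
print(t)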

Now we may also change our wording: instead of saying that t is a fixpoint of b, we say "b is a N_eutral E_lement for t (=exp(u)) under the operation of T_etration" (use "NET" for short).

In multiplication, for each parameter t there is one neutral element, b=1, which is the same for all t: ...*b*b*t = b*b*t = b*t = t (and b can be seen as b=t*1/t, where in tetration we have b=t^(1/t) - but, well, that's numerology so far, without obvious further use)

We also know that,
1) different from the operation of multiplication, each t has its own individual NET b,

and also
2) different t may have the same b.

This requires more notation. Since

\( \hspace{24} b = exp(u/exp(u)) \)
we should use the index u, thus write b_u; and since moreover the exp(x)-function assumes the same value for all x_k = x + k*2 Pi i, we should fully index the NETs as

\( \hspace{24}
u_k = u_0 + k*2 \pi i \\ \hspace{24}
{b_{ u_{k}}} = exp( \frac{u_0 + k*2 \pi i}{exp(u_0 + k*2 \pi i)} )
\)

or, since all t_k=exp(u_0 + k*2 Pi i) are equal, we may write

\( \hspace{24}
{b}_{u_{k}} = exp( \frac{u_0}{t}+ \frac{k}{t} *(2 \pi i) )
\)

One property is immediately obvious:
if t is rational (or, equivalently, u_0 is the principal log of a rational t), consider different k_j from {k_1, k_2, k_3, ...}:

* whenever k_j/t is an integer, we have periodicity over those k_1, k_2, ...

\( \hspace{24}
{b_{u_{k_{j}}}} = exp( \frac{u_0}{t}+ \frac{k_j}{t}*(2 \pi i) ) = exp(\frac{u_0}{t})
\)
resulting in the same NET for all such u_{k_j}.

Thus: while the map u -> exp(u) gives a graph which is periodic, repeating at u_0 + k*2 Pi i, the map u -> b = exp(u*exp(-u)) is not periodic, except, in the sense above, when u is the log of a rational number t.
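A small numeric illustration (my own sketch; the choice of t-values is arbitrary): for t = 2 the NETs b_{u_k} repeat with period 2 in k (k/t is an integer for even k), while for an irrational t such as pi no repetition shows up among small k.

Code:
import cmath
from math import pi

def b_of(u):                       # b_u = exp(u/exp(u))
    return cmath.exp(u / cmath.exp(u))

for t in (2.0, pi):
    u0 = cmath.log(t)              # principal log, so t = exp(u0)
    print("t =", t)
    for k in range(5):
        uk = u0 + 2j * pi * k
        print("  k =", k, " b_{u_k} =", b_of(uk))

For t = 2 the printed values should alternate between sqrt(2) and -sqrt(2) (period 2 in k), while for t = pi all five values should differ.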

I've drawn graphs for these mappings; the latter (u -> b) gives a smoother image than the map t -> b, which we see more often in tetration discussions (however, my graphing capabilities are limited, for lack of a powerful commercial CAS).

Also, we are in the position that for the tetration function T_b°h(x), rewritten as

T_u_k°h(x) = exp(u_k/t *x) (for h=1)

we have the whole complex plane as domain for u_k (since the denominator in exp(u_k/exp(u_k)) cannot be zero). Then the domain for b is the whole complex plane, punctured at the origin by the very initial functional definition (so we need not consider b=0 other than as a limit, for instance).

I don't know whether this change of focus helps in any way to get deeper into the inconsistency problem; I've found no convincing benefit so far.

---------------------------------------------------------------

Another current approach of mine, to find any idea about this, is to (re-)consider the power series of

\( \hspace{24} f_0(u) = exp(u*exp(-u)) \)

first. This is relatively simple, the first terms look like

\( \hspace{24} f_0(u) = 1 + u - 1/2*u^2 - 1/3*u^3 + 3/8*u^4 - 1/30*u^5 - 19/144*u^6 + 23/280*u^7 + O(u^8) \)
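These coefficients can be reproduced quickly with a computer algebra system; a short sketch (sympy, my own choice of tool):

Code:
import sympy as sp

u = sp.symbols('u')
f0 = sp.exp(u * sp.exp(-u))
print(sp.series(f0, u, 0, 8))   # coefficients should match the series above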

The associated matrix-operator is an infinite square-array; the top left looks like

\( \hspace{24}
M_0 =
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 4 & 5 \\
0 & -1/2 & 0 & 3/2 & 4 & 15/2 \\
0 & -1/3 & -5/3 & -3 & -10/3 & -5/3 \\
0 & 3/8 & 1/3 & -13/8 & -6 & -295/24 \\
0 & -1/30 & 61/60 & 29/10 & 101/30 & -5/6
\end{array}
\)

The entries in the 2'nd column are the coefficients of the powerseries for f_0(u).
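A sketch of how such a block can be built (my own construction, following the convention V(u)~ * M0 = V(b)~, so that entry M0[i,j] is the coefficient of u^i in f_0(u)^j):

Code:
import sympy as sp

u = sp.symbols('u')
f0 = sp.exp(u * sp.exp(-u))
N = 6
M0 = sp.zeros(N, N)
for j in range(N):
    ser = sp.series(f0**j, u, 0, N).removeO()    # truncated series of the j-th power
    for i in range(N):
        M0[i, j] = ser.coeff(u, i)               # coefficient of u^i goes into row i
sp.pprint(M0)                                    # should reproduce the 6x6 block above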

Since f_0(0)=/=0, we cannot easily iterate that operator, or invert it to get a powerseries for the inverse function
\( \hspace{24} u = {f_0}^{o-1}(b) = x + y b + z b^2 + ... \)

But M0 is decomposable into the product of two triangular matrices. Calling the above operator M0, we have
Code:
.
     M0 = M * P~
where P~ is the upper-triangular Pascal matrix and M is then lower triangular.

From the construction of M0 we have, in matrix notation, formally:
Code:
.
     V(u)~ * M0 = V(b)~
(omitting considerations of convergence here).

But since P~ just shifts the argument of the involved power series by one unit (V(x)~ * P~ = V(x+1)~), we can write
Code:
.
      V(u)~ * M0 = V(b)~
      V(u)~ * M * P~  = V(b)~
      V(u)~ * M  = V(b)~ * P^-1 ~
      V(u)~ * M  = V(b-1)~
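A quick check of this decomposition (again a sketch of my own, rebuilding M0 as before):

Code:
import sympy as sp

u = sp.symbols('u')
f0 = sp.exp(u * sp.exp(-u))
N = 6
M0 = sp.Matrix(N, N, lambda i, j: sp.series(f0**j, u, 0, N).removeO().coeff(u, i))
Pt = sp.Matrix(N, N, lambda i, j: sp.binomial(j, i))   # upper-triangular Pascal matrix P~
M  = M0 * Pt.inv()                                     # M0 = M * P~   =>   M = M0 * P~^(-1)
sp.pprint(M)   # lower triangular; column 1 holds the coefficients of f(u) = f0(u) - 1

That column j of M collects the coefficients of (f_0(u)-1)^j is just the V(b-1)~ statement above, read column-wise.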

M begins with

\( \hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & -1/2 & 1 & . & . & . \\
0 & -1/3 & -1 & 1 & . & . \\
0 & 3/8 & -5/12 & -3/2 & 1 & . \\
0 & -1/30 & 13/12 & -1/4 & -2 & 1
\end{array}
\)

and the second column of M now provides the coefficients for the function

\( \hspace{24}
f(u) = u - 1/2*u^2 - 1/3*u^3 + 3/8*u^4 - 1/30*u^5 - 19/144*u^6 + 23/280*u^7 + O(u^8)
\)

Also, M is triangular with unit diagonal and thus invertible, and its reciprocal provides the inverse operation for the power series.

The top-left corner of its reciprocal looks like
\( \hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & 1/2 & 1 & . & . & . \\
0 & 5/6 & 1 & 1 & . & . \\
0 & 13/12 & 23/12 & 3/2 & 1 & . \\
0 & 28/15 & 3 & 13/4 & 2 & 1
\end{array}
\)

giving the coefficients for the inverse function of f(u) = b-1 via
Code:
.
  V(u) ~ * M = V(b-1)~
  V(u) ~     = V(b-1)~ * M^-1

resulting in

\( \hspace{24}
g(x) := x + 1/2*x^2 + 5/6*x^3 + 13/12*x^4 + 28/15*x^5 + 187/60*x^6 + 1781/315*x^7 + O(x^8)
\)
and then
\( \hspace{24}
u = g(b-1)
\)

This power series in principle allows computing u_0 from a given b by evaluating g(b-1).
Since this is in essence a variant of the h()-function, I tried to relate its coefficients to those of h(), but have not succeeded yet.
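Two small cross-checks (a sketch of my own, not part of the matrix derivation above): first that g reverts f up to order u^7, and second a comparison of the truncated g(b-1) for b=sqrt(2) with the closed form u = -W(-log b), where W is the principal branch of the Lambert W function, which picks the lower fixpoint t=2; this W-connection is my own remark.

Code:
import sympy as sp

u = sp.symbols('u')
f = sp.exp(u * sp.exp(-u)) - 1                    # f(u) = b - 1
g = (u + u**2/sp.S(2) + 5*u**3/sp.S(6) + 13*u**4/sp.S(12)
       + 28*u**5/sp.S(15) + 187*u**6/sp.S(60) + 1781*u**7/sp.S(315))

print(sp.series(g.subs(u, f), u, 0, 8))           # should give u + O(u**8)

b = sp.sqrt(2)
print(sp.N(-sp.LambertW(-sp.log(b))))             # log(2) = 0.6931..., the u0 of t=2
print(sp.N(g.subs(u, b - 1)))                     # truncated g(b-1); approaches log(2) only slowly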

It may also be of little use, since its radius of convergence allows only the usual small range 1/e^e < b < e^(1/e), and it converges slowly when b deviates from 1.
(On the other hand, it contains my birthday in its 4th term, so it's at least a function with a personal flair... :D )

---------------------

So, I have nothing breathtakingly new here; it's just an attempt to find a useful path toward circumscribing and possibly resolving the "bummer" problem.

Gottfried
Gottfried Helms, Kassel
#2
Well, rethinking a bit (and involving a bit of latte makkiato :) ), I feel I should add one thought: why all this...

The speculation behind this is to extract a functional relation between two u, say u and v, or more precisely u_k and v_j, having the same NET b, and then to consider the eigensystem-based formula for the power-series expansion of tetration. Is it, under this functional relation, *necessary* that the power series constructed from u_k and v_j give different results? Or can it be shown that they must give the same result?

Maybe, maybe it is even likely, that this is of no help. I think a very basic reconsideration of the problem in focus is needed.

Gottfried
Gottfried Helms, Kassel
#3
Well ... latte makkiato is always good! ;)

I think that the two real h's (of the t's) corresponding to the same b = sqrt(2), this time only with the + sign (sorry, Bo, ... it's the age), i.e. h = 2 and h = 4, must be different indeed. Any ... serious serial development should indeed show this situation. We cannot think that we may start from b = sqrt(2) and then, ... bingo! ... we suddenly have two different values. The "strange object" that we may call h or t is the result of the application of a "two-valued function". But, two-valued "functions" are not politically correct animals.

But, perhaps, I don't understand precisely what you said. I didn't sleep well last night. I shall improve! Tomorrow is another day.

GFR
#4
GFR Wrote:I think that the two real h's (of the t's) corresponding to the same b = sqrt(2), this time only with the + sign (sorry, Bo, ... it's the age), i.e. h = 2 and h = 4, must be different indeed. Any ... serious serial development should indeed show this situation. We cannot think that we may start from b = sqrt(2) and then, ... bingo! ... we suddenly have two different values. The "strange object" that we may call h or t is the result of the application of a "two-valued function". But, two-valued "functions" are not politically correct animals.

But, perhaps, I don't understand precisely what you said. I didn't sleep well last night. I shall improve! Tomorrow is another day.

GFR
Hi Gianfranco -

I've begun a short consideration of the multivalued log(1+x) function in the context of matrix operations (an extended version of ContinuousIteration), which may be of interest.
It seems interesting, but it shows that we also need a more general notion of divergent summation, especially for the complex case. Since Euler summation (although capable in principle) is not well suited for the complex case (and apparently not much studied there), I'm always at the edge of my possibilities. At least I cannot proceed much further without finding a reliable basis for such summation concepts. It seems that all (or nearly all) fractional iterations of tetration, if based on power series, produce divergent power series of hypergeometric growth with convergence radius zero, not only the x -> exp(x)-1 version. These series cannot be Euler-summed, in principle, and thus we need such a concept of assigning values to divergent series.
The power series for log(1+x) = x - x^2/2 + x^3/3 - + ... may be configured for multivaluedness as log_k(1+x) = k*2 Pi i + x - x^2/2 + x^3/3 - + ..., and the matrix operator is then square (no longer triangular) and has all the nasty properties of divergent series.

Disclaimer: all this is more or less speculation, motivated only by the search for a usable entry point into the problem on which I focus in this thread.

Well - have a good night, I'll stop soon, too; I have to prepare for an exam in my statistics class tomorrow. I'll need hawk's eyes... :)

Kind regards -

Gottfried
Gottfried Helms, Kassel
#5
Dear Gottfried, concerning:
Gottfried Wrote:ContinuousIteration, ... may be of interest.
.......
more or less speculation, motivated only by the search for a usable entry point into the problem on which I focus in this thread.

I read your interesting report and I have to confess that I didn't assimilate all its contents. In doing so, I measured the achievement of my ... perfect incompetence level, according to the Peter Principle. ;) I shall read it again, calmly!

In particular, I read with a lot of interest the list of important questions that still need to be answered, among which: "Why does tetration oscillate for b < e^(-e)?". As a matter of fact, I think that it also oscillates when e^(-e) < b < 1, before reaching a constant real asymptotic value, depending on b, as x -> oo.

Why is that? Perhaps because y = b[4]x is complex for b < 1? But we need a demonstration to say that. Maybe your matrix approach will help. Please find attached some considerations on serial developments, based on information found on the Web. It may be useful. :(

But, if y = b[4]x oscillates for b < 1, what would be the meaning of the superlog in that domain? Think of a simple "wild" graphical inversion, or of a more mathematically correct non-invertibility of it. Do we have to consider that, perhaps, the slog "cannot be defined" for b < 1? This would have immediate consequences for the definition of "pentation" in that base area.

GFR

NB: This is my second posting, slightly modified. The previous one was lost. An error on my side? Obscured, because badly formulated? ;)

By the way, "latte*makkiato", in general, is not equal to "kaffé*makkiato", but there is a fixpoint. Something like "kapputschino". Amazing !


Attached Files
.pdf   Notes on Serial Developments.pdf (Size: 36.71 KB / Downloads: 773)
#6
Just a short note -

the EIE-considerations on page 19 are meaningless in this context. I was still editing and forgot to delete that part when I uploaded the pdf file. (It was part of the NET thoughts of my introductory post, in an early state.) Please reload the corrected version.

Gottfried
Gottfried Helms, Kassel
#7
With some more analytical matrix operations and numerical checks I now tend toward the conclusion that Asimov's proposal may be proven false.

I checked the occurring matrices in more depth and found very convincing numerical evidence that, at least for the case b=sqrt(2), t0=2 and t1=4, the results are equal. For integer heights this is obvious from the scalar expressions alone (sqrt(2)^2 = 2, sqrt(2)^4 = 4 and so on); the problem occurs with fractional heights. I checked this now for h=1/2, using the power series which I get from eigensystem decomposition/diagonalization.

I used a better routine to compute the eigen-matrices up to dimension 160x160, and some transformations make those different eigen-matrices summable to the same values. Although this holds only within a certain numerical approximation, the involved transformations are simple binomial transformations, which may then be derived analytically as well. So my challenge is now to put some effort into performing those analytical derivations, since the chance of success now looks better.



Let b=sqrt(2), t0 = 2, t1 = 4, and let h be the height, here h=1/2.

Then, in my usual notation we should get

V(x)~ * Bb^0.5 =V(y)~

where y = T_b°0.5(x)

With the fixpoints, using the h()-function with indexes
t0 = h_0(b) = 2,   t1 = h_1(b) = 4

this is equivalent to the two different fixpoint-based matrix expressions (using PInv for P^-1)
Code:
´
V(x)~ * Bb^0.5 = V(x)~ * dV(1/t0)*PInv ~ * U_t0^0.5 * P~ * dV(t0)
V(x)~ * Bb^0.5 = V(x)~ * dV(1/t1)*PInv ~ * U_t1^0.5 * P~ * dV(t1)
so we require:

Code:
´
  dV(1/t0)*PInv ~ * U_t0^0.5 * P~ * dV(t0)
= dV(1/t1)*PInv ~ * U_t1^0.5 * P~ * dV(t1)


Rearrange dV(1/t0):
Code:
´
  PInv~ * U_t0^0.5 * P~
= dV(t0/t1)*PInv ~ * U_t1^0.5 * P~ * dV(t1/t0)

Now set t1/t0=a and the part dV(1/a)*PInv ~ in the rhs can be expanded according to
Code:
´
dV(1/a)*P^-1 ~ = (dV(1/a) P^-1~ dV(a))*dV(1/a)
            = (dV(a) P^-1 dV(1/a)) ~ * dV(1/a)
            = P^-a ~ * dV(1/a)

and we get, with a= 2, also writing U2 for U_t0 and U4 for U_t1
Code:
´
  P^-1 ~ * U2^0.5 * P~
= P^-2 ~ * dV(1/2) * U4^0.5 * dV(2)* P^2 ~


and this then gives, moving P^-2~ and P^2~ over to the lhs:
Code:
´
     P ~   * U2^0.5 * PInv ~
= dV(1/2)  * U4^0.5 * dV(2)


We see that, unfortunately, we still have infinite sums in the lhs: all rows of P~ are infinite, as are the columns of U2^0.5. So we either have to determine these sums analytically (for which I currently have no solution) or employ accelerating methods like the Euler transform/summation. It turns out that these row/column products are not well Euler-accelerable; the Euler sum does not converge well. The second multiplication, U2^0.5 * PInv~, however, provides a simple result: since only the second column of the result is of interest, only the top-left 2x2 segment of PInv~ is relevant, and this simply subtracts the first column [1,0,0,0,...] of U2^0.5 from its second column - which is happily trivial.



Applying Euler summation anyway, we get identity of the first few entries of the lhs and rhs within a certain range of accuracy; see the end of this message.
Some more tests gave even better accuracy with an additional power series in x, which makes the resulting power series convergent for abs(x)<1, using the first 160 terms:
Code:
´
V(x)~*    P ~   * U2^0.5 * PInv ~
=V(x)~* dV(1/2)  * U4^0.5 * dV(2)


where the lhs can be rewritten using the binomial theorem, giving
Code:
´
       V(x+1)~   * U2^0.5 * PInv ~
=V(x)~* dV(1/2)  * U4^0.5 * dV(2)

where the implicit infinite series in the matrix product P~*U2^0.5 are now removed.
Tests with various x which make the matrix multiplication convergent should then give the same results for the lhs and rhs. However, using various different x does not prove the identity; it only makes it more likely.
(See the last example, where x was set to x=-1/2, and the sketch just below for the binomial-theorem step.)
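For reference, the binomial-theorem step V(x)~ * P~ = V(x+1)~ in a small numeric sketch (numpy, my own setup; the truncation is exact because P~ is triangular):

Code:
import numpy as np
from math import comb

N = 8
Pt = np.array([[comb(j, i) for j in range(N)] for i in range(N)], dtype=float)  # P~[i,j] = C(j,i)
V = lambda z: np.array([z**k for k in range(N)], dtype=float)

x = -0.5
print(V(x) @ Pt)     # sum_i x^i * C(j,i) = (1+x)^j, i.e. V(x+1)
print(V(x + 1))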



The matrix U2^0.5 was constructed from the analytic eigen-decomposition (160x160):
Let u0=log(t0)=log(2), u1 = log(t1)=log(4)
Code:
´  
  U2     = dV(u0) * S2                    // S2 is the factorially similarity-scaled matrix of Stirling numbers of the 2nd kind
         = W2 * D2 * W2^-1
  U2^0.5 = W2 * D2^0.5 * W2^-1


where W2 is the matrix of eigenvectors and D2 the diagonal matrix of eigenvalues of U2 = dV(u0)*S2. Since the matrix U2 is triangular, its eigenvalues can be read off the diagonal (and are thus identical to the entries of dV(u0)), and its eigenvector matrix can be taken to be triangular, too. Using my analytical description of the eigenmatrices we get exact terms for any dimension. The same method was applied to U4 (based on the second fixpoint):
Code:
´  
  U4     = dV(u1) * S2
         = W4 * D4 * W4^-1
  U4^0.5 = W4 * D4^0.5 * W4^-1
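For readers without the analytical eigensystem at hand, here is an independent sketch (my own code, not the routine used above) that reproduces the interesting second column of U2^0.5: U2[i,j] is the coefficient of w^i in (exp(u0*w)-1)^j, i.e. j!*S2(i,j)*u0^i/i!, and since U2 is lower triangular with distinct positive diagonal entries u0^j, its square root can be computed by a simple triangular recurrence, which should coincide with the eigensystem-based U2^0.5.

Code:
from mpmath import mp, mpf, log, sqrt, factorial

mp.dps = 30
N  = 12
u0 = log(2)

# Stirling numbers of the 2nd kind via the usual recurrence
S = [[mpf(0)] * N for _ in range(N)]
S[0][0] = mpf(1)
for i in range(1, N):
    for j in range(1, i + 1):
        S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]

# U2[i][j] = j! * S(i,j) * u0^i / i!   (= dV(u0)*S2, lower triangular)
U2 = [[factorial(j) * S[i][j] * u0**i / factorial(i) for j in range(N)] for i in range(N)]

# triangular square root R with R*R = U2, computed row by row
R = [[mpf(0)] * N for _ in range(N)]
for i in range(N):
    R[i][i] = sqrt(U2[i][i])                   # diagonal: sqrt(u0^i)
    for j in range(i - 1, -1, -1):
        s = sum(R[i][k] * R[k][j] for k in range(j + 1, i))
        R[i][j] = (U2[i][j] - s) / (R[i][i] + R[j][j])

print([R[i][1] for i in range(6)])
# expected (cf. the U2^0.5 listing below): 0, 0.83255461, 0.15745312, 0.010090238, ...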

=======================================================================

Documents:
-------------------------------------------------------------
In each listing: rows 0..10 and 149..159, columns 0..3; column 1 is the one of interest in U2^0.5 and U4^0.5.
Code:
´
W2:
  1.0000000          .          .          .
          0  1.0000000          .          .
          0  1.1294457  1.0000000          .
          0  1.1985847  2.2588914  1.0000000
          0  1.2474591  3.6728170  3.3883370
          0  1.2856301  5.2023909  7.4226968
          0  1.3170719  6.8257401  13.305570
          0  1.3439053  8.5286133  21.207245
          0  1.3673703  10.300960  31.276282
          0  1.3882575  12.135263  43.645475
          0  1.4071054  14.025656  58.435556
...
          0  1.9669663  450.73244  48243.326
          0  1.9685780  454.52192  49020.007
          0  1.9701804  458.31761  49803.860
          0  1.9717734  462.11948  50594.901
          0  1.9733573  465.92749  51393.148
          0  1.9749320  469.74163  52198.619
          0  1.9764978  473.56185  53011.331
          0  1.9780547  477.38813  53831.301
          0  1.9796029  481.22044  54658.547
          0  1.9811424  485.05875  55493.086
          0  1.9826733  488.90302  56334.934
...

W2^-1:
  1.0000000              .              .              .
          0      1.0000000              .              .
          0     -1.1294457      1.0000000              .
          0      1.3527103     -2.2588914      1.0000000
          0     -1.6826504      3.9810682     -3.3883370
          0      2.1512781     -6.4209265      7.8850737
          0     -2.8091004      9.9333059     -15.655603
          0      3.7304380     -15.029982      28.522828
          0     -5.0228111      22.457753     -49.302115
          0      6.8411612     -33.311773      82.314637
          0     -9.4087785      49.202176     -134.16796
...
          0   1.2260250E25  -1.6721839E26   1.5866458E27
          0  -1.8513579E25   2.5287005E26  -2.4033047E27
          0   2.7957655E25  -3.8240626E26   3.6403727E27
          0  -4.2221159E25   5.7831789E26  -5.5143067E27
          0   6.3764412E25  -8.7462552E26   8.3530285E27
          0  -9.6304175E25   1.3227914E27  -1.2653332E28
          0   1.4545551E26  -2.0006638E27   1.9167862E28
          0  -2.1970166E26   3.0260093E27  -2.9036895E28
          0   3.3185955E26  -4.5769860E27   4.3988012E28
          0  -5.0129450E26   6.9231218E27  -6.6638632E28
          0   7.5726685E26  -1.0472183E28   1.0095442E29
...

U2^0.5:
  1.0000000                  .                 .                 .
          0         0.83255461                 .                 .
          0         0.15745312        0.69314718                 .
          0        0.010090238        0.26217664        0.57708288
          0     -0.00017858491       0.041592834        0.32741456
          0     0.000087842056      0.0028801157       0.082902856
          0   -0.0000021818250     0.00019184203       0.011468414
          0   -0.0000070205122    0.000020425106      0.0010469505
          0    0.0000016647900   -0.000010572403    0.000090362152
          0   0.00000060587940  0.00000048584930  -0.0000059493912
          0  -0.00000023525463   0.0000013999264  -0.0000016127504
...
          0      2.3828881E-14     5.4299315E-14     8.9934143E-14
          0      2.0689227E-14     4.8670874E-14     8.2750369E-14
          0      1.7351973E-14     4.2487147E-14     7.4493739E-14
          0      1.3883561E-14     3.5884110E-14     6.5367962E-14
          0      1.0347935E-14     2.8995152E-14     5.5577029E-14
          0      6.8056776E-15     2.1949028E-14     4.5321681E-14
          0      3.3132833E-15     1.4868077E-14     3.4796250E-14
          0     -7.7428773E-17     7.8667258E-15     2.4185926E-14
          0     -3.3197744E-15     1.0502594E-15     1.3664433E-14
          0     -6.3725353E-15    -5.4861413E-15     3.3921124E-15
          0     -9.2001722E-15    -1.1658107E-14    -6.4855900E-15
Comment: the order of 1E-14 is already reached around the ~30th row and seems to decrease only extremely slowly from there on.


------------------------------------------------------------
W4:
  1.0000000              .              .              .
          0      1.0000000              .              .
          0     -1.7943497      1.0000000              .
          0      3.3934259     -3.5886994      1.0000000
          0     -6.5397995      10.006543     -5.3830492
          0      12.722863     -25.257585      19.839351
          0     -24.890972      60.430440     -61.930607
          0      48.877930     -139.82513      175.90008
          0     -96.234662      316.19924     -469.95850
          0      189.84909     -703.21881      1202.8562
          0     -375.10397      1544.2179     -2982.3442
...
          0   1.9370531E44  -9.7178394E45   2.5558655E47
          0  -3.8712771E44   1.9538433E46  -5.1699354E47
          0   7.7369377E44  -3.9282074E46   1.0456809E48
          0  -1.5462725E45   7.8973968E46  -2.1148554E48
          0   3.0903312E45  -1.5876640E47   4.2769087E48
          0  -6.1762668E45   3.1916737E47  -8.6486345E48
          0   1.2343806E46  -6.4159924E47   1.7487744E49
          0  -2.4670281E46   1.2897183E48  -3.5358114E49
          0   4.9306147E46  -2.5924578E48   7.1484829E49
          0  -9.8543949E46   5.2109217E48  -1.4451354E50
          0   1.9695217E47  -1.0473784E49   2.9212818E50
...

W4^-1:
  1.0000000             .             .             .
          0     1.0000000             .             .
          0     1.7943497     1.0000000             .
          0     3.0459559     3.5886994     1.0000000
          0     4.9810929     9.3116028     5.3830492
          0     7.9195802     20.893206     18.796941
          0     12.310657     42.992653     53.513592
          0     18.780115     83.386686     134.64033
          0     28.192642     154.79615     311.28394
          0     41.734039     277.67324     676.14910
          0     61.019154     484.41060     1399.2604
...
          0  3.2233680E13  7.0896715E18  4.1258282E22
          0  3.6756663E13  8.5329610E18  5.1552857E22
          0  4.1895929E13  1.0264451E19  6.4375659E22
          0  4.7732997E13  1.2340592E19  8.0337875E22
          0  5.4359849E13  1.4828683E19  1.0019614E23
          0  6.1880239E13  1.7808913E19  1.2488663E23
          0  7.0411146E13  2.1376785E19  1.5556698E23
          0  8.0084418E13  2.5645989E19  1.9366789E23
          0  9.1048598E13  3.0751791E19  2.4095655E23
          0  1.0347098E14  3.6855062E19  2.9961454E23
          0  1.1753991E14  4.4147026E19  3.7233358E23
...

U4^0.5
  1.0000000                .                 .               .
          0        1.1774100                 .               .
          0       0.37481156         1.3862944               .
          0      0.040296534        0.88261376       1.6322369
          0    0.00092111549        0.23537479       1.5587974
          0    0.00044447921       0.032376274      0.66380933
          0   -0.00023342988      0.0033609687      0.16318455
          0    0.00010396861    -0.00014225795     0.027006196
          0  -0.000025291882     0.00010651332    0.0026823939
          0  -0.000014529123  0.00000038544170   0.00028006948
          0   0.000027464012   -0.000045026171  0.000053888787
...
          0     1.1073259E11     -1.3152838E11    1.1289422E11
          0    -1.6910057E11      2.1323008E11   -1.9169761E11
          0     2.3128241E11     -3.2059300E11    3.0724298E11
          0    -2.5297584E11      4.2671391E11   -4.5420508E11
          0     1.0978170E11     -4.3584528E11    5.8615974E11
          0     5.0346351E11      8.7346252E10   -5.4905202E11
          0    -2.2791585E12      1.2476824E12   -6.5225955E10
          0     6.7065899E12     -4.9809105E12    2.2265937E12
          0    -1.6860759E13      1.4124712E13   -8.0902843E12
          0     3.8878344E13     -3.4859132E13    2.2220558E13
          0    -8.4639528E13      7.9448940E13   -5.3923149E13
...
------------------------------------------------------------------------

dV(1/2)*U4^0.5*dV(2)
  1.0000000                   .                   .                 .
          0           1.1774100                   .                 .
          0          0.18740578           1.3862944                 .
          0         0.010074133          0.44130688         1.6322369
          0       0.00011513944         0.058843697        0.77939872
          0      0.000027779951        0.0040470343        0.16595233
          0    -0.0000072946837       0.00021006054       0.020398069
          0     0.0000016245095    -0.0000044455608      0.0016878873
          0   -0.00000019759283     0.0000016642707    0.000083824810
          0  -0.000000056754387  0.0000000030112633   0.0000043760856
          0   0.000000053640648   -0.00000017588348  0.00000042100615
...
          0      -6.6807008E-16       1.4964761E-15    -2.4746068E-15
          0       5.6873578E-16      -1.3041345E-15     2.1989300E-15
          0      -4.6685677E-16       1.1041230E-15    -1.9077276E-15
          0       3.6391421E-16      -8.9954404E-16     1.6058326E-15
          0      -2.6129500E-16       6.9333235E-16    -1.2978634E-15
          0       1.6028132E-16      -4.8822996E-16     9.8817980E-16
          0      -6.2042494E-17       2.8676590E-16    -6.8084596E-16
          0      -3.2370731E-17      -9.1240918E-17     3.7960165E-16
          0       1.2202987E-16      -9.6283351E-17    -8.7840394E-17
          0      -2.0613040E-16       2.7399096E-16    -1.9140555E-16
          0       2.8399169E-16      -4.4031439E-16     4.5547407E-16
...
------------------------------------------------------------------------

Comparison of the first 8 terms produced by the two different fixpoint matrices.

With Euler summation of different orders for the non-converging vector products in P~ * (U2^0.5*P^-1 ~), I get for the first eight terms:

Code:
´  
  Euler-sum            |       Compare
P~ * U2^0.5 * P^-1 ~   |   dV(1/2)*U4^0.5 *dV(2)
-----------------------+--------------------------
      3.1082947E-14    |      .
      1.1774100        |     1.1774100
      0.18740578       |     0.18740578
      0.010074195      |     0.010074133
      0.00011681686    |     0.00011513944
      0.000027854237   |     0.000027779951
     -0.0000073207548  |    -0.0000072946837
      0.0000016098065  |     0.0000016245095
     -0.00000016722781 |    -0.00000019759283

Here are the partial sums when the matrices are used as coefficients of a power series in x, here with x=-1/2, for the two versions from U2 and U4. The approximations are very good and both results seem to be equal
(only the 2nd columns are relevant, but the other columns also give equal results).
The partial sums are listed row-wise, according to the increasing number of terms involved.
Code:
´
partial sums of
   dV(-1/2) * (P~ * U2^0.5*PInv~ )                 | dV(-1/2) *  (dV(1/2)*U4^0.5*dV(2))
 = dV(-1/2+1)     * U2^0.5*PInv~                   |

  1.0000000   -1.0000000   1.0000000   -1.0000000  | 1.0000000            .           .            .
  1.0000000  -0.58372269  0.16744539   0.24883192  | 1.0000000  -0.58870501           .            .
  1.0000000  -0.54435941  0.26200562  -0.15293863  | 1.0000000  -0.54185357  0.34657359            .
  1.0000000  -0.54309813  0.29225514  -0.17533567  | 1.0000000  -0.54311283  0.29141023  -0.20402961
  1.0000000  -0.54310930  0.29487702  -0.16270440  | 1.0000000  -0.54310564  0.29508796  -0.15531719
  1.0000000  -0.54310655  0.29496153  -0.16037546  | 1.0000000  -0.54310651  0.29496149  -0.16050320
  1.0000000  -0.54310659  0.29496460  -0.16020536  | 1.0000000  -0.54310662  0.29496477  -0.16018448
  1.0000000  -0.54310664  0.29496487  -0.16019783  | 1.0000000  -0.54310663  0.29496481  -0.16019767
  1.0000000  -0.54310663  0.29496481  -0.16019733  | 1.0000000  -0.54310663  0.29496481  -0.16019734
  1.0000000  -0.54310663  0.29496481  -0.16019734  | 1.0000000  -0.54310663  0.29496481  -0.16019735
  1.0000000  -0.54310663  0.29496481  -0.16019735  | 1.0000000  -0.54310663  0.29496481  -0.16019735
  1.0000000  -0.54310663  0.29496481  -0.16019735  | 1.0000000  -0.54310663  0.29496481  -0.16019735
  1.0000000  -0.54310663  0.29496481  -0.16019735  | 1.0000000  -0.54310663  0.29496481  -0.16019735
...
=======================================================================

Conclusion:

Although the essential approximations are poor, I am now much more confident that, either with a better tool for convergence acceleration or with an analytical approach based on the formal description of terms using my solution for the eigensystem, a chance of getting a result is realistic and should now be worth a more serious effort.

Gottfried
Gottfried Helms, Kassel
#8
Gottfried Wrote:The speculation behind this is to extract a functional relation between two u, say u and v, or more precisely u_k and v_j, having the same NET b,

As far as I know there is no closed formula for, say, the upper fixed point as a function of the lower fixed point. However we can draw a graph, \( f : (1,e)\to(e,\infty) \), though this can be extended to \( f : (1,\infty)\to(1,\infty) \) if we always choose the opposite fixed point, i.e. for a fixed point greater than \( e \) we compute the other fixed point that lies below \( e \). The graph looks like:
[attached image: graph of the map f from one fixed point to the other]
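A numeric way to tabulate this map (a sketch of my own, not from the post): for 1 < b < e^(1/e) the two real fixpoints of x -> b^x are -W_0(-log b)/log b (lower) and -W_(-1)(-log b)/log b (upper), using the two real branches of the Lambert W function, so f can be evaluated pointwise.

Code:
from mpmath import mp, mpf, log, lambertw

mp.dps = 25

def other_fixpoint(t_lower):
    b = t_lower ** (1 / t_lower)              # the base whose lower fixpoint is t_lower
    return -lambertw(-log(b), -1) / log(b)    # branch k=-1 gives the upper fixpoint

for t in (mpf('1.5'), mpf(2), mpf('2.5')):
    print(t, other_fixpoint(t))               # e.g. t = 2 maps to 4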

Quote:Is it, under this functional relation, *necessary* that the power series constructed from u_k and v_j give different results? Or can it be shown that they must give the same result?

Quote:With some more analytical matrix operations and numerical checks I now tend toward the conclusion that Asimov's proposal may be proven false.

Hey Gottfried, it is a quite well-known result that in most cases the regular iteration of an analytic function at different fixed points gives different (usually only slightly differing) functions. Ecalle even referred me to an article about this phenomenon:

"Etude theorique et numerique de la fonction de Karlin-McGregor",
Serge Dubuc, Journal d'Analyse Math.. Vol. 42 (1982 / 83)

and I checked it numerically in the bummer thread.
#9
As I just read in Knoebel's "Exponentials Reiterated", there is even a parametrization of the curve \( f \), already given by Goldbach:
Knoebel considers the equation \( x^y=y^x \) which is equivalent to \( x^{1/x}=y^{1/y} \) which means that the fixed points \( x \) and \( y \) have the same base. So \( y=f(x) \). The parametrization is:
\( x=s^{1/(s-1)} \) and \( y=s^{s/(s-1)} \).

We can easily verify that this indeed satisfies \( x^y=y^x \):
\( x^y=s^{s^{s/(s-1)}/(s-1)}=s^{s^{1+\frac{1}{s-1}}/(s-1)}=s^{s x/(s-1)}=y^x \)

For example for \( s=2 \) we get our famous fixed points \( x=2^{1/\left(2-1\right)}=2 \) and \( y=2^{2/\left(2-1\right)}=2^{2}=4 \)
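A quick numeric confirmation of the parametrization (a sketch; the sample values of s are arbitrary):

Code:
from mpmath import mp, mpf

mp.dps = 30
for s in (mpf(2), mpf(3), mpf('1.5')):
    x = s ** (1 / (s - 1))
    y = s ** (s / (s - 1))
    print(s, x, y, x**y - y**x)   # the last entry is ~0; s=2 gives the fixpoints (2, 4)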

There is lots of other interesting stuff in Knoebel's article, but read it yourself :)
For our consideration here let:
\( f_1(s)=s^{1/(s-1)} \) and \( f_2(s)=s^{s/(s-1)} \) so that \( f=f_2\circ f_1^{-1} \). So I wonder whether we can express \( f_1^{-1} \) (\( f_1 : (0,\infty)\to(1,\infty) \) and \( f_2 : (0,\infty)\to(1,\infty) \) are indeed bijective) with the Lambert W function. Any ideas?
#10
bo198214 Wrote:There is lots of other interesting stuff in Knoebel's article, but read it yourself :)
Yes, I've read it several times, and with the growth of my knowledge I understand increasingly more. There may still be something whose relevance I haven't become aware of yet...
Quote:For our consideration here let:
\( f_1(s)=s^{1/(s-1)} \) and \( f_2(s)=s^{s/(s-1)} \) so that \( f=f_2\circ f_1^{-1} \). So I wonder whether we can express \( f_1^{-1} \) (\( f_1 : (0,\infty)\to(1,\infty) \) and \( f_2 : (0,\infty)\to(1,\infty) \) are indeed bijective) with the Lambert W function. Any ideas?
Hmm, I'll have to study this first a bit more.
Gottfried Helms, Kassel

