the inconsistency depending on fixpoint-selection - Printable Version
+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: the inconsistency depending on fixpoint-selection (/showthread.php?tid=119)

the inconsistency depending on fixpoint-selection - Gottfried - 02/07/2008
Hi - this concerns Henryk's remark in the "Bummer" thread. A short review, paraphrasing Henryk: according to D. Asimov in the '90s, the results of tetration, when expressed as a powerseries function, depend on the fixpoint used; for instance for base b = sqrt(2) the results of fractional iteration using the fixpoints t_1 = 2 and t_2 = 4, where b = t^(1/t), are different. This difference may have been overlooked because it is small, of the order of about 1e-24.

I assume this is a very important problem, and it seems difficult to show by computation (at least using truncated powerseries of a few dozen terms). Anyway, in the last days I tried to get a better grip on the problem - please share your thoughts on whether this may be a fruitful approach or whether it will likely be useless and lead nowhere...

-----------------------------------------------------------

I turn the view around: instead of determining the fixpoint t from the base b, I determine the base b from a selected fixpoint t - more precisely, from a uniquely selected complex parameter u, then its exponential t, and then the base b. This means:

* select a complex u
* define t = exp(u)
* define b = exp(u/t) = exp(u/exp(u))

with the required consequence that

...^b^b^b^t = b^b^b^t = b^b^t = b^t = t

Now we may also change our wording: instead of saying "t is a fixpoint of b", we say "b is a N_eutral E_lement for t (= exp(u)) under the operation of T_etration" ("NET" for short). In multiplication, for each parameter t there is one neutral element, b = 1, which is the same for all t:

...*b*b*t = b*b*t = b*t = t

(and there b can be seen as b = t*(1/t), where in tetration we have b = t^(1/t) - but, well, that's numerology so far, without obvious further use). Also we know that, 1) different from the operation of multiplication, each t has an individual NET b, and 2) different t may have the same b. This requires more notation.
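The three construction steps above are easy to check numerically. A minimal sketch (my own illustration, not from the post), confirming that b = exp(u/exp(u)) really leaves t = exp(u) fixed under x -> b^x:

```python
import cmath

# pick a complex u, derive t = exp(u) and the NET b = exp(u/t)
u = cmath.log(2) + 0.3j          # arbitrary sample value
t = cmath.exp(u)
b = cmath.exp(u / t)

# t must be a fixpoint of x -> b^x, i.e. b^t = t
assert abs(b**t - t) < 1e-12

# the real example from the thread: u = log(2) gives t = 2, b = sqrt(2)
u0 = cmath.log(2)
t0 = cmath.exp(u0)               # 2
b0 = cmath.exp(u0 / t0)          # sqrt(2)
assert abs(b0 - 2**0.5) < 1e-12 and abs(b0**t0 - t0) < 1e-12
```

The choice u = log(2) + 0.3j is arbitrary; any complex u works, since the construction never divides by zero.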
Since we should use one index u, we write b_u; and since moreover the exp(x)-function assumes the same value for the different x_k = x + k*2 Pi i, we should fully index the NETs as b_(u_k), or, since all t_k = exp(u_0 + k*2 Pi i) are equal, we may index by u_0 and k. One property is immediately obvious: if t is rational (or u_0 is the principal log of a rational), then, considering different k_j = {k_1, k_2, k_3, ...}:

* if k_j/t is an integer, we have periodicity over the different k_1, k_2, ..., resulting in the same NET for all u_(k_j).

Thus: while the map u -> exp(u) gives a graph which is periodic in u_0 + k*2 Pi i, the map u -> b = exp(u*exp(-u)) is not periodic, except if u is the log of a rational number t. I've drawn such graphs for these mappings, and the latter (u -> b) gives a smoother image than that of t -> b, which we see more often in tetration discussions (however, my graphing capabilities are poor due to the lack of a powerful commercial CAS).

Also, we are in the position that for the tetration function T_b°h(x), rewritten as T_u_k°h(x) = exp(u_k/t * x) (for h=1), we have the whole complex plane as domain for u_k (since the denominator in exp(u_k/exp(u_k)) cannot be zero). Then we have as the domain for b the whole complex plane, punctured at the origin by the very initial functional definition (and we don't need to consider b=0 other than as a limit, for instance). I don't know whether this change of focus helps in any way to proceed deeper into the inconsistency problem; I see no convincing benefit of it so far.

---------------------------------------------------------------

Another current approach of mine is to (re-)consider the powerseries of f_0(u) = b = exp(u*exp(-u)) first. This is relatively simple; the first terms were given here as a formula image, which was lost in this export. The associated matrix-operator is an infinite square array; its top left was also shown as an image (lost in this export). The entries in the 2'nd column are the coefficients of the powerseries for f_0(u).
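The periodicity claim can be spot-checked numerically (my own sketch): with u_k = u_0 + k*2 Pi i all t_k coincide, while b_k = exp(u_k/t) repeats exactly when k/t is an integer - here t = 2, so b repeats for even k:

```python
import cmath

u0 = cmath.log(2)                       # t = 2, b = sqrt(2)
t = cmath.exp(u0)

def b_of(u):
    # the NET as a function of u: b = exp(u * exp(-u))
    return cmath.exp(u * cmath.exp(-u))

bs = [b_of(u0 + 2j*cmath.pi*k) for k in range(5)]

# t is the same for every branch u_k ...
assert all(abs(cmath.exp(u0 + 2j*cmath.pi*k) - t) < 1e-12 for k in range(5))
# ... but b repeats only with period t = 2: b_0 = b_2 = b_4 and b_1 = b_3
assert abs(bs[0] - bs[2]) < 1e-12 and abs(bs[1] - bs[3]) < 1e-12
# odd and even k give genuinely different NETs (here sqrt(2) vs -sqrt(2))
assert abs(bs[0] - bs[1]) > 0.1
```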
Since f_0(0) =/= 0, we cannot easily iterate that operator, or invert it to get a powerseries for the inverse function. But M0 is decomposable into two triangular matrices. Let the above operator be called M0; from the construction of M0 the decomposition is, in matrix notation, formal (the matrix expression was lost in this export). But since P~ performs just a shift by one unit for the involved powerseries, we can rewrite it accordingly (expression lost in this export). M begins with a matrix whose image was lost here, and the second column of M now provides the coefficients for the function f(u) = f_0(u) - 1 = b - 1.

Also, M is triangular and thus invertible, and its reciprocal provides the inverse operation for the powerseries. The reciprocal's top left edge (image lost in this export) gives the coefficients for the inverse function of f(u) = b - 1, resulting in a powerseries g (first terms lost in this export). This powerseries in principle allows one to compute u_0 from a given b by evaluating g(b-1). Since this is essentially a variant of the h()-function, I tried to relate its coefficients to those of h(), but did not succeed yet. Also it may be of little use, since its radius of convergence allows only the usual small range 1/e^e < b < e^(1/e), and it converges slowly if b deviates from 1. (On the other hand, it contains my birthday at its 4'th term, so it's at least a function with a personal flair...)

---------------------

So, I've nothing breathtakingly new here; it's just an attempt to find a useful path towards circumscribing and possibly resolving the "bummer" problem. Gottfried

RE: the inconsistency depending on fixpoint-selection - Gottfried - 02/07/2008
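Since the matrices themselves did not survive the export, here is a sketch of the same computation in my own words (using sympy series reversion instead of the matrix operators): expand f(u) = exp(u*exp(-u)) - 1, revert the series, and check that g(b-1) recovers u_0. The tolerance and term count are my own choices.

```python
import sympy as sp

u, x = sp.symbols('u x')
N = 8

# f(u) = b - 1 = exp(u*exp(-u)) - 1 ; f(0) = 0, f'(0) = 1, so it is revertible
f = sp.exp(u*sp.exp(-u)) - 1
fser = sp.series(f, u, 0, N).removeO()

# solve g(f(u)) = u coefficient by coefficient (series reversion)
cs = [sp.Symbol('c%d' % k) for k in range(1, N)]
g = sum(c*x**k for k, c in enumerate(cs, start=1))
comp = sp.expand(sp.series(g.subs(x, fser), u, 0, N).removeO())
sol = sp.solve([sp.Eq(comp.coeff(u, k), 1 if k == 1 else 0)
                for k in range(1, N)], cs)
gser = g.subs(sol)

# check: for b near 1, u0 = g(b - 1) satisfies exp(u0*exp(-u0)) = b
u_true = 0.1
bm1 = float(fser.subs(u, u_true))      # b - 1 for u = 0.1
u_rec = float(gser.subs(x, bm1))
assert abs(u_rec - u_true) < 1e-6
```

As the post says, this only works well for b near 1; farther out the truncated series converges slowly.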
Well, rethinking a bit (and involving a bit of latte macchiato) I feel I should add one thought: why all this... The speculation behind it is to extract a functional relation between two u, say u and v - or, more precisely, u_k and v_j - having the same NET b, and to consider the eigensystem-based formula for the powerseries expansion of tetration. Is it, under this functional relation, *necessary* that the powerseries constructed from u_k and v_j give different results? Or can it be shown that they must give the same result? Maybe, maybe it is even likely, that this is of no help. I think a very basic reconsideration of the focused problem is needed. Gottfried

RE: the inconsistency depending on fixpoint-selection - GFR - 02/07/2008
Well ... latte macchiato is always good! I think that the two real h's (of the t's) corresponding to the same b = sqrt(2) - this time only with the + sign (sorry, Bo, ... it's the age) - i.e. h = 2 and h = 4, must indeed be different. Any serious serial development should indeed show this situation. We cannot think that we may start from b = sqrt(2) and then, ... bingo! ... we suddenly have two different values. The "strange object" that we may call h or t is the result of applying a "two-valued function". But two-valued "functions" are not politically correct animals. But perhaps I don't understand what you precisely said. I didn't sleep well last night. I shall improve! Tomorrow is another day. GFR

RE: the inconsistency depending on fixpoint-selection - Gottfried - 02/07/2008
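The two branches of the "two-valued" h can indeed be computed side by side. A quick sketch (mine, using simple fixpoint iterations) that b = sqrt(2) really carries both t = 2 and t = 4:

```python
import math

b = math.sqrt(2.0)

# lower branch h = 2: x -> b**x is attracting there (|2*ln b| = ln 2 < 1)
x = 1.0
for _ in range(200):
    x = b**x
assert abs(x - 2.0) < 1e-12

# upper branch h = 4: x -> log(x)/log(b) is attracting there (|1/(4*ln b)| < 1)
y = 5.0
for _ in range(200):
    y = math.log(y) / math.log(b)
assert abs(y - 4.0) < 1e-12

# both are fixpoints of the same base: b**2 == 2 and b**4 == 4
assert abs(b**2 - 2.0) < 1e-12 and abs(b**4 - 4.0) < 1e-12
```

The two iterations are needed because each fixpoint is attracting for one map and repelling for the other; that is exactly the two-valuedness GFR describes.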
GFR Wrote: I think that the two real h's (of the t's) corresponding to the same b = sqrt(2) - this time only with the + sign (sorry, Bo, ... it's the age) - i.e. h = 2 and h = 4, must indeed be different. Any serious serial development should indeed show this situation. We cannot think that we may start from b = sqrt(2) and then, ... bingo! ... we suddenly have two different values. The "strange object" that we may call h or t is the result of applying a "two-valued function". But two-valued "functions" are not politically correct animals.

Hi Gianfranco - I've begun a short consideration of the multivalued log(1+x)-function in the context of matrix operations (an extended version of ContinuousIteration), which may be of interest. It seems interesting, but it shows that we also need a more general notion of divergent summation, especially for the complex case. Since Euler summation (although able in principle) is not well suited (and apparently not much studied) for the complex case, I'm always at the edge of my possibilities. At least I cannot proceed much further without finding a reliable basis for such summation concepts.

It seems that all (or nearly all) fractional iterations of tetration, if based on powerseries, produce hypergeometric divergent powerseries with convergence radius zero - not only the x -> exp(x)-1 version. These series cannot be Euler-summed (in principle), and thus we need a concept of assigning values to such divergent series. The powerseries for

log(1+x) = x - x^2/2 + x^3/3 - + ...

may be configured for multivaluedness by

log_k(1+x) = k*2 Pi i + x - x^2/2 + x^3/3 - + ...

and the matrix-operator is then square and has all the nasty properties of divergent series. Disclaimer: all this is more or less speculation, and only motivated by the search for any usable entry point to the problem on which I focus in this thread. Well - have a good night, I'll stop soon too; I have to prepare for an exam of my statistics class tomorrow.
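The multivalued configuration log_k(1+x) = k*2 Pi i + x - x^2/2 + ... can at least be sanity-checked for convergent arguments (my own snippet, term count chosen for illustration); every branch exponentiates back to 1+x:

```python
import cmath

def log_branch(x, k, nterms=200):
    # k-th branch: k*2*pi*i plus the ordinary Mercator series for log(1+x)
    s = sum((-1)**(n+1) * x**n / n for n in range(1, nterms + 1))
    return 2j*cmath.pi*k + s

x = 0.5
for k in (-2, -1, 0, 1, 2):
    lk = log_branch(x, k)
    # all branches are logarithms of the same value: exp(log_k(1+x)) = 1+x
    assert abs(cmath.exp(lk) - (1 + x)) < 1e-9
```

This of course only works inside the radius of convergence |x| < 1; the divergent-summation problem Gottfried raises begins exactly where this sketch stops.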
I'll need hawk's eyes... Kind regards - Gottfried

RE: the inconsistency depending on fixpoint-selection - GFR - 02/09/2008
Dear Gottfried, concerning: Gottfried Wrote: ContinuousIteration, ... may be of interest.

I read your interesting report and I have to confess that I didn't assimilate all its contents. In doing so, I measured the achievement of my ... perfect incompetence level, according to the Peter Principle. I shall read it again, calmly! In particular, I read with a lot of interest the list of important questions that still need to be answered, among which: "Why does tetration oscillate for b < e^(-e)?". As a matter of fact, I think that it also oscillates for e^(-e) < b < 1, before reaching a constant real asymptotic value, depending on b, as x -> oo. Why is that? Perhaps because y = b[4]x is complex for b < 1? But we need a demonstration to say that. Maybe your matrix approach will help. Please find attached some considerations on serial developments, based on information found on the Web. It may be useful.

But, if y = b[4]x oscillates for b < 1, what will be the meaning of the superlog in that domain? Think of a simple "wild" graphical inversion, or of a more mathematically correct non-invertibility of it. Do we have to consider that, perhaps, the slog "cannot be defined" for b < 1? This would have immediate consequences for the definition of "pentation" in that base area. GFR

NB: This is my second posting, slightly modified. The last one was lost. An error on my side? Obscured, because badly formulated? By the way, "latte*makkiato" is, in general, not equal to "kaffé*makkiato", but there is a fixpoint. Something like "kapputschino". Amazing!

RE: the inconsistency depending on fixpoint-selection - Gottfried - 02/09/2008
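GFR's observation that b[4]x oscillates for e^(-e) < b < 1 before settling is easy to see numerically. A small sketch (mine, with b = 0.5 as a sample base) shows the deviations from the limit alternating in sign:

```python
import math

b = 0.5                      # e^(-e) ~ 0.0660 < b < 1
x, seq = 1.0, []
for _ in range(60):
    x = b**x                 # the tower b^b^...^b of growing height
    seq.append(x)

limit = seq[-1]              # the real fixpoint of x -> b**x (about 0.6412)
devs = [s - limit for s in seq[:12]]

# consecutive deviations alternate in sign: a damped oscillation, not
# monotone convergence (the multiplier limit*ln(b) is negative)
assert all(devs[i] * devs[i+1] < 0 for i in range(len(devs) - 1))
assert abs(b**limit - limit) < 1e-12
```

For b below e^(-e) the same iteration no longer converges at all but falls into a 2-cycle, which matches the other question on the list.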
Just a short note - the EIE-considerations on page 19 are meaningless in this context. I was just editing and forgot to delete that part when I uploaded the pdf-file. (It was part of the NET-thoughts of my introductory post, in an early state.) Please reload the corrected version. Gottfried

RE: the inconsistency depending on fixpoint-selection - Gottfried - 03/03/2008
With some more analytical matrix operations and numerical checks I now tend to the conclusion that Asimov's proposal may be proven to be false. I checked the occurring matrices in more depth and found very convincing numerical evidence that - at least for the case b = sqrt(2), t0 = 2 and t1 = 4 - the results are equal. For integer heights this is obvious from the scalar expression alone (sqrt(2)^2 = 2, sqrt(2)^4 = 4 and so on); the problem occurs with fractional heights. I checked this now for h = 1/2, using the powerseries which I get by eigensystem-decomposition/diagonalization. I used a better routine to compute the eigen-matrices up to dimension 160x160, and some transformations make those different eigenmatrices summable to the same values. Although this holds only within a certain numerical approximation, the involved transformations are simple binomial transformations, which may then be derived analytically as well. So my challenge is now to put some effort into performing those analytical derivations, since the possibility of success is now better backed.

Let b = sqrt(2), t0 = 2, t1 = 4, h the height, using h = 1/2. Then, in my usual notation, we should get

V(x)~ * Bb^0.5 = V(y)~ where y = T_b°0.5(x)

With the fixpoints, using the h()-function with indexes

t0 = h_0(b) = 2
t1 = h_1(b) = 4

this is equivalent to two different fixpoint-based matrix expressions (using PInv for P^-1); unfortunately, the matrix expressions themselves were lost in this export. Rearranging dV(1/t0) and setting t1/t0 = a, the part dV(1/a)*PInv~ in the rhs can be expanded accordingly, and we get, with a = 2 (also writing U2 for U_t0 and U4 for U_t1), an identity which, after arranging P^-2 and P^2 to the left, unfortunately still contains infinite sums in the lhs: all rows of P~ are infinite, as are the columns of U2^0.5. So we have either to determine these sums analytically (for which I have no solution currently) or to employ accelerating methods, like the Euler transform/summation.
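As a down-to-earth companion to the matrix computation above (whose expressions were lost from the export), here is a sketch of the same idea for the fixpoint t0 = 2 only: conjugate x -> b^x so that the fixpoint sits at 0, solve h(h(x)) = f(x) order by order for the half-iterate, and verify the composition. This is plain regular iteration in my own notation, not the 160x160 eigen-decomposition itself.

```python
import math

b = math.sqrt(2.0)
N = 12

# conjugate the fixpoint t0 = 2 to the origin: f(x) = b^(x+2) - 2 = 2*b^x - 2
# Taylor coefficients: f_k = 2*(ln b)^k / k!, with f_0 = 0 and f_1 = ln 2 < 1
f = [0.0] + [2.0 * math.log(b)**k / math.factorial(k) for k in range(1, N)]

def compose(a, c):
    """Coefficients of a(c(x)) up to order N-1 (a[0] = c[0] = 0 assumed)."""
    out = [0.0] * N
    power = [1.0] + [0.0] * (N - 1)      # c(x)^0
    for i in range(1, N):
        # power := power * c, truncated at order N-1
        power = [sum(power[m] * c[j - m] for m in range(j + 1)) for j in range(N)]
        for j in range(N):
            out[j] += a[i] * power[j]
    return out

# half-iterate h with h(h(x)) = f(x): h_1 = sqrt(f_1), then each h_k follows
# linearly from the lower-order coefficients
h = [0.0, math.sqrt(f[1])] + [0.0] * (N - 2)
for k in range(2, N):
    rest = compose(h, h)[k]              # contribution of h_1 .. h_{k-1}
    h[k] = (f[k] - rest) / (h[1] + h[1]**k)

# coefficient check: h(h(x)) reproduces f(x) through order N-1
hh = compose(h, h)
assert max(abs(hh[k] - f[k]) for k in range(N)) < 1e-9

# numerical check near the fixpoint
ev = lambda cs, x: sum(ck * x**k for k, ck in enumerate(cs))
x0 = 0.1
assert abs(ev(h, ev(h, x0)) - (2 * b**x0 - 2)) < 1e-6
```

The interesting question of the thread is precisely whether the analogous series built at t1 = 4 agrees with this one; the sketch only covers one fixpoint.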
It turns out that these row/column products are not well Euler-accelerable; the Euler sum does not converge well. The second multiplication, U2^0.5 * PInv~, however, provides a simple result: since only the second column of the result is interesting, only the top-left 2x2 segment of PInv~ is relevant, and this simply subtracts the first [1,0,0,0,...] column of U2^0.5 from its second column - which is happily trivial. Applying Euler summation anyway, we get identity of the first few items of the lhs and rhs within a certain range of accuracy; see the end of this msg.

Some more tests gave even better accuracy with an additional powerseries in x, which makes - for abs(x) < 1 - the resulting powerseries convergent using the first 160 terms (the expression was lost in this export). The lhs can be rewritten using the binomial theorem (expression lost in this export), whereby the implicit infinite series in the matrix product P~ * U2^0.5 are removed. Tests with various x which make the matrix multiplication convergent should then give the same results for the lhs and rhs. However, using various different x does not prove the identity; it only makes it more likely. (See the last example, where x was set to x = -1/2.)

The matrix U2^0.5 was constructed from the analytic eigen-decomposition (160x160): let u0 = log(t0) = log(2) and u1 = log(t1) = log(4); the decomposition formula was lost in this export, but W2 denotes the matrix of eigenvectors and D2 the matrix of eigenvalues of the matrix U2 = dV(u0)*S2. Since the matrix U2 is triangular, its eigenvalues can be taken from the diagonal (and are thus identical to the entries in dV(u0)), and its eigenvector matrices are assumed to be triangular, too. Using my analytical description for the eigenmatrices we get exact terms for any dimension.
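For readers who want to experiment with the acceleration step, here is a generic Euler-transform routine (my own sketch, not Gottfried's code) applied to a toy alternating series; the point is only how drastically the transform speeds up convergence:

```python
import math

def euler_sum(a, nterms):
    """Approximate sum_k (-1)^k * a[k] by Euler's transform:
       sum_n (Delta^n a)_0 / 2^(n+1), with (Delta a)_k = a_k - a_{k+1}."""
    total, diffs = 0.0, list(a)
    for n in range(nterms):
        total += diffs[0] / 2**(n + 1)
        diffs = [diffs[i] - diffs[i + 1] for i in range(len(diffs) - 1)]
    return total

# toy example: 1 - 1/2 + 1/3 - ... = ln 2
a = [1.0 / (k + 1) for k in range(80)]

direct = sum((-1)**k * a[k] for k in range(40))     # still off by ~0.01
accel = euler_sum(a, 40)                            # geometrically convergent

assert abs(direct - math.log(2)) > 1e-3
assert abs(accel - math.log(2)) < 1e-10
```

The post's point stands regardless: when the underlying products are not of this well-behaved alternating type, the transform helps much less.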
The same method was applied to U4 (based on the second fixpoint).

=======================================================================

Documents:

-------------------------------------------------------------

Always: rows 0..10, 149..159, columns 0..3; column 1 is of interest in U2^0.5 and U4^0.5. (The numerical tables were lost in this export.)

Comparison of the first 8 terms, produced by the different fixpoint matrices: with Euler summation of different orders for the non-converging vector products in P~ * (U2^0.5 * P^-1~) I get values for the first eight terms (table lost in this export). Here are partial sums if the matrices are used as coefficients for a powerseries in x; here x = -1/2 for the two versions from U2 and U4. The approximations are very good and both results seem to be equal (only the 2'nd columns are relevant, but the other columns also provide equal results). The partial sums are listed row-wise, according to the increasing number of involved terms (table lost in this export).

Conclusion: although the essential approximations are poor, I'm now much more confident that, either with a better tool for convergence acceleration or with an analytical approach based on the formal description of terms using my solution for the eigensystem, the chance to get a result is realistic and now worth a more serious effort. Gottfried

RE: the inconsistency depending on fixpoint-selection - bo198214 - 03/04/2008
Gottfried Wrote: The speculation behind this is, to extract a functional relation between two u, say u and v, or more precisely u_k and v_j, having the same NET b.

As far as I know there is no closed formula for, say, the function giving the upper fixed point in terms of the lower fixed point. However, we can draw a graph; it can even be extended if we always choose the opposite fixed point, i.e. for a fixed point greater than e we compute the other fixed point that lies below e. The graph looks like: [attachment=260]

Quote: Is it -under this functional relation- *necessary*, that the powerseries, constructed by u_k and v_j give different results? Or can it be shown, that they must give the same result?

Quote: With some more analytical matrix-operations and numerical checks I now tend to the conclusion, that Asimov's proposal may be proven to be false.

Hey Gottfried, it is a quite well-known result that in most cases the regular iteration of an analytic function at different fixed points gives different (usually only slightly differing) functions. Ecalle even referred me to an article about this phenomenon: "Etude theorique et numerique de la fonction de Karlin-McGregor", Serge Dubuc, Journal d'Analyse Mathematique, Vol. 42 (1982/83). And I checked it numerically in the bummer thread.

RE: the inconsistency depending on fixpoint-selection - bo198214 - 03/04/2008
As I just read in Knoebel's "Exponentials Reiterated", there is even a parametrization of the curve of such fixed-point pairs, already given by Goldbach. Knoebel considers the equation x^y = y^x, which is equivalent to x^(1/x) = y^(1/y), which means that the fixed points x and y have the same base. The parametrization is:

x = (1 + 1/s)^s and y = (1 + 1/s)^(s+1).

We can easily verify that this indeed satisfies x^y = y^x: the two exponents of (1+1/s) agree, since s*(1+1/s)^(s+1) = s*(1+1/s)*(1+1/s)^s = (s+1)*(1+1/s)^s. For example, for s = 1 we get our famous fixed points 2 and 4. There is lots of other interesting stuff in Knoebel's article, but read it yourself.

For our consideration here, this gives both fixed points as functions of the single parameter s (and these functions are indeed bijective), so I wonder whether we can express the upper fixed point as a function of the lower one with the Lambert W function. Any ideas?

RE: the inconsistency depending on fixpoint-selection - Gottfried - 03/04/2008
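Goldbach's parametrization is easy to verify numerically (a quick sketch of mine, with arbitrary sample values of s):

```python
import math

def goldbach_pair(s):
    # Goldbach/Knoebel parametrization of the curve x^y = y^x (x != y)
    x = (1 + 1/s)**s
    y = (1 + 1/s)**(s + 1)
    return x, y

# s = 1 gives the famous pair (2, 4) with common base sqrt(2)
x1, y1 = goldbach_pair(1.0)
assert abs(x1 - 2.0) < 1e-12 and abs(y1 - 4.0) < 1e-12

for s in (0.5, 1.0, 2.0, 5.0):
    x, y = goldbach_pair(s)
    # same base: x^(1/x) = y^(1/y), equivalently log(x)/x = log(y)/y
    assert abs(x**(1/x) - y**(1/y)) < 1e-12
    assert abs(math.log(x)/x - math.log(y)/y) < 1e-12
```

As s -> oo both components tend to e, the point where the two branches of fixed points meet.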
bo198214 Wrote: There is lots of other interesting stuff in Knoebel's article, but read it yourself.

Yes, I've read it several times, and with the growth of my knowledge I understand increasingly more of it. There may still be something whose relevance I haven't become aware of yet...

Quote: For our consideration here...

Hmm, I'll have to study this first a bit more.