A nice series for b^^h , base sqrt(2), by diagonalization

Gottfried, Ultimate Fellow. Posts: 880, Threads: 129, Joined: Aug 2007. 03/05/2009, 04:30 PM (This post was last modified: 03/06/2009, 10:27 PM by Gottfried.)

Hi fellows - I just posted three msgs to sci.math. With the third msg it got much more interesting, so I'll copy them to our forum, without further comment.

(msg 1) "Another tetration-series for base sqrt(2)"

I didn't see this series before, so just for the record. It seems to be the simplest form of a series for tetrates which can be extracted from the diagonalization approach. The type of series is characteristic, but base-dependent. Here for base b = sqrt(2):

Code:
b^^h = 2
     - 0.632098661051 *u_h          - 0.225634285681 *u_h^2         - 0.0854081730270*u_h^3
     - 0.0335771160755*u_h^4        - 0.0135675339902*u_h^5         - 0.00559920683946*u_h^6
     - 0.00235003288785*u_h^7       - 0.00100003647235*u_h^8        - 0.000430480708304*u_h^9
     - 0.000187116458671*u_h^10     - 0.0000820114021745*u_h^11     - 0.0000362027647360*u_h^12
     - 0.0000160807242165*u_h^13    - 0.00000718169500164*u_h^14    - 0.00000322271898338*u_h^15
     - 0.00000145228984161*u_h^16   - 0.000000656926890186*u_h^17   - 0.000000298154140684*u_h^18
     - 0.000000135730176453*u_h^19  - 0.0000000619577730720*u_h^20  - 0.0000000283522848887*u_h^21
     - 0.0000000130033888474*u_h^22 - 0.00000000597608342584*u_h^23 - 0.00000000275165501559*u_h^24
     - 0.00000000126917896344*u_h^25 - 0.000000000586333973928*u_h^26 - 0.000000000271274348008*u_h^27
     - 1.25680399977 E-10*u_h^28    - 5.83015364711 E-11*u_h^29     - 2.70775077830 E-11*u_h^30
     - 1.25898302877 E-11*u_h^31
     - O(u_h^32)

where u = log(2), such that exp(u/exp(u)) = b, and u_h must be read as u_h = u^h.

Note that if h->inf, all terms of the series except the first vanish, so that 2 remains; if h=-1 (that means u_h ~ 1.442) the series seems to converge to zero, and
for h=-2 the series seems to diverge slowly to -infty (which is obviously what we expect with tetration).

(msg 2)

Hmm, I just found an argument for the last hypothesis. That the series diverges to -infinity for h=-2 is reminiscent of the harmonic series. So let's see whether a rescaling of the coefficients by powers of u^2 leads to the harmonic series or a scalar multiple of it. To see the convergence better, the coefficients are also scaled by the reciprocals of the natural numbers. Then I get the following series, where u_h2 means u^(h+2), such that for h=-2 all powers of u become 1:

Code:
b^^h = 2
       - 1.31563054605 / 1  * u_h2
       - 1.95493914978 / 2  * u_h2^2
       - 2.31029756506 / 3  * u_h2^3
       - 2.52057540946 / 4  * u_h2^4
       - 2.64981962459 / 5  * u_h2^5
       ...
       - 2.88539007305 / 50 * u_h2^50
       - 2.88539007576 / 51 * u_h2^51
       - 2.88539007762 / 52 * u_h2^52
       - 2.88539007891 / 53 * u_h2^53
       - 2.88539007980 / 54 * u_h2^54
       - 2.88539008041 / 55 * u_h2^55
       - 2.88539008083 / 56 * u_h2^56
       - 2.88539008113 / 57 * u_h2^57
       - 2.88539008133 / 58 * u_h2^58
       - 2.88539008147 / 59 * u_h2^59
       - 2.88539008156 / 60 * u_h2^60
       - 2.88539008163 / 61 * u_h2^61
       - 2.88539008168 / 62 * u_h2^62
       - 2.88539008171 / 63 * u_h2^63

The coefficients seem to approximate 2.88539008178... = 2/u, so for h=-2 we do indeed get asymptotically the form of the harmonic series. Cute... It would be nice to have this proven. But how?
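Both observations can be checked numerically. The sketch below (Python is my choice here, not anything from the posts; the twelve coefficients are copied from the series in the first message) truncates the series after k=12, reproduces the integer tetrates b^^1 = sqrt(2) and b^^2 = sqrt(2)^sqrt(2), and forms the rescaled coefficients r_k = k*c_k/u^(2k), which should increase toward 2/u = 2/log(2) ≈ 2.88539:

```python
from math import log

u = log(2.0)                      # b = sqrt(2) = exp(u/exp(u)), fixpoint t = 2
c = [0.632098661051, 0.225634285681, 0.0854081730270, 0.0335771160755,
     0.0135675339902, 0.00559920683946, 0.00235003288785, 0.00100003647235,
     0.000430480708304, 0.000187116458671, 0.0000820114021745,
     0.0000362027647360]          # coefficients c_1 .. c_12 from msg 1

def sexp(h):
    """b^^h = 2 - sum_k c_k * (u^h)^k, truncated after 12 terms."""
    return 2.0 - sum(ck * u ** (h * k) for k, ck in enumerate(c, start=1))

print(sexp(1))                    # should be b   = sqrt(2)       ~ 1.41421
print(sexp(2))                    # should be b^b = sqrt(2)^sqrt(2) ~ 1.63253

# rescaled coefficients r_k = k*c_k/u^(2k): strictly increasing toward 2/u
r = [k * ck / u ** (2 * k) for k, ck in enumerate(c, start=1)]
print(r[0], r[-1], 2 / u)         # first ~1.31563, last still below 2.88539...
```

The first two rescaled values come out as 1.31563... and 1.95493..., matching the listing above, so the decomposition is reproduced correctly.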
:-)

(msg 3)

Using 128 terms for the series, and 30 digits shown, we get

Code:
-2.885390081777926814 49154212159
-2.885390081777926814 56186809057
-2.885390081777926814 61052960248
-2.885390081777926814 64420113651
-2.885390081777926814 66750068482
-2.885390081777926814 68362344014
-2.885390081777926814 69478019929
-2.8853900817779268147 0250066967
-2.8853900817779268147 0784331383
-2.88539008177792681471 154053478
-2.88539008177792681471 409912721
-2.88539008177792681471 586977937     at k=127
...
-----------------------|---------
-2.88539008177792681471 984936200    -2/log(2), assumed to be the lower bound

Very nice... Gottfried

Gottfried Helms, Kassel

Gottfried, Ultimate Fellow. 03/08/2009, 03:41 PM (This post was last modified: 03/08/2009, 04:11 PM by Gottfried.)

Hmm, this then also gives a strange formula for a limit. It looks as if this would give

$\lim_{x\to 0}\; b[4](-2) - \log_b(x) = b[4]\infty$

Using b as base, b = t^(1/t), u = log(t), such that log(b) = u/t and t is a fixpoint, the above series is formally

Code:
b^^h = t - sum  t/u * coeff[k]/k * (u^(h+2))^k
     = t - sum  coeff[k]/k * (u^(h+2))^k / log(b)

If h = -2 we get the zeroth powers of u (=1) at each coefficient, and

Code:
b^^(-2) = t - sum  coeff[k]/k * 1 / log(b)

and since the coefficients converge to 1, this is in principle, in the limit, a zeta(1)-series

Code:
b^^(-2) = t - zeta(1) / log(b)      // limit h->-2

or, writing the divergent part as the log of 0 to base b,

Code:
b^^(-2) = t + log_b(0)

t is the fixpoint, so t = b^^inf, and we have

Code:
b^^(-2) - log_b(0) = b^^inf   == fixpoint

or better expressed as a limit

   lim {eps->0}  b^^(-2+eps) - log_b(0+eps) = b^^inf

??? Gottfried

Gottfried Helms, Kassel

Gottfried, Ultimate Fellow. 03/08/2009, 09:59 PM (This post was last modified: 03/08/2009, 10:00 PM by Gottfried.)

I attempted a proof this way.
Don't know whether this is sufficient... Using b = t^(1/t), u = log(t), log(b) = u/t, t being the fixpoint. Read all of the following lines as limit-expressions for @->0 (to save typing):

Code:
b^^(-2+@) - log_b(@)      = b^^inf  == fixpoint
b^^(-2+@) - log(@)/log(b) = b^^inf  =  t
log(b)*b^^(-2+@) - log(@) = u/t * t =  u
log(b)*b^^(-2+@)          = u + log(@)

exponentiate:

   b^(b^^(-2+@)) = t * @ = b^^(-1+@)

exponentiate again, using base b:

   b^(b^^(-1+@)) = b^(t*@) = t^@ = b^^(0+@)

then this is

   lim @->0   b^^(0+@) = 1
   lim @->0   t^@      = 1

saying

   lim {@->0}  b^^(-2+@) - log_b(@) = b^^inf

is correct in the limit. Is this - at least in principle - sufficient? Gottfried

Gottfried Helms, Kassel

bo198214, Administrator. Posts: 1,615, Threads: 101, Joined: Aug 2007. 03/09/2009, 01:28 AM (This post was last modified: 03/09/2009, 01:42 AM by bo198214.)

Gottfried Wrote:
I attempted a proof this way. Don't know whether this is sufficient...

It would be, if you write it from bottom up. Ok, let me verify. We want to prove:

$\lim_{\delta\to 0+} b[4](\delta-2) - \log_b(\delta) = \lim_{n\to\infty} {\exp_b}^{\circ n}(1) =: t$

for each tetration [4] and $1 < b \le e^{1/e}$. First, the tetration should be differentiable at -1. As $b[4](-1)=0$ this means

$\lim_{\delta \to 0+} \frac{b[4](\delta-1)}{\delta} = \left.\frac{\partial b[4]x}{\partial x}\right|_{x=-1} =: c > 0$

Then we can take the logarithm $\log_b$ on both sides; it is a continuous function and hence can be moved under the limit:

$\lim_{\delta \to 0+} \log_b(b[4](\delta -1)) - \log_b(\delta) = \log_b(c)$

hence

$\lim_{\delta \to 0+} b[4](\delta -2) - \log_b(\delta) = \log_b(c)$

So we see that the limit exists, but it exists even for $b>e^{1/e}$, and it depends on the derivative of $b[4]x$ at $x=-1$.
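This limit statement can be checked numerically against the diagonalization series from the first message (a sketch, not a proof: the 20 coefficients are copied from that post, sexp'(-1) is obtained by termwise differentiation of the series, and sexp(d-2) is evaluated through the tetration recursion sexp(d-2) = log_b(log_b(sexp(d)))):

```python
from math import log

u  = log(2.0)            # b = sqrt(2), fixpoint t = 2, log(b) = u/2
lb = u / 2.0
cs = [0.632098661051, 0.225634285681, 0.0854081730270, 0.0335771160755,
      0.0135675339902, 0.00559920683946, 0.00235003288785, 0.00100003647235,
      0.000430480708304, 0.000187116458671, 0.0000820114021745,
      0.0000362027647360, 0.0000160807242165, 0.00000718169500164,
      0.00000322271898338, 0.00000145228984161, 0.000000656926890186,
      0.000000298154140684, 0.000000135730176453, 0.0000000619577730720]

def sexp(h):
    """b^^h = 2 - sum_k c_k (u^h)^k  (regular tetration, truncated at k=20)."""
    return 2.0 - sum(ck * u ** (h * k) for k, ck in enumerate(cs, start=1))

def logb(x):
    return log(x) / lb

# termwise derivative of the series at h = 0, then chain rule down to h = -1
sexp_d0 = -log(u) * sum(k * ck for k, ck in enumerate(cs, start=1))
c = sexp_d0 / (lb * sexp(0))            # = sexp'(-1), since sexp(0) ~ 1

for d in (1e-2, 1e-3, 1e-4):
    # sexp(d-2) = log_b(log_b(sexp(d))) by the tetration recursion
    lhs = logb(logb(sexp(d))) - logb(d)
    print(d, lhs, logb(c))              # the two columns should converge
```

For this particular (regular, diagonalization-based) series both sides agree to several digits and come out near 1.53 rather than the fixpoint 2 - consistent with the remark above that the value of the limit depends on the derivative of the tetration at -1.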
The derivative at -1 can be obtained from the value at 0: by the chain rule

$\text{sexp}'(x+1) = \left(\exp_b(\text{sexp}_b(x))\right)' = \ln(b)\,\text{sexp}(x+1)\,\text{sexp}_b'(x)$

hence for $x=-1$ (where $\text{sexp}(0)=1$)

$\text{sexp}'(0)=\ln(b)\,c$

and the limit can be given as

$\lim_{\delta \to 0+} b[4](\delta -2) - \log_b(\delta)=\log_b\left(\frac{\text{sexp}'(0)}{\ln(b)}\right)$

Gottfried, Ultimate Fellow. 03/10/2009, 08:01 AM

bo198214 Wrote:
then the limit can be given as $\lim_{\delta \to 0+} b[4](\delta -2) - \log_b(\delta)=\log_b\left(\frac{\text{sexp}'(0)}{\ln(b)}\right)$

Well, thanks! Looks good... However, I must have missed the core discussion about the derivatives - how do we get values of sexp'(h) at h=0 or elsewhere? And I think I see the chain rule in application,

Quote:
The derivative at -1 can be obtained from the value at 0 by: $\text{sexp}'(x+1) = \ln(b)\,\text{sexp}(x+1)\,\text{sexp}_b'(x)$, hence for $x=-1$: $\text{sexp}'(0)=\ln(b)\,c$

but if the height-parameter is the variable, how can we extract one instance of exp from sexp in the context of the chain rule? (Maybe I'm misreading, though.) I'll have another look into the faq/ref today.

Anyway: the confirmation of the validity of the formula is also one for the appropriateness of the guess about the limit behaviour/tendency of the series, and it makes me more confident that with that series (and the underlying diagonalization) we are on the right track - don't you think so?

By the way, another bit: I had the unsolved problem that the diagonalization with fixpoint-shift does not give even near approximations for fractional heights if the base > e^(1/e) and the fixpoint is complex; that's why I was dismissing the method for such cases (which are the majority of cases...). But yesterday I saw that it seems to work in the sense of height-differences, so sexp(x1+1) = exp(sexp(x1)) // whatever sexp(x1) may be...
, and the situation focuses on the problem of finding the norm-parameter for the Schröder/eigenvector function, which is still too difficult for me due to the divergence of the complex series. (I think I'll put another note about this into the "matrix-operator-method" thread later.) Gottfried

Gottfried Helms, Kassel

bo198214, Administrator. 03/10/2009, 02:20 PM

Gottfried Wrote:
However, I must have missed the core discussion about the derivatives - how do we get values of sexp'(h) at h=0 or elsewhere?

We don't get it, except from a specific tetration (while the formula I gave is universal to all tetrations). However, if we have the derivative at 0 then we have it also at -1 and at all natural numbers (by the chain rule).

Quote:
Anyway: the confirmation of the validity of the formula is also one for the appropriateness of the guess about the limit behaviour/tendency of the series, and it makes me more confident that with that series (and the underlying diagonalization) we are on the right track - don't you think so?

I don't even know what series you are talking about, I just saw a bunch of floating-point numbers that are said to have something to do with the matrix/diagonalization approach. I would really appreciate it if you could be more clear in your terminology: The matrix power approach makes use of the established method to obtain non-integer powers (and other analytic functions) of finite matrices via diagonalization. This is applied to the truncations of the Carleman/Bell matrix. The matrix power method can be applied at different development points. If it is applied at fixed points then it is equal to the regular iteration. That's why the latter case is not really interesting for me: there are better limit methods available than expanding powerseries to compute regular iterations, and nearly everything is already known about regular iteration.
However different the personal interests are, I would appreciate it if you clearly specify the development point for your matrix method application. Particularly also because it is still unknown how the matrix power approach depends on the development point; it seems that it yields different results at different development points.

Quote:
By the way, another bit: I had the unsolved problem that the diagonalization with fixpoint-shift does not give even near approximations for fractional heights if the base > e^(1/e) and the fixpoint is complex; that's why I was dismissing the method for such cases (which are the majority of cases...),

Hm, the regular iteration (= matrix power at a fixed point) at the primary complex fixed point $\lambda$ has singularities on the real axis, i.e. at $\exp^{\circ n}(0)$, $n=0,1,\dots$, but no singularities in the upper half plane or at other points of the real axis. The radius of convergence is $|\lambda|$, i.e. the distance from the development point $\lambda$ to 0, as $\Re(\lambda)\approx 0.3$ and 0 is closer to $\lambda$ than 1. So the only values on the real axis for which the powerseries converges lie in the open interval $(0,2\Re(\lambda))$.

Gottfried, Ultimate Fellow. 03/10/2009, 03:24 PM (This post was last modified: 03/10/2009, 04:34 PM by Gottfried.)

Hi Henryk -

bo198214 Wrote:
We don't get it, except from a specific tetration (while the formula I gave is universal to all tetrations). However, if we have the derivative at 0 then we have it also at -1 and at all natural numbers (by the chain rule).

I see. And I also see that I'll have to undergo a private derivative-workshop soon. When I re-read our forum these last days I (again) found so many things which I'd missed at the time they were posted - because I was too much consumed by the questions I had to solve in my own approach, and had no other space...
Quote:
I don't even know what series you are talking about, I just saw a bunch of floating-point numbers that are said to have something to do with the matrix/diagonalization approach.

Yes, in the first msg I had only a "bunch-of-numbers" series, for which I claimed that it provides the tetrates of sqrt(2), given the height h in the u^h-parameter. It's just a normed form of the inverse Schröder function $\sigma^{-1}$ which occurs in the diagonalization. Not much of a finding, anyway; just mentioned for the record.

In the second post I had the idea to decompose the coefficients of the series into some simple components: the reciprocals of the natural numbers, the powers of u^2, and a remainder (for each coefficient). Now the series no longer looks like an anonymous object, but has a clear property: the so-found remainders converge to the reciprocal of log(b),

$\lim_{k\to\infty} r_k = \frac{1}{\log(b)}$

where r_k is the remainder as described above. The terms as given in the first msg can thus be decomposed in this form (for k>0):

$c_k (u^{h})^k = \frac{r_k}{k}(u^{2+h})^k$

or, using coefficients a_k, where the log(b)-part is also extracted:

$b[4]h = t - \frac{1}{\log(b)}\sum_{k=1}^{\infty} \frac{a_k}{k}(u^{2+h})^k$

Here, according to the visual impression, $0 < a_k < 1$, approaching $a_k\to 1$ for $k\to\infty$ and, on a second view, even strictly increasing.

Now two considerations:

a) What does it mean if the visually apparent convergence actually exists? Well, it means that in the limit h -> -2 we would get the zeta(1)-singularity, which can also be written differently as log(eps) for eps -> 0, and the interesting limit-identity at h=-2 involving the two infinite expressions and the fixpoint.

b) This is then interesting - and I asked: does such an identity (in the limit, though) make sense/exist at all? It does, as our discussion shows.
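The step from "coefficients approaching 1" to the log-singularity can be made concrete. With x = u^(h+2), at h = -2+eps we have x = u^eps slightly below 1, and the harmonic-like sum has the closed form sum_{k>=1} x^k/k = -log(1-x); since 1 - u^eps ~ eps*log(1/u), the divergent part is exactly -log(eps) plus a constant. A small numeric sketch (Python, my own illustration with the sum truncated at 200000 terms):

```python
from math import log

u = log(2.0)                        # base b = sqrt(2), so u = log(2) < 1
L = log(1.0 / u)                    # log(1/u) > 0

for eps in (1e-2, 1e-3, 1e-4):
    x = u ** eps                    # = u^(h+2) at h = -2 + eps, just below 1
    s = sum(x ** k / k for k in range(1, 200_000))
    # harmonic-like sum  ==  -log(1-x)  ~=  -log(eps) - log(log(1/u))
    print(eps, s, -log(1.0 - x), -log(eps) - log(L))
```

All three columns agree increasingly well as eps shrinks, which is exactly the mechanism by which the zeta(1)-divergence of the series turns into the log_b(eps) term of the limit-identity.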
From this, in turn, the series in its decomposed form, with the remainders as coefficients, seems to "implement" just this interesting identity for the limit case, and gets from this a special justification. The series shows the coefficients for the base b=sqrt(2); but as it is just taken from the (normed) Schröder-inverse, the same can be said for other bases $1 < b < e^{1/e}$.

bo198214, Administrator:

However, I want a clear distinction whether you use the "matrix method with fixpoint shift", which is nothing else than regular iteration, or whether you use the "matrix method at a non-fixpoint", which is a different method that is also capable of obtaining real iterates for $b>e^{1/e}$. I'll answer the other part of your post later, and wish you all the best for your health! Henryk

Gottfried, Ultimate Fellow. 03/10/2009, 07:00 PM (This post was last modified: 03/10/2009, 08:26 PM by Gottfried.)

bo198214 Wrote:
However, I want a clear distinction whether you use the "matrix method with fixpoint shift", which is nothing else than regular iteration, or whether you use the "matrix method at a non-fixpoint", which is a different method that is also capable of obtaining real iterates for $b>e^{1/e}$

I see again the specification of "finite matrices" in the earlier post. Whatever I derive in my matrix-formulae, I consider it for the case of infinite dimension only. This makes no difference where we have triangular matrices, but it does matter, for instance, where inverses or eigen-matrices of non-triangular matrix-operators/Bell-/Carleman-matrices are under discussion. Only in my very first postings in summer 2007 did I refer to empirical, truncated matrices and take inverses and compute fractional iterates just from the given dimension. Since then I have tried to determine the entries of the matrices with which I'm working for the infinite size only, and to rely strictly on such matrices (though they are also truncated in actual computation).
So, for instance, it surprised me when I recently came to study Andy's slog-matrices in more depth and understood that the slog uses the inverse of a finite square matrix and develops its coefficients from that (while tending toward the limiting values; see especially Jay D. Fox's massive investigation of the characteristics of the numerical errors). You gave a partial alternative computation-scheme, which at least lets the coefficients stay constant with increasing matrix size. Such a class of approaches I would call essentially polynomial approximations, where not only the truncation of the series introduces errors, but the coefficients themselves are approximations as well.

The concept of using infinite matrices then faces the problem of non-invertibility, for instance for the (square) Bell matrix of b^x. However, the Bell matrix can be decomposed into two triangular matrix-factors (both of infinite size) which *can* be inverted meaningfully (with exact entries); only the product of the inverted factors *cannot* be defined. This is where the fixpoint-shift steps in.

If we have the Bell matrix B for b^x, then we cannot invert B in the infinite case. But

   B = fS2F * P~        // both factors triangular, invertible

and if we want to use the inverse, we could write

   B^-1 = PInv~ * fS1F

but this product has singularities in the infinite case, and we are not allowed to evaluate it.
(So I marked the "*"-multiplication red as "forbidden".) But P (as well as PInv) is a binomial matrix, and these perform argument-shifts when operating on powerseries:

   V(x)~ * PInv~ = V(x-1)~

That's where the shift comes in: we can safely consider the matrix-equation (using b = exp(1) = e; I write the result on the lhs):

   V(e^x)~ = V(x)~ * B = V(x)~ * (fS2F * P~)

Then, rearranging the invertible P as PInv to the left:

   V(e^x)~ * PInv~ = V(x)~ * fS2F

then the invertible fS2F as fS1F to the left:

   V(e^x)~ * PInv~ * fS1F = V(x)~

where the product needed to construct B^-1 is forbidden *in the infinite case*; but, applying the binomial theorem via PInv:

   V(e^x - 1)~ * fS1F = V(x)~

which is perfectly ok. And this checks out, since fS1F performs log(1+x):

   V(e^x - 1)~ * fS1F = V(log(1 + (e^x - 1)))~
                      = V(log(e^x))~
                      = V(x)~

Note: this matrix-algebra is only valid if infinite size is assumed everywhere.

I do the same with the eigensystem-decomposition / Schröder function. I found the fixpoint-shift in my matrix-notation by simply proceeding from the initial equation

   V(x)~ * B = V(y)~

decomposing B into matrix-factors P, P^-1 and a triangular C:

   V(x)~ * P^-t~ * C * P^t~ = V(y)~

applying the binomial theorem with the -t'th power of P~ on rhs and lhs:

   V(x-t)~ * C = V(y-t)~      // implements the shift by t = fixpoint

where C is triangular and allows an eigendecomposition providing exact values - again, only for the case of *infinite* size (and where the often inadmissible or badly converging product P^t~ * C is avoided by the fixpoint-shift). So this method has the (t)error of computation *only* in the truncation of the powerseries (or in the truncation of the dot-product V(x-t)~ * C[,1]), but all *used coefficients* are *exact* (as far as logs and exps are assumed exact).
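A small finite-truncation illustration of this scheme (an assumption-laden sketch of my own, not Gottfried's actual code; the name g and the truncation size N=20 are illustrative choices): for b = sqrt(2) with fixpoint t = 2, the shifted map g(x) = b^(x+t) - t = 2*(exp(x*log(b)) - 1) satisfies g(0) = 0, so its Carleman matrix is triangular with the powers of the multiplier u = log(2) on the diagonal, and fractional iterates come from the matrix power M^h = P D^h P^(-1) without any forbidden infinite product:

```python
import numpy as np

N  = 20
b  = np.sqrt(2.0)
t  = 2.0
lb = np.log(b)

# Taylor coefficients of g(x) = 2*(exp(x*lb) - 1): g_n = 2*lb^n/n!, n >= 1
g = np.zeros(N)
f = 1.0
for n in range(1, N):
    f *= n
    g[n] = 2.0 * lb ** n / f

# Carleman matrix: row k holds the coefficients of g(x)^k, truncated below degree N.
# Since g(0) = 0, row k starts at degree k -- the matrix is triangular.
M = np.zeros((N, N))
M[0, 0] = 1.0
M[1] = g
for k in range(2, N):
    M[k] = np.convolve(M[k - 1], g)[:N]

def iterate(h, x):
    """h-th iterate g^(oh)(x) via the matrix power M^h = P D^h P^(-1)."""
    w, P = np.linalg.eig(M)                 # w is a permutation of u^0 .. u^(N-1)
    Mh = (P @ np.diag(w ** h) @ np.linalg.inv(P)).real
    return float(sum(Mh[1, n] * x ** n for n in range(N)))

x0 = 1.0 - t                 # sexp(0) = 1, shifted by the fixpoint
print(iterate(1.0, x0) + t)  # sexp(1): should be sqrt(2), up to truncation
h1 = iterate(0.5, x0)        # half-iterate of g
print(iterate(0.5, h1) + t)  # half-iterate applied twice: sexp(1) again
```

Because the matrix is triangular, the leading N-by-N block of the infinite Carleman matrix is closed under multiplication, so the truncation introduces error only through the cut-off of the power series - which is exactly the point made above.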
All in all, I use the "regular iteration with fixpoint-shift", but only as far as I can represent it coherently in terms of infinite matrices / known closed-form expressions for the sums of the infinite powerseries which result from the implicit dot-products. Thus I have the difficulties with b > eta: the occurring complex-valued matrices C give unsatisfying powerseries, and I do not yet have the remedy to deal with those series appropriately.

Quote:
I'll answer the other part of your post later, and wish you all the best for your health! Henryk

Yes, thanks! It's progressing diabetes; sometimes I'm fitter, sometimes struck down, and in general just less powerful and able to regenerate than in recent years. Just life... Gottfried

Gottfried Helms, Kassel

Ivars, Long Time Fellow. Posts: 366, Threads: 26, Joined: Oct 2007. 03/10/2009, 09:09 PM (This post was last modified: 03/10/2009, 09:09 PM by Ivars.)

Hi Gottfried,

Take care! What is the infinite matrix you are talking about? Can it be seen as an infinite 2-dimensional distribution of some values? Can it be given some stochastic, probabilistic interpretation? Sorry if my question is off limits.

Ivars
