A nice series for b^^h, base sqrt(2), by diagonalization

bo198214, Administrator (Posts: 1,616, Threads: 102, Joined: Aug 2007), 03/11/2009, 12:24 AM (This post was last modified: 03/11/2009, 12:25 AM by bo198214.)

Gottfried, we discussed that already in a much earlier post. Your whole infinite-matrix computation is just r e g u l a r   i t e r a t i o n !!! Let me make it clear again:

Gottfried Wrote: If we have the Bell matrix B for b^x, then we cannot invert B in the infinite case. But B = fS2F * P~ // both triangular, invertible

First, Bell matrices behave antiisomorphically: let B[f] be the Bell matrix of f at 0; then B[f o g] = B[g]*B[f]. Now let us assign the variables:

B is the Bell matrix of $\exp$ at 0.
P~ is the Bell matrix of $\tau_1(x)=x+1$.
fS2F is the Bell matrix of $\text{dexp}(x)=\exp(x)-1$.

(For my convenience I omit the base subscript at exp and dexp.) As dexp has a fixed point at 0, fS2F is upper triangular. Your equation just corresponds to

$\exp = \tau_1 \circ \text{dexp}$
$\exp(x) = 1 + (\exp(x) - 1)$

Quote: But P (as well as then PInv) are the binomial matrices, and they perform addition when operating on the power series: V(x)~ * PInv~ = V(x-1)~

Exactly: $(\tau_1)^{-1} = \tau_{-1}$, the inverse of $x+1$ is $x-1$.

Quote: Then rearranging the invertible P as PInv to the left, V(e^x)~ * PInv~ = V(x)~ * fS2F, then the invertible fS2F as fS1F to the left, V(e^x)~ * PInv~ * fS1F = V(x)~, where the product to construct B^-1 is forbidden *in the infinite case*; but, applying the binomial theorem by PInv, V(e^x - 1)~ * fS1F = V(x)~, which is perfectly ok, and this leads then, since fS1F performs log(1+x)

That's clear without touching any matrix: V(e^x - 1)~ is the row of powers of dexp(x), and fS1F is the inverse of the Bell matrix of dexp, i.e. the Bell matrix of log(1+x): $\text{dexp}^{-1}\circ \text{dexp} = \text{id}$

Quote: I do the same with the eigensystem-decomposition / Schröder function.
Quote (continued): I found the fixpoint-shift with my matrix notation by simply proceeding from the initial equation V(x)~ * B = V(y)~, decomposing B into matrix factors P, P^-1 and a triangular C: V(x)~ * P^-t~ * C * P^t~ = V(y)~, and applying the binomial theorem with the -t'th power of P~ on rhs and lhs: V(x-t)~ * C = V(y-t)~ // implements shift by t = fixpoint, where C is then triangular and allows an eigendecomposition providing exact values.

$\exp=\tau_{t}\circ \underbrace{\tau_t^{-1} \circ \exp \circ \tau_t}_{\text{texp}} \circ \tau_t^{-1}$

C is the Bell matrix of texp:

$\text{texp}(x)=\tau_t^{-1}\circ \exp\circ \tau_t (x) = \exp(x+t)-t = \exp(x)\,t - t = t(\exp(x)-1)$

(using that at the fixpoint $\exp(t)=b^t=t$). We see that texp has 0 as a fixed point, hence its Bell matrix is triangular, and the power can be taken exactly.

Quote: All in all, I use "regular iteration with fixpoint-shift", but only as far as I can represent it coherently in terms of infinite matrices / known closed-form expressions for the sums of infinite power series which result from the implicit dot products.

But I still don't see what it brings to rewrite an established method with matrices. Whether you compute it with matrices, with direct power-series formulas, or with limit formulas (without power series), whether with the Abel or the super function, the result is always the same: it is always regular iteration.

Quote: Thus I have the difficulties with b>eta, since the occurring complex-valued matrices C give unsatisfying power series, and I do not yet have the remedy to deal with those series appropriately.

I gave the answer already in the previous post. The resulting power series $t+\sum_{n=1}^\infty p_n (x-t)^n$ (where the $p_n$ are the coefficients obtained from $C^h$) has convergence radius |t| in your notation. On the real axis it converges at most for $x\in (0,2 \text{Re}(t))$.

Quote: Yes, thanks! It's progressing diabetes, and sometimes I'm fitter, sometimes struck down, and in general just less powerful and regenerable than in recent years. Just life..
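The fixpoint-shift conjugation discussed above is easy to sanity-check numerically for the thread's base $b=\sqrt2$, $t=2$. This is my own sketch (the helper name `texp` is taken from the post, the rest is assumption): it confirms that $\text{texp}(x)=b^{x+t}-t$ equals $t(b^x-1)$, fixes 0, and has multiplier $u=\ln t$ there.

```python
# Sketch, not part of the original posts: verify texp(x) = b^(x+t) - t
# = t*(b^x - 1) for b = t^(1/t), its fixed point at 0, and its multiplier.
import math

t = 2.0
b = t ** (1 / t)                       # sqrt(2)
texp = lambda x: b ** (x + t) - t      # conjugate of exp_b by tau_t

for x in (-0.5, 0.0, 0.3, 1.0):
    assert abs(texp(x) - t * (b ** x - 1)) < 1e-12
assert abs(texp(0.0)) < 1e-12          # fixed point at 0

h = 1e-6
slope = (texp(h) - texp(-h)) / (2 * h) # central difference for texp'(0)
assert abs(slope - math.log(t)) < 1e-6 # multiplier u = ln 2
print("ok")
```

The multiplier $u=\ln 2<1$ is why the power series of texp sits below the diagonal of its (triangular) Bell matrix with eigenvalues $u^k$.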
For me it seems you should do something healthy and not always brood over the same things. Spring is coming: go out, have a look at the blue sky, or at the young girls passing.

Gottfried, Ultimate Fellow (Posts: 889, Threads: 130, Joined: Aug 2007), 03/11/2009, 02:09 AM

Hi Henryk - just a short reply here.

bo198214 Wrote: Gottfried, we discussed that already in a much earlier post. Your whole infinite matrix computation is just r e g u l a r   i t e r a t i o n !!!

Well, I felt it was needed to make that explicit, because of the (even bluishly highlighted) hyperlink:

Quote: The matrix power approach makes use of the established method to obtain non-integer powers (and other analytic functions) of finite matrices via diagonalization. This is applied to the truncations of the Carleman/Bell matrix.

where the keyword "finite matrices" occurs. We have had that several times, and since I had assumed I had made it clear that I am always working with infinite matrices, that remark is surprising.

Well, next question: "Why work with matrices if things are otherwise well known..." - I still don't claim anything special. It's just my path into the matter: I came from a project in which I compiled relations between Pascal, Stirling, Euler and other matrices operating on formal power series, and stumbled on the possibility of iterating functions (other than addition, which I had already studied, with some nice encounters with the zeta/eta function). Here I had the iteration of the exponential function and, as you might (frustratedly?) remember, a hard and long time even explaining what I was doing, although Aldrovandi, Woon, Comtet and others had long been around in the scene.
So it's for personal reasons (experience with my matrix "toolbox"), possibly for "historical reasons" (to keep a track consistent), and recently for the connection of iteration series with series of matrix powers, which I have not seen elsewhere, except possibly in the form of the umbral calculus, where it might be hidden behind the scenes. Just recently someone in sci.math asked for a set of symbolic matrix operations for Mathematica or Maple. Could be a nice start...

I've just reread Andrew's "exact entries for the slog operator" today and found a similar matrix discussion there: it helps my understanding, since I have no training in functional analysis; the bit I had was in 1972-77 and only in relation to the computer courses, which were my main subject.

Well, but let's not lose track of what I opened the thread for. I think I'll have a much closer look at that series for tetration next time, to see whether, and if so how, it is a special one.

And, yes:

Quote: For me it seems you should do something healthy and not always brood over the same things. Spring is coming: go out, have a look at the blue sky, or at the young girls passing

that's certainly what I should do. Hope I'll get things working/walking. And why the heck should the girls *pass*? Ok - wish you all a good night, Gottfried

Gottfried Helms, Kassel

bo198214, 03/11/2009, 10:18 AM

Gottfried Wrote: Well, I felt it was needed to make that explicit, because of the (even bluishly highlighted) hyperlink about obtaining non-integer powers of finite matrices via diagonalization, where the keyword "finite matrices" occurs. We have had that several times, and since I had assumed I had made it clear that I am always working with infinite matrices, that remark is surprising.
For me the matrix power method means exactly what I wrote: take the truncations $B_N$ of an infinite matrix $B$, apply the matrix power $(B_N)^h$, and take the limit $B^h := \lim_{N\to\infty} (B_N)^h$.

Now one can apply this method at different development points, i.e. the original function $f(x)=b^x$ is conjugated to the development point $p$: $\tilde{f}(x)=f(p+x)-p$, or written as a composition: $\tilde{f}=\tau_p^{-1}\circ f\circ \tau_p$, so that $f=\tau_p \circ \tilde{f} \circ \tau_p^{-1}$. We define the application of the matrix power method at the point p by $f^h := \tau_p \circ \tilde{f}^h \circ \tau_p^{-1}$. This is the general method.

If I now apply the matrix power method at a fixed point p, then the Bell/Carleman matrix of $\tilde{f}$ is lower/upper triangular. For those matrices the power of the truncation is the same as the truncation of the power. That means the limit to infinity is just an expansion of the matrix; once-computed values do not change in that process. So we see that regular iteration is the particular case of applying the matrix power method at a fixed point.

And I really honored this method (applied at a *non-fixed point* like 0) because it can do what regular iteration cannot: compute real iterates for $b>e^{1/e}$. So it puzzles me that you nevertheless insist on regular iteration for $b>e^{1/e}$.

Quote: Well, next question: "Why work with matrices if things are otherwise well known..." - I still don't claim anything special. It's just my path into the matter: I came from a project in which I compiled relations between Pascal, Stirling, Euler and other matrices operating on formal power series, and stumbled on the possibility of iterating functions (other than addition, which I had already studied, with some nice encounters with the zeta/eta function)

But if you focus too much on matrices and nothing else, such interesting relations as the convergence radius of the iterates are simply out of your scope.
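The triangularity claim above - for a series fixing 0, the power of the truncated Bell matrix equals the truncation of the power, so entries never change as N grows - can be illustrated with a small numeric sketch. This is my own Python (not Gottfried's Pari/GP), using the convention that B[n][k] is the coefficient of x^n in f(x)^k:

```python
# Sketch: build the Bell matrix of texp(x) = t*(b^x - 1) (which fixes 0)
# at two truncation sizes and check that powers of the small truncation
# agree with the corresponding block of powers of the big one.
from math import factorial, log
import numpy as np

def bell_matrix(coeffs, N):
    """B[n][k] = coefficient of x^n in f(x)^k, truncated to N x N."""
    B = np.zeros((N, N))
    B[0, 0] = 1.0
    p = np.zeros(N); p[0] = 1.0          # running powers f^k
    for k in range(1, N):
        q = np.zeros(N)
        for i in range(N):               # truncated series product p * f
            for j in range(N - i):
                q[i + j] += p[i] * coeffs[j]
        p = q
        B[:, k] = p
    return B

t, b = 2.0, 2.0 ** 0.5
# texp(x) = t*(b^x - 1): coefficients t*ln(b)^n/n!, constant term 0
coeffs = [0.0] + [t * log(b) ** n / factorial(n) for n in range(1, 24)]
B8 = bell_matrix(coeffs, 8)
B16 = bell_matrix(coeffs, 16)
print(np.allclose(np.linalg.matrix_power(B8, 3),
                  np.linalg.matrix_power(B16, 3)[:8, :8]))
```

Because the matrix is lower triangular, every entry of $B^3$ is a finite sum that already fits inside the 8x8 block; at a non-fixed development point this stability is lost and the limit $N\to\infty$ does real work.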
The radius is derived from the interrelation of the limit formulas for regular iteration (which are power-series free) and the power-series formulas for regular iteration.

Quote: I've just reread Andrew's "exact entries for the slog operator" today and found a similar matrix discussion there: it helps my understanding, since I have no training in functional analysis; the bit I had was in 1972-77 and only in relation to the computer courses, which were my main subject.

Wah, I was born in that time.

Quote: that's certainly what I should do. Hope I'll get things working/walking. And why the heck should the girls *pass*?

Well, you are right, they build up a big cluster around your place.

Gottfried, 03/14/2009, 08:16 AM (This post was last modified: 03/14/2009, 08:47 AM by Gottfried.)

bo198214 Wrote: For me the matrix power method means exactly what I wrote. Take the truncations $B_N$ of an infinite matrix $B$, apply the matrix power $(B_N)^h$ and take the limit $B^h := \lim_{N\to\infty} (B_N)^h$.

The difference for the matrix power may be "small", but it may be significant when applied to the matrix inverse: "take the truncation, invert, use as approximation" may then give different results from "conclude the exact entries for the infinite case, truncate, use as approximation". It is even more significant if we discuss more complex entities like the set of eigenvalues. So these different views of things should still be made explicit, and it would be good to keep an identifying nomenclature. I tended to give the finite-matrix-based method the attribute "polynomial", but this might not be the best choice...

Quote: Now one can apply this method at different development points, i.e. the original function $f(x)=b^x$ is conjugated to the development point $p$:

Surely. No dissent here.

Quote: And I really honored this method (applied at a *non-fixed point* like 0) because it can do what regular iteration cannot: compute real iterates for $b>e^{1/e}$.
Quote (continued): So it puzzles me that you nevertheless insist on regular iteration for $b>e^{1/e}$.

:-) As for the honoring... well, that's not my problem. I surely should finally get to my full descriptive text about my way of thinking in a new pdf file - I have some first chapters, but it's very complex and I got stuck several times early on. I'll be "honoring" that method too, so we have no dissent here either. I've just left this field and am digging at the other one for the gold. I think that if a definitive description of the matrices in the infinite case can be given (based on the hypothesis about the eigenvalues), this would be very good, and if a method for the actual computation were then also found, this would please me much more than approximating 7^^Pi using ad-hoc eigensystems of finite-size matrices. Maybe the latter will even be the only way to get to practical values; but then: well, there will be many people and programs which could do that, very fine - why should I bother, it's not my job/profession/money to calculate values?

Quote: But if you focus too much on matrices and nothing else, such interesting relations as the convergence radius of the iterates are simply out of your scope. The radius is derived from the interrelation of the limit formulas for regular iteration (which are power-series free) and the power-series formulas for regular iteration.

Here you made a point. However, not in the sense of my missing the aspect of the convergence radius; on the contrary: I think I need the matrix layout for the infinite case to have even better conditions for convergence considerations. And since in important cases we will lack convergence anyway, we can check summability methods to overstep the range of convergence, but in a well-founded manner.
But as I learned in some discussions on sci.math in recent months, it is fruitful to discuss iteration also in terms of the functions themselves - even some very nice and surprising closed forms were discussed which I could never have found with the formal power-series/matrix approach. A much more interesting field has opened here, and I'm actually fiddling with it in a casual manner (you've noticed my casual "iteration exercises" here in the forum, too). I think I'll go into this much more once I have the feeling that my questions/ideas about the infinite matrices are solved (or shown to be unsolvable) and I can close that case. Just recently I applied the (infinite) matrix concept to Andrew's slog with a nice gain of insight... :-) So there's still something in it. (I'll need it also for the discussion of iteration series, I think.)

Gottfried Helms, Kassel

andydude, Long Time Fellow (Posts: 510, Threads: 44, Joined: Aug 2007), 03/31/2009, 05:12 AM

bo198214 Wrote: then the limit can be given as $\lim_{\delta \to 0+} b[4](\delta -2) - \log_b(\delta)=\log_b\left(\frac{\text{sexp}'(0)}{\ln(b)}\right)$

You know, at first I was confused, because this formula implies that $\lim_{z\to -2^{+}}(\text{sexp}(z) - \ln(z+2)) = -2.4327187635454386$, but the Cz page gives $\lim_{z\to 0^{+}}(\text{sexp}(z) - \ln(z+2)) = 0.30685281944005469$; but then I realized the point was different.

Andrew Robbins

andydude, 04/06/2009, 10:23 PM

Gottfried Wrote: Here for base b = sqrt(2): $ \begin{tabular}{llll} \exp_{\sqrt{2}}^h(1) & = 2 \\ & - 0.6321 u_h \\ & - 0.2253 u_h^2 \\ & - 0.08541 u_h^3+ \cdots \end{tabular}$

(I rewrote this in TeX.) I used regular iteration, and I did not get these expansions at all.
I got

$ \begin{tabular}{llll} \exp_{\sqrt{2}}^h(x+2) & = 2 \\ & + u_h x \\ & + (0.5647 u_h - 0.5647 u_h^2) x^2 \\ & + (0.2296 u_h - 0.6378 u_h^2 + 0.3382 u_h^3) x^3 + \cdots \end{tabular}$

so what this series represents would be regular iteration, evaluated at $x=-1$ and expanded about $u_h$. Is that right? How did you get this?

Andrew Robbins

Gottfried, 06/10/2009, 02:09 PM (This post was last modified: 06/12/2009, 09:52 AM by Gottfried.)

(04/06/2009, 10:23 PM) andydude Wrote: (...) I used regular iteration, and I did not get these expansions at all. (...) so what this series represents would be regular iteration, evaluated at $x=-1$ and expanded about $u_h$. Is that right? How did you get this?

Hi Andrew - sorry, it took a long time to answer. The coefficients occur as sums; your formula contains the x parameter - maybe if you insert x=-1 we get identity. [update] I have the same coefficients as yours, just evaluated at x=-1; see the next post. [/update]

Let me explain in my matrix/Pari-GP notation how I got the coefficients. Assume the usual notation of variables: b, t, u encode the base parameters, where for our example t=2, u=log(t), b=t^(1/t)=sqrt(2).
Also let exp_b°h(x) denote the h'th iterate of the exp function to base b, and dxp_t°h(x) the h'th iterate of the decremented exp function to base t, where we use the identity (if we have a real fixpoint)

$\hspace{24} \exp_b^{\circ h}(x) = ( dxp_t^{\circ h} (\frac{x}{t}-1) +1 )\cdot t = t + t\cdot dxp_t^{\circ h} (\frac{x}{t}-1)$

which numerically, with x=1, t=2, becomes

$\hspace{24} \exp_{\sqrt2}^{\circ h}(1) = 2 + 2\cdot dxp_2^{\circ h} (-\frac12)$

The function dxp to base t (=2) gets its coefficients from the triangular Bell matrix Ut, where we implement the h'th fractional power using diagonalization, denoting the eigenmatrices as W and WInv (=W^-1):

Code:´  Ut^h = W * dV(u^h) * WInv

and the function is in general finally computed by

Code:´  V(y)~ = V(x/2-1)~ * Ut^h

Here y = V(y)[1], meaning y is the second element of V(y), and by the construction of the formula it is also y = dxp_t°h(-1/2). Then exp_b°h(1) = 2 + 2*y = 2 + 2*dxp_t°h(-1/2).

Keeping the iteration parameter variable we have, using the eigenmatrices,

Code:´  V(y)~ = V(x/2-1)~ * W * dV(u^h) * WInv

Since we assume x to be constant, x=1, we can precompute the row vector S~:

Code:´  S~ = V(-1/2)~ * W

which also implements a Schröder function (not necessarily the principal one) for dxp_t°h(x) at the constant x=-1/2. This is also the Schröder function for exp_b°h(x) at x=1, as far as we look only at the coefficients of the second column of W.
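For integer heights the fixpoint-shift identity at the top of this post can be verified directly. A sketch in Python rather than Pari/GP (assuming, as the notation suggests, dxp_t(x) = t^x - 1):

```python
# Sketch, not forum code: check exp_b^h(x) = t + t*dxp_t^h(x/t - 1)
# for integer heights h, with t = 2 and b = t^(1/t) = sqrt(2).
t = 2.0
b = t ** (1 / t)

def exp_b_iter(x, h):
    """h-fold iteration of x -> b^x."""
    for _ in range(h):
        x = b ** x
    return x

def dxp_t_iter(x, h):
    """h-fold iteration of the decremented exponential x -> t^x - 1."""
    for _ in range(h):
        x = t ** x - 1
    return x

x = 1.0
for h in range(6):
    lhs = exp_b_iter(x, h)
    rhs = t + t * dxp_t_iter(x / t - 1, h)
    assert abs(lhs - rhs) < 1e-12
print("ok")
```

The point of the identity is that dxp_t fixes 0, so its Bell matrix Ut is triangular and the fractional power Ut^h is exact, while exp_b itself has no fixed point at the development point 0.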
So we can write

Code:´  V(y)~ = S~ * dV(u^h) * WInv

WInv provides the inverse of the Schröder function, and if we extract only the scalar result we can write (with the notation WInv[,1] meaning the second column of the matrix WInv)

Code:´  y = S~ * dV(u^h) * WInv[,1]

Here we can interchange the order of multiplication of the last two factors and precompute the constant coefficients of a power series in the result vector M:

Code:´  M~ = S~ * diag(WInv[,1])

and get

$\hspace{24} y = M\sim * V(u^h) = \sum_{k=0}^{\infty} m_k \cdot (u^h)^k$

and according to the above,

Code:´  exp_b°h(1) = 2 + 2*y = 2 + 2* M~ * V(u^h)

so the source of my coefficients is the vector M. The coefficients which I provided are just the precomputed coefficients in M; they represent the evaluation of the coefficients of the Schröder functions for dxp, including a fixpoint shift of the x parameter and of the function value.

In short, omitting matrices: let C(x) denote a Schröder function for dxp_t(x) and c_k the k'th coefficient of its power series; let D(x) be its inverse and d_k the k'th coefficient of that power series; and for brevity write C for the function value at x=-1/2 and v = u^h for the h'th power of u. Then we can write

$\hspace{48} C = \sum_{j=0}^{\infty} c_j\,(-1/2)^j$
$\hspace{48} b\text{^^}h = \exp_{\sqrt2}^{\circ h}(1) = 2 + \sum_{k=1}^{\infty} 2\, C^k d_k\, v^k$

and the k'th coefficient in my first mail is just $2\,C^k d_k$ in the formula above.

Gottfried

Gottfried Helms, Kassel

Gottfried, 06/10/2009, 03:27 PM

(04/06/2009, 10:23 PM) andydude Wrote: (...) I used regular iteration, and I did not get these expansions at all.
Quote (continued): I got [the series in x quoted above], so what this series represents would be regular iteration, evaluated at $x=-1$ and expanded about $u_h$. Is that right? How did you get this?

Hi Andrew, a second note. I just looked at my coefficients: if I do not apply the summation with constant x, I get the same numbers as you gave (I just checked 4 decimals and a handful of coefficients). I think we do the same computation, except that I changed the order of summation.

Gottfried

Gottfried Helms, Kassel

Gottfried, 06/11/2009, 06:26 PM (This post was last modified: 06/11/2009, 07:13 PM by Gottfried.)

Fun... I could now make use of the upper (repelling) fixpoint with this type of series. Here we have the upper fixpoint t=4, u=log(4) ~ 1.3862943611... for the same base b = sqrt(2), and we can compute the fractional heights for some appropriate initial value x, say x=5. For context I quote the previous post:

Gottfried Wrote: Here for base b = sqrt(2), and for brevity v = u^h:

$ \begin{tabular}{llll} \exp_{\sqrt{2}}^h(1) & = 2 \\ & - 0.6321 v \\ & - 0.2253 v^2 \\ & - 0.08541 v^3+ \cdots \end{tabular}$

Again with v for u^h, and with f(h) for the longish exp expression, we now want

$ \begin{tabular}{llll} f(h) = \exp_{\sqrt{2}}^h(5) & = 4 \\ & + ??? v \cdots \end{tabular}$

with different coefficients. (We have to compute the appropriate Ut matrix and also the W matrix now.) For x=5 we have, by the fixpoint shift, x1 = x/t - 1 = 5/4 - 1 = 0.25, and the evaluation of the Schröder function

Code:´  S~ = V(5/4-1)~ * W = V(0.25)~ * W

gives, by the summation in each column, a set of series which converge well with 64 terms (n=64 is my selected vector/matrix dimension) and give the vector S containing all results.
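The column sums in S~ = V(0.25)~ * W evaluate a Schröder function χ at the shifted argument 0.25. Without any matrices, χ at the repelling fixpoint (u = log 4 > 1) can be obtained from the Koenigs limit by iterating the *inverse* map, which is attracting toward 0. A sketch with my own helper names, not the forum's code:

```python
# Sketch: Schroeder function chi for dxp_t(x) = t^x - 1 at the repelling
# fixpoint t = 4, via inverse iteration; check chi(dxp(x)) = u * chi(x).
import math

t = 4.0
u = math.log(t)                        # multiplier at the fixed point 0
dxp = lambda x: math.expm1(u * x)      # t^x - 1, stable for small x
dxp_inv = lambda x: math.log1p(x) / u  # its inverse, attracting at 0

def chi(x, n=80):
    """Koenigs limit: chi(x) = lim u^n * dxp_inv^n(x)."""
    for _ in range(n):
        x = dxp_inv(x)
    return x * u ** n

x = 0.25                               # = 5/4 - 1, the shifted initial value
assert abs(chi(dxp(x)) - u * chi(x)) < 1e-9   # Schroeder functional equation
print("ok")
```

The functional equation χ(dxp(x)) = u·χ(x) is exactly what makes the diagonal dV(u^h) in the eigendecomposition carry the whole height dependence.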
Then the coefficients from WInv (which represent the inverse of the Schröder function) are multiplied in to get the constant vector M, which now has the coefficients independent of u^h:

Code:´  M~ = S~ * diag(WInv[,1])

Then the intermediate value y is again, as in the previous message,

$\hspace{24} y = M\sim * V(u^h) = \sum_{k=0}^{\infty} m_k \cdot (u^h)^k$

Thus, having the fixpoint t=4 here, we get the new coefficients for the series from the vector M:

Code:´  f(h) = 4 + 4*y = 4 + 4*(M~ * V(u^h))

which can be evaluated for some height h as long as the occurring series (by M~*V(u^h)) converges or can be Euler-summed. This gives the power series:

$ \begin{tabular}{llll} f(h) = \exp_{\sqrt{2}}^h(5) & = 4 \\ & + 0.694707147143 v \\ & + 0.216496377971 v^2\\ & + 0.0638276619589 v^3\\ & + 0.0181280769146 v^4 \\ & + 0.00500577162906 v^5 \\ & + 0.00135142475056 v^6 \\ & + 0.000358055753514 v^7 \\ & + 0.0000933533652020 v^8 \\ & + 0.0000240008373227 v^9 \\ & + 0.00000609458207498 v^{10} \\ & + 0.00000153056011763 v^{11} \\ & + 0.000000380551461011 v^{12} \\ & + 0.0000000937615006278 v^{13} \\ & + 0.0000000229095405831 v^{14} \\ & + 0.00000000555487772536 v^{15} \\ & + 0.00000000133735491281 v^{16} \\ & + 0.000000000319852431282 v^{17} \\ & + 7.60281626580\cdot 10^{-11} v^{18} \cdots \end{tabular}$

For the heights h=0..-2 in 1/32-steps I get the following values:

Code:´
       h  |  f(h) = exp_sqrt(2)°h(5)
  ---------------------------------
       0  |  5.00000000000
   -1/32  |  4.98555564349
   -1/16  |  4.97137811129
   -3/32  |  4.95746089099
    -1/8  |  4.94379767428
   -5/32  |  4.93038234919
   -3/16  |  4.91720899253
   -7/32  |  4.90427186283
    -1/4  |  4.89156539347
   -9/32  |  4.87908418618
   -5/16  |  4.86682300480
  -11/32  |  4.85477676933
    -3/8  |  4.84294055021
  -13/32  |  4.83130956288
   -7/16  |  4.81987916254
  -15/32  |  4.80864483915
    -1/2  |  4.79760221266
  -17/32  |  4.78674702837
   -9/16  |  4.77607515259
  -19/32  |  4.76558256839
    -5/8  |  4.75526537155
  -21/32  |  4.74511976670
  -11/16  |  4.73514206358
  -23/32  |  4.72532867348
    -3/4  |  4.71567610582
  -25/32  |  4.70618096483
  -13/16  |  4.69683994642
  -27/32  |  4.68764983512
    -7/8  |  4.67860750119
  -29/32  |  4.66970989776
  -15/16  |  4.66095405821
  -31/32  |  4.65233709352
      -1  |  4.64385618977
  -33/32  |  4.63550860581
  -17/16  |  4.62729167087
  -35/32  |  4.61920278239
    -9/8  |  4.61123940385
  -37/32  |  4.60339906273
  -19/16  |  4.59567934854
  -39/32  |  4.58807791086
    -5/4  |  4.58059245755
  -41/32  |  4.57322075293
  -21/16  |  4.56596061612
  -43/32  |  4.55880991933
   -11/8  |  4.55176658633
  -45/32  |  4.54482859088
  -23/16  |  4.53799395524
  -47/32  |  4.53126074878
    -3/2  |  4.52462708658
  -49/32  |  4.51809112807
  -25/16  |  4.51165107581
  -51/32  |  4.50530517419
   -13/8  |  4.49905170827
  -53/32  |  4.49288900258
  -27/16  |  4.48681542007
  -55/32  |  4.48082936094
    -7/4  |  4.47492926166
  -57/32  |  4.46911359397
  -29/16  |  4.46338086382
  -59/32  |  4.45772961051
   -15/8  |  4.45215840574
  -61/32  |  4.44666585274
  -31/16  |  4.44125058537
  -63/32  |  4.43591126737
      -2  |  4.43064659147

For instance, f(-1) should be log(5)/log(b), and the table gives at h=-1 the value 4.64385618977, which agrees with the direct computation log(5)/log(sqrt(2)) = 4.64385618977; and it should hold that b^f(-1.5) = f(-0.5), which can be checked easily using values from the table. I don't know yet whether this has some benefit.

Gottfried Helms, Kassel

Gottfried, 06/11/2009, 08:36 PM (This post was last modified: 06/12/2009, 03:53 AM by Gottfried.)

(06/11/2009, 06:26 PM) Gottfried Wrote: I don't know yet whether this has some benefit.
It looks as if we had a discussion of that recently in "Upper superexponential". I'm excerpting a bit of Henryk's post:

(03/29/2009, 11:23 AM) bo198214 Wrote: As is well known, for $b<e^{1/e}$ we have the regular superexponential at the lower fixed point. This can be obtained by computing the Schroeder function at the fixed point $a$ of $F(x)=b^x$. (...) Now the upper regular superexponential $\operatorname{usexp}$ is the one obtained at the upper fixed point of $b^x$. For this function, however, we always have $\operatorname{usexp}(x)>a$, so the condition $\operatorname{usexp}(0)=1$ can not be met. Instead we normalize it by $\operatorname{usexp}(0)=a+1$, which gives the formula:

(*1) $\hspace{48} \operatorname{usexp}_b(x)=a+\chi^{-1}\left(\ln(a)^x \chi(1)\right)$

(...)

My construction in the previous post was obviously the same as construction (*1) above... Gottfried (I added the comments //...)

Gottfried Wrote: Then we can write

$\hspace{48} C = \sum_{j=0}^{\infty} c_j\,(-1/2)^j \hspace{96} \text{// this is the Schroeder function } \chi_2(x) \text{ for } 2^x - 1 \text{ at } x=-\tfrac12$

$\hspace{48} \exp^{\circ h}_{\sqrt{2}}(1) = 2 + \sum_{k=1}^{\infty} 2\, C^k d_k\, v^k \hspace{24} \text{//} = 2+2\,\chi_2^{-1}(u^h\,\chi_2(-1/2))$

and the k'th coefficient in my first mail is just $2\,C^k d_k$ in the formula above.

Here the fixpoint "a" is simply given as the constant 2 and could be generalized to the symbol. The sum expression describes the inverse of the Schröder function, $\chi^{-1}$ in Henryk's post. The formula for the repelling fixpoint simply replaces 2 by 4 and (1/2-1) by (5/4-1) and uses the adapted Schröder function. So I think it's useful to redirect replies to the other thread...

Gottfried Helms, Kassel
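Both constructions can be reproduced without any matrices via the Koenigs limit. The sketch below is my own Python (helper names are assumptions, not Gottfried's Pari/GP): it rebuilds f(h) = exp_sqrt(2)^h(5) from the repelling fixpoint t = 4 as 4 + 4·χ⁻¹(u^h·χ(5/4 - 1)) and reproduces the checks and one table value from the earlier post.

```python
# Sketch: regular iteration at the repelling fixpoint t = 4 of b = sqrt(2).
import math

t = 4.0
u = math.log(t)                        # u = log(4) > 1: repelling
b = 2.0 ** 0.5
dxp = lambda x: math.expm1(u * x)      # t^x - 1
dxp_inv = lambda x: math.log1p(x) / u

def chi(x, n=80):                      # Koenigs limit via inverse iteration
    for _ in range(n):
        x = dxp_inv(x)
    return x * u ** n

def chi_inv(w, n=80):                  # inverse: start tiny, iterate forward
    x = w / u ** n
    for _ in range(n):
        x = dxp(x)
    return x

def f(h, x=5.0):
    return t + t * chi_inv(u ** h * chi(x / t - 1))

assert abs(f(0) - 5.0) < 1e-9                          # height 0
assert abs(f(-1) - math.log(5) / math.log(b)) < 1e-8   # 4.64385618977
assert abs(b ** f(-1.5) - f(-0.5)) < 1e-8              # semigroup check
assert abs(f(-0.5) - 4.79760221266) < 1e-6             # table value at h = -1/2
print("ok")
```

The agreement with the tabulated values is expected, since the diagonalization of the triangular Bell matrix and the Koenigs/Schröder limit both compute regular iteration at the same fixpoint.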

