Andrew Robbins' Tetration Extension

jaydfox, Long Time Fellow (Posts: 440, Threads: 31, Joined: Aug 2007) - 11/12/2007, 09:14 AM

Base 1 and base infinity? I'm not quite sure I follow...

~ Jay Daniel Fox

andydude, Long Time Fellow (Posts: 509, Threads: 44, Joined: Aug 2007) - 11/12/2007, 09:56 AM

The best way to illustrate it is through this graph [image not preserved], in which the thin dotted line is tetration base 1, the solid line is tetration base e, and the thick dotted line is tetration base infinity. Neither limiting curve is very interesting by itself (both are straight lines), and base infinity isn't really solvable, but that doesn't matter, because we can take limits. As the base goes to 1, the curve gets closer and closer to the thin dotted line; as the base goes to infinity, it gets closer and closer to the thick dotted line. So these aren't really tetration functions but asymptotes of tetration! Some of these trends can be seen in one of the graphs on my website. However, I didn't come to this conclusion by looking at graphs; I have proof. The super-logarithms of the previously mentioned bases would then be the reflections of those asymptotes across the line y = x.

Andrew Robbins

bo198214, Administrator (Posts: 1,395, Threads: 91, Joined: Aug 2007) - 11/12/2007, 08:05 PM

Quote: I have proof.

So what do you actually prove?

andydude - 11/13/2007, 12:16 AM (last modified: 11/13/2007, 12:24 AM)

bo198214 Wrote: So what do you actually prove?
That:

$\lim_{b \rightarrow 1} \text{slog}_b(z)_n = z^n - 1$ (expanded about z = 0)

and:

$\lim_{b \rightarrow \infty} \text{slog}_b(z)_n = -(1-z)^n$ (expanded about z = 0),

which means in the limit:

$\text{slog}_1(z) = \lim_{n \rightarrow \infty} z^n - 1 = \begin{cases} -1 & \text{ if } 0 \le z < 1 \\ 0 & \text{ if } z = 1 \end{cases}$

and:

$\text{slog}_{\infty}(z) = \lim_{n \rightarrow \infty} -(1-z)^n = \begin{cases} -1 & \text{ if } z = 0 \\ 0 & \text{ if } 0 < z \le 1 \end{cases}$

One of the nice things about having an exact form for all approximations is that the approximations are invertible (so we can find tetration as well), but the limit to infinity is discontinuous, hence not invertible.

Andrew Robbins

bo198214, Administrator (Posts: 1,395, Threads: 91, Joined: Aug 2007) - 11/13/2007, 10:21 AM

andydude Wrote: That: $\lim_{b \rightarrow 1} \text{slog}_b(z)_n = z^n - 1$ (expanded about z = 0) and: $\lim_{b \rightarrow \infty} \text{slog}_b(z)_n = -(1-z)^n$ (expanded about z = 0)

Ah, ok. But you still assume that the coefficients of $\text{slog}_b$ converge. I asked because at first it sounded as if you had a proof of the convergence of the coefficients for bases $1$ and $\infty$, which would in itself be a somewhat strange statement.

andydude - 11/13/2007, 05:45 PM

Indeed, that would be strange. I'm not assuming convergence of the coefficients outright, but I am assuming that two notions of convergence are equivalent: convergence of the finite series (the approximations, with approximate coefficients), i.e. $\lim_{n\rightarrow\infty} \text{slog}_b(z)_n$ where $\text{slog}_b(z)_n = \sum_{k=0}^{n}s_{nk}z^k$, in other words $\lim_{n\rightarrow\infty}\sum_{k=0}^{n} s_{n k}z^k$; and convergence of the infinite series (whose coefficients must also converge), in other words $\lim_{n\rightarrow\infty}\sum_{k=0}^{n} s_{\infty k}z^k$. Assuming these are equivalent may be an error as well.
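Andrew's finite approximations come from truncating the defining relation slog_b(b^z) = slog_b(z) + 1 to a linear system in the coefficients. The following Python sketch (the names `slog_coeffs` and `slog` and the exact truncation convention are illustrative assumptions, not Andrew's actual code) solves such a system with plain Gaussian elimination and exhibits the b → 1 coefficient limit discussed above:

```python
import math

def slog_coeffs(b, n):
    """Truncated linear system from slog(b^z) = slog(z) + 1.

    With slog(z) ~ -1 + sum_{k=1}^n s_k z^k and u = log(b), matching
    powers z^j for j = 0..n-1 gives
        sum_{k=1}^n s_k * ((k*u)^j / j! - [j == k]) = [j == 0],
    since (b^z)^k = exp(k*u*z) has j-th Taylor coefficient (k*u)^j / j!.
    """
    u = math.log(b)
    A = [[(k * u) ** j / math.factorial(j) - (1.0 if j == k else 0.0)
          for k in range(1, n + 1)]
         for j in range(n)]
    rhs = [1.0] + [0.0] * (n - 1)
    # dependency-free Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            rhs[r] -= f * rhs[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (rhs[r] - sum(A[r][c] * s[c]
                             for c in range(r + 1, n))) / A[r][r]
    return [-1.0] + s   # s_0 = -1, then s_1 .. s_n

def slog(b, z, n=12):
    return sum(sk * z ** k for k, sk in enumerate(slog_coeffs(b, n)))

# near b = 1 the coefficient vector approaches (-1, 0, ..., 0, 1),
# i.e. the n-th approximation approaches z^n - 1:
print(slog_coeffs(1.000001, 6))
# and the functional equation holds approximately near z = 0:
b, z = 1.5, 0.1
print(slog(b, b ** z) - slog(b, z))   # close to 1
```

The printed coefficient vector for b close to 1 matches the limit $z^n - 1$ stated above.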
What I can prove is, for the nth approximation of the super-logarithm (the finite series):

As $b \rightarrow 1$: $s_{nk} \rightarrow (-1, 0, 0, \cdots, 0, 0, 1)$, with n+1 entries.

As $b \rightarrow \infty$: $s_{nk} \rightarrow (-1)^{k-1} \binom{n}{k}$, with k = 0 .. n.

From this it is easy to show that $\sum_{k=0}^{n} (-1)^{k-1} \binom{n}{k} z^k = -(1-z)^n$.

Andrew Robbins

Gottfried, Ultimate Fellow (Posts: 789, Threads: 121, Joined: Aug 2007) - 03/17/2008, 07:52 AM (last modified: 03/17/2008, 05:32 PM)

Hi - I reread this thread yesterday. I have now tried a matrix version of the slog, and I'd like to see whether the two methods agree. In short: I assume most of the matrices are known; as a reminder, I only recall that the matrix Ut performs the decremented iterated exponentiation Ut(x) to base t:

Code:
  V(x)~ * Ut   = V(y)~   where y = t^x - 1 = Ut(x)
  V(x)~ * Ut^h = V(y)~   where y = Ut°h(x)

To ask for the superlog is to ask for the height h, given y and x = 1. This can be solved surprisingly easily using the known eigenmatrices Z of Ut. Let u = log(t); then

Code:
  Ut = Z * dV(u) * Z^-1

and also

Code:
  Ut^h = Z * dV(u^h) * Z^-1

so the equation

Code:
  V(x)~ * Ut^h = V(y)~

can be rewritten as

Code:
  V(x)~ * (Z * dV(u^h) * Z^-1) = V(y)~
  (V(x)~ * Z) * dV(u^h) * Z^-1 = V(y)~

and it follows that

Code:
  (V(x)~ * Z) * dV(u^h) = V(y)~ * Z

Since we need only the second column of the evaluated parentheses, let's denote the entry in row r of the second column of Z as z_r, and define the function

Code:
  g(x) = z0 + z1*x + z2*x^2 + z3*x^3 + ...

Then we have

Code:
  g(x) * u^h = g(y)

and

Code:
  u^h = g(y)/g(x)
  h   = log( g(y)/g(x) ) / log(u)

follows. If g(x) diverges conditionally, it may still be Euler-summable (see ** below). For base t = 2 the sequence of z_k diverges at a small rate, and the z_k seem to be all positive; so if x is negative the series is surely Euler-summable, and if |x| < 1/2 it is even convergent.
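The Euler summability invoked here can be illustrated with the classical Euler transformation of an alternating series, which is what applies to g(x) for negative x (positive coefficients, alternating signs). The helper below (the name `euler_sum` is an illustrative assumption) recovers the standard Euler-sum values of two divergent series:

```python
def euler_sum(a):
    """Sum the alternating series sum_{n>=0} (-1)^n * a[n] via the
    classical Euler transformation:
        S = sum_{k>=0} (-1)^k * (Delta^k a_0) / 2^(k+1),
    where Delta is the forward-difference operator on the a_n."""
    row = list(a)
    total = 0.0
    for k in range(len(a)):
        total += (-1) ** k * row[0] / 2 ** (k + 1)
        # next row of the forward-difference table
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

# Grandi's series 1 - 1 + 1 - 1 + ...  ->  1/2
print(euler_sum([1.0] * 20))
# divergent geometric series 1 - 2 + 4 - 8 + ...  ->  1/3
print(euler_sum([2.0 ** n for n in range(25)]))
```

In both cases the transform converts a non-convergent partial-sum sequence into a rapidly convergent one, which is the effect the weights e_k below are meant to achieve for g(x).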
Example terms z_r for t = 2:

Code:
  z_r = [0, 1.0000000, 1.1294457, 1.1985847, 1.2474591, 1.2856301, 1.3170719, 1.3439053, ... ]

By the definition b = t^(1/t) we may use this also for the tetration function (or better, the "iterated exponential" in Andrew's wording) of base b, since

Code:
  Tb(x) = Ut(x')"   where x' = x/t - 1 and x" = (x+1)*t

(the trailing " applies the back-substitution to the result), and the eigenvalues of Tb^h are the same as those of Ut^h (namely dV(u^h)). But remember that this use of a fixpoint shift gives results that vary with the choice of fixpoint, however small the differences may be, according to our current state of discussion.

So for

Code:
  Ut°h(x) = y

the height function hghU() gives

Code:
  h = hghU_t(y) = log( g(y)/g(1) ) / log(log(t))

and for

Code:
  Tb°h(x) = y

the height function hghT() gives

Code:
  h = hghT_b(y) = slog_b(y)
    = log( g(y')/g(1') ) / log(u)
    = log( g(y/t - 1)/g(1/t - 1) ) / log(log(t))

The other good news is, 1) that I also have an extremely simple recursive eigensystem solver for Ut, which needs only about 3-7 seconds for 96x96 matrices (if the Stirling numbers are precomputed), depending on float precision, and 2) that we need only the second column for this computation, so the algorithm can be much reduced.

Gottfried

(**) Euler summation attaches weight coefficients e_k to the terms of g(x); the Euler-summed variant eg(x) of the dimension-truncated power series is then

Code:
  eg(x) = e0*z0 + e1*z1*x + e2*z2*x^2 + e3*z3*x^3 + ... + e_dim*z_dim*x^dim

where the e_k are determined by the given matrix truncation size and an appropriate Euler order.
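The height formula h = log( g(y)/g(x) ) / log(u) can be sketched numerically without the eigensystem machinery: up to normalization, g plays the role of a Schröder function for Ut(x) = t^x - 1 at its fixed point 0, satisfying g(Ut(x)) = u*g(x), and it can be approximated by the Koenigs limit Ut°n(x)/u^n. A minimal Python sketch (function names are illustrative; this is not Gottfried's matrix solver):

```python
import math

t = 2.0
u = math.log(t)  # eigenvalue of Ut at its fixed point 0 (here |u| < 1)

def Ut(x):
    """Decremented exponentiation to base t: x -> t^x - 1."""
    return t ** x - 1.0

def g(x, n=60):
    """Schroeder-style function for Ut at 0, with g(Ut(x)) = u * g(x),
    approximated by the Koenigs limit Ut^n(x) / u^n."""
    for _ in range(n):
        x = Ut(x)
    return x / u ** n

def height(x, y):
    """Iteration height h with Ut^h(x) = y, via u^h = g(y)/g(x)."""
    return math.log(g(y) / g(x)) / math.log(u)

x = 0.5
y = Ut(Ut(Ut(x)))    # three integer iteration steps
print(height(x, y))  # close to 3
```

The same formula then returns non-integer heights for intermediate y, which is exactly the superlog (height) interpolation described above.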
Gottfried Helms, Kassel

Gottfried, Ultimate Fellow (Posts: 789, Threads: 121, Joined: Aug 2007) - 03/17/2008, 06:09 PM

Gottfried Wrote: h = hghT_b(y) = slog_b(y) = log( g(y')/g(1') ) / log(u) = log( g(y/t - 1)/g(1/t - 1) ) / log(log(t))

Actually this means that the hgh functions compute height differences. Writing lgg(x) = log(g(x))/log(u), we have more generally

Code:
  hghT_b(x1, x0) = lgg(x1') - lgg(x0')
  hghU_t(x1, x0) = lgg(x1)  - lgg(x0)

and

Code:
  hghT_b(x) = lgg(x') - lgg(1')
  hghU_t(x) = lgg(x)  - lgg(1)

may be taken as short notations for the default case. (Remember that the functions g and lgg depend on the base parameters b and/or t.)

Gottfried

Gottfried Helms, Kassel

tommy1729, Ultimate Fellow (Posts: 1,493, Threads: 356, Joined: Feb 2009) - 06/26/2009, 10:51 PM

(08/19/2007, 09:50 AM) bo198214 Wrote: I just verified numerically that the superlog critical function (originally defined on $(0,1)$) for base $e$ satisfies $t(x)=t(\log(x))+1$ for all $x\in(1,2)$. So it is quite sure that the piecewise defined slog is also analytic. Congratulation Andrew! (However once someone has to prove this rigorously and also compute the convergence radius.)

As for the radius of convergence: let A be the smallest fixpoint, so b^A = A. Then (Andrew's!) slog(z) with base b should satisfy slog(z) = slog(b^z) - 1, hence slog(A) = slog(b^A) - 1 = slog(A) - 1, which forces |slog(A)| = oo. So the radius should be smaller than or equal to |A|. Maybe I missed it, but I didn't see that mentioned.

This also makes me doubt, especially considering that for every base b the slog should 'also' have a period (together with the oo value at the fixed point A mentioned above), so that |slog(A + period)| = oo too! However, that's just an emotion and no math, of course...

(By the way, the video link mentioned in this thread doesn't work for me, bo; maybe it isn't online anymore?
)

bo198214, Administrator (Posts: 1,395, Threads: 91, Joined: Aug 2007) - 06/27/2009, 09:39 AM

(06/26/2009, 10:51 PM) tommy1729 Wrote: as for the radius of convergence: let A be the smallest fixpoint => b^A = A ... => slog(A) = slog(A) - 1 => abs( slog(A) ) = oo, so the radius should be smaller or equal to abs(A)

It is valid not only for Andrew's slog but for every slog, and not only for the smallest but for every fixed point. However, not completely: one cannot expect the slog to satisfy slog(e^z) = slog(z) + 1 *everywhere*. It is a bit like the logarithm, which does not satisfy log(ab) = log(a) + log(b) *everywhere*; what we can say is that log(ab) = log(a) + log(b) *up to branches*, i.e. for every occurrence of log in the equation there is a suitable branch such that the equation holds. The same can be said about the slog equation. So if we can show that Andrew's slog satisfies slog(e^z) = slog(z) + 1 e.g. for $z,e^z\in \{\zeta: |\zeta| <|A|\}$, then it must have a singularity at A.

Quote: also this makes me doubt - especially considering that for every base b slog should 'also' have a period ...

I showed this in an earlier post (I don't remember which one): $\operatorname{slog}(x+\frac{2\pi i}{\ln(b)}) = \operatorname{slog}(\exp_b(x+\frac{2\pi i}{\ln(b)}))-1 = \operatorname{slog}(x)$, again up to branches.

Quote: (btw the video link mentioned in this thread doesnt work for me bo, maybe it isnt online anymore?)

Well, it seems not to exist anymore. It didn't give concrete solutions to our problems; it was just interesting that others also deal with the solution of infinite equation systems and approximation via finite equation systems.
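The fixed point A entering tommy1729's radius bound can be computed numerically for base e. A short Newton-iteration sketch (the starting value and the name `exp_fixed_point` are illustrative assumptions):

```python
import cmath

def exp_fixed_point(z=0.5 + 1.0j, steps=50):
    """Newton iteration on f(z) = e^z - z to locate the primary
    complex fixed point of the base-e exponential."""
    for _ in range(steps):
        f = cmath.exp(z) - z
        df = cmath.exp(z) - 1.0
        z = z - f / df
    return z

A = exp_fixed_point()
print(A)        # roughly 0.3181 + 1.3372j
print(abs(A))   # roughly 1.374
```

Since slog(A) = slog(A) - 1 has no finite solution, A must be a singularity of the base-e slog, so |A| (roughly 1.374) bounds its radius of convergence about 0, consistent with the discussion above.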

