Approximation method for super square root

Ztolk — Junior Fellow — Posts: 21, Threads: 4, Joined: Mar 2010
03/23/2010, 01:09 AM (This post was last modified: 03/23/2010, 01:10 AM by Ztolk.)

I came up with this method for calculating x where x^x = y. It's similar to a method for calculating square roots. The procedure is as follows:

1. Pick the number y whose super root you would like to find.
2. Find the self-power that is closest to and less than y. We call this t^t. For example, if y = 4000 then t = 5 and t^t = 3125.
3. Calculate the difference between y and t^t. So for y = 4000, t = 5, we get 875.
4. Divide that difference by the interval between the self-power below y and the one above it. For y = 4000, this interval is 6^6 - 5^5 = 46656 - 3125 = 43531.
5. Add the quotient to t. With our example the result is 5 + 875/43531, which is about 5.0201. The actual value is about 5.094, so the approximation is about 1.5% off.

The approximation generally gets more accurate as y and t get higher. Within the regime of a specific t, the least accurate value occurs when y is about 27.4% of the way through the interval (for example, for t = 5 the approximation is least accurate at y = 3125 + 11927 = 15092). I haven't figured out why this is. The error at this worst point decreases with increasing t; the integer for which this yields the least accurate value is 2, at about 85%. This method is much more accurate than a first-order Taylor approximation of the actual solution. I haven't yet been able to apply this to higher-order tetrations.

bo198214 — Administrator — Posts: 1,395, Threads: 91, Joined: Aug 2007
03/23/2010, 10:54 AM

To compute the inverse function of a strictly increasing function $f$, a method that always works is bisection. Say you start with an integer $t_0$ as you described. Then you know the real value $t$ such that $f(t)=y$ must lie in the interval $(t_0,t_0+1)$; set $u_0=t_0+1$.
Next you divide the interval $(t_0,u_0)$ into two halves by $w_0=\frac{t_0+u_0}{2}$, and you know that $t$ must lie either in the left half $(t_0,w_0)$ or in the right half $(w_0,u_0)$: in the first case $f(t_0)<y<f(w_0)$, and in the second case $f(w_0)<y<f(u_0)$. You choose the new interval $(t_1,u_1)$ accordingly, and do bisection on it again. By repeated bisection you can compute $t$ to arbitrary precision. (In the above argument I assumed that the solution is never on the boundary of the interval; if it is, one can abort the bisection, having found the solution.) For a more concise description see Wikipedia. There are also other root-finding algorithms, like Newton's method, etc.

Ztolk — Junior Fellow — Posts: 21, Threads: 4, Joined: Mar 2010
03/23/2010, 02:33 PM

(03/23/2010, 10:54 AM)bo198214 Wrote: To compute the inverse function of a strictly increasing function $f$ a method that always works is bisection. [...]

Yeah, it's not the most accurate method, but it's done in one step without repetition or recursion, so it would be good for mental math enthusiasts.
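Ztolk's one-step interpolation can be sketched in Python as follows (the function name `approx_ssqrt` is my own, not from the thread; the thread only describes the steps):

```python
def approx_ssqrt(y):
    """One-step approximation of x with x**x = y, for y >= 1,
    by linear interpolation between consecutive self-powers."""
    # Step 2: find the integer t with t**t <= y < (t+1)**(t+1)
    t = 1
    while (t + 1) ** (t + 1) <= y:
        t += 1
    # Steps 3-5: offset into the interval, divided by the interval width
    return t + (y - t ** t) / ((t + 1) ** (t + 1) - t ** t)

print(approx_ssqrt(4000))  # ≈ 5.0201, vs. the true super square root ≈ 5.094
```

Running it on the post's example y = 4000 reproduces 5 + 875/43531 ≈ 5.0201.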
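bo198214's bisection procedure can likewise be sketched in Python (a minimal sketch; the function name and the tolerance are my choices, and f(x) = x^x is assumed increasing, which holds for x >= 1):

```python
def super_sqrt_bisect(y, tol=1e-12):
    """Solve x**x = y for y > 1 by bisection."""
    f = lambda x: x ** x
    # Bracket the root: integer t0 with f(t0) <= y <= f(t0 + 1)
    t0 = 1
    while f(t0 + 1) < y:
        t0 += 1
    lo, hi = t0, t0 + 1
    # Halve the interval until it is narrower than tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid  # root lies in the right half
        else:
            hi = mid  # root lies in the left half
    return (lo + hi) / 2

print(super_sqrt_bisect(4000))  # ≈ 5.094
```

Unlike the one-step approximation, this takes on the order of 40 iterations for 1e-12 precision, so it is a machine method rather than a mental-math one.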

