Extending tetration to base e - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Extending tetration to base e (/showthread.php?tid=9)

Extending tetration to base e - jaydfox - 08/10/2007

See my exact solution for base e^(1/e) for background. Assuming the solution I provided is accepted as unique (i.e., correct), it should lend strength to the claim that the following solution is correct as well.

Let's start with the basics. Here, for brevity, assume that eta is e^(1/e). (Eta looks like an "n", but it's effectively a Greek letter "e", and given its relationship to the constant e, I thought it would make a good symbol.)

Okay, so far, so good. Now, try this one on for size:

Interesting... But is this useful? Well, let's go one more:

Hopefully it's apparent where I'm going with this. Let y be the mth tetration of e, with m = 5 sufficient to exceed machine precision in all practical circumstances (when evaluating delta at m-1 = 4).

Fascinating. But still, is this useful? Well, for this, we need a new function, based on tetration of base eta, but one which equals e at negative infinity and equals something greater than e at y = 0. Think of it as taking eta^^y just below its asymptote, all the way to infinity and beyond, then wrapping around at negative infinity, but now just above the asymptote instead of below. I call this new function eta_b-check, where b is a particular base we're interested in. Here's how it looks, omitting the b for the assumed base e.

(The proof that there is a unique solution for eta-check is very similar to the one I gave for the tetration of eta. I can provide it in a separate post if required.)
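The climb toward e can be sketched numerically. A minimal illustration (my own, not code from the post), assuming only the definition eta = e^(1/e): repeated base-eta exponentiation builds the tower eta, eta^eta, eta^eta^eta, ..., which approaches e from below but never reaches it.

```python
import math

# Illustration only: eta = e^(1/e) is the largest base whose
# exponential tower still converges; its limit is exactly e.
eta = math.exp(1 / math.e)

x = 1.0
for n in range(100):
    x = eta ** x        # one more story on the tower

print(x)                # below e and climbing
print(math.e - x)       # small positive gap remaining
```

The gap closes very slowly (the fixed point at e is parabolic), which foreshadows the iteration-count complaints later in the thread.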
As it turns out, we can even find the exact value of eta_b-check(0) to arbitrary precision, relative to any particular base. And I know that I'm not the first to find this function. At the very least, Peter Walker described its inverse, or something very similar, if I'm reading his paper correctly. So this function is not new. However, I haven't seen the following proof (Peter had something very close, so again, I can't take all the credit, if I can even take any at all).

It turns out there exists an exponent mu for eta-check such that:

I say sufficiently large, but really we're talking about a limit as we go to infinity. Beyond a certain point, though, mu increases essentially linearly with y, with extreme precision (this has a very significant interpretation, which I'll get to if someone doesn't beat me to the punch). For y = 4 or 5 or so, computer precision is not sufficient to tell the limiting value of mu from the "approximate" value. Anyway, here comes the fun part:

and

For sufficiently large y, this function is effectively exact (it is exact when you take the limit to infinity). Furthermore, notice that I swapped out the integer m for the real exponent y. Finally, notice that if eta-check is unique as I claim it is, then the tetration of base e that I just defined has a very strong claim on uniqueness. In other words, if some other method does not agree, I claim that this function is "correct", and the other function is displacing its exponents by some cyclic function.

RE: Extending tetration to base e - jaydfox - 08/10/2007

By the way, the fact that tetration was so easily extended to base e should be a very strong indicator that all bases greater than eta can be solved by this method. The exact solution is very slow to converge, requiring ridiculous iteration counts, and double-precision math is out of the question if you want more than a handful of digits.
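The slow convergence is easy to see numerically. A hedged sketch (my own construction, not the author's code) of the upper branch: above the asymptote, repeated base-eta logarithms, log_eta(x) = e·ln(x), pull a value down toward e from above, mirroring how base-eta exponentials push values up toward e from below. Because the fixed point at e is parabolic (the derivative there is exactly 1), the gap shrinks only like 1/n, which is why the iteration counts get ridiculous.

```python
import math

# Illustration only: log base eta is e * ln(x), since
# log_eta(x) = ln(x) / ln(eta) and ln(eta) = 1/e.
x = 10.0                      # any starting value above e
for n in range(1000):
    x = math.e * math.log(x)  # one base-eta logarithm

print(x)                      # just above e
print(x - math.e)             # small positive gap, shrinking like ~1/n
```

A thousand iterations still leave a gap of a few thousandths, consistent with the post's complaint that brute-force iteration is hopeless for high precision.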
However, I've already found some "helper" functions that reduce the number of iterations necessary, and for values of eta-check just above e, i.e., e(1+delta), we can program iterated exponential functions (base eta) with powers of 2 as the iteration counts:

Notice that when you're iterating, you pull the factor of e out, and you don't put it back in, because each time you iterate, you're going to have to pull it out again anyway. The cutoff value k could be determined algorithmically, and the constants a_m,n could be calculated once at startup. Similar helper functions could be written for iterated logarithms. And depending on the accuracy you require, it's not out of the question to have helper functions up to 16 or 32 iterations, more if you've got a good math library. I'm currently trying to write 1-, 2-, 4-, 8-, and 16-iteration helper functions for eta^y and log_eta(y), using GMP. My time will be limited this weekend, so I suspect I won't finish that project for a few weeks.

An interpolation function for eta-check(-n), for large n, could be used to overcome the need to iterate a ridiculous number of times to get to the point where linear interpolation is accurate. Peter Walker's paper has a function which might be the correct one, or at least a good second-order approximation. It needs to be transformed, because he was solving for the superlog. A second-order approximation can take 5 digits of accuracy up to 15, reducing the iteration count from 10^15 to just 10^5. (I know, "just" 10^5 iterations, is that all?)

I get about 11-12 digits of accuracy using an 8000-iteration variant, which works out to about 1 part in 8000^3, and about an extra digit compared to a 4000-iteration variant. I have the constants calculated in a table, sufficient to go up to 32000 iterations, but I don't have the patience to wait for it. In theory, though, that one should give me 13-14 digits of accuracy, almost sufficient for subsequent manipulation with double-precision math.
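The "pull the factor of e out" trick can be made concrete. Writing x = e(1+delta), one base-eta exponential gives eta^x = e^(x/e) = e·e^delta, so expressed in delta alone the step is simply delta → exp(delta) − 1, and the factor of e never needs to be reinstated between steps. The function names below are my own illustration; a real power-of-2 helper would replace the loop with a precomputed truncated series (the a_m,n constants) applied in one shot.

```python
import math

def eta_exp_step(delta):
    """One base-eta exponential, tracked via delta where x = e*(1+delta)."""
    return math.expm1(delta)       # exp(delta) - 1, accurate for tiny delta

def eta_exp_iterated(delta, count):
    """count base-eta exponentials on the delta representation.
    A power-of-2 helper would collapse this loop into one series evaluation."""
    for _ in range(count):
        delta = eta_exp_step(delta)
    return delta

d = eta_exp_iterated(1e-6, 8)
print(d)   # barely above 1e-6: near the asymptote each step hardly moves
```

Using `expm1` rather than `exp(delta) - 1` avoids the catastrophic cancellation that would otherwise destroy precision exactly where this representation matters most.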
Once I have my 16-iteration helper functions written, I should be able to push my work out to 20 digits of accuracy. For the interpolation function, I'll most likely end up using polynomial interpolation, with the degree of the polynomial limited by the iteration count. The good news is, with sufficient iterations, you should be able to get good third-order precision, which could extend 6 digits to 24 ("only" a million iterations, but with helper functions, this can be reduced to a few tens of thousands). I'm not sure who needs more than 24 digits of accuracy, but if you do, well, it's going to take a few minutes to crunch the numbers.

Luckily, we can probably find very precise answers over a very short interval, and use those to figure out the first few derivatives. Even though accuracy might be limited to 24 digits, for example, precision over a short interval of length 0.001 should easily be 27-30 digits. For a large one-time cost, a distributed effort could even get 50-100 digits or more, with each computer calculating just one point. A large collection of points could be used to build a table of, say, all 999 points in the interval (-1, 0) with a spacing of 0.001, plus all 21 points in the interval [-0.00001, 0.00001] with a spacing of 0.000001. And I'm leaving out the possibility that better helper functions are available to speed up convergence by another few factors. Anyway, lots to think about, but I need to get back to my mad scientist "lab".
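For the polynomial-interpolation step over such a table, Neville's algorithm is one standard choice. This is a generic sketch of that step, my own construction rather than the author's implementation, run against a toy table of exp values as a stand-in (the actual eta-check table values aren't reproduced in the post).

```python
import math

def neville(xs, ys, y):
    """Evaluate the unique polynomial through (xs[i], ys[i]) at y
    using Neville's recursive scheme."""
    p = list(ys)
    n = len(p)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((y - xs[i + k]) * p[i] + (xs[i] - y) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# Toy stand-in table on a 0.001-spaced grid, mimicking the proposed
# table spacing; a real table would hold precomputed eta-check values.
xs = [-1.002, -1.001, -1.000, -0.999]
ys = [math.exp(x) for x in xs]
estimate = neville(xs, ys, -1.0005)
print(estimate - math.exp(-1.0005))   # tiny cubic-interpolation error
```

With four points at 0.001 spacing the cubic interpolation error on a smooth function is far below double precision, which is consistent with the post's point that a modest table can carry accuracy well past what direct iteration delivers.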