Change of base formula for Tetration

08/16/2007, 09:32 PM
jaydfox Wrote: Posts are missing.
Which one? I simply moved some posts from the parabolic iteration thread (that didn't belong there) to here.
08/16/2007, 09:47 PM
bo198214 Wrote: Sorry Jay, we don't speak about the same thing.
Well, yeah, if mu_b(a) >= 2, then you can't solve for a^^(mu_b(a)). I still don't see why this is a problem. For a=3.0, b=1.5, the constant mu_b(a) is going to be something in the neighborhood of 10. So, yeah, if you try to set x=10 or thereabouts, it's not going to converge properly.
~ Jay Daniel Fox
08/16/2007, 09:49 PM
bo198214 Wrote: jaydfox Wrote: Posts are missing.
Hmm, it seems to be working now. For a while there, it was only showing one page, not the four pages we're currently up to. Not sure why, but it's working now. False alarm.
~ Jay Daniel Fox
08/16/2007, 09:53 PM
jaydfox Wrote: bo198214 Wrote: jaydfox Wrote: Posts are missing.
I saw missing pages too. Seems to be some kind of caching problem. If you find a missing post via search, the pages seem to reappear ... Ok, but back to the topic.
jaydfox Wrote: For a=3.0, b=1.5, the constant mu_b(a) is going to be something in the neighborhood of 10. So, yeah, if you try to set x=10 or thereabouts, it's not going to converge properly.
Ok, this was a misreading of mine; I thought you said that in this case. Ok, it seems to converge for too, though there is no proof. So what about the case then? I mean this is what we want: and .
Okay, I've turned my attention back to my solution, as I'd like to get the library (in SAGE) completed. The key to making use of the change of base formula is being able to exponentiate very large numbers while still maintaining a reasonable amount of "precision".

Eventually, the best way I could find was to do all my math on a double-logarithmic scale; why this is the best scale for such large operations can be demonstrated easily. Once we've converted any number into its double-logarithmic equivalent, we can exponentiate in one base by exponentiating in the master base and adding a constant. Because we're working with double logarithms, we can work with very large numbers at full machine precision, without having to have overly large exponents. This is useful for iterating exponentiation in base a. Essentially, once the double logarithm of our working value is greater than the number of bits of machine precision we're using, there's no point in further exponentiation, because the addition will underflow and the result won't be affected. In other words, at that point, exponentiation in base a and base b is indistinguishable at machine precision (if you haven't understood my change of base formula before now, this should be an "Aha!" moment).

Note that if we're attempting to do a change of base from base a to base b, then we could use a double-logarithmic scale in a third base c. Base e is the simplest base to work with, for two reasons: first, logarithms and exponentiations all use base e under the hood anyway, and second, the double natural logarithm of my cheta function is the iteration function of the decremented natural exponentiation which we've discussed elsewhere. However, situations may arise where a different base c is advisable.
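The double-logarithmic bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the actual SAGE library from the post; the helper names to_dlog, from_dlog, and exp_step are my own:

```python
import math

def to_dlog(x):
    """Convert x (> 1) to the double-logarithmic scale: v = ln(ln(x))."""
    return math.log(math.log(x))

def from_dlog(v):
    """Invert to_dlog: x = exp(exp(v))."""
    return math.exp(math.exp(v))

def exp_step(v, a):
    """One exponentiation x -> a**x, performed on the double-log scale.

    If y = a**x, then ln(ln(y)) = ln(x * ln(a)) = ln(x) + ln(ln(a)):
    exponentiation in base a becomes an exponentiation in the master
    base e plus the additive constant ln(ln(a)).
    """
    return math.exp(v) + math.log(math.log(a))

# Sanity check: 2**10 = 1024, computed entirely on the double-log scale.
print(from_dlog(exp_step(to_dlog(10.0), 2.0)))
```

Because only v = ln(ln(x)) is ever stored, x itself can be astronomically large (far beyond floating-point range) while v stays small.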
~ Jay Daniel Fox
I suppose it goes without mentioning that you simply reverse the process to take logarithms of very large numbers.
Here we see the tools to iteratively exponentiate in base a, then iteratively take logarithms in base b.
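Reversing the process is one more line on the same double-logarithmic scale: a base-b logarithm subtracts the constant ln(ln(b)) before the master-base logarithm, exactly undoing the exponentiation step. A self-contained sketch (my own helper names, not from the original post):

```python
import math

def exp_step(v, a):
    # x -> a**x on the v = ln(ln(x)) scale
    return math.exp(v) + math.log(math.log(a))

def log_step(v, b):
    # x -> log_b(x) on the same scale: the exact inverse of exp_step(v, b)
    return math.log(v - math.log(math.log(b)))

# Round trip: exponentiate, then take the logarithm in the same base.
v = 1.3
print(log_step(exp_step(v, 2.0), 2.0))  # recovers v
```

Using different bases in the two steps is precisely the iterative exponentiate-in-a, take-logs-in-b procedure described above.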
~ Jay Daniel Fox
jaydfox Wrote: Essentially, once the double logarithm of our working value is greater than the number of bits of machine precision we're using, there's no point in further exponentiation, because the addition will underflow and the result won't be affected.
I should point out that we need to add an absolute value to the equation. Also, it's more useful from an implementation standpoint to solve for z in the underflow condition. By solving for z, we can find a "cap", a value below which we continue exponentiating; once we've exceeded the cap, there's no point in exponentiating further. Having a cap lets us perform a simple comparison (internally, a subtraction), rather than exponentiating and then testing whether we've exceeded precision. Knowing the precision in bits n and the bases a and b, we can set a cap for our iterative exponentiations. Once we pass the cap, we begin doing iterated logarithms in the other base (remember, all this is in the context of performing a change of base using my formula). With these tools in place, we can solve a change of base.

The only hole I've identified in the formula is that it is only provably correct for integer iteration counts. In between, it gives infinitely differentiable results which satisfy the iterative exponential property, but there's no guarantee that it's "the" correct solution for fractional iterations. Given the simplicity and the direct correlation between the superlogarithmic "constant" and the logarithmic constant, I had been lulled into assuming that if it was correct for all integer iterations, it would be correct for fractional iterations as well. To assume otherwise would lead to cyclic (wavy) curves in the graph, when with my formula you would get an asymptotically straight line. It was certainly unexpected to me that such a graph would be wavy, as it would imply that tetration "knows" where it is.

To put this into perspective, think about the iterated multiplication formula (you know, exponentiation). Let's say that we know that 2^4 equals 4^2 and 2^6 equals 4^3. In fact, let's say that for all integers k, we know that 2^(2k) = 4^k. Wouldn't you expect 2^3 to equal 4^1.5? That's essentially the basis for my initial confidence that my change of base formula was correct. I had found that it was correct for all integer tetrations, so why should I have the slightest concern that it wouldn't be correct for fractional tetrations? It would be as absurd as doubting that 2^3 equals 4^1.5.

And yet, Andrew's solution does indeed show this very problem. It behaves as though 2^3 equaled 4^1.487 or something like that. Sure, the error is small, but it's also unexpected. Barring a really good reason, I'd be perfectly fine saying that Andrew's solution was in error, not mine. And yet the positive/convex nature of the odd derivatives of his solution is so beautiful as to make all my doubts melt away. I cannot fathom that Andrew's solution is wrong. Which means that my change of base formula is on the right track, but for whatever reason it requires a cyclic shifting constant, a constant which "knows" the underlying exponent. It'd be like saying that 4^x = 2^(x*log_2(4,x)), where log_2(4,x) is no longer a constant, but a function of x that is cyclic though very nearly constant. Absurd! But if I had to choose between this absurdity and denying the beauty of Andrew's solution, I'll take the absurdity. The superlogarithmic constant would appear to be a function of the underlying tetrational exponent, cyclic though very nearly constant.

Of course, further proof is needed. Beauty is fine and dandy, but my solution had a beauty of its own, and I now consider it wrong. Beauty alone cannot prove Andrew's solution either. We must find some underlying reason why having all the odd derivatives be convex is a desirable property, besides the fact that there is (almost certainly) only one such solution per base. Uniqueness alone is insufficient, because my solution is unique in its own way and based on "the" unique solution for base eta.
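The cap idea above can be sketched concretely: iterate exponentiation in base a until the double logarithm passes the cap, then undo the same number of iterations with base-b logarithms. Once the cap exceeds the working precision, raising it further stops changing the answer, which is the change-of-base mechanism in action. This is an unofficial illustration with my own names, not the SAGE implementation; with IEEE doubles the cap must stay below roughly 700 so the intermediate exp() does not overflow:

```python
import math

def exp_step(v, a):
    # x -> a**x on the double-log scale v = ln(ln(x))
    return math.exp(v) + math.log(math.log(a))

def log_step(v, b):
    # x -> log_b(x) on the double-log scale
    return math.log(v - math.log(math.log(b)))

def change_of_base(x, a, b, cap=50.0):
    """Iterate x -> a**x until v = ln(ln(value)) exceeds the cap,
    then take the same number of base-b logarithms."""
    v = math.log(math.log(x))
    n = 0
    while v < cap:
        v = exp_step(v, a)
        n += 1
    for _ in range(n):
        v = log_step(v, b)
    return math.exp(math.exp(v))

# The two results agree closely, and once the cap exceeds the machine
# precision they stop moving entirely: above the cap, exponentiation in
# base a and base b is indistinguishable at machine precision.
print(change_of_base(3.0, 3.0, 2.0, cap=5.0))
print(change_of_base(3.0, 3.0, 2.0, cap=50.0))
```

The cap comparison is a single subtraction per iteration, matching the implementation point in the post.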
~ Jay Daniel Fox
08/31/2007, 09:03 AM
jaydfox Wrote: To put this into perspective, think about the iterated multiplication formula (you know, exponentiation). Let's say that we know that 2^4 equals 4^2 and 2^6 equals 4^3. In fact, let's say that for all integers k, we know that 2^(2k) = 4^k.
Though I did not understand everything you wrote, this is a base problem of tetration (imposed by right-bracketing). For most values is: particularly we cannot define the nth super root by because .
Quote: Barring a really good reason, I'd be perfectly fine saying that Andrew's solution was in error, not mine. And yet the positive/convex nature of the odd derivatives of his solution is so beautiful as to make all my doubts melt away. I cannot fathom that Andrew's solution is wrong.
Attributes like "wrong" or "right" are completely inappropriate here. In the realm of mathematics we assign "right" if we can prove it and "wrong" if we can disprove it. Of course much research is a pursuit of beauty, but that is in the eye of the beholder. I would leave it there.
Quote: It'd be like saying that , where log_2(4,x) is no longer a constant, but a function of x that is cyclic though very nearly constant. Absurd!
I mean the interrelation between the t-th super root and super powers for real t was not yet considered on this forum. How much differs from the t-th super root?
Quote: We must find some underlying reason why having all the odd derivatives be convex is a desirable property, besides the fact that there is (almost certainly) only one such solution per base. Uniqueness alone is insufficient, because my solution is unique in its own way and based on "the" unique solution for base eta.
As far as I know we couldn't prove any uniqueness conditions yet.
05/03/2009, 10:20 PM
(08/31/2007, 03:51 AM)jaydfox Wrote: Note that if we're attempting to do a change of base from base a to base b, then we could use a double-logarithmic scale in a third base c. Base e is the simplest base to work with, for two reasons: first, logarithms and exponentiations all use base e under the hood anyway, and second, the double natural logarithm of my cheta function is the iteration function of the decremented natural exponentiation which we've discussed elsewhere.
??? How does the change of base formula for tetration relate to exp(z) - 1 ??? I would recommend - in case this is important and proved - that more attention be given to it on the forum and/or FAQ.
