Change of base formula for Tetration

08/13/2007, 02:58 PM
PS. Just a sidenote, I understand that () and (slog) are different functions, but if you swap them in the above equations, then you get the formula for iterated exponentials, which I found interesting.
andydude Wrote:I'm going to rewrite your equation in the form: I think where your simplifications break down is that you're taking an infinite iterated logarithm of an infinite iterated exponential, but you're not taking into account that you're iterating each exactly the same number of times (edit: aside from the additive superlogarithmic constant). I don't have time this morning to show why this matters, but the math I provided above should have been sufficient to point this out.

For any finite n, the formula works, and it gets more and more accurate as n goes up, so why would you expect it to suddenly stop working in the limit as n goes to infinity?

andydude Wrote:PS. Just a sidenote, I understand that () and (slog) are different functions, but if you swap them in the above equations, then you get the formula for iterated exponentials, which I found interesting.

Actually, the connection is very important. In fact, it's the reason that we can solve for . Technically, even if you tetrate an infinite number of times, you can't get above the asymptote. So really, the function is a limiting case of , with k an appropriate factor that I'll define in a later post. I have a rough idea of how to explain it, but I don't want to describe it incorrectly and confuse anyone.

Anyway, once you start exponentially iterating values smaller than eta, you can never get above their associated asymptotes. However, if you start at positive infinity (which means taking a limit) and use iterated logarithms, you can work your way back down to the higher asymptote. That's why I think your notation actually works a little better, even though we're essentially describing the same thing.

By the way, how much of what I'm explaining in my various posts is already known? Some of it I've seen before, some of it I've seen in an alternative format that masked the meaning, and some of it seems original, but I can't really be sure. I'm wondering if I'm on to anything publishable.
I've never been published, so it would be cool if I were on to something.
~ Jay Daniel Fox
08/13/2007, 07:31 PM
andydude Wrote:Let me make a short unification. The slog is simply a solution to the Abel equation for exp_b: slog(b^x) = slog(x) + 1, which is merely our initial condition written for b^x instead of x. It is well-known that this corresponds to a fractional iteration via exp_b^t(x) = slog^{-1}(slog(x) + t). Moreover, any fractional iteration must be of this form.
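The unification above can be made concrete: any invertible slog satisfying the Abel equation slog(b^x) = slog(x) + 1 yields fractional iterates of exp_b, and the composition law exp_b^s(exp_b^t(x)) = exp_b^(s+t)(x) holds up to floating-point error no matter how crudely slog is interpolated on its fundamental interval. Here is a minimal numeric sketch in Python, deliberately using a crude linear interpolation on (0, 1] (this is an illustration of the mechanism, not Andrew Robbins's actual solution; the function names are mine):

```python
import math

def slog(b, x):
    """Crude base-b superlogarithm: x - 1 on (0, 1], extended in both
    directions by the Abel equation slog(b^x) = slog(x) + 1."""
    n = 0
    while x > 1.0:          # pull x down into (0, 1] with iterated logs
        x = math.log(x, b)
        n += 1
    while x <= 0.0:         # or push it up with iterated exponentials
        x = b ** x
        n -= 1
    return n + (x - 1.0)

def slog_inv(b, y):
    """Inverse of the crude slog above."""
    k = math.ceil(y)
    x = (y - k) + 1.0       # lands in (0, 1]
    for _ in range(k):      # re-apply b^x k times (k may be 0)
        x = b ** x
    for _ in range(-k):     # or iterated logs if k is negative
        x = math.log(x, b)
    return x

def exp_iter(b, t, x):
    """Fractional iteration of exp_b via exp_b^t(x) = slog^{-1}(slog(x) + t)."""
    return slog_inv(b, slog(b, x) + t)

# The half-iterate composed with itself reproduces b^x (up to float error)
# by the Abel equation, regardless of how crude the interpolation is.
b, x = 2.0, 3.0
print(exp_iter(b, 0.5, exp_iter(b, 0.5, x)), b ** x)
```

Note that the composition check succeeds for any interpolation on the fundamental interval; the hard part, which the thread is about, is singling out the "right" interpolation.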
08/13/2007, 07:37 PM
@Jay for publishing you have to provide proofs, that's for sure.
bo198214 Wrote:@Jay for publishing you have to provide proofs, that's for sure.

The only item I haven't proven is that the power series for the fractional iteration of e^z - 1 converges with some radius of convergence greater than 0. Some sources say it only converges for integer iterations; others say it has a nonzero radius of convergence for fractional iterations. So either the proof exists somewhere, or the disproof exists somewhere, or everybody's wrong so far. I just need to figure out which is the case.

If I can find or provide a proof that fractional iteration of e^z - 1 converges and is unique, then combined with my change of base formula, I've got "the" unique solution to tetration for all bases greater than eta.

I'm also working on a method for bases between 1 and eta. Bases in that range have three partitions, so each partition will need its own formula. The tricky ranges are , where the solution oscillates with period 2 (not period 1!) but converges, and , where the solution oscillates with period 2 and does not converge. Other than a sine function as part of generating the oscillations, I don't have much of a good starting point for those bases.
~ Jay Daniel Fox
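Whatever the answer to the convergence question, the formal power series of the fractional iterate of e^z - 1 can at least be computed term by term. The sketch below (Python, exact rational arithmetic; function names are mine) relies on the standard fact from parabolic regular iteration that the z^k coefficient of the t-th iterate is a polynomial in t of degree at most k - 1, so it can be recovered by interpolating the integer iterates. This says nothing about convergence, which concerns the growth of these coefficients:

```python
from fractions import Fraction
from math import factorial

N = 6  # keep series terms through z^N

def mul(a, b):
    """Product of two series with no constant term, truncated at z^N."""
    c = [Fraction(0)] * (N + 1)
    for i in range(1, N):
        for j in range(1, N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def compose(f, g):
    """f(g(z)) for series with no constant term, truncated at z^N."""
    res = [Fraction(0)] * (N + 1)
    power = g[:]                 # g(z)^1
    for k in range(1, N + 1):
        for m in range(1, N + 1):
            res[m] += f[k] * power[m]
        power = mul(power, g)    # g(z)^(k+1)
    return res

# f(z) = e^z - 1 = z + z^2/2! + z^3/3! + ...
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]

# integer iterates f^0 (identity), f^1, ..., f^(N-1)
iterates = [[Fraction(0)] * (N + 1)]
iterates[0][1] = Fraction(1)
for _ in range(1, N):
    iterates.append(compose(f, iterates[-1]))

def coeff(k, t):
    """z^k coefficient of f^t: a polynomial in t of degree <= k - 1,
    recovered by Lagrange interpolation at the integer nodes t = 0..k-1."""
    total = Fraction(0)
    for j in range(k):
        basis = Fraction(1)
        for m in range(k):
            if m != j:
                basis *= (t - m) / (j - m)
        total += iterates[j][k] * basis
    return total

# the half-iterate: first coefficients are 1, 1/4, 1/48, 0, ...
half = [Fraction(0)] + [coeff(k, Fraction(1, 2)) for k in range(1, N + 1)]
print([str(c) for c in half[1:]])
```

As a sanity check, composing the half-iterate with itself recovers e^z - 1 exactly through z^N, since the interpolation and the series arithmetic are both exact over the rationals.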
08/13/2007, 08:23 PM
jaydfox Wrote:The only item I haven't proven is that the power series for the fractional iteration of e^z - 1 converges for some radius of convergence greater than 0. Some sources say it only converges for integer iterations, others say it has nonzero radius of convergence for fractional iterations.

There was also no proof for the convergence of your change of base formula, though it looks to me that you could make one. For the convergence of I have really strange news; see the corresponding thread (later).

bo198214 Wrote:There was also no proof for the convergence of your change of base formula. Though it looks to me that you could make it.

Really, I thought it obvious that it converges. As n goes to infinity, the epsilon goes to 0 (and quite rapidly at that), making the formula exact in the limiting case. Because the limiting case is exact and the formula converges very fast, the solution is even practical.

The proof relies on bases greater than eta, but if one starts with one of the bases greater than eta (e seems the obvious choice) and uses iterated logarithms for the other base (see Andrew Robbins's notation), then one can even get solutions for bases between 1 and eta, though I'm not sure if there is a defensible "unique" definition of the zeroth iterate. (More on that when I get to bases less than eta.)
~ Jay Daniel Fox
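The claimed rapid convergence can be illustrated numerically, at least in a crude-slog convention. The quantity slog_2(e^^n) - n settles down as n grows (its limit is the base-change constant in this convention), with successive corrections shrinking; only the first few heights fit in floating point, and the linear interpolation below is mine, not an exact slog:

```python
import math

def crude_slog2(x):
    """Crude base-2 superlogarithm: x - 1 on (0, 1], extended upward by
    the Abel equation slog(2^x) = slog(x) + 1."""
    n = 0
    while x > 1.0:
        x = math.log2(x)
        n += 1
    return n + (x - 1.0)

# mu_n = slog_2(e^^n) - n: the estimates of the base-change constant
x, mus = 1.0, []
for n in range(1, 4):
    x = math.exp(x)                  # x = e^^n
    mus.append(crude_slog2(x) - n)
print(mus)

# successive corrections shrink as n grows
diffs = [abs(mus[i + 1] - mus[i]) for i in range(len(mus) - 1)]
assert diffs[1] < diffs[0]
```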
08/15/2007, 09:36 AM
jaydfox Wrote:Really, I thought it obvious it converges. As n goes to infinity, the epsilon goes to 0 (and quite rapidly at that), making the formula exact in the limiting case.

For publishing, this argumentation would not suffice.

Quote:The proof relies on bases greater than eta, but if one starts with one of the bases greater than eta (e seems the obvious choice) and uses iterated logarithms for the other base (see Andrew Robbins's notation), then one can even get solutions for bases between 1 and eta, though I'm not sure if there is a defensible "unique" definition of the zeroth iterate. (More on that when I get to bases less than eta.)

That would be the interesting case: we want to transform from a base to a base . For soundness it should also be verified that if (for this case we have a unique solution demanding differentiability at the smaller fixed point of ), the change of base yields the other unique solution for base .
08/15/2007, 10:15 AM
bo198214 Wrote:jaydfox Wrote:Really, I thought it obvious it converges. As n goes to infinity, the epsilon goes to 0 (and quite rapidly at that), making the formula exact in the limiting case.

For publishing, this argumentation would not suffice.

Well, most of real and complex analysis would fall apart if limiting cases were not sufficient to provide proofs! To make my point quite plain, I can demonstrate that the superlogarithmic constant exists quite easily with bases e and 2. And there you have it: the superlogarithmic constant for conversion from base 2 to base e must be between 1 and 2. Once 2^^(n+2) gets ahead of e^^n by significantly more than a factor of log_2(e), it's all over. The tetration of e never has a chance to catch up. The tetration of 2 was given a head start, and it will always stay well ahead of e.

Such a simple demonstration only suffices to show the existence of the superlogarithmic constant. For any two bases, you can define an integer interval within which the constant must lie. But determining its value with any finer precision requires an exact solution for one of the bases.
~ Jay Daniel Fox
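The head-start argument can be checked numerically by comparing the towers in log space, using ln(2^^(n+2)) = 2^^(n+1) * ln 2 and ln(e^^n) = e^^(n-1), which keeps the first few heights within floating-point range (the `tower` helper is mine, for illustration):

```python
import math

def tower(b, n):
    """b^^n: power tower of height n (b^^0 = 1)."""
    v = 1.0
    for _ in range(n):
        v = b ** v
    return v

# Compare 2^^(n+2) with e^^n via their natural logs:
#   ln(2^^(n+2)) = 2^^(n+1) * ln 2,   ln(e^^n) = e^^(n-1)
for n in range(1, 4):
    lhs = tower(2.0, n + 1) * math.log(2.0)
    rhs = tower(math.e, n - 1)
    print(n, lhs, rhs, lhs > rhs)   # the head start only widens
```

Already at n = 3 the gap is enormous (ln of the base-2 tower exceeds 45000, while ln of the base-e tower is about 15), which is the sense in which the base-e tower "never has a chance to catch up."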

