Bummer! - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Bummer! (/showthread.php?tid=69)

Bummer! - bo198214 - 10/05/2007
Hey folks, the conjecture about the equality of the 3 methods of tetration is shattered. I received an e-mail from Dan Asimov in which he mentions that the continuous iterations of f(x) = sqrt(2)^x at the lower and upper real fixed points, 2 and 4, differ! He, Dean Hickerson and Richard Schroeppel found this around 1991, however there is no paper about it. The numerical computations veiled this fact because the differences are extremely small. I reverified this by setting the computation exactness to 100 decimal digits and using the regular iteration recurrence f^t(x) = lim_{n->oo} f^(-n)( lam^t * (f^n(x) - a) + a ), where a is a fixed point of f and lam = f'(a). Currently (running since a day) there is a computation in progress where I compute the differences over an interval with an exactness of 150 (and internally of 450) decimal digits (let's have a look whether it finishes in my lifetime). I will post the graph here when it is finished. Generally we can assume that it is rather the exception that the regular iterations at two different fixed points are the same function. Moreover, I actually don't know any analytic function except the identity function where this would be the case! So the first lesson is: don't trust naive numerical verifications. We have to reconsider the equality of our 3 methods and I guess differences will show up there too. But apart from that we also have hard non-numerical consequences: there cannot be *any* iteration of sqrt(2)^x that is analytic at both fixed points. At least one of the fixed points is a singularity in the sense that the derivative does not exist there (however of course the function value does). This applies particularly to Andrew's and Gottfried's solutions. And of course it is probably true for any base 1 < b < e^(1/e). I can demonstrate it with the continuous iteration of a simpler example function. The effect is similar to that for sqrt(2)^x, however the differences already occur at a much lower exactness, so the computations are not that expensive. Let's have a look at the graph of the function: [attachment=88] It has two fixed points, each with a positive slope.
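As an aside for readers who want to experiment: the regular iteration recurrence just mentioned can be sketched in a few lines of Python with plain double-precision floats. The helper names and the cutoff n = 40 are illustrative choices of this sketch, not from the post. At machine precision the half-iterates developed at the two fixed points appear to agree, which is exactly how the tiny difference stayed hidden; exposing it needs 100+ digit arithmetic (e.g. with a multiprecision library such as mpmath).

```python
import math

# Sketch of the regular iteration recurrence for f(x) = sqrt(2)**x, which has
# the real fixed points 2 (attracting, f'(2) = ln 2) and 4 (repelling,
# f'(4) = 2 ln 2).  Helper names and the cutoff n are illustrative only.

LOG_B = math.log(2) / 2                  # ln(sqrt(2))
f  = lambda x: math.exp(LOG_B * x)       # f(x)    = sqrt(2)**x
fi = lambda x: math.log(x) / LOG_B       # f^-1(x) = log base sqrt(2)

def half_iterate(x, a, n=40):
    """Regular half-iterate of f developed at the fixed point a:
    f^(1/2)(x) = lim_{n->oo} f^(-n)( lam^(1/2) * (f^n(x) - a) + a )
    with lam = f'(a); at a repelling fixed point (lam > 1) the roles
    of f and f^-1 are swapped."""
    lam = LOG_B * a                      # f'(a) = ln(b) * b**a = a * ln(b)
    fwd, back = (f, fi) if lam < 1 else (fi, f)
    y = x
    for _ in range(n):
        y = fwd(y)                       # drive the orbit toward a
    y = a + math.sqrt(lam) * (y - a)     # scale by lam^(1/2) near a
    for _ in range(n):
        y = back(y)                      # pull the orbit back out
    return y

x = 2.5
h2 = half_iterate(x, 2.0)   # developed at the lower fixed point
h4 = half_iterate(x, 4.0)   # developed at the upper fixed point
print(h2, h4, h2 - h4)      # the two values agree to many digits here
```

Both results land strictly between f(x) and x, and applying either half-iterate twice reproduces f(x) up to the truncation error of the limit.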
So the condition of a positive slope, required for regular iteration, is satisfied. By the previously given formula we compute the iterative square root at both fixed points and their difference: [attachment=89] We see an oscillating behaviour, which implies: if the first derivative of one of the two functions exists at a fixed point, then it does not exist for the other.

RE: Bummer! - Gottfried - 10/05/2007
Yes, this seems to be a bummer, true. I'd like to see more details to be able to consider the consequences for my matrix approach. Concerning your example: what about a cubic transformation? And if one fixed point is repelling, doesn't that imply that the derivative at this point must be somehow exotic, say singular? Hmm... I'll check it with my analytical composition of the eigenvectors. Usually I insert 2 and ln(2) as parameters for the b = sqrt(2) problem. I can try which terms occur if I insert 4 and 2 ln(2) instead. Gottfried

RE: Bummer! - bo198214 - 10/05/2007
Gottfried Wrote: And if one fixed point is repelling, doesn't that imply that the derivative at this point must be somehow exotic, say singular? Hmm...

Not really (however, note that the previous iteration formula directly works only at an attracting fixed point; at a repelling fixed point you simply regularly iterate the inverse function, which has an attracting fixed point there, and afterwards take the inverse again). And if you naively plot the graph, you think everything is fine: [attachment=90] (green: the iterative square root, which looks equal regardless of the fixed point at which it is developed)

RE: Bummer! - bo198214 - 10/06/2007
Here is the promised graph of the difference between the (regular) iterative square root taken at the fixed point 2 and the one taken at the fixed point 4 of sqrt(2)^x. Unfortunately 150 digits took too long; instead I now computed it with at least 50 digits precision (which took more than 7 hours). [attachment=91] I think this pattern will show up for every analytic function with two neighbouring fixed points. The question still remains to find an analytic function where the difference is 0.

RE: Bummer! - bo198214 - 10/06/2007
Even if we choose a function that is completely symmetric at the line y = -x: [attachment=93] the regular iterations at both fixed points don't coincide. They have the difference: [attachment=94] Details: if we have a function g that is symmetric at the y-axis, then we can make a function f out of it which is symmetric at the straight line y = -x by rotating the graph of g by 45 degrees anticlockwise, i.e. by taking the curve t -> ( (t - g(t))/sqrt(2), (t + g(t))/sqrt(2) ) as the graph of f. The property of being symmetric at y = -x can be expressed by f^(-1)(x) = -f(-x). Directly translated it means: mirror the function at the y-axis, then at the x-axis, and then at the diagonal (function inversion). The result of these three mirrorings is a mirroring at y = -x, and this should not change anything. With some arithmetic you can indeed verify that the rotated function satisfies this. The current graph resulted from a particular choice of g and shows the resulting f with the fixed points -1 and 1.

RE: Bummer! - jaydfox - 10/07/2007
To be honest, I'm not surprised. In working with Andrew's slog, it appears to be based entirely on the primary fixed points and "images" of them (e.g., a, a + 2*pi*i, ln(a + 2*pi*i), ln(a + 2*pi*i) + 2*pi*i, etc.). I just haven't figured out where the other fixed points fit into the slog, and yet continuous iteration from them should be possible. They simply must yield a different solution.

RE: Bummer! - bo198214 - 10/07/2007
jaydfox Wrote: To be honest, I'm not surprised. In working with Andrew's slog, it appears to be based entirely on the primary fixed points and "images" of them (e.g., a, a + 2*pi*i, ln(a + 2*pi*i), ln(a + 2*pi*i) + 2*pi*i, etc.).

Well, Andrew's slog does *not* correspond to the regular iteration at a primary non-real fixed point (as this yields complex values). It is still not verified whether it corresponds to the regular iteration at the lower real fixed point (btw., do you call both real fixed points "primary" too? Because the primary fixed points converge to e for b -> e^(1/e)). Here with "correspond" I mean that slog^(-1)(slog(x) + t) = f^t(x), where the right side is the given regular iteration at a fixed point. So it is not really clear to me what you are not surprised about.

RE: Bummer! - jaydfox - 10/07/2007
Ah, sorry, I was talking about Andrew's slog with base e. All the fixed points are complex, of course, but his slog seems to correspond to the primary fixed points at 0.318 +/- 1.337i. And as I think I've explained in a couple of other threads, the reason that continuous iteration from these two fixed points yields real values for real inputs is that it uses both fixed points, not just one or the other. Since they are conjugates, the non-real parts cancel out. I haven't tried it with the sexp function yet, but it seems pretty clear from the slog.

RE: Bummer! - jaydfox - 10/15/2007
This problem is something I could easily see occupying us for a while, as far as trying to really, deeply understand it all. While I wasn't surprised, I suppose I should phrase it as: I wasn't "shocked". For bases between 1 and eta, I would at least have hoped for the solutions at the upper and lower fixed points to be consistent with each other. But given that: 1) my change-of-base formula does not give identical results to "good old fashioned" continuous iteration from fixed points, and 2) Andrew's slog, at least for base e, appears to be loosely based on continuous iteration only from the primary fixed points (and hence the other fixed points would almost certainly give different results), it does not come as a total shock that continuous iteration from two different real fixed points of a base between 1 and eta gives two different results. I'm quite frustrated by this, because it adds to the problem of defining a unique solution. Yes, we already knew that we could take "the" solution and distort it by applying a cyclic, infinitely differentiable transform to the input. But it would at least have been nice if there were one solution that stood out as the obviously correct solution for a given base. So far, the only base I've seen this be true for is base eta, with parabolic iteration, which has essentially been solved for years if not decades already. Andrew's slog does appear to yield the very nice property of "total monotonicity" for base e, but given how much accuracy was needed to show the differences between the upper and lower fixed points of base sqrt(2), I'm tempted to go back with my newer, far more accurate power series for the slog and recalculate the first few hundred derivatives, to make sure the property still appears to hold. And getting back to the line of inquiry that started this thread, all of this raises another question: which of the two fixed points for base sqrt(2) gives the "correct" solution? Or is it something else, perhaps roughly between the two? Or is there no definitively "correct" solution, just a collection of solutions which satisfy basic properties?

RE: Bummer! - bo198214 - 11/02/2007
jaydfox Wrote: And getting back to the line of inquiry that started this thread, all of this raises another question: which of the two fixed points for base sqrt(2) gives the "correct" solution? Or is it something else, perhaps roughly between the two? Or is there no definitively "correct" solution, just a collection of solutions which satisfy basic properties?

If I were asked, we need the lower fixed point, because only this one is reachable as the limit of the integer towers b^^n for n -> oo. However, the real dilemma is, as I already pointed out, that every analytic iteration other than the regular iteration at the lower fixed point and the regular iteration at the upper fixed point must be singular at both fixed points. And for each analytic iteration, one of the fixed points is a singularity. In this line I would be really interested in the difference between Andrew's slog and the regular Abel function at the lower fixed point. A formula for the latter, derived from the iteration formula given earlier, is alpha(x) = lim_{n->oo} ( log_lam( a - f^n(x) ) - n ) for x < a, where a = 2 is the lower fixed point of f(x) = b^x and lam = f'(a) = ln 2. Can you make a comparison of alpha with Andrew's slog computed by your super sophisticated algorithm and post a graph of the difference in the range, say, [-1, 1.9] somewhere?! If the difference turns into a smooth curve starting from some precision, then we know they are different; if the result is at any precision rather a random curve, this would favour the equality of both solutions.
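For anyone who wants to attempt the comparison, the limit formula for the regular Abel function can be sketched in a few lines of Python with plain floats. The normalization alpha(1) = 0 (chosen here to match the usual slog convention) and the cutoff n = 40 are illustrative additions of this sketch, not from the post.

```python
import math

# Regular Abel function of f(x) = sqrt(2)**x developed at the lower fixed
# point a = 2 with multiplier lam = f'(2) = ln 2, via the limit
#   alpha(x) = lim_{n->oo} log_lam(2 - f^n(x)) - n,   valid for x < 2.
# Normalization alpha(1) = 0 and cutoff n = 40 are illustrative choices.

LOG_B = math.log(2) / 2                # ln(sqrt(2))
f = lambda x: math.exp(LOG_B * x)      # f(x) = sqrt(2)**x
LAM = 2 * LOG_B                        # f'(2) = 2 * ln(sqrt(2)) = ln 2

def abel_raw(x, n=40):
    y = x
    for _ in range(n):
        y = f(y)                       # f^n(x) -> 2 from below for x < 2
    return math.log(2 - y) / math.log(LAM) - n

def reg_slog(x, n=40):
    return abel_raw(x, n) - abel_raw(1.0, n)   # pin alpha(1) = 0

print(reg_slog(1.0))        # exactly 0 by the normalization
print(reg_slog(f(1.0)))     # close to 1, by the Abel equation alpha(f(x)) = alpha(x) + 1
```

Plotting reg_slog(x) minus Andrew's slog over [-1, 1.9] (at much higher precision than plain floats) would then show whether the difference settles into a smooth curve or stays at the level of numerical noise.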