Continuous iteration from fixed points of base e

jaydfox | Long Time Fellow | Posts: 440 | Threads: 31 | Joined: Aug 2007 | 09/20/2007, 05:46 AM

Start with the two primary fixed points of base e: $a_0 \approx 0.318131505 + 1.337235701i$ and its complex conjugate $\overline{a_0}$. Now, let's perform hyperbolic iteration from both of them. To see this better, we have to look at the slog function. If hyperbolic iteration at a fixed point looks like exponentiation, then the slog function in the immediate vicinity of the fixed point will look like a logarithm. The base of the logarithm is the multiplicative constant relating the distances from the fixed point of two consecutive integer iterates. For base e, this constant, of course, is the fixed point itself. Therefore, to a first approximation, the slog function should behave like $\log_{a_0}(z-a_0) + \log_{\overline{a_0}}(z-\overline{a_0})$, where a_0 is the primary fixed point, and the overline indicates the complex conjugate. Notice that these two functions are complex conjugates of each other for real z. Therefore, the imaginary parts will cancel, giving a real-valued function for real inputs. This is the key to understanding how continuous iteration from complex fixed points will nonetheless yield a real-valued function for real inputs. Notice that far from the fixed points, this function will behave very little like the true slog function. But in the vicinity of either, it should be a very good approximation. Adding in the other fixed points seems logical, and yields a terribly surprising result: rational coefficients of the power series! This is dependent on how one calculates the logarithms at the other fixed points (i.e., each logarithm has to be calculated in the branch of the particular fixed point). The power series using the sum of the logarithmic functions at all the fixed points has the following first few coefficients: Here, C_0 is rather arbitrary, and could just as well be -1. Note that besides the meagre pattern I've already extracted, the denominators are all fairly composite.
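The conjugate-pair approximation above is easy to check numerically. A minimal sketch (not the author's code; a_0 is computed here by iterating the principal logarithm, and the function name slog_approx is mine):

```python
import cmath

# Primary fixed point of e^z: iterate the principal logarithm until it settles.
a0 = 0.5 + 0.5j
for _ in range(200):
    a0 = cmath.log(a0)

# Sanity check: exp(a0) = a0, hence ln(a0) = a0, so log base a0 is division by a0.
assert abs(cmath.exp(a0) - a0) < 1e-12

def slog_approx(z):
    """First approximation to slog: conjugate pair of logarithms at a0 and conj(a0)."""
    return (cmath.log(z - a0) / a0
            + cmath.log(z - a0.conjugate()) / a0.conjugate())

# For real z the two terms are complex conjugates, so the imaginary parts cancel.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(slog_approx(x).imag) < 1e-10
```

The cancellation happens because, for real z, z - conj(a_0) is the conjugate of z - a_0, and the principal logarithm commutes with conjugation away from its branch cut.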
I suspect there is in fact a very tidy pattern to the denominators, which is obscured because these fractions are reduced to lowest terms. At any rate, that the sums of powers of reciprocals of complex irrational fixed points would lead to rational coefficients just totally blew me away, since any particular logarithm in the sum has irrational coefficients. It's like magic to me that all of them put together lead to rational coefficients. I'm not entirely sure that this series of rational coefficients will be particularly useful in "the" solution, because I'm not sure whether each logarithm should be computed relative to its own branch. By this I mean: log_b(x) = ln(x)/ln(b). Notice the division by ln(b). Should this ln(b) be equal to b? For example, should we consider ln(2.06227773+7.58863118i) to be 2.06227773+7.58863118i or 2.06227773+1.30544587i? If the former, I suspect this power series will play an integral role in the "correct" solution, though it's obviously not correct alone. There would appear to be other singularities, other functions embedded within the slog. Perhaps they too are logarithms, but I'm not sure yet. If the latter, then this curiosity will have to remain a curiosity, exquisitely interesting and unfortunately not of much use.

~ Jay Daniel Fox

Gottfried | Ultimate Fellow | Posts: 758 | Threads: 117 | Joined: Aug 2007 | 09/20/2007, 06:56 AM

Jay, for analysis of the denominators I'd check the progression of the greatest prime factor in them. It looks suspiciously as though the denominators are cancelled factorials. Could you check this (and print the new numerators then)?

Gottfried Helms, Kassel

jaydfox | 09/24/2007, 05:37 AM

I'll create a separate thread for discussion of the rational coefficients I found above. While unexpected and intriguing, I have no idea if they have anything to do with tetration. More than likely it's just a fascinating property of the set of fixed points of base e.
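The two candidate values of ln(b) in Jay's question can be checked directly against the principal branch (a sketch; a1 is the rounded value quoted in the post):

```python
import cmath

two_pi_i = 2j * cmath.pi
a1 = 2.06227773 + 7.58863118j   # second "upper" fixed point, from the post

# Principal branch: ln(a1) is NOT a1 itself...
principal = cmath.log(a1)
assert abs(principal - (2.06227773 + 1.30544587j)) < 1e-6

# ...but adding one turn of 2*pi*i (the k = 1 branch) recovers a1 exactly:
assert abs(principal + two_pi_i - a1) < 1e-6

# which is consistent with a1 being a fixed point of exp even though the
# principal logarithm of a1 differs from a1.
assert abs(cmath.exp(a1) - a1) < 1e-5
```

So "ln(b) = b" holds only when the logarithm is taken in the branch that owns the fixed point; the principal branch gives the other candidate value.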
As for the slog: this function is more and more intriguing, the more I study it. If you look at the standard logarithm, one thing stands out to me. It's the "branches". I'm not sure "branch" is the best description. This isn't new, but it's worth repeating, as the analogy with the slog breaks down in subtle ways. For the logarithm, the derivative at any point is pretty much fixed, regardless of the branch. If you integrate the derivative on a closed loop, you get back to the original point, so long as you don't go around the singularity. If you do go around the singularity, then when you get back to your starting point, you're either up or down a branch (unless you went around multiple times, in which case you might be up or down multiple branches). However, regardless of which branch you're on, everything looks the same, other than the constant difference in "height" ("height" being a complex offset, a multiple of 2*pi*i for base e). Well, the slog would appear to work differently. First, we must define the slog very loosely. Essentially, define it by saying that slog(ln(z)) = slog(z)-1 on some branch. For example, slog(0.25+1.25i) might be (pulling a number out of thin air) -0.5+2i. If we exponentiate 0.25+1.25i, we get about 0.4+1.2i, and the slog of that should be about 0.5+2i. Now, if we take the logarithm of 0.25+1.25i, we'll get about 0.25+1.37i, the slog of which should be about -1.5+2i. Now let's look at the point 1.325+1.307i. The slog of this point might be something like 0+3.1i. However, this point happens to be approximately the fourth iterated logarithm of 0.25+1.25i. Therefore, we would have expected an slog of -4.5+2i. What happened? Well, we switched branches, because the fourth iterated logarithm took us up and around the singularity. If we have a power series constructed at the origin, it will only give us values that can be reached without going around the singularity. So far, this isn't terribly interesting.
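The loose defining relation slog(ln(z)) = slog(z) - 1, equivalently slog(exp(z)) = slog(z) + 1, can be tested near the fixed point using the conjugate-pair logarithm approximation from the first post. This is only a first-order check (the approximation satisfies the relation up to corrections of order z - a_0), so the tolerance is loose:

```python
import cmath

# Primary fixed point, found by iterating the principal logarithm.
a0 = 0.5 + 0.5j
for _ in range(200):
    a0 = cmath.log(a0)

def slog_approx(z):
    # Conjugate-pair logarithm approximation from the first post.
    return (cmath.log(z - a0) / a0
            + cmath.log(z - a0.conjugate()) / a0.conjugate())

# Near a0, exp(z) - a0 ~ a0*(z - a0), and since ln(a0) = a0 in its branch,
# the approximation should satisfy slog(exp(z)) = slog(z) + 1 up to
# O(z - a0) corrections.
z = a0 + 1e-3
err = abs(slog_approx(cmath.exp(z)) - slog_approx(z) - 1)
assert err < 1e-2
```

Farther from the fixed point the relation degrades, matching Jay's remark that the approximation is only good in the vicinity of the fixed points.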
We get the same behavior with the natural logarithm, when we rotate points about the origin. However, where the slog really stands out is that the derivative is not the same after going around the singularity. If we start at 0.25+1.25i, then loop around the singularity, we arrive in a totally new landscape. This is easy to see if we remember the fractal nature of the logarithms of the unit interval (0, 1). The second logarithm is a straight line from -infinity+pi*i to +infinity+pi*i, using the principal branch. Notice that this straight line implies that, excepting a complex constant term, the power series of the slog constructed at a point with imaginary part pi would have real coefficients. Furthermore, this implies that the upper and lower halves (relative to the point) are mirror images of each other (complex conjugates). In other words, it's symmetric about the line with imaginary part pi. But we can use other branches of the natural logarithm to show that it is symmetric about all lines with imaginary part pi*(2k+1). And of course, this symmetry necessarily implies symmetry about all lines with imaginary part pi*(2k) as well, because the real line reflected about a line with constant imaginary part will be another line with constant imaginary part. Now, all this is assuming that we're looking at things from the point of view of the branch which includes the origin. If, from the origin, we go between two singularities (there will be singularities at the primary fixed points, and due to the symmetry, there must be singularities at 2k*pi*i offsets from the primary singularities), we end up in another world, so to speak. If we go between the singularities on either side of the line with imaginary part pi*(2k), we end up in the exponential world. Here we get the iterated exponentials of the circular critical region about the origin. If, from the origin, we go up or down and then between the singularities on either side of the line with imaginary part pi*(2k+1), we end up in the logarithmic world. Here we find the fractal world I described in another topic.
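The straight-line claim above is easy to verify: one principal logarithm sends (0, 1) to the negative reals, and a second sends those to the horizontal line with imaginary part pi. A quick check:

```python
import cmath
import math

# Second iterated (principal) logarithm of the unit interval (0, 1):
# ln maps (0,1) onto the negative reals, and ln of a negative real is
# ln|x| + pi*i, so the image is the horizontal line Im(z) = pi.
for x in (0.1, 0.25, 0.5, 0.9):
    w = cmath.log(cmath.log(x))
    assert abs(w.imag - math.pi) < 1e-12
```

The real parts of w sweep the whole real axis as x ranges over (0, 1), which is why the image is the full line rather than a segment.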
Curiously enough, each world must in some way affect the other, because the power series at any point, while unable to calculate points outside the radius of convergence, must nonetheless "mesh" with a region calculated from the power series based off a neighboring point. Each world, therefore, affects the other, as its derivatives will influence the behavior towards and around the singularity, like sound waves refracting around a corner. Imagine sound waves refracting around the pillar of a spiral staircase, with the steps of the stairs preventing direct interaction from above or below.

~ Jay Daniel Fox

jaydfox | 09/24/2007, 05:53 AM

But where does all this analysis even get us? Do we have any hope of unlocking the secrets of this bizarre function? Regardless of whether Andrew's solution ultimately converges on the correct solution, it's at least very close, at least for base e. Even if it's not correct, it gives me a good starting point for analyzing the nature of the slog function and its many branches. The first few terms of Andrew's slog indicate that it's converging on a solution that includes singularities at the two primary fixed points. In fact, using some basic matrix math, I was able to extract the two logarithms that I predicted would exist in the slog solution, namely ln(z - a_0)/a_0 and its conjugate ln(z - conj(a_0))/conj(a_0). By extracting the singularities, I was able to greatly increase the rate of convergence. In fact, I can calculate the first 50 terms more accurately with a 50x50 matrix than I can with a 600x600 matrix using Andrew's original solution. There is some prep work, of course, but it can be done separately to produce a single column vector, which is solved in place of the [1, 0, ..., 0] vector in Andrew's solution. It's that simple. But rather than stop at 50x50, I just went ahead and solved a 640x640 system.
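For readers who have not seen Andrew's matrix solution: the following is a minimal float sketch of the truncated linear system it solves, matching coefficients of slog(exp(z)) = slog(z) + 1 with the constant term fixed at 0. It is not Andrew's actual code, and it omits Jay's singularity-extraction acceleration; the function name and truncation size are mine:

```python
import math

def slog_coeffs(n):
    """Solve the truncated system for s(z) = sum_{k=1..n} c_k z^k with
    s(exp(z)) - s(z) = 1 matched through order z^(n-1)."""
    # Since s(exp(z)) = sum_k c_k e^{kz} = sum_m z^m sum_k c_k k^m/m!,
    # row m reads: sum_k c_k k^m/m! - c_m = (1 if m == 0 else 0).
    A = [[k ** m / math.factorial(m) - (1.0 if m == k else 0.0)
          for k in range(1, n + 1)] for m in range(n)]
    b = [1.0] + [0.0] * (n - 1)
    # Plain Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, n))) / A[r][r]
    return c  # c[0] is the coefficient of z^1

c = slog_coeffs(16)
s = lambda z: sum(ck * z ** (k + 1) for k, ck in enumerate(c))
assert abs(sum(c) - 1.0) < 1e-6                        # the m = 0 row
assert abs(s(math.exp(-0.5)) - s(-0.5) - 1.0) < 0.05   # Abel equation, approximately
```

Even at this tiny size the leading coefficient settles near 0.92, and the Abel relation holds approximately well inside the radius of convergence; the slow convergence with n is exactly what the singularity extraction described above is meant to fix.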
I figure it's probably at least as accurate as a 2000x2000 system using Andrew's matrix, and possibly more accurate than a 3000x3000 system. I haven't investigated the rate of convergence of either system to be able to make an accurate prediction, and at any rate, it would take a supercomputer to solve a 3000x3000 system, or a regular desktop several weeks using fast hard disk arrays in place of RAM. Below I show the first few terms of the power series of this conjugate-logarithm pair, then the slog solution, then the "residue" after removing the singularities. Notice that I intentionally omit the constant term, as I don't feel it's well-defined yet. We can at best say that it's a real number. The residue is not due to numerical inaccuracies. Rather, it's due to the fact that the two singularities I removed are only one component of the slog. These two singularities are the primary affectors, when a power series is derived at the origin. I've already predicted singularities at 2k*pi*i offsets of these primary singularities, though those logarithms are so far from the origin that they hardly affect more than the first half dozen terms. The root test of the "residue" seems to indicate another pair of singularities only slightly further from the origin than the first two. The root test for a 640x640 solution climbs as high as 0.67, but studying the progression for smaller solutions indicates that the root test will climb higher, probably to about 0.69, give or take. If I multiply the residue by a few thousand, the root test seems to flatten out, so this seems to indicate that the singularity, wherever it is, is in an extremely narrow "well". It's also possible that multiple singularities lying on the same line of sight from the origin are adding together and appearing to inflate the root test, such that if I had several thousand coefficients, I'd see the root test start to go back down.
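A toy illustration of why root tests read from a few hundred terms under-report the true singularity distance (the numbers here are mine, not from the post's data): for a pure logarithmic singularity at distance rho, the root test approaches 1/rho from below, and very slowly.

```python
import math

# Coefficients of ln(1 - z/rho) are c_n = -1/(n * rho**n), so
# |c_n|**(1/n) = (1/rho) * n**(-1/n), which creeps up toward 1/rho.
rho = 1.374557          # distance from the origin to the primary fixed points

def root_test(n):
    # work in logs so rho**n cannot overflow for large n
    return math.exp((-n * math.log(rho) - math.log(n)) / n)

# Still climbing at 640 terms, much closer by 2000 terms:
assert root_test(60) < root_test(640) < root_test(2000) < 1 / rho
assert abs(root_test(2000) - 1 / rho) < 0.01
```

1/rho here is about 0.7275, yet the 640-term reading is still around 0.72, so a measured value near 0.67 climbing toward 0.69 is at least qualitatively consistent with a log-type singularity slightly beyond the primary pair.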
I have too little information at this point, so I have to rely on good old-fashioned thought experiments to try to figure this out. One thought I've had is that, because the "worlds" on either side of the singularity are completely different from each other, I'm getting interference as the derivatives "refract" around the singularity I thought I'd removed. I may need to evaluate the slog function at other points closer to the primary singularities and perhaps at various rotations around them, to see if I can locate this next closest singularity, identify it, remove it, and extract even more precision in the power series constructed at the origin.

~ Jay Daniel Fox

jaydfox | 09/24/2007, 06:37 AM

By the way, if none of what I'm saying is making sense, fear not! I will be providing colorful charts to illustrate my points. Some of what I've described can already be seen in the chart I originally posted in another thread: http://math.eretrandre.org/tetrationforu...440#pid440

[attachment=73]

Notice that the red region and the blue region have some overlap, the red region being the fourth iterated logarithm of the blue region. Notice also that the two regions are nothing alike, and hence the derivatives evaluated on either "branch" are nothing like the derivatives evaluated on the other branch. This is in stark contrast to the basic logarithm, where the branches may have different values, but the derivatives at any point are the same regardless of the branch used to evaluate the derivatives. Notice also that the top of this chart (the top of the green and yellow regions) is on the line with imaginary part pi. If you reflect everything you see there above this line, taking complex conjugates of all points, you get the analytic continuation. At the line with imaginary part 2*pi, the slog evaluates to exactly the same values it would on the real line.
Here we see the cyclic symmetry, not unlike the symmetry of a sine-like function evaluated using the imaginary part as the variable. This of course is due to the branches of the natural logarithm, or the fact that exponentiating values differing by a multiple of 2*pi*i will give the same result. To get from the origin to 2*pi*i, we could take the ith iterate of 0, which gets us to the blue cross in the chart, near 0.2+0.85i. Then we'd take the -1.5 partial iterate, i.e., halfway between a natural logarithm and a double natural logarithm, which gets us to the intersection of yellow and green dark lines, near 0.05+1.62i. Then we'd take the -ith iterate, to get to -0.366+pi*i. Then we'd take another -ith iterate to get to 0.05+(2*pi-1.62)*i. Then we'd take a 1.5 partial iterate, more than an exponentiation but not quite a double exponentiation, to get to 0.2+(2*pi-0.85)*i. Finally, we'd take an ith iteration to get to 0+2*pi*i. Adding it all up, we get back to 0 iterations from 0, yet we're at 2*pi*i, not 0, and we didn't even loop around a singularity.

~ Jay Daniel Fox

andydude | Long Time Fellow | Posts: 509 | Threads: 44 | Joined: Aug 2007 | 09/24/2007, 05:26 PM

Whoa! What's the $a_k$?

Andrew Robbins

andydude | 09/24/2007, 05:42 PM

The reason why I ask is that you have to be very, very careful when discussing all fixed points of exponentials, because there are so many branches in every direction. For example, let's define a generalized version of the infinitely iterated exponential like this: $h_k(z) = \frac{W_k(-\ln z)}{-\ln z}$, where W_k is the k-th branch of the Lambert W-function. Using this definition of the infinitely iterated exponential function, we can describe all of the solutions of $z^c = c$ as indexed by k, but all of the solutions of $c = \log_z(c)$ (taken over the branches of the logarithm) as indexed by j. So by fixed points I assume you mean the former, and not the latter. I could be wrong though, so please clarify.
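The Lambert W construction of the infinitely iterated exponential can be sketched numerically. This is only the principal branch (k = 0), implemented with a hand-rolled Newton iteration rather than a library W, so it does not exhibit the branch indexing Andrew is pointing at; the function names are mine:

```python
import math

def lambert_w0(x, tol=1e-14):
    """Principal branch of Lambert W via Newton's method (real x > -1/e)."""
    w = 0.0 if x < 1 else math.log(x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def h(z):
    """Infinitely iterated exponential z^z^z^... via W (principal branch only)."""
    L = -math.log(z)
    return lambert_w0(L) / L

# Classic check: sqrt(2)^sqrt(2)^... = 2, one solution of z^c = c.
assert abs(h(math.sqrt(2)) - 2.0) < 1e-9
```

Note that c = 4 also solves sqrt(2)^c = c; picking up that second solution is exactly what requires another branch of W, which is Andrew's point about being careful with the indexing.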
Andrew Robbins

jaydfox | 09/24/2007, 06:01 PM

By fixed points, I'm referring to picking one and only one branch of the logarithm, and iterating until you settle on a single value. There is one such value per branch, though they come in conjugate pairs, so the principal branch has two. Anyway, the a_k are the "upper" fixed points of iteration for ln_k(z) = ln(z) + 2*pi*i*k, with k >= 0. Then the conjugates define the "lower" fixed points. The first few, enumerated to clear up any potential doubt, are:

Code:
k | a_k
0 | 0.318131505... + 1.337235701...i
1 | 2.06227773... + 7.588631178...i
2 | 2.653191974... + 13.94920833...i
3 | 3.020239708... + 20.27245764...i

Now if you alternate branches, there are fixed cycles. The simplest are the conjugating cycles, such as 1.668024052+5.032447064...i. When exponentiated, it simply gets conjugated. I didn't include these or any of the other cycles. There are cycles of all integer lengths. I don't know how any of these affect the slog, and I assume they don't. They would be in other "branches" of the slog not easily accessible from the origin.

~ Jay Daniel Fox

jaydfox | 09/24/2007, 06:16 PM

Note that the a_k I just explained were the ones used in the first post. However, I can see a need for a dual-indexed set, because the slog seems to have singularities at 2k*pi*i offsets from the a_0, so we'd need a second index to distinguish them. I don't know how long it will take to generate good graphs, but as I've been thinking about the slog, I'm realizing how complex it is. First of all, singularities I once predicted do seem to exist, so there was some small comfort there. They exist on the "logarithmic" branch, accessed for example by going between the singularities at 0.318+1.337i and 0.318+4.946i.
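The table of a_k above can be reproduced by exactly the branch-iteration Jay describes: iterate the k-th branch of the logarithm until it settles (a sketch; the iteration is attracting because |1/a_k| < 1):

```python
import cmath

def fixed_point(k, iters=500):
    """a_k: fixed point of the k-th branch of ln, found by iterating
    z <- ln(z) + 2*pi*i*k from a point in the upper half plane."""
    z = 1.0 + 1.0j
    for _ in range(iters):
        z = cmath.log(z) + 2j * cmath.pi * k
    return z

# Compare against the table in the post.
assert abs(fixed_point(0) - (0.318131505 + 1.337235701j)) < 1e-6
assert abs(fixed_point(1) - (2.06227773 + 7.588631178j)) < 1e-6
assert abs(fixed_point(2) - (2.653191974 + 13.94920833j)) < 1e-6
assert abs(fixed_point(3) - (3.020239708 + 20.27245764j)) < 1e-6
```

Starting in the lower half plane instead yields the conjugate ("lower") fixed points, matching the conjugate-pair structure described above.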
Once you've passed between these two singularities, the real line and its offset at 2*pi*i are linear singularities (meaning the entire line is like a "wall"). If you then loop around the singularity again, e.g., between 0.318+1.337i and the real line, then you enter another region, where the real line offset by pi*i is another "wall" singularity. The logarithm of that line is a U-shaped curve (lying on its side), which itself is another singularity. This process continues, looking exactly like the "fractal" graph I posted a few weeks ago. Each of those lines is a 1-dimensional singularity (i.e., not a mere point). I'm not entirely sure what lies on the other side of these singularities, because we can't go "around" them. They extend to infinity at both endpoints. The other side is almost certainly the logarithm from another branch, which would include the singularities with all the strange behavior. Deeper down the rabbit hole we go.

~ Jay Daniel Fox

jaydfox | 09/27/2007, 07:00 AM

I know I promised graphs, and I will get to them eventually, but I don't have the tools I need. I'll likely have to write a program to actually graph the data, probably using a C library to do the calculations. In the meantime, I've been studying the slog, and I'm fairly confident of some of its basic properties. The easiest way to analyze it is to ignore for the time being any specific solution (such as Andrew's matrix solution). Instead, imagine a half-plane, comprised of all complex values with real part less than the real part of the primary fixed points. I.e., all complex numbers with real part less than about 0.318131505. I'm going to call this half plane the "backbone" of the slog. Now, logarithmicize this half plane. (Yes, I made that word up.
I've decided it doesn't suit taking logarithms of individual values, but is an appropriate transitive verb for taking the logarithm of a set, e.g., the set of points in a defined region.) When you do this, you will notice a few things. First, if you use only one branch when taking the logarithm of each point, you'll note that the values at the top and bottom of this 2*pi-thick branch mesh with each other, including all derivatives. Hence, we can safely make copies at 2*pi*i intervals in order to analytically extend the graph. This is, in effect, filling out all the branches of the natural logarithm. Second, there is a U-shaped region missing, centered along the real line, asymptotically approaching pi/2 above and -pi/2 below, with its smallest real part at log(0.31813) ~= -1.14529. This region corresponds to the "universe" on the other side of the line between the singularities at 0.318+1.337i and 0.318-1.337i, which we did not include. Notice that, because we made copies at 2*pi*i intervals, we'll have corresponding U-shaped cutouts. The part of this new graph with real part greater than 0.31813, i.e., the part that has bulged between the singularities at 0.318+1.337i and 0.318+4.946i, is now in the "logarithmic" part of the slog graph. This is a logarithmic "branch" of the slog, though the slog branches in a fractal manner, so we need to be careful about what we call a logarithmic branch. Once in this logarithmic branch, we are effectively constrained to stay in the current branch. We cannot integrate the derivative along a path to a point in a neighboring branch without hitting a "wall" singularity. We can only get to another branch by integrating a path back between the singularities through which we entered, then between the two singularities which would take us to the other point. This is an important thing to realize, because it will prevent us, for example, from trying to use the other fixed points for continuous iteration.
Doing so would require a different function than the slog we're analyzing. The slog which uses the primary fixed points for continuous iteration does not have a branch where continuous iteration from the other fixed points can be performed, unless those branches are behind the "wall" singularities, i.e., where we can't integrate a path to them, and hence we can't numerically analyze them anyway. They may as well be different functions. What if we go the other way? That is, rather than logarithmicize the half plane, what if we exponentiate it? Well, now we get a circle with radius 1.374557..., a circle which would glance our two primary fixed points, had I not specified all complex values with real part strictly less than the real part of the fixed points. Now we are bulging out into the "exponential" part of the slog. Note that upon entering this realm, we lose the branches. Yes, 2.062+1.305i and 2.062+7.589i both equal 2.062+7.589i when exponentiated. However, this is not due to branching at 2*pi*i intervals. Rather, we can integrate a path from 2.062+1.305i to 2.062+7.589i, and if we exponentiate the path (i.e., every point on the path), then we get a loop around the origin. The derivatives at the two points are not the same. They do not "look like" the same point. Remember, back in the "backbone", the points 0 and 2*pi*i "looked like" the same point, as far as all derivatives were concerned. In the exponential realm, two points might go to the same point when exponentiated, but the two points lie in neighborhoods with completely different properties. (I should be careful when saying this. Yes, if we move 0.2 real units away from each point and then exponentiate, we again arrive at the same point. We might be tempted therefore to say that the neighborhoods do look the same. However, these two points are the iterative exponentiations of two different points from an underlying region. To see this, call a=2.062+1.305i and b=2.062+7.589i.
Now calculate exp(ln(a)+0.1) and exp(ln(b)+0.1), and note that you arrive at different distances from a and b. Hence, the neighborhoods are not the same.) Now, the really bizarre stuff happens when we take the second logarithm of the backbone. All of those bulges into the logarithmic branches will now appear like teeth in the formerly empty U-shaped region. In effect, the singularities at 2k*pi*i offsets of the primary singularities have now appeared within the U-shaped region, within each logarithmic branch, and all the bizarre behavior of the logarithmic and exponential branches applies as we go between these singularities as well. All of this is done without having to know the exact mathematical "signature" of the underlying slog. However, some slogs will be "better" than others, for example, giving us smoother derivatives near the singularities. I'm hopeful that Andrew's slog will in fact be the "best" of the bunch, giving the smoothest possible derivatives near the singularities. But so far, I can at best say that his solution isn't obviously "wrong". I also have no idea what an slog based on continuous iteration from other fixed points will look like. I may investigate it, to see if it gives me insight into this primary slog. Methinks they will have similar properties (logarithmic and exponential branches, etc.), but ultimately have different power series when constructed at the origin.

~ Jay Daniel Fox
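The claims in the exponentiation discussion can be checked directly (a sketch; a and b are the rounded values from the post, with b forced to be exactly a + 2*pi*i):

```python
import cmath
import math

# The "backbone" boundary Re(z) = Re(a0) exponentiates onto a circle of
# radius e^Re(a0), and since exp(a0) = a0 that radius is exactly |a0|.
a0 = 0.5 + 0.5j
for _ in range(200):
    a0 = cmath.log(a0)
assert abs(math.exp(a0.real) - abs(a0)) < 1e-12   # radius ~ 1.374557

# Two preimages of the same point, one branch apart:
a = 2.062 + 1.305j
b = a + 2j * cmath.pi
assert abs(cmath.exp(a) - cmath.exp(b)) < 1e-12   # same image point

# ...but their neighborhoods map differently: stepping 0.1 in the
# underlying (pre-log) coordinate lands at different distances.
da = abs(cmath.exp(cmath.log(a) + 0.1) - cmath.exp(cmath.log(a)))
db = abs(cmath.exp(cmath.log(b) + 0.1) - cmath.exp(cmath.log(b)))
assert abs(da - abs(a) * (math.exp(0.1) - 1)) < 1e-12
assert da != db
```

The two step distances scale with |a| and |b| respectively, which is precisely the "neighborhoods are not the same" observation.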

