# Tetration Forum

Full Version: Continuous iteration from fixed points of base e
To be honest, I'm stumped on how to squeeze more accuracy out of Andrew's solution. I've gone through the analysis, and I know where all the singularities are that can be "seen" from the origin. I can drill into any branch and locate singularities. And I just don't see any close enough to explain the radius of convergence of the residue.

What I can imagine being the case is that, from the point of view of the origin, if I subtract out the two primary singularities, then as I pass the singularities on my way out from the origin, I'll see a split. If I were sitting at the origin, looking at the upper primary fixed point (0.318+1.337i), I would see a discontinuity to the left and to the right of the fixed point. At the fixed point, I would see the start of a "tear" or "rip" in the fabric of my graph. As I look farther and farther beyond the fixed point, the gap between the values would increase.

This would explain a few features of the power series, as I'm observing it. Very near the fixed point, the difference is small, but farther away, the difference is large. So the root test starts out rather low, because the tear is hardly perceptible near the fixed point. As I look further into the power series, the root test climbs, very slowly, up towards 0.67, 0.68, perhaps 0.69. I conjecture that after a very large number of terms, perhaps thousands, I'll climb up to 0.70, and then 0.71. Eventually, of course, if I could get tens of thousands of terms, I would begin to approach the 0.727, which would put my radius of convergence back at the primary fixed point. After all, the tear starts there, so I can't really get past it. But the tear is so subtle that it doesn't really affect the first few hundred derivatives very much at all.
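The limiting value near 0.727 can be checked directly, since it should be the reciprocal of the distance from the origin to the primary fixed point. A quick sketch (plain Python, Newton's method on e^z - z = 0; my own illustration, not the matrix-method code):

```python
import cmath

# Locate the upper primary fixed point L of exp (e^L = L) by Newton's method
L = 0.3 + 1.3j
for _ in range(50):
    L -= (cmath.exp(L) - L) / (cmath.exp(L) - 1)

print(L)             # ≈ 0.3181 + 1.3372i
print(abs(L))        # ≈ 1.3746, the conjectured radius of convergence
print(1 / abs(L))    # ≈ 0.7275, the limiting value of the root test
```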

Another feature of the power series of the residue is that it has a cycle pattern to it, with a period of just under 7, which puts its direction from the origin at an angle of just under 1.346 radians, pretty much where we'd expect it to be if it were in fact at the fixed point itself (1.337 radians).

Now, removing a point singularity, especially one as simple as a logarithm, is trivial. I was proud I figured it out, but in hindsight it was actually pretty obvious. But how do I remove a "tear" in the fabric of my graph? It's not a point, and it's certainly not going to be as trivial a function as a logarithm. I have no "trick" for squeezing more precision out of my slog. For that matter, I had hoped to come up with a simple definition of the slog in terms of a combination of (perhaps an infinite number of) more basic functions. Now I'm not so sure we'll be so lucky.

At any rate, I have enough precision with my 700x700 solution (and I think I have enough memory to go to about 750x750, since I'm no longer using rational math), so I will start working on the pretty graphs I've been promising. It may be some time next week before I get some good ones generated.
jaydfox Wrote: I have no "trick" for squeezing more precision out of my slog
Ahem... Andrew's slog, or "the" slog, but certainly not "my" slog. At best I discovered a method of extracting greater precision in a practical implementation, and I discovered the nature of the singularities causing the radius of convergence, including an explanation of how continuous iteration from a complex fixed point can yield a real-valued function. But it's not "my" slog.
However, it never became clear to me what your actual conjecture is, as you always work with approximations. So I put together some thoughts of my own:

Let be the regular Abel function developed at the primary fixed point and let be the regular Abel function developed at .

Then we can set and this is again an Abel function (slog) for :

.
.

This would also yield an Abel function for other bases and for any pair of conjugated fixed points.

Is your conjecture now that Andrew's slog is this primary ? For me it would also be interesting what happens for . Continuing the above idea, for a real fixed point we would simply get its regular iteration. However, which of the two real fixed points is the happy one whose regular Abel function is Andrew's slog?
As I now have the fundamentals of computing the regular slog for a given fixed point, we can compute
where is the principal Abel function of at the fixed point, and I provide the graph of , where is the primary fixed point in the upper half-plane of .
The result looks quite strange:
[attachment=96]
and is surely not Andrew's slog.

I mean, singularities of some kind are to be expected at , and those can clearly be seen. However, another strange thing happens: the Abel function is not continuous at a strange point, as we see when we extend the range:
[attachment=97]
I have no explanation yet.
Ah, now I see where the discontinuity comes from. The Abel function is the logarithm (to base ) of the Schroeder function, and the Schroeder function crosses the line (the negative real axis). There the imaginary part of the logarithm jumps from to , which produces this discontinuity (in the real part, because we divide by another complex number).
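The jump described here is just the standard branch cut of the complex logarithm; a two-line check (plain Python, my own illustration) shows the imaginary part flipping between +pi and -pi as the argument crosses the negative real axis:

```python
import cmath

eps = 1e-9
above = cmath.log(complex(-1.0, eps))    # just above the negative real axis
below = cmath.log(complex(-1.0, -eps))   # just below it
print(above.imag, below.imag)            # ≈ +pi and -pi: a jump of 2*pi
```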

Schroeder function for in the complex plane:
[attachment=98]

It starts in the upper right area and evolves clockwise until it crosses the negative real axis at approximately .
(The peaks (which belong to ) are truncated due to the Maple graphing routine.)
Of course this discontinuity can be removed, but in the moment I am too lazy to provide the continuous graph.
I need to take stock of where I'm at, because what seemed complicated before is even more complicated, now that I have a better handle on this monster.

First of all, I want to be clear that my conjecture on the slog being like the sum of two complex conjugate parts isn't meant to imply that either part on its own is an Abel function, or whatever the terminology would be. It's easy to see why when we consider a point where either function on its own takes a complex value, such as 0.5+0.2i. Its conjugate is 0.5-0.2i, and the average of these two is the real number 0.5. However, exp(0.5+0.2i) = 1.616+0.328i, and averaged with its conjugate we get 1.616 (just keeping a few sig-figs), whereas exp(0.5) is 1.649. So clearly it's not just a matter of adding two otherwise complex-valued continuous tetration solutions. No, it's a more subtle blending that is occurring here.
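The arithmetic in this paragraph is easy to reproduce (plain Python):

```python
import cmath, math

z = 0.5 + 0.2j
# averaging a conjugate pair gives a real number...
avg_z = (z + z.conjugate()) / 2
print(avg_z)              # (0.5+0j)

# ...but exp of the average is not the average of the exps
avg_exp = (cmath.exp(z) + cmath.exp(z.conjugate())) / 2
print(avg_exp.real)       # ≈ 1.616
print(math.exp(0.5))      # ≈ 1.649
```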

So yes, I see both primary fixed points being involved, yielding singularities that are very log-like. In fact, to see this in action, consider the following functions of complex z, with and as the primary fixed points for base e:

has singularities at the two primary fixed points. Interestingly, has singularities at the primary fixed points, as well as at offsets of the fixed points. Going a step further, has singularities at the natural logarithms of all these points, in all branches (i.e., at offsets).

In fact, you could continue with this process, and you should easily be able to verify for yourself that each of these singularities is a singularity in the slog for base e, if you drill deep enough into the logarithmic branches.
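One piece of this branch structure is easy to verify numerically: since e^L = L, every 2πi offset of the fixed point is mapped back onto L by exp, so each offset is a preimage of the fixed-point singularity one logarithmic branch deeper. A sketch in plain Python (my own check, with L found by Newton's method):

```python
import cmath

# primary fixed point L of exp (e^L = L), found by Newton's method
L = 0.3 + 1.3j
for _ in range(50):
    L -= (cmath.exp(L) - L) / (cmath.exp(L) - 1)

# every 2*pi*i offset of L is mapped back onto L by exp,
# so each offset is a preimage of the fixed-point singularity
for k in (-2, -1, 1, 2):
    assert abs(cmath.exp(L + 2j * cmath.pi * k) - L) < 1e-10
```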

But in the exponential branches, these singularities cannot be "seen". Unlike the plain logarithm, in which each branch looks the same as another, the various branches of the slog have a fractal nature to them, so that we cannot easily define how the slog "looks" at any point without also making explicit which branch we are on. The origin is a fairly special case, which is difficult to describe without pictures.

The slog is not a function, nor is tetration. This is important to understand. The logarithm is not a function, in the sense that analytically continuing it from any point by integrating its derivative yields multiple values for any point. Yet exponentiation is in fact a function, so long as you're clear on how you define the base.

For example, is a function, but the inverse is not a function. Exponentiation is like f(z), while the logarithm is like , with multiple values possible.

On the other hand, the slog and sexp are like trying to find functions y=g(x) and x=h(y) of the relation . Neither g nor h is a function, though we can limit ourselves to certain domains and find functions for those "parts".

From this frame of reference, x and y are complex numbers, giving four degrees of freedom. I.e., we have a 4-D space with a 2-D fractal surface that represents all points on this slog/sexp relation. We might try to find x=slog(y), or we might try to find y=sexp(x). Either way, we're not really finding "functions", not unless we limit our domains very carefully.

This will also help us to understand why the slog doesn't "look" the same at a given point, depending on which branch we're in. Unlike the logarithm, which is like a simple "screw" in 4-D space, the slog is much more complex. However, it's all one 2-D manifold, as far as I can tell, with continuous derivatives at all points in all branches (except at the singularities), so the entire structure's information can be encoded by the power series developed at the origin in the "backbone".

I've been able to extract a fair amount of precision from the matrix method, but at this point the numerical precision is of less concern to me than uncovering a more fundamental method of deriving the solution. The matrix method yields the correct singularities with the correct structure, but it's still not at all obvious to me that this should ever have worked. It's still "magic" to me, still some sort of voodoo. I'd rather derive it from basic principles (basic being a relative term here).
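For readers who haven't seen the matrix method, here is a minimal sketch of the flavor of linear system it solves. This is my own reconstruction, not Jay's or Andrew's code, and Andrew's published formulation may differ in details (normalization, ordering): expand slog at 0 as s(z) = -1 + Σ s_k z^k (using the convention slog(0) = -1), substitute into slog(e^z) = slog(z) + 1, and match the first n Taylor coefficients, giving an n×n system for s_1..s_n.

```python
import numpy as np
from math import factorial

n = 20  # truncation order
A = np.zeros((n, n))
b = np.zeros(n)
for j in range(n):               # equation for the z^j coefficient
    for k in range(1, n + 1):
        # coefficient of z^j in s_k * e^(k*z)
        A[j, k - 1] = k**j / factorial(j)
    if j >= 1:
        A[j, j - 1] -= 1.0       # subtract s_j from the right-hand side
b[0] = 1.0                       # j = 0 equation: sum of the s_k equals 1
s = np.linalg.solve(A, b)

def slog_approx(z):
    """Truncated slog series at 0, normalized so slog(0) = -1."""
    return -1.0 + sum(c * z**(i + 1) for i, c in enumerate(s))

# sanity check: the functional equation slog(e^z) = slog(z) + 1
z = 0.1
print(abs(slog_approx(np.exp(z)) - slog_approx(z) - 1.0))  # small
```

In floating point this system becomes ill-conditioned as n grows, which is presumably one reason exact rational arithmetic was used in the early solutions before switching, as mentioned above, to faster arithmetic for the 700x700 case.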

I suppose none of this is making sense, since I still have yet to post source code or decent pictures, other than the few I've already posted. Those few should be sufficient for those who study them, but more might be needed for those with less time to devote to forming complex mental abstractions.
Dear Jaydfox,

jaydfox Wrote:The slog is not a function, nor is tetration. This is important to understand. The logarithm is not a function, in the sense that analytically continuing it from any point by integrating its derivative yields multiple values for any point. Yet exponentiation is in fact a function, so long as you're clear on how you define the base.

For example, is a function, but the inverse is not a function.

You are right! Nevertheless, we have to pay attention to the fact that mathematical analysis (M. A.) supplies us with very important tools, such as the concepts of "function" and "continuity", which are absolutely indispensable for a correct analysis of a problem but which, nevertheless, are only (... extremely important) tools. In fact, it is clear that is a "function", but its inverse is not a "function", and its "continuity" cannot be analyzed. Nevertheless, we also know perfectly well that an inverse operation of the "square-of-x" exists and that it is the two-valued "square-root-of-x". The graph of the second object (it is NOT a "function") can easily be obtained by a simple change (x<->y) of variables, without any other kind of modification. All of us know that this is true, but we cannot express such a thing in standard mathematical language.

For instance, I know that sqrt(4) = {-2,+2}, but I was never able to find a pocket calculator that gives this correct result (not even "Mathematica" does). It is tacitly understood that we simply calculate the "principal branch", in order to get sqrt(4) = 2, and that, after that, we smartly duplicate the operation on the symmetrical (negative) branch to obtain the required second value (-2). This fact has disturbed me since I was almost a child. I was always thinking that the adults were not serious people!

Your example of the circle (with r = 2) is also amazing. According to the standard M. A. a circle cannot represent a "function" and we cannot verify its "continuity". On the contrary, the little devil I have in my brain suggests that its graph is absolutely continuous and that it is a two-valued "function". Its derivative must also be continuous and "two-valued".

René Thom, with his "Théorie des Catastrophes", analyzed these strange (but, at the same time, quite normal) mathematical objects and was able to classify what he called elementary "catastrophes". But, as far as I know, he didn't succeed in formulating a consistent new extension of the standard M. A. covering these important objects.

I discussed this with Henryk, and he tried to convince me that "something exists already" (e.g.: Riemann surfaces) and that mathematics is not just a philosophical opinion. I agree in principle. But, on the practical ground, ... I doubt!

GFR
I think all of us could use a course in analytic continuation. Some of the things we're talking about in this thread rely heavily on theorems and terminology in analytic continuation theory. Some other related subjects include cohomology theory, fiber bundles, sheaf theory, and Riemann surfaces.

I think if we at least use the right terms, then we could all understand a little more. For example, different parts of the analytic continuation of a power series to its corresponding Riemann surface are not called "the logarithmic landscape" but could instead be called germs or sections.

There might even be nice formulas for analytic continuation, easily found if only we knew to search for "X theorem", since search terms can be a decisive factor in how successful a search is. I'm not sure if I can suggest anything in particular, but it certainly seems as if a little bit of rigor would do us some good.

Andrew Robbins

PS. A circle is a parametric plot of two sinusoidal functions. For concepts similar to a function, but impossible to describe with a function, see bipartite graph (and for self-maps, disjoint unions allow ).
My main concern is how to display complex relations.

For example, to display a real relation, we simply display a graph of all the valid points in an R x R space. This allows the graph to be neither one-to-one nor onto. The inverse relation is displayed simply by reflecting the graph about the line x=y, so to speak.

But to display a complex relation, we need a graph in a C x C space, which is essentially a 4-D space. We could use a 3-D graph to display one of the complex variables against the real part, or the imaginary part, or the modulus, or the argument, of the other complex variable. Indeed, I've seen such 3-D graphs at Wikipedia and other sites.

And eventually I plan to do so. For now, I'm trying to stick to 2-D graphs, and this leaves me drawing contour lines (real and imaginary). It works, but it takes some mental acrobatics to "read" the graph, and it's hard to pick out such elementary features as fixed points. It'd be nice to be able to develop a "sixth sense" for the 4-D structure of a C x C relation.
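For what it's worth, the contour-line approach is straightforward to set up numerically. This sketch (numpy, with exp standing in for the slog, since the slog series isn't reproduced here) builds the real and imaginary grids one would feed to a contour plotter such as matplotlib's plt.contour:

```python
import numpy as np

# Sample w = exp(z) on a rectangle of the z-plane
x = np.linspace(-2.0, 2.0, 201)
y = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, y)
W = np.exp(X + 1j * Y)

# Contour lines of constant Re(w) and Im(w), e.g. with matplotlib:
#   plt.contour(X, Y, W.real, levels=20)
#   plt.contour(X, Y, W.imag, levels=20)
U, V = W.real, W.imag
```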

As for more basic concepts, I've studied a bit of tensor calculus over the years, so I'm familiar with Riemann surfaces, changes of coordinates, and such. (I've studied special and general relativity pretty extensively over the past decade or two.) Once I came to grips with the structure of the slog, it didn't bother me as much that it has such oddities as singularities that are present in some branches and absent in others. Indeed, it reminds me of studying gravitational lensing, and other oddities of relativity theory. We're not in a Lorentzian space, but a complex space is weird in its own ways. For example, complex-analytic transformations preserve angles, at least at infinitesimal scales.

But topology is a tougher subject. I never appreciated that until this last year, when I tried to teach myself some basic topology. To be honest, differential equations and tensor calculus seem like a cakewalk compared to trying to understand the first chapter of my topology book!
The formula you give is very interesting. You say you are unsure whether the denominators should be or (base e), so I'm going to assume the most general form of both of these:
Then you show that , where are the fixed points of b, which I still have my doubts about. What I noticed recently while re-reading your posts is that, since you are unsure about what the are in the above formula, and given that we kind of "know" the slog function through approximations, we can solve for using a system of linear equations.

Using Mathematica, it is a very simple matter to show that:
which naturally implies that, if , then:
(for n > 0), which is a system of linear equations that can be solved for . This could provide a way to determine numerically whether should be just or .
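The formulas above were images that have not survived, so here is only the general shape of the idea, with hypothetical basis functions of my own choosing: if the slog is conjectured to be a linear combination of known functions, matching finitely many Taylor coefficients gives a linear system for the unknown weights.

```python
import numpy as np

# Hypothetical example: suppose g = c1*f1 + c2*f2, where
#   f1(z) = log(1 - z/a),  f2(z) = log(1 - z/b),
# and we know the first N Taylor coefficients of g
# (here faked with c1 = 2, c2 = 3 so we can check the recovery).
a, b = 1.5, -2.0
N = 6
k = np.arange(1, N + 1)
f1 = -1.0 / (k * a**k)     # k-th Taylor coefficient of log(1 - z/a)
f2 = -1.0 / (k * b**k)     # k-th Taylor coefficient of log(1 - z/b)
g = 2.0 * f1 + 3.0 * f2    # stand-in for coefficients read off an slog approximation

c, *_ = np.linalg.lstsq(np.column_stack([f1, f2]), g, rcond=None)
print(c)                   # recovers [2. 3.]
```

With more equations than unknowns, as here, least squares also gives a consistency check: a large residual would mean the conjectured form is wrong.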

Andrew Robbins