Posts: 440
Threads: 31
Joined: Aug 2007
(08/07/2009, 06:49 PM)bo198214 Wrote: I hope we can do some comparisons in the complex plane (because on the real axis differences are mostly too small) in the overview paper. Are you interested in participating? (I still didn't get an answer to that question.) I too want to see the differences in the complex plane. As I pointed out somewhere, any slog solution (or inverse of a sexp solution) that differs from Andrew's by a cyclic "wobble" will necessarily get very different and erratic results as we approach the primary fixed points. Whether this erratic behavior remains finite or produces singularities is my primary interest, but seeing the behavior in general would be nice.
As to the paper: I would like to participate, though I'm not sure what's involved in getting set up to do it, and at any rate, I need to review all my old posts to see how much of the work can be recycled. Also, I'd been hoping to get answers to the questions about the differences in the complex plane, so I've been focussing my recent efforts on reviving old SAGE scripts and crunching some numbers.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
(08/07/2009, 07:24 PM)jaydfox Wrote: As to the paper: I would like to participate, though I'm not sure what's involved in getting set up to do it. Setup is described here.
Quote: and at any rate, I need to review all my old posts to see how much of the work can be recycled. Also, I'd been hoping to get answers to the questions about the differences in the complex plane, so I've been focussing my recent efforts on reviving old SAGE scripts and crunching some numbers.
Well that comparison would be a great contribution to the paper.
Posts: 640
Threads: 22
Joined: Oct 2008
08/10/2009, 11:37 PM
(This post was last modified: 08/10/2009, 11:43 PM by sheldonison.)
(08/07/2009, 05:14 PM)jaydfox Wrote: My change of base formula relies on the following:
$f(x) \;=\; \lim_{n\to\infty} \log_a^{\circ n}\!\left(\exp_b^{\circ n}(x)\right)$
I had found that it was computationally more accurate to work with the double logarithm of x..... I've been playing around with this for a few months, on and off. I believe this limit does not converge in the complex plane. First, some background: let's take Jay's formula, change base $a$ to $e$ and base $b$ to $\eta$, and then plug in $\operatorname{cheta}(x+\theta(x))$ for $x$. Since $\exp_\eta^{\circ n}(\operatorname{cheta}(z)) = \operatorname{cheta}(z+n)$, this gives $f(x) = \lim_{n\to\infty} \ln^{\circ n}\!\left(\operatorname{cheta}(x+\theta(x)+n)\right)$. Here, $\theta(x)$ is the 1-cyclic base-conversion function that oscillates with a small amplitude around the base-conversion "constant". See my previous post for some more details.
The substituted equation is interesting, especially since cheta is entire. If theta is replaced with a constant, we get Jay's base-change version of the sexp solution. So it would be nice to get a graph of the f(x) limit equation in the complex plane. Here's an overview of my attempt to explain why the f(x) limit equation may not converge in the complex plane. First, show the f(x) limit converges nicely for some particular real value of x. Second, show that f(x+imaginary) converges to a very different number. Third, show that no matter how small the imaginary component is, f(x+imaginary) converges to a very different number than f(x). Then the slope in the complex plane is discontinuous, and perhaps the convergence radius is zero, or f(x) is not analytic. There is one other possibility that I cannot rule out, and that is that the multivalued logarithm allows for a solution that does converge.
For the base pair (e, eta), the limit f(x) takes on complex values for x <= 4.384, and this is expected. For larger real values of x, the limit is well defined, real valued, and quickly converging. I analyzed f(4.7) as k increases. The double-log formula makes it fairly easy to reach arbitrarily high precision. For k=5, the eta power tower is already 8.3*(10^150).
k=0, f(x)=4.7
k=1, f(x)=1.729, shortcut, f(x)=x/e
k=2, f(x)=0.729, shortcut, f(x) = x/e - 1, Jay's double-log formula
k=3, f(x)=0.0705
k=4, f(x)=0.4237
k=5, f(x)=0.5638
k=6, f(x)=0.5642928...
k>=7, f(x)=0.5642928....
Convergence works perfectly for real x > 4.385. Now we try complex numbers, x = 4.7 + 0.2i. As near as I can tell, the sequence will approach the primary fixed point of exp_e.
k=0, f(x)= 4.7 + 0.2i
k=1, f(x)= 1.7290 + 0.0736i
k=2, f(x)= 0.7290 + 0.0736i
k=3, f(x)= 0.0754 + 0.1418i
k=4, f(x)= -0.3640 + 0.3394i
k=5, f(x)= -0.4191 + 0.4583i
k=6, f(x)= -0.4184 + 0.4574i
k=7, f(x)= -0.0342 + 1.4238i
k=8, f(x)= 0.3094 + 1.6065i
k=9, f(x)= 0.4950 + 1.3975i
k=10, f(x)= 0.4020 + 1.2313i
k=11, f(x)= 0.2605 + 1.2507i
k=12, f(x)= 0.2423 + 1.3639i
k=13, f(x)= 0.3247 + 1.3964i
....
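The complex run can be repeated the same way, provided one fixes a branch; the sketch below (mine, not from the thread) takes the principal branch of every logarithm, which reproduces the pattern of the table (branch handling can shift some intermediate rows) and, for large k, lands near the fixed point of exp_e at roughly 0.318 + 1.337i:

```python
# Same limit at a complex starting point, with every log taken on the
# principal branch (a choice; the thread leaves the multivalued log open).
from mpmath import mp, mpc, exp, log, e

mp.prec = 2000  # the k=5 tower entry shrinks to ~1e-55, so keep many bits

def f_c(z0, k):
    """k base-eta exponentials up, then k principal-branch natural logs down."""
    z = mpc(z0)
    for _ in range(k):
        z = exp(z / e)   # eta**z with eta = e^(1/e)
    for _ in range(k):
        z = log(z)
    return z

L = mpc('0.3181315', '1.3372357')  # primary fixed point of exp: exp(L) = L

for k in range(14):
    v = f_c(mpc('4.7', '0.2'), k)
    print(k, v, abs(v - L))
```

The distance column shrinks for large k, while the real-axis limit at x = 4.7 is about 0.5643: exactly the discontinuity Sheldon's argument predicts.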
Here is the sequence of iterated exponentials exp_eta^(k)(x) whose behavior drives f(x) toward the fixed point of exp_e. Notice that at k=3, cos(imag/e) is negative.
k=0, x = 4.7 + 0.2i
k=1, eta^x = 5.619958178 + 0.414241171i
k=2, eta^eta^x = 7.813166879 + 1.199958104i
k=3, 16.01506009 + 7.567768883i (cos(imag/e) negative)
k=4, -339.0928882 + 126.694129i (real value negative)
k=5, -5.79817E-55 + (3.28697E-55)i (real value close to zero)
k=6, 1 + (1.20921E-55)i
k=7, 1.444667861 + (6.42651E-56)i
k>=8, grows steadily closer to e.
There's nothing special about the starting value 0.2i. Any nonzero imaginary starting point will eventually lead to the magnitude of the imaginary component of the iterated exp_eta growing until it exceeds e*(pi/2). If at that point cos(imag/e) < 0, which is a 50/50 proposition, then the real component of the next iterate goes negative. The iterate after that is very close to zero, and from that point forward the iterated exp_eta grows forever towards e. The logarithms on the way back down are multivalued, which makes things trickier, but the f(x) values appear to approach the fixed point of exp_e.
Moreover, there is no "limit" in the conventional sense. No matter how close the initial imaginary part is to zero, eventually the iterated super-exponential will grow so that imag/e > pi/2, and from that point on it's a random roll of the dice. Again, the multivalued logarithm may allow for a different solution that does converge in the complex plane. However, the increasingly fractal nature of the repeated exponentiation may make it difficult to generate a continuous solution. Some initial imaginary values may happen to give a positive cos(imag/e) at every iteration, so that the tower grows forever and f(x) is nicely behaved. However, most initial imaginary values will eventually hit a negative cos(imag/e), and will henceforth lead to exp_eta growing towards e. The combination of the two types of behavior may provide a way to show the limit doesn't converge.
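The sign flip at the heart of this argument is visible directly in the polar form of the tower map: since eta^z = exp(z/e), we have eta^z = exp(Re(z)/e) * (cos(Im(z)/e) + i*sin(Im(z)/e)), so the real part of the next iterate goes negative exactly when cos(Im(z)/e) < 0, which first becomes possible once |Im(z)| > e*pi/2. A minimal check (my illustration, not code from the thread):

```python
# Verify sign(Re(eta^z)) == sign(cos(Im(z)/e)) on a few sample points,
# including the k=3 tower value from the table above.
import cmath
import math

def exp_eta(z):
    """One tower step: z -> eta^z = exp(z/e), since ln(eta) = 1/e."""
    return cmath.exp(z / math.e)

for z in (16.01506009 + 7.567768883j,  # the k=3 value: cos(Im/e) < 0 here
          2.0 + 1.0j,
          5.0 - 9.0j):
    w = exp_eta(z)
    print(z, '->', w, 'signs agree:', (w.real > 0) == (math.cos(z.imag / math.e) > 0))
```

Running this on the k=3 tower value gives a large negative real part for the next iterate, matching the k=4 row of the table.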
 Sheldon Levenstein
Posts: 440
Threads: 31
Joined: Aug 2007
08/11/2009, 04:30 PM
(This post was last modified: 08/11/2009, 04:36 PM by jaydfox.)
(08/10/2009, 11:37 PM)sheldonison Wrote: (08/07/2009, 05:14 PM)jaydfox Wrote: My change of base formula relies on the following:
$f(x) \;=\; \lim_{n\to\infty} \log_a^{\circ n}\!\left(\exp_b^{\circ n}(x)\right)$
I had found that it was computationally more accurate to work with the double logarithm of x..... I've been playing around with this for a few months, on and off. I believe this limit does not converge in the complex plane.
Here's an overview of my attempt to explain why the f(x) limit equation may not converge in the complex plane. First, show the f(x) limit converges nicely for some particular real value of x. Second, show that f(x+imaginary) converges to a very different number. Third, show that no matter how small the imaginary component is, f(x+imaginary) converges to a very different number than f(x). Then the slope in the complex plane is discontinuous, and perhaps the convergence radius is zero, or f(x) is not analytic. There is one other possibility that I cannot rule out, and that is that the multivalued logarithm allows for a solution that does converge. I'm not sure if the limit exists in the normal sense (i.e., it's well-defined for real numbers, but as you say, not for non-real numbers).
However, rather than two distinct points x and x+d*i (where d goes to 0), consider a line segment L defined between those two points. As exp_b(x) is entire, the image exp_b(L) will be continuous, as will be exp_b(exp_b(L)), etc. Thus, no matter how many times this curve wraps around the origin in some bizarre fractal fashion, there is always a well-defined "path" back to the real line.
We can then iteratively perform the logarithm log_a(x), which has branches. We start at the real endpoint of $\exp_b^{\circ k}(L)$ as we're performing the logarithms, so that when we wrap around the origin, we always "know" which branch of the logarithm to use.
When all is said and done, we will arrive at the correct location, with no ambiguity. However, without using this "trick" to determine which branch to use, it does seem that this limit does not converge properly for nonreal x. (And at any rate, for base b>eta, it definitely does not converge on the way "up", even if by some miracle it manages to converge on the way back "down").
(And yes, I ignored the change of base "constant" for simplicity.)
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
Can you refresh my memory why change of base only makes sense for bases $b > \eta$?
Posts: 440
Threads: 31
Joined: Aug 2007
08/11/2009, 07:46 PM
(This post was last modified: 08/11/2009, 07:47 PM by jaydfox.)
(08/11/2009, 07:41 PM)bo198214 Wrote: Can you refresh my memory why change of base only makes sense for bases $b > \eta$? Well, it can actually be applied to any base greater than 1. I assume you're referring to where I said:
jaydfox Wrote:And at any rate, for base b>eta, it definitely does not converge on the way "up", even if by some miracle it manages to converge on the way back "down"
For bases less than or equal to eta, it will converge on the lower fixed point as we go "up". For bases larger than eta, there aren't any real fixed points, so convergence never happens.
Also, my change of base formula works from infinity down, so to speak, so for bases less than or equal to eta, we can't properly get to 0, so it doesn't give tetration per se (though it is a form of superexponentiation, just not centered at 0).
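The up-direction dichotomy described here is easy to check numerically (my illustration; the names are not from the thread): iterating x -> b^x from x = 1 converges to the lower fixed point when 1 < b <= eta, and blows up when b > eta.

```python
import math

eta = math.exp(1 / math.e)  # ~1.44467, the largest base with a real fixed point

def iterate_exp(b, x, n):
    """Apply x -> b**x n times."""
    for _ in range(n):
        x = b ** x
    return x

print(iterate_exp(math.sqrt(2), 1.0, 100))  # converges to 2, the lower fixed point
print(iterate_exp(eta, 1.0, 100))           # creeps toward e (parabolic, so slowly)

try:
    iterate_exp(1.5, 1.0, 100)              # b > eta: no real fixed point
except OverflowError:
    print("base 1.5 > eta: the iterates blow up")
```

The eta case illustrates why the convergence to e is so slow: the fixed point is parabolic (derivative exactly 1), so the iterates approach e only like a constant over n.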
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
(08/11/2009, 07:46 PM)jaydfox Wrote: Well, it can actually be applied to any base greater than 1.
So did someone check already whether base conversion of regular iteration gives again regular iteration (say at the lower fixed point)?
Quote: I assume you're referring to where I said:
jaydfox Wrote:And at any rate, for base b>eta, it definitely does not converge on the way "up", even if by some miracle it manages to converge on the way back "down"
For bases less than or equal to eta, it will converge on the lower fixed point as we go "up". For bases larger than eta, there aren't any real fixed points, so convergence never happens. Sorry, I don't know what you mean by "converging on the way up/down".
Quote:Also, my change of base formula works from infinity down, so to speak, so for bases less than or equal to eta, we can't properly get to 0, so it doesn't give tetration per se (though it is a form of superexponentiation, just not centered at 0).
If you mean the translation in the argument, I wouldn't care about that.
Posts: 440
Threads: 31
Joined: Aug 2007
08/11/2009, 09:31 PM
(This post was last modified: 08/11/2009, 09:32 PM by jaydfox.)
(08/11/2009, 08:05 PM)bo198214 Wrote: (08/11/2009, 07:46 PM)jaydfox Wrote: Well, it can actually be applied to any base greater than 1.
So did someone check already whether base conversion of regular iteration gives again regular iteration (say at the lower fixed point)? Well, when I tried a couple years ago, I got different results when using eta and sqrt(2) (using the upper fixed point), so I assume in general that base conversion does not give the same results as regular iteration from the upper fixed point.
Not sure about the lower fixed point, but I would not be surprised if it did match, because regular iteration from the lower fixed point is found by iteratively exponentiating until we reach the fixed point, and the first part of the change of base formula relies on iterative exponentiation.
Quote:Quote:jaydfox Wrote:And at any rate, for base b>eta, it definitely does not converge on the way "up", even if by some miracle it manages to converge on the way back "down"
For bases less than or equal to eta, it will converge on the lower fixed point as we go "up". For bases larger than eta, there aren't any real fixed points, so convergence never happens. Sorry, I don't know what you mean by "converging on the way up/down".
Ah, sorry. On the way "up" refers to the iterative exponentiation, which for reals will tend to go "up". On the way down refers to the numbers getting smaller with iterated logarithms (at least for the reals).
I think of it as climbing up a mountain of iterated exponentials in one base, then back down a mountain in the other base (undoing the exponentials by taking logarithms). Just a metaphor, and your mileage may vary.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
08/11/2009, 10:23 PM
(This post was last modified: 08/11/2009, 10:25 PM by bo198214.)
(08/11/2009, 09:31 PM)jaydfox Wrote: (08/11/2009, 08:05 PM)bo198214 Wrote: So did someone check already whether base conversion of regular iteration gives again regular iteration (say at the lower fixed point)? Well, when I tried a couple years ago, I got different results when using eta and sqrt(2) (using the upper fixed point), so I assume in general that base conversion does not give the same results as regular iteration from the upper fixed point.
It does not even need to involve base $\eta$. Just take two bases $a$ and $b$ and consider their regular superexponentials developed at (say) the lower fixed point, $\operatorname{sexp}_a$ and $\operatorname{sexp}_b$.
Do then both superexponentials transform according to your change of base? I.e., do we have
$\operatorname{sexp}_b(x) \;=\; \lim_{n\to\infty} \log_b^{\circ n}\!\left(\operatorname{sexp}_a(x+n+c)\right)$
for a suitable constant $c$?
I doubt it; I suppose we will see the wobble again.
Quote:I think of it as climbing up a mountain of iterated exponentials in one base, then back down a mountain in the other base (undoing the exponentials by taking logarithms). Just a metaphor, and your mileage may vary.
Don't get me wrong, I like your metaphors; however, I would also like you to accompany your epic descriptions with some unmistakable formulations and formulas.
Posts: 1,389
Threads: 90
Joined: Aug 2007
(08/07/2009, 06:03 PM)Gottfried Wrote: just for the record; there was a discussion about this recently in sci.math. See Logarithm of repeated exponential
Gottfried, thanks for connecting the topics.
The sci.math thread finally contains a proof that the change-of-base limit converges at all.
