Posts: 440
Threads: 31
Joined: Aug 2007
11/02/2007, 10:31 PM
(This post was last modified: 11/02/2007, 10:32 PM by jaydfox.)
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
jaydfox Wrote:I can take a stab at it during the next week or two.
That would be really nice, and I think it will give us a further interesting result about the difference or equality of the various tetration solutions.
Posts: 440
Threads: 31
Joined: Aug 2007
11/04/2007, 02:24 AM
(This post was last modified: 11/04/2007, 02:26 AM by jaydfox.)
Initial testing with the unaccelerated matrix solver shows that the slog has a singularity with root test 0.5, indicating (as expected) a singularity at z=2.
As expected, the terms of the series appear to be roughly converging on the power series of a logarithm in (z-2), which is what we'd naively predict for the immediate neighborhood of the fixed point, and it also fits the rslog as defined by Henryk.
I started with a 50x50 matrix, just as a "sanity test", then moved up to a 500x500 system. Precision was very good: a solution with 1024 bits of precision was within 1020 bits of a solution with 1200 bits, indicating that the matrix solver was able to avoid any significant loss of precision.
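For readers following along, the kind of matrix setup being described (Andrew's slog system) can be sketched roughly as follows. This is a float-precision toy, not the high-precision solver used above; the truncation size N=40 and the use of float64 instead of 1024-bit arithmetic are illustrative simplifications:

```python
import math
import numpy as np

# Sketch of the unaccelerated slog matrix method (Andrew's method) for base
# b = sqrt(2): write slog(x) = -1 + sum_{k>=1} s_k x^k and impose the Abel
# equation slog(b^x) = slog(x) + 1. Expanding (b^x)^k = exp(k x ln b) and
# matching coefficients of x^m gives the truncated system A s = rhs with
#   A[m][k-1] = (k ln b)^m / m! - delta_{m,k},  m = 0..N-1,  k = 1..N.
N = 40                       # truncation size (illustrative; far below 500x500)
lnb = math.log(math.sqrt(2.0))

A = np.zeros((N, N))
for m in range(N):
    for k in range(1, N + 1):
        A[m, k - 1] = (k * lnb) ** m / math.factorial(m)
    if m >= 1:
        A[m, m - 1] -= 1.0   # subtract the s_m from the right-hand side
rhs = np.zeros(N)
rhs[0] = 1.0                 # the x^0 row reads sum_k s_k = 1, i.e. slog(1) = 0

s = np.linalg.solve(A, rhs)

# Root test: |s_m|^(1/m) creeps toward 1/2, i.e. radius of convergence 2,
# matching the singularity at the lower fixed point z = 2.
root_test = abs(s[19]) ** (1.0 / 20)

# Residual of the Abel equation at a sample point inside the radius.
slog = lambda t: -1.0 + sum(s[k - 1] * t ** k for k in range(1, N + 1))
resid = slog(math.sqrt(2.0) ** 0.1) - slog(0.1) - 1.0
```

Even at this small size and in double precision, the root test lands near 0.5 and the Abel-equation residual is tiny, consistent with the behavior reported above.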
Next I will try the accelerated version, on the assumption that the singularity at z=2 does in fact converge on the aforementioned logarithm.
Here's the tricky part, however. The location of the singularity is easy to verify, based on the root test. However, the base of the logarithm is harder to verify. For example, log(z-2.00) and log(z-2.01) are easy to tell apart, because by the 200th term, the difference in magnitude of the coefficients is about 1.005^200, or about a factor of e, 2.72. Going further into the series only makes it easier to tell the difference, despite the increased computational requirements.
However, log_b(z-2.00) and log_b'(z-2.00), for bases whose logarithms differ by a factor of 1.005, differ by a constant 1.005 factor in every coefficient, making discrimination between the two far more difficult. Going further into the series does not help in discriminating, outside of reducing the relative size of the terms of the "residue" after removing the singularity.
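The growth-rate argument in the last two paragraphs is easy to check numerically. A rough sketch (the offsets 2.00 and 2.01 follow the example above; the bases 2 and 4 in the second half are arbitrary stand-ins, not the actual rslog bases):

```python
import math

# m-th Taylor coefficient (m >= 1) about z = 0 of log(z - a) = const + log(1 - z/a):
log_coeff = lambda a, m: -1.0 / (m * a ** m)

# Shifting the singularity from 2.00 to 2.01 changes the coefficient growth
# rate, so the m-th coefficients separate geometrically like 1.005^m:
m = 200
location_ratio = abs(log_coeff(2.00, m) / log_coeff(2.01, m))  # ~ 1.005^200 ~ e

# Changing only the *base* of the logarithm (log_b u = ln u / ln b) rescales
# every coefficient by the same constant, so the coefficient ratio is flat in
# m and growth tests cannot separate the two series.
coeff = lambda m, base: log_coeff(2.0, m) / math.log(base)
base_ratio_10 = coeff(10, 2.0) / coeff(10, 4.0)
base_ratio_200 = coeff(200, 2.0) / coeff(200, 4.0)
```

The first ratio grows without bound as m increases, while the second is the same constant at the 10th and the 200th term, which is exactly why the base of the singularity resists this kind of test.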
I bring this up because the difference between the rslogs at the smaller and larger real fixed points for base sqrt(2) is very small. Therefore, we could probably use the base of the upper fixed point (1/ln(4)) and still get a decent amount of acceleration in convergence, and we would need careful analysis to show that the base for the lower fixed point (ln(2)) is indeed the correct one, assuming that either is correct.
~ Jay Daniel Fox
Posts: 440
Threads: 31
Joined: Aug 2007
Another consideration is that 1/ln(2) is very close to eta. I wonder what effect, if any, this is having in relation to the difference between the rslogs (base sqrt(2)) at the fixed points at 2 and 4.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
jaydfox Wrote:Initial testing with the unaccelerated matrix solver shows that the slog has a singularity with root test 0.5, indicating (as expected) a singularity at z=2.
Root test of 0.5 would indicate a convergence radius of 2?
So the slog has a singularity at 2 and at 4?
Posts: 440
Threads: 31
Joined: Aug 2007
Hmm, well, I solved at the origin, so I can only speak for the singularity at 2, though I had already assumed singularities at 2 and 4. Prior to discovering that the rslogs at the two points were different, I would have expected both singularities to be simple logarithms in the limit as we approach the fixed points. But now I suspect that the singularity at z=4 will be a little more screwed up.
Anyway, I can't very well shift the series to be centered at z=3, since I'd have to go around the singularity at z=2, and moving a distance of 4 or 5 units, given a radius of 2, would strip off way too much precision (I'd need maybe 6 to 10 small shifts).
I haven't yet tried shifting the system before solving, though I suppose it's possible in principle, and I'd be curious to see if it worked. The mental gymnastics involved to derive the system prior to solving are a bit beyond me this morning (I hardly slept), so I'd have to think about it this evening.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
11/06/2007, 02:06 PM
(This post was last modified: 11/06/2007, 02:09 PM by bo198214.)
jaydfox Wrote:I haven't yet tried shifting the system before solving, though I suppose it's possible in principle, and I'd be curious to see if it worked. The mental gymnastics involved to derive the system prior to solving are a bit beyond me this morning (I hardly slept), so I'd have to think about it this evening.
Oh, shifting is another interesting topic. What happens if we develop the series for the slog not at 0 but at some other point c, then derive the slog and compare the back-shifted slog with the original slog? That would show whether the method is independent of the development point.
The easiest way is by considering the shifted conjugate:
$f_c(x) = e^{x+c} - c$, since its iteration is $f_c^{\circ t}(x) = \exp^{\circ t}(x+c) - c$.
So if we solve $\gamma(f_c(x)) = \gamma(x) + 1$, this translates to $\gamma(e^{x+c} - c) = \gamma(x) + 1$.
This means whenever we have a solution $\alpha$ of $\alpha(e^x) = \alpha(x) + 1$, then $\gamma(x) = \alpha(x+c)$ is a solution of $\gamma(f_c(x)) = \gamma(x) + 1$. And the question is: if $\alpha$ is the natural Abel function (Andrew's method) for the first equation, is $\gamma(x) = \alpha(x+c)$ then the natural Abel function for the second equation? Or, in other words, if $\gamma$ is the natural Abel function for the second equation, is $\alpha(x) = \gamma(x-c)$ the natural Abel function for the first?
The power series development of $\gamma$ is just a linear combination of the power series development of $\alpha$.
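The conjugation relation is quick to sanity-check numerically. A minimal sketch; the values c = 0.7, x = -2.0 and n = 3 are arbitrary test inputs:

```python
import math

# Check of the shifted conjugate f_c(x) = exp(x + c) - c, whose iterates
# satisfy f_c^{(n)}(x) = exp^{(n)}(x + c) - c for every n.
c, x0, n = 0.7, -2.0, 3

def f_c(t):
    return math.exp(t + c) - c

def iterate(g, t, times):
    for _ in range(times):
        t = g(t)
    return t

lhs = iterate(f_c, x0, n)               # f_c applied n times
rhs = iterate(math.exp, x0 + c, n) - c  # conjugated iteration of exp
```

The two agree to machine precision, and the same bookkeeping (substitute, apply, shift back) is what transports a solution of the first Abel equation to the second.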
Posts: 770
Threads: 120
Joined: Aug 2007
11/07/2007, 08:32 AM
(This post was last modified: 11/07/2007, 08:35 AM by Gottfried.)
bo198214 Wrote:I received an e-mail from Dan Asimov where he mentions that the continuous iterations of $\sqrt{2}^x$ at the lower and upper real fixed points, 2 and 4, differ! He, Dean Hickerson and Richard Schroeppel found this around 1991; however, there is no paper about it.
The numerical computations veiled this fact because the differences are extremely small. I reverified this by setting the computation exactness to 100 decimal digits and using the recurrence formula described here, where $a$ is the fixed point of $b^x$.
Henryk -
I'm currently investigating the computation of the different eigenmatrices based on the different fixed points. Apparently the computations lead to the same tetration matrices Bs (or Bb) in the examples where I checked this, and thus to the same coefficients for the exponential series (in column 2 of the constructed Bb matrix). Now the above seems to say they are in fact not equal. So I'd like to get more information about the details of the problem. I could not translate your argument above into my matrix concept; can you explain a bit more explicitly? And do you know further arguments for the statement of a difference (unfortunately, Dan Asimov seems to have said there are no papers available)?
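One way to probe the claimed difference directly is to compute the regular half-iterate of $\sqrt{2}^x$ at each fixed point from the standard limit formula. This is a sketch, not the exact recurrence referenced above; the precision (100 digits) and iteration counts are illustrative choices:

```python
from decimal import Decimal, getcontext

# Regular half-iterates of f(x) = sqrt(2)^x at both real fixed points, via
#   f^{1/2}(x) = lim_{n->oo} f^{-n}( p + sqrt(lambda) * (f^{n}(x) - p) )
# at the attracting fixed point p = 2 (lambda = f'(2) = ln 2), and with the
# roles of f and f^{-1} swapped at the repelling fixed point p = 4
# (lambda = f'(4) = ln 4).
getcontext().prec = 100      # float64 could never resolve the tiny difference

LNB = Decimal(2).ln() / 2                 # ln(sqrt(2))

def f(x):
    return (x * LNB).exp()                # sqrt(2)^x

def finv(x):
    return x.ln() / LNB                   # log_{sqrt(2)}(x)

def half_iterate(x, p, n=250):
    lam = LNB * p                         # f'(p) = ln(sqrt(2)) * f(p) = ln(sqrt(2)) * p
    step, back = (f, finv) if p == 2 else (finv, f)  # contract toward p first
    y = x
    for _ in range(n):
        y = step(y)
    y = p + lam.sqrt() * (y - p)          # scale the deviation by lambda^(1/2)
    for _ in range(n):
        y = back(y)
    return y

x = Decimal("2.5")
h2 = half_iterate(x, 2)    # regular iteration developed at the lower fixed point
h4 = half_iterate(x, 4)    # regular iteration developed at the upper fixed point
diff = abs(h2 - h4)        # tiny but (generically) nonzero
```

Both results satisfy the half-iterate property to far more digits than they agree with each other, which is the Asimov/Hickerson/Schroeppel observation in miniature.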
Gottfried
Gottfried Helms, Kassel
Posts: 440
Threads: 31
Joined: Aug 2007
11/07/2007, 02:22 PM
(This post was last modified: 11/07/2007, 02:24 PM by jaydfox.)
bo198214 Wrote:However the real dilemma is as I already pointed out, that every other (analytic) iteration than the regular iteration at the lower and the regular iteration at the upper fixed point must be singular at both fixed points. And for each analytic iteration one of the fixed points is a singularity.
Why wouldn't you have a singularity at both fixed points? I didn't realize what you were saying here, but now that I'm looking at it again, I'm not quite sure I agree.
Edit: Wait, try taking the rslog relative to a number in the interval (2, 4) for base sqrt(2). I usually use e, but for testing purposes I don't think it matters.
~ Jay Daniel Fox
Posts: 1,389
Threads: 90
Joined: Aug 2007
11/07/2007, 02:27 PM
(This post was last modified: 11/07/2007, 04:05 PM by bo198214.)
jaydfox Wrote:Why wouldn't you have a singularity at both fixed points? I didn't realize what you were saying here, but now that I'm looking at it again, I'm not quite sure I agree.
If I have a development of the iterate $f^{\circ t}$ (with non-zero convergence radius) at one fixed point, then the coefficients of this development are already uniquely determined, and the development must be the regular iteration. In that case the other fixed point must be a singularity: if it were not, the development at the second fixed point would also be unique and regular, and we know that the two developments do not yield the same analytic function.
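The uniqueness step can be made concrete by comparing coefficients; the following is a standard sketch for a general function with fixed point $p$ and multiplier $\lambda = f'(p)$, offered as a reading aid rather than as Henryk's own derivation:

```latex
f(p+x) = p + \lambda x + f_2 x^2 + f_3 x^3 + \dotsb , \qquad
g(p+x) = p + \lambda^t x + g_2 x^2 + g_3 x^3 + \dotsb .

% Requiring g = f^{\circ t} forces g to commute with f:
g(f(p+x)) = f(g(p+x)).

% Comparing coefficients of x^n (n \ge 2) gives
(\lambda^n - \lambda)\, g_n
  = P_n\!\left(\lambda, \lambda^t, f_2,\dots,f_n, g_2,\dots,g_{n-1}\right),

% where P_n collects already-determined quantities. Since 0 < \lambda \neq 1,
% each factor \lambda^n - \lambda is nonzero, so every g_n is determined in
% turn: a development with nonzero radius at one fixed point has no freedom
% beyond t, and must coincide with the regular iteration there.
```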
It is as if we have a piece of cloth and if we pull it to be smooth at one fixed point it gets corrugated at the other fixed point(s).
Was this your question?