#11
bo198214 Wrote:In this line I would be really interested in the difference between Andrew's slog and the regular Abel function at the lower fixed point.

The formula for the latter, derived from here, is:
\( \text{rslog}_b(x)=
\log_{\ln(a)} \left(\lim_{n\to\infty} \frac{a-\exp_b^{\circ n}(x)}{a-\exp_b^{\circ n}(1)}\right) \), where \( a \) is the lower fixed point of \( \exp_b \).

Can you make a comparison of \( \text{rslog}_{\sqrt{2}} \) with Andrew's \( \text{slog}_{\sqrt{2}} \) computed by your super sophisticated algorithm, and post a graph of the difference in the range, say, [-1, 1.9] somewhere?!
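For concreteness, a minimal numerical sketch of the rslog formula above, assuming Python with mpmath (the iteration depth n and the working precision are ad hoc choices; since \( a-\exp_b^{\circ n}(x) \) shrinks like \( (\ln 2)^n \), n has to grow with the number of digits wanted):

```python
# Sketch of the rslog limit formula for b = sqrt(2), lower fixed point a = 2.
from mpmath import mp, mpf, log, power

mp.dps = 50

b = power(mpf(2), mpf('0.5'))   # base sqrt(2)
a = mpf(2)                      # lower fixed point: b^2 = 2

def exp_b_iter(x, n):
    """n-fold application of x -> b^x."""
    for _ in range(n):
        x = power(b, x)
    return x

def rslog(x, n=100):
    num = a - exp_b_iter(x, n)
    den = a - exp_b_iter(mpf(1), n)
    return log(num / den) / log(log(a))   # log to base ln(a) = ln 2

print(rslog(mpf('0.5')), rslog(b ** mpf('0.5')))
```

By construction \( \text{rslog}_{\sqrt{2}}(1)=0 \), and the second printed value should exceed the first by exactly 1, reflecting the Abel equation.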

I can take a stab at it during the next week or two. I haven't tried out Andrew's slog with other bases yet, mainly because I've been so focused on the base-e solution. I'm not sure what will happen with a base less than eta, since the dynamics are a little different, but it should be interesting nonetheless.

Given the very small discrepancy between the regular slogs at the smaller and larger real fixed points, I'd think precision at least as good (if not a few digits more) will be needed to answer this question.
~ Jay Daniel Fox
#12
jaydfox Wrote:I can take a stab at it during the next week or two.
That would be really nice and I think it will provide us a further interesting result about the difference/equality of the different tetration solutions.
#13
Initial testing with the unaccelerated matrix solver shows that the series for \( \text{slog}_{\sqrt{2}}\left(z\right) \) has a root test of 0.5, indicating (as expected) a singularity at z=2.

The terms of the series appear to be roughly converging on the power series of \( \log_{\ln(2)}\left(z-2\right) \), which is what we'd naively predict for the immediate neighborhood of the fixed point, and which would also fit the rslog as defined by Henryk.
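As a quick illustration of the root test on this model singularity (the exact coefficients of \( \log_{\ln(2)}(2-z) \), not my solver's output):

```python
# Root test on the Taylor coefficients of log_{ln 2}(2 - z) at z = 0.
from mpmath import mp, mpf, log

mp.dps = 30
scale = log(log(mpf(2)))            # ln(ln 2) < 0, converts ln to log base ln(2)

def coeff(n):
    """n-th Taylor coefficient of log_{ln 2}(2 - z) at 0, n >= 1."""
    return -1 / (n * mpf(2)**n * scale)

for n in (50, 100, 200, 400):
    print(n, abs(coeff(n)) ** (mpf(1)/n))   # tends to 0.5, i.e. radius 2
```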

I started with a 50x50 matrix, just as a "sanity test", then moved up to a 500x500 system. Precision was very good: a solution computed with 1024 bits of precision agreed to about 1020 bits with a solution computed with 1200 bits, indicating that the matrix solver was able to avoid any significant loss of precision.
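For reference, a minimal sketch of the kind of system being solved, as I understand the unaccelerated setup: impose \( \text{slog}(b^z)=\text{slog}(z)+1 \) on a truncated power series \( \text{slog}(z)=c_0+\sum_{k=1}^{n}c_k z^k \) and match coefficients of \( z^m \). The names and the truncation size here are mine, not Andrew's code:

```python
# Row m = 0 encodes sum_k c_k = 1; rows m >= 1 encode
# sum_k c_k * ((k*lnb)^m / m! - [k == m]) = 0.
from mpmath import mp, mpf, log, factorial, matrix, lu_solve

mp.dps = 60
n = 50                       # truncation order (50 as a sanity test)
lnb = log(mpf(2)) / 2        # ln(sqrt(2))

A = matrix(n, n)
rhs = matrix(n, 1)
rhs[0] = 1
for m in range(n):
    for k in range(1, n + 1):
        A[m, k - 1] = (k * lnb)**m / factorial(m) - (1 if k == m else 0)

c = lu_solve(A, rhs)         # c[k-1] is the coefficient of z^k
print([c[i] for i in range(5)])
```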

Next I will try the accelerated version, on the assumption that the singularity at z=2 does in fact converge on the aforementioned logarithm.

Here's the tricky part, however. The location of the logarithm is easy to verify, based on the root test. However, the base of the logarithm is harder to verify. For example, \( \log_{\ln(2.00)}(z-2.00) \) and \( \log_{\ln(2.00)}(z-2.01) \) are easy to tell apart, because by the 200th term, the difference in magnitude of the coefficients is about 1.005^200, or about a factor of e, 2.72. Going further into the series only makes the difference easier to detect, despite the increased computational requirements.

However, \( \log_{\ln(2.00)}(z-2.00) \) and \( \log_{\ln(2.01)}(z-2.00) \) differ only by a constant factor, making discrimination between the two far more difficult. Going further into the series does not help in discriminating, beyond reducing the relative size of the terms of the "residue" left after removing the singularity.
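The numbers behind these two comparisons can be checked on the model series: the n-th coefficient of \( \log_{B}(A-z) \) at 0 is \( -1/(n\,A^n\ln B) \), so a change in the location A shows up as a factor \( (A'/A)^n \) that grows with n, while a change in the base B rescales every term by the same constant. A small sketch:

```python
# Coefficient ratios for the two perturbations discussed above.
from mpmath import mp, mpf, log

mp.dps = 30

def coeff(n, A, B):
    """n-th Taylor coefficient of log_B(A - z) at 0."""
    return -1 / (n * A**n * log(B))

for n in (100, 200, 400):
    loc_ratio  = coeff(n, mpf('2.00'), log(mpf('2.00'))) / coeff(n, mpf('2.01'), log(mpf('2.00')))
    base_ratio = coeff(n, mpf('2.00'), log(mpf('2.01'))) / coeff(n, mpf('2.00'), log(mpf('2.00')))
    print(n, loc_ratio, base_ratio)   # first grows like 1.005^n, second stays flat
```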

I bring this up because the difference between the rslogs at the smaller and larger real fixed points for base sqrt(2) is very small. Therefore, we could probably use the base of the upper fixed point (1/ln(4)) and still get a decent amount of acceleration in convergence, and we would need careful analysis to show that the base for the lower fixed point (ln(2)) is indeed the correct one, assuming that either is correct.
~ Jay Daniel Fox
#14
Another consideration is that 1/ln(2) ≈ 1.4427 is very close to eta = e^(1/e) ≈ 1.4447. I wonder what effect, if any, this is having in relation to the difference between the rslogs (base sqrt(2)) at the fixed points at 2 and 4.
~ Jay Daniel Fox
#15
jaydfox Wrote:Initial testing with the unaccelerated matrix solver shows that the series for \( \text{slog}_{\sqrt{2}}\left(z\right) \) has a root test of 0.5, indicating (as expected) a singularity at z=2.

A root test of \( 0.5=\lim_{n\to\infty}\sqrt[n]{|a_n|} \) would indicate a convergence radius of 2?

So \( \text{slog}_{\sqrt{2}} \) has a singularity at 2 and at 4?
#16
Hmm, well, I solved at the origin, so I can only speak for the singularity at 2, though I had already assumed singularities at 2 and 4. Prior to discovering that the rslogs at the two points were different, I would have expected both singularities to be simple logarithms in the limit as we approach the fixed points. But now I suspect that the singularity at z=4 will be a little more screwed up.

Anyway, I can't very well shift the series to be centered at z=3, since I'd have to go around the singularity at z=2, and moving a distance of 4 or 5 units, given a radius of 2, would strip off way too much precision (I'd need maybe 6 to 10 small shifts).
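A toy version of the cost of each shift (re-expanding an N-term truncation of \( \log(2-z) \) about a new center d; the error per shift behaves like \( (d/2)^N \), which is exactly why several small shifts each strip off digits):

```python
# Re-center an N-term truncation of log(2 - z) at d via a binomial shift
# and compare the new constant term with the exact value log(2 - d).
from mpmath import mp, mpf, log, binomial

mp.dps = 60
N = 100                                   # truncation order

# Taylor coefficients of log(2 - z) at 0: log 2, then -1/(n 2^n)
a = [log(mpf(2))] + [-1 / (mpf(n) * 2**n) for n in range(1, N + 1)]

def shifted_coeff(j, d):
    """Coefficient of w^j after substituting z = d + w."""
    return sum(a[k] * binomial(k, j) * d**(k - j) for k in range(j, N + 1))

for d in (mpf('0.5'), mpf('1.0'), mpf('1.5')):
    err = abs(shifted_coeff(0, d) - log(2 - d))
    print(d, err)                         # error ~ (d/2)^N: worse for bigger steps
```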

I haven't yet tried shifting the system before solving, though I suppose it's possible in principle, and I'd be curious to see if it worked. The mental gymnastics involved to derive the system prior to solving are a bit beyond me this morning (I hardly slept), so I'd have to think about it this evening.
~ Jay Daniel Fox
#17
jaydfox Wrote:I haven't yet tried shifting the system before solving, though I suppose it's possible in principle, and I'd be curious to see if it worked. The mental gymnastics involved to derive the system prior to solving are a bit beyond me this morning (I hardly slept), so I'd have to think about it this evening.

Oh, shifting is another interesting topic. What happens if we develop the series for \( b^x \) (or, more simply, for \( e^x \)) not at 0 but at some other point \( c \), then derive the slog and compare the back-shifted slog with the original slog? That would show whether the method is independent of the development point.

The easiest way is by considering the shifted conjugate:
\( g(x)=e^{x+c}-c \), since its iterates are \( g^{\circ t}(x)=\exp^{\circ t}(x+c)-c \).

So if we solve \( \gamma(g(x))=\gamma(x)+1 \), this translates to
\( \gamma(e^{x+c}-c)=\gamma(x)+1 \), or, writing \( \tau_c(x)=x+c \),
\( \gamma\circ\tau_{-c}\circ\exp\circ\tau_{c}=\tau_1\circ \gamma \)
\( \gamma\circ\tau_{-c}\circ\exp=\tau_1\circ \gamma\circ\tau_{-c}. \)

This means that whenever we have a solution \( \alpha \) of \( \alpha(e^x)=\alpha(x)+1 \), we have a solution \( \gamma(x)=\alpha(x+c) \) to the equation \( \gamma(e^{x+c}-c)=\gamma(x)+1 \). And the question is whether, if \( \alpha \) is the natural Abel function (Andrew's method) for the first equation, \( \alpha(x+c) \) is then the natural Abel function for the second equation. Or, in other words, if \( \gamma \) is the natural Abel function for the second equation, whether \( \gamma(x-c) \) is then the natural Abel function for the first equation.

The power series development of \( e^{x+c}-c=e^ce^x-c \) is just a linear combination of the power series development of \( e^x \).
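A quick numeric check of the conjugation (c, x, and n are arbitrary sample values):

```python
# Verify g^n(x) = exp^n(x + c) - c for g(x) = e^{x+c} - c.
from mpmath import mp, mpf, exp

mp.dps = 30
c, x, n = mpf('0.3'), mpf('-1.2'), 3

g_val, e_val = x, x + c
for _ in range(n):
    g_val = exp(g_val + c) - c      # iterate g
    e_val = exp(e_val)              # iterate exp
print(g_val, e_val - c)             # agree to working precision
```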
#18
bo198214 Wrote:I received an e-mail from Dan Asimov where he mentions that the continuous iterations of \( b^x \) at the lower and upper real fixed points, \( b=\sqrt{2} \), differ! He, Dean Hickerson and Richard Schroeppel found this around 1991; however, there is no paper about it.

The numerical computations veiled this fact because the differences are on the order of \( 10^{-24} \). I reverified this by setting the computation precision to 100 decimal digits and using the recurrence formula described here:
\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}(a(1-r^t) + r^t f^{\circ -n}(x)) \), where \( a \) is the fixed point of \( f \) and \( r=f'(a) \) and \( f(x)=\sqrt{2}^x \).
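For concreteness, a sketch of this recurrence in Python/mpmath. One caveat: as written, the limit converges at a repelling fixed point (\( |r|>1 \)), so the sketch uses the upper fixed point \( a=4 \), \( r=\ln 4 \); at the attracting point \( a=2 \) the roles of \( f^{\circ n} \) and \( f^{\circ -n} \) swap. Also, the forward iterations re-amplify rounding error, so the attainable accuracy is roughly half the working precision:

```python
# Regular fractional iteration of f(x) = sqrt(2)^x at the upper fixed point.
from mpmath import mp, mpf, log, power

mp.dps = 60
b = power(mpf(2), mpf('0.5'))
a = mpf(4)
r = log(mpf(4))                     # f'(a) = a * ln(b) = ln 4

def f_iter(x, t, n=150):
    y = x
    for _ in range(n):
        y = log(y) / log(b)         # f^{-1}(y) = log_b(y), pulls y toward 4
    y = a * (1 - r**t) + r**t * y   # the linear step at the fixed point
    for _ in range(n):
        y = power(b, y)             # f^n pushes back out
    return y

half = f_iter(mpf(3), mpf('0.5'))
print(half, f_iter(half, mpf('0.5')))  # second value ~ f(3) = sqrt(2)^3
```

The semigroup property gives a sanity check: applying the half-iterate twice should reproduce \( f(3)=\sqrt{2}^3\approx 2.8284 \).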
Henryk -
I'm currently investigating the computation of the different eigenmatrices based on the different fixpoints. Apparently the computations lead to the same tetration matrices Bs (or Bb) in the examples where I checked this, and thus to the same coefficients for the exponential series (in column 2 of the constructed Bb matrix). Now the above seems to say they are in fact not equal. So I'd like to get more information about the details of the problem. I could not translate your argument above into my matrix concept - can you explain a bit more explicitly? And do you know any further arguments for the statement of a difference (unfortunately, Dan Asimov seems to have said there are no papers available)?

Gottfried
Gottfried Helms, Kassel
#19
bo198214 Wrote:However, the real dilemma is, as I already pointed out, that every (analytic) iteration other than the regular iteration at the lower fixed point and the regular iteration at the upper fixed point must be singular at both fixed points. And for each analytic iteration, one of the fixed points is a singularity.

Why wouldn't you have a singularity at both fixed points? I didn't realize what you were saying here, but now that I'm looking at it again, I'm not quite sure I agree.

Edit: Wait, try taking the rslog relative to a number in the interval (2, 4) for base sqrt(2). I usually use e or sqrt(8), but for testing purposes I don't think it matters.
~ Jay Daniel Fox
#20
jaydfox Wrote:Why wouldn't you have a singularity at both fixed points? I didn't realize what you were saying here, but now that I'm looking at it again, I'm not quite sure I agree.

If I have a development of \( f^{\circ t} \) (with non-zero convergence radius) at one fixed point, then the coefficients of this development are already uniquely determined, and this development must be the regular iteration. In this case we know the other fixed point must have a singularity: if it didn't, then the development at the second fixed point would also be unique and regular, and we know that the two regular developments do not yield the same analytic function.
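For reference, the standard recursion behind this uniqueness claim (notation mine): write \( f(a+h)=a+rh+f_2h^2+\cdots \) and \( f^{\circ t}(a+h)=a+c_1h+c_2h^2+\cdots \), where \( c_1=r^t \) is fixed by the iteration semigroup. Comparing coefficients of \( h^k \) in \( f^{\circ t}\circ f=f\circ f^{\circ t} \) yields \( c_k\,(r^k-r)=P_k(c_1,\dots,c_{k-1};f_2,\dots,f_k) \) for some polynomial \( P_k \), so as long as \( r\neq 0 \) and \( r \) is not a root of unity, every \( c_k \) is determined once \( c_1 \) is. For base \( \sqrt{2} \) we have \( r=\ln 2 \) at the lower fixed point, so the development there is completely rigid.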

It is as if we have a piece of cloth and if we pull it to be smooth at one fixed point it gets corrugated at the other fixed point(s).

Was this your question?



