Funny pictures of sexps
#21
Henryk, I've found some rather bizarre behavior with your method of shifting the Abel function's center. For example, it seems fairly well behaved for real shifts, but even a small imaginary shift seems to produce garbage results (i.e., as I increase the matrix size, I don't get convergence, but rather rapid divergence).

I can use complex math with f(x) = e*(x+x0) - x0 (that is, e times (x+x0), minus x0) to get the power series for log(x+x0), except for the leading constant term, of course. This has held true for every value of x0 I have tried so far, including complex and purely imaginary values.
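This computation can be imitated on a toy affine map; the sketch below is my own (the names, the truncation size, and the milder base b = 1.1 in place of e are all my choices, made so the truncated system settles quickly). For f(x) = b*(x+x0) - x0 the exact Abel function is log_b(x+x0), so the solved coefficients should approach (-1)^(k+1)/(k * x0^k * ln b).

```python
# Toy "intuitive Abel function" solve for f(x) = b*(x + x0) - x0,
# whose exact Abel function is log_b(x + x0).  Sketch only; the base
# b = 1.1 (instead of e) and all names here are my own assumptions.
import numpy as np

def intuitive_abel(b, x0, N):
    """Solve for a_1..a_N in sum_k a_k * (f(x)^k - x^k) = 1 + O(x^N)."""
    f = np.zeros(N + 1)
    f[0] = (b - 1.0) * x0                 # constant term f(0)
    f[1] = b                              # slope
    M = np.zeros((N, N))
    fk = np.zeros(N + 1); fk[0] = 1.0     # f(x)^0 = 1
    for k in range(1, N + 1):
        fk = np.convolve(fk, f)[:N + 1]   # coefficients of f(x)^k
        col = fk[:N].copy()
        if k < N:
            col[k] -= 1.0                 # the -x^k term from A(x)
        M[:, k - 1] = col
    rhs = np.zeros(N); rhs[0] = 1.0       # constant term of A(f(x)) - A(x)
    return np.linalg.solve(M, rhs)

b, x0, N = 1.1, 2.0, 30
a = intuitive_abel(b, x0, N)
print(a[0], 1.0 / (x0 * np.log(b)))      # these should nearly agree
```

Already at very small matrix sizes the leading coefficients land close to those of log_b(x + x0), which mirrors the empirical behavior described above for the base-e case.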

However, I was unable to successfully recenter the islog_e to x = 0.25*i, which is a very small shift, relatively speaking. Why should we be unable to shift away from the real axis if we are not limited by a radius of convergence?


Actually, I've done a bit more testing just now, and it seems that the approach works if I use f(x) = e^x - x0 and, rather than subtracting the identity matrix (i.e., the Carleman matrix of g(x) = x), I instead subtract the Carleman matrix of g(x) = x - x0.

Now that I look at this, I see that your method and mine are essentially identical, except that you have an additional shift involved. I use the Carleman matrices for f(x) = e^(x) - x0 and g(x) = x - x0, while you use the Carleman matrices for f(x) = e^(x+x0) - x0 and g(x) = (x+x0) - x0. In other words, you have an additional substitution x -> x+x0. (Alternatively, I have an extra substitution relative to your method!) This additional shift doesn't seem to be a problem for real-valued shifts, but it's a huge problem for complex shifts.
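For reference, the structural fact both variants exploit can be checked numerically in a few lines (my own sketch, with my own helper names): truncated Carleman matrices of series with zero constant term represent composition exactly, C[g o f] = C[g] @ C[f] in the row convention C[j, k] = [x^k] f(x)^j.

```python
# Quick numerical check (my own sketch, not code from the thread) that
# truncated Carleman matrices compose exactly when f(0) = 0.
import numpy as np

def carleman(coeffs, N):
    """C[j, k] = coefficient of x^k in f(x)^j, for j, k = 0..N."""
    C = np.zeros((N + 1, N + 1))
    fj = np.zeros(N + 1)
    fj[0] = 1.0                               # f(x)^0 = 1
    C[0] = fj
    for j in range(1, N + 1):
        fj = np.convolve(fj, coeffs)[:N + 1]  # f(x)^j, truncated
        C[j] = fj
    return C

N = 8
f = np.zeros(N + 1); f[1], f[2] = 2.0, 1.0    # f(x) = 2x + x^2   (f(0) = 0)
g = np.zeros(N + 1); g[1], g[3] = 1.0, 3.0    # g(x) = x + 3x^3

Cf, Cg = carleman(f, N), carleman(g, N)
gof = g @ Cf                                  # coefficients of g(f(x))
print(np.max(np.abs(carleman(gof, N) - Cg @ Cf)))   # 0 up to rounding
```

With a nonzero constant term, as with e^x - x0, the truncation is no longer exact, which is exactly where the convergence questions discussed in this thread begin.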

Oddly enough, where my method is unable to get safely outside the radius of convergence, yours does so readily, though for reals only.
~ Jay Daniel Fox
#22
(08/25/2009, 09:43 PM)jaydfox Wrote: Henryk, I've found some rather bizarre behavior with your method of shifting the Abel function's center. For example, it seems fairly well behaved for real shifts, but even a small imaginary shift seems to produce garbage results (i.e., as I increase the matrix size, I don't get convergence, but rather rapid divergence).
Well, it's not a shift of the Abel function (see my post before).
But this is indeed strange. The only thing I can say is that I would expect the complex shift to yield complex values on the real axis, which is why I didn't try it.

Quote:I can use complex math with f(x) = e*(x+x0) - x0 to get the power series for log(x+x0), except for the leading constant term, of course. This has held true for every value of x0 I have tried so far, including complex and purely imaginary values.
And now prove it! ;)
I could already show that the result does not depend on x0 (hopefully it's on the forum), but not that it is the log.

Quote:However, I was unable to successfully recenter the islog_e to x = 0.25*i, which is a very small shift, relatively speaking. Why should we be unable to shift away from the real axis if we are not limited by a radius of convergence?
As I said, it's not a shift of the Abel function, but the result may be a different Abel function. I don't know about the mechanisms of its convergence. A proof of convergence of the intuitive method is still open anyway.
#23
(08/25/2009, 09:39 PM)bo198214 Wrote: I think in Henrici, "Applied and Computational Complex Analysis", there are error bounds for the continuation. If I remember right, the more iterative steps you do for the continuation, the more the size of the original development must increase. But then you can get the transposed coefficients inside arbitrary error bounds.
When I performed analytic extension of the power series, extending to slog(z+1) = 0, I had to truncate it. The new power series had a radius still limited by the primary singularities. I didn't take it beyond that point, but based on what I learned from the experience, continuing to extend it like this would have been limited only by the primary singularities, because there are no other singularities in the right half of the complex plane (for real part greater than 0.31813...), at least not in this particular branch.
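The recentering step described here can be sketched in a few lines (my own toy illustration, not the slog computation itself): shift a truncated Taylor series to a new center strictly inside its radius of convergence, here log(1+x) about 0 moved to x1 = 0.5, where the result must agree with the series of log(1.5 + u).

```python
# Recentering a truncated Taylor series (toy example, my own choices:
# log(1 + x) with radius 1, recentered to x1 = 0.5).
from math import comb, log

N = 60                                       # generous truncation order
a = [0.0] + [(-1) ** (n + 1) / n for n in range(1, N + 1)]   # log(1 + x)

x1 = 0.5
# shifted coefficients: b_k = sum_n a_n * C(n, k) * x1^(n - k)
b = [sum(a[n] * comb(n, k) * x1 ** (n - k) for n in range(k, N + 1))
     for k in range(9)]

# exact series about the new center: log(1.5 + u)
exact = [log(1.5)] + [(-1) ** (k + 1) / (k * 1.5 ** k) for k in range(1, 9)]
print(max(abs(u - v) for u, v in zip(b, exact)))   # small: shift is inside the radius
```

Pushing x1 onto or past the circle of convergence makes the coefficient sums b_k diverge, which is why each continuation step must stay inside the current radius and only gains ground gradually.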

Had I continued analytically around one of the primary singularities, then I would have encountered more singularities. I've drawn lots of graphs of the singularities going in the logarithmic direction (clockwise around the upper primary fixed point), but I don't think I have drawn any for going the other direction (where there is a singularity at the origin, for instance).
~ Jay Daniel Fox
#24
(08/25/2009, 09:55 PM)bo198214 Wrote:
Quote:I can use complex math with f(x) = e*(x+x0) - x0 to get the power series for log(x+x0), except for the leading constant term, of course. This has held true for every value of x0 I have tried so far, including complex and purely imaginary values.
And now prove it! ;)
I could already show that the result does not depend on x0 (hopefully it's on the forum), but not that it is the log.
Maybe I wasn't clear: obviously I don't get the log exactly, but I do get something that is close to the power series for the log, and with increasing matrix size it appears to converge to the log; so, empirically speaking, it does seem to be resolving to the log.
~ Jay Daniel Fox
#25
(08/25/2009, 09:59 PM)jaydfox Wrote: because there are no other singularities in the right-half of the complex plane (for real part greater than 0.31813...), at least not in this particular branch.
How do you know?

(08/25/2009, 10:02 PM)jaydfox Wrote:
(08/25/2009, 09:55 PM)bo198214 Wrote: And now prove it! ;)
I could already show that the result does not depend on x0 (hopefully it's on the forum), but not that it is the log.
Maybe I wasn't clear: obviously I don't get the log exactly, but I do get something that is close to the power series for the log, and with increasing matrix size it appears to converge to the log; so, empirically speaking, it does seem to be resolving to the log.

Ahem, that's why I said it. First you verify it empirically, then you prove it. That's how it often goes in mathematics.
#26
(08/25/2009, 09:55 PM)bo198214 Wrote: As I said, its not a shift of the Abel function but the result may be a different Abel function. I dont know about the mechanisms of its convergence. Proof of convergence of the intuitive method is anyway still open.
Actually, have you tried recentering the "new" Abel function after deriving it? A quick test at x = 0.5 shows that it is probably the same function (hard to tell; I'm doing quick tests with 250x250 matrices, but it's accurate to 8-12 decimal places for the first 100 terms or so).

At any rate, even though it "might" not be the same function, I see no reason it wouldn't be, so I've been assuming it's the same. To assume it is different is to go back to the old argument about uniqueness: it would necessarily introduce a cyclic shift, which would produce spectacularly different results near the primary singularities, and I don't see why this would be at all likely.
~ Jay Daniel Fox
#27
On the hunt for singularities of isexp at 0 (islog at 1):
If sexp has a singularity at the radius of convergence, it should be visible in the conformal plot of slog as an endpoint of lines.
That's why I made a plot of slog, which I extended not via continuation/recentering but via slog(exp(x)) = slog(x) + 1 and slog(log(x)) = slog(x) - 1, starting on the sickle re(z) > re(L) and |z| < |L| (which is contained inside the convergence disk of slog around 1 with radius |L|), where L is the primary fixed point of exp.

[Attachment: conformal plot of slog]

Indeed, there is no singularity visible inside the circle with radius 1.5.
But one can see that toward the right the lines become dense, which may count numerically as a singularity, and that's why it does not allow the sexp at 0 to converge for r > 1.4.
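The extension scheme used for this plot, evaluating through the Abel functional equation rather than by recentering the series, can be illustrated with a toy Abel function (my own sketch, not the slog itself): for f(x) = 2x the Abel function is log2(x), and values outside a base strip follow from A(2x) = A(x) + 1.

```python
# Toy illustration (my own, not from the thread) of extending an Abel
# function via its functional equation instead of series recentering.
from math import log2

def base_abel(x):
    """The 'known' piece of the Abel function, valid only on [1, 2)."""
    assert 1.0 <= x < 2.0
    return log2(x)

def extended_abel(x):
    """Extend to all x > 0 via A(2x) = A(x) + 1 (and its inverse)."""
    n = 0
    while x >= 2.0:
        x /= 2.0
        n += 1
    while x < 1.0:
        x *= 2.0
        n -= 1
    return base_abel(x) + n

# 10.4 = 1.3 * 2^3, so the two values must differ by exactly 3
print(extended_abel(10.4) - extended_abel(1.3))
```

This is the same mechanism as slog(exp(x)) = slog(x) + 1 above: values on the starting sickle determine the function everywhere reachable by exp and log.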
#28
(08/25/2009, 10:23 PM)jaydfox Wrote: At any rate, even though it "might" not be the same function, I see no reason it wouldn't be, so I've been assuming it's the same. To assume it is different is to go back to the old argument about uniqueness: it would necessarily introduce a cyclic shift, which would produce spectacularly different results near the primary singularities, and I don't see why this would be at all likely.

Well, my thoughts behind that statement were like this:
The matrix power method applied at a fixed point is the regular iteration at that fixed point.
Also, I would assume the change of function to be continuous in the development point.
Say we have two real fixed points x0 and x1, and we move the development point of the matrix power sexp from x0 to x1. The regular iteration at x0 would slowly transform into the regular iteration at x1, and we know these are different (except for fractional linear functions).
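Regular iteration at a fixed point, as invoked here, can be sketched numerically via the Koenigs limit for the Schroeder function (my own hedged construction, with my own names and iteration depth, not the matrix power code): for f(x) = sqrt(2)^x with lower fixed point 2 and multiplier ln 2, the regular half-iterate satisfies h(h(x)) = f(x).

```python
# Regular (Koenigs/Schroeder) half-iterate of f(x) = sqrt(2)^x at its
# attracting fixed point 2.  Sketch under my own choices (DEPTH etc.).
from math import sqrt, log

B = sqrt(2.0)
P, LAM, DEPTH = 2.0, log(2.0), 45      # fixed point, multiplier f'(2), depth

def f(x):
    return B ** x

def f_inv(y):
    return log(y) / log(B)

def schroeder(x):
    """sigma(x) = lim (f^n(x) - P) / LAM^n  (Koenigs limit)."""
    for _ in range(DEPTH):
        x = f(x)
    return (x - P) / LAM ** DEPTH

def schroeder_inv(y):
    """Inverse limit: x = lim f^{-n}(P + y * LAM^n)."""
    x = P + y * LAM ** DEPTH
    for _ in range(DEPTH):
        x = f_inv(x)
    return x

def half_iterate(x):
    return schroeder_inv(sqrt(LAM) * schroeder(x))

print(half_iterate(half_iterate(3.0)), f(3.0))   # these should nearly agree
```

The same construction carried out at the repelling fixed point 4 (using f_inv forward instead of f) produces a different Schroeder function, which is the sense in which the two regular iterations differ.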

As I consider the intuitive iteration to be a kind of opposite of the matrix power iteration, I would guess (though this is not yet supported by numerics) that the closer I choose my development point x0 to the fixed point, the closer the resulting islog is to the regular slog at the fixed point. (To show that the islog to base sqrt(2) is different from the rslog, one could consider the half-iterate isexp(1/2 + islog(x)) and show that it has a singularity at 2, which is not true for the regular slog.) If not, I would wonder how the islog decides which fixed point to be regular at.

While the matrix power sexp is defined even at the fixed point, the intuitive slog of course has a singularity at the fixed point, so we cannot develop the islog directly at the fixed point.
But the whole intuitive slog should converge to the whole regular slog at the fixed point as the development point moves toward the fixed point.

This consideration would imply different islogs at different development points for bases less than or equal to eta. And then I would consider it quite improbable that this should be different for bases greater than eta.

Well, if you have contrary evidence on any point in my chain of conclusions, I would like to hear it. My understanding of the complex behaviour of the islog does not suffice for me to support your statement about the spectacularly different behaviour at the fixed points. Perhaps you can illustrate it with some pictures or thought experiments.
#29
(08/26/2009, 10:23 AM)bo198214 Wrote: Indeed, there is no singularity visible inside the circle with radius 1.5.
But one can see that toward the right the lines become dense, which may count numerically as a singularity, and that's why it does not allow the sexp at 0 to converge for r > 1.4.
Count "numerically" as a singularity? It's not a singularity, so at best it means that you just need more terms in the power series. And as mentioned before, you must truncate the slog series before reverting, or of course you'll get bizarre results. But the closest singularity is at -2, so that is what limits the radius of convergence.

When I reverted a 600-term truncation of a 1200-term slog, I got a sexp function that approximated the logarithmic singularity at x = -2 very nicely. Indeed, I was able to "remove" the singularity by subtracting the Taylor series for log(x+2), and confirm that the remainder had an approximately double-logarithmic singularity at x = -3. In fact, the Taylor series for sexp will converge on the Taylor series for log(log(x+3)), as this accurately depicts the two closest singularities (well, three technically: one at -2, and one each at -3 in the clockwise and counterclockwise windings to the left of the first). I'm pretty sure I've described this at some point in the past, which is to say that I vaguely recall discussing it with you or Andrew, and I can't imagine that we discussed it anywhere but on the forum; I'll see if I can find it.
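As a small check of this description (my own sketch, not the slog computation), one can build the Taylor series of log(log(x+3)) about 0 by formal series composition and confirm it behaves as a convergent series of radius 2, its nearest singularity being at x = -2, where log(x+3) vanishes.

```python
# Taylor series of log(log(x + 3)) about 0 by formal composition
# (my own sketch); nearest singularity at x = -2, so radius 2.
import numpy as np
from math import log

N = 20
# inner series: log(x + 3) = log 3 + sum_k (-1)^(k+1) x^k / (k 3^k)
inner = np.zeros(N + 1)
inner[0] = log(3.0)
for k in range(1, N + 1):
    inner[k] = (-1) ** (k + 1) / (k * 3.0 ** k)

# outer log: log(inner) = log(c0) + log(1 + u), u = inner/c0 - 1, u(0) = 0
c0 = inner[0]
u = inner / c0
u[0] = 0.0
series = np.zeros(N + 1)
series[0] = log(c0)
uk = np.zeros(N + 1); uk[0] = 1.0
for k in range(1, N + 1):
    uk = np.convolve(uk, u)[:N + 1]      # u(x)^k, truncated
    series += (-1) ** (k + 1) / k * uk   # log(1 + u) expansion

x = 0.1
approx = sum(series[n] * x ** n for n in range(N + 1))
print(approx - log(log(x + 3.0)))        # tiny, well inside |x| < 2
```

Subtracting the inner singularity first, as with log(x+2) above, is the same bookkeeping in reverse: strip the dominant singular part and study what remains.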
~ Jay Daniel Fox
#30
(08/26/2009, 07:21 PM)jaydfox Wrote: Count "numerically" as a singularity? It's not a singularity, so at best it means that you just need more terms in the power series.
Of course, that was what I was saying. Small changes in the input cause big changes in the output; that means one needs to increase the precision, doesn't it?

Quote: And as mentioned before, you must truncate the slog series before reverting, or of course you'll get bizarre results.
As far as I remember, you only suggested that for recentering, which I don't do; I thought that had become clear by now. The development of slog at 0 (sexp at -1) behaves as expected, with convergence radius 1. So there is no "of course" if I do the same procedure, just with a conjugated function.

Quote:But the closest singularity is at -2, so that is what limits the radius of convergence.
Well, I mentioned that several times, and all my reasoning is about this discrepancy, in case you didn't notice.

Quote:When I reverted a 600-term truncation of a 1200-term slog, I got a sexp function that approximated the logarithmic singularity at x=-2 very nicely.
Are you talking about recentering the Abel function, or about my method of shift-conjugation of the original function? Computing the Carleman matrix of exp at some place other than 0 is unexpectedly time consuming, as I already mentioned: 100 terms take perhaps 10-20 minutes on my machine, while 400 terms at 0 take perhaps 5 minutes (without your acceleration method, of course).

