Funny pictures of sexps
#11
(08/25/2009, 07:04 PM)jaydfox Wrote: If you try reverting the 60-term truncation for the series developed at x=1, you will likely see much better behavior of the isexp_e power series. Try it and let me know.

You mean 60 out of 100, or 60 out of 60?
Reply
#12
(08/25/2009, 07:04 PM)jaydfox Wrote: Update: Ach, I was looking at your graphs upside down! I always neglect the negation when using the root test, so I look at the reciprocal of the apparent radius of convergence. So where your graphs shoot up right at the end, mine would have shot down. So perhaps I'm misinterpreting your results.

Ya, I was thinking about which one to take, and decided on the one that indicates the radius of convergence itself and not its reciprocal.
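For concreteness, here is a minimal sketch (Python) of the root test taken both ways round; log(1+x) stands in for the actual isexp series, since those coefficients aren't reproduced here:

```python
# Root-test estimate of the radius of convergence: R = 1 / limsup |a_k|^(1/k).
# Stand-in series: log(1+x), whose coefficients are a_k = (-1)^(k+1)/k, so R = 1.
a = [(-1) ** (k + 1) / k for k in range(1, 101)]

# |a_k|^(1/k) creeps up toward 1/R = 1 ...
roots = [abs(c) ** (1.0 / k) for k, c in enumerate(a, start=1)]
# ... while the reciprocal creeps *down* toward R = 1, which is why the same
# data looks "upside down" depending on which of the two quantities you plot.
radii = [1.0 / r for r in roots]
```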
Reply
#13
(08/25/2009, 07:11 PM)bo198214 Wrote:
(08/25/2009, 07:04 PM)jaydfox Wrote: If you try reverting the 60-term truncation for the series developed at x=1, you will likely see much better behavior of the isexp_e power series. Try it and let me know.

You mean 60 out of 100, or 60 out of 60?
Get the 100 terms for isexp_e, but then truncate to 60 before reverting the series. Truncating to 50 would be even more accurate, but would necessarily give you fewer terms to work with.
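The reversion step described here can be sketched as follows; this is a generic degree-by-degree reversion in Python (not the forum code), tested on the pair exp(x)-1 / log(1+x). The helper names `multiply` and `revert_series` are invented for the sketch:

```python
import math

def multiply(p, q, deg):
    """Cauchy product of two truncated series (lists indexed by degree)."""
    r = [0.0] * (deg + 1)
    for i, pi in enumerate(p):
        if pi == 0.0 or i > deg:
            continue
        for j, qj in enumerate(q):
            if i + j > deg:
                break
            r[i + j] += pi * qj
    return r

def revert_series(a, n):
    """Compositional inverse b of a power series a (a[0] == 0, a[1] != 0),
    so that a(b(x)) = x + O(x^(n+1)).  Coefficient k of a(b(x)) involves the
    unknown b[k] only through the term a[1]*b[k], so we solve degree by degree."""
    b = [0.0] * (n + 1)
    b[1] = 1.0 / a[1]
    for k in range(2, n + 1):
        c = 0.0
        power = [0.0] * (k + 1)
        power[0] = 1.0                       # b^0
        for j in range(1, k + 1):
            power = multiply(power, b, k)    # now b^j, truncated to degree k
            if j < len(a):
                c += a[j] * power[k]         # contribution with b[k] still 0
        b[k] = -c / a[1]
    return b

# test pair: a = exp(x) - 1 should revert to log(1 + x)
a = [0.0] + [1.0 / math.factorial(k) for k in range(1, 8)]
b = revert_series(a, 6)                      # b[k] -> (-1)^(k+1)/k
```

The "compute 100 terms, keep 60" advice corresponds to passing a shorter `a` (and smaller `n`) than was actually computed, so that the unreliable tail never contaminates the reversion.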
~ Jay Daniel Fox
Reply
#14
(08/25/2009, 06:47 PM)bo198214 Wrote: Edit: Ah, now I get what you mean by recentering. If you already have a truncated power series and want to know the power series development at a different point, then of course this different point must lie inside the convergence radius of the power series. But see, Jay, here it is different: I don't recenter a (truncated) power series, but a function, and *then* I compute its power series. I can do that at any point of the function without regard to convergence radii.
I'm not so certain. Again looking at the logarithm, there are branches, so if we recenter, how do we know which branch we are within?

Perhaps this isn't an issue, because the branches are identical except for a constant difference.

But what about log(log(x+0.5))?

Here, the branches do matter. Recenter from x=0.5 to x=0.5i. Then recenter to x=-0.5. Then to x=-0.5i. And finally back to x=0.5.

We have completed a winding around a double logarithmic singularity. The function is now quite different, not just due to a constant difference. The only way to get the correct answer is by analytic extension, which involves the very same "shift and truncate" approach I outlined. Shift the power series, then truncate it because the last fraction of the terms have become inaccurate. Repeat as necessary.
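A minimal sketch of one such "shift and truncate" step (Python; `recenter` is an invented helper name, and log(1+x) stands in for the actual series):

```python
from math import comb, log

def recenter(a, h, keep):
    """Re-expand the truncated series sum_k a[k]*x^k about the new center h,
    keeping only `keep` coefficients: b[j] = sum_{k>=j} a[k] * C(k,j) * h^(k-j).
    The tail is dropped because those coefficients come out inaccurate."""
    n = len(a) - 1
    return [sum(a[k] * comb(k, j) * h ** (k - j) for k in range(j, n + 1))
            for j in range(keep)]

# 60-term series of log(1+x) about 0 (radius of convergence 1)
a = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 61)]
# shift well inside the radius and keep only half the terms
b = recenter(a, 0.25, 30)
# b now approximates the series of log(1.25 + x): b[0] ~ log(1.25), b[1] ~ 1/1.25
```

A single in-radius shift like this can never change the branch; it is only the repetition of such steps around a branch point (the winding above) that changes the function, e.g. picks up 2*pi*i on a logarithm.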

Even though it might not be explicitly obvious, I suspect the same problem is hidden in the steps you have taken.

Edit: By the way, back when I had tried converting my accelerated slog solution into a sexp function, I had to shift the slog at x=0 to x=1. When it came time to revert the series, I found I could only keep at most half of the terms before reverting. So my 1200-term slog solution yielded about a 600-term sexp solution. And mind you, this was with my accelerated solution, which is vastly more accurate than the unaccelerated intuitive solution.
~ Jay Daniel Fox
Reply
#15
(08/25/2009, 07:35 PM)jaydfox Wrote: I'm not so certain. Again looking at the logarithm, there are branches, so if we recenter, how do we know which branch we are within?

Right, this is the question when you have a function with singularities (only in that case is the radius not infinite, and hence different continuations, as you described, may lead to different values).

However, my recentering took place on an entire function.

Quote:Even though it might not be explicitly obvious, I suspect the same problem is hidden in the steps you have taken.

It may be that something similar is at play here. Indeed, the development at 1.5, though solvable, is inaccurate:

[attached image]

Edit: On the other hand, it seems that only the additive constant needs to be adapted. The recentering process merely guarantees that the result is again an Abel function, but does not guarantee the value -1 at 0.
Reply
#16
(08/25/2009, 07:44 PM)bo198214 Wrote:
Quote:Even though it might not be explicitly obvious, I suspect the same problem is hidden in the steps you have taken.

It may be that something similar is at play here. Indeed, the development at 1.5, though solvable, is inaccurate:
Well, when in doubt, test with something we know the answer for. If I recall correctly, this method can be adapted to find the Abel function of cx, namely the logarithm base c? f(x) = cx is of course entire, so it should be quite amenable to your recentering approach.

Edit: But I can imagine without performing the computations that this would probably not tell us anything, because the branches of the logarithm are identical, except for a constant difference. This is not true of the slog.
~ Jay Daniel Fox
Reply
#17
(08/25/2009, 07:56 PM)jaydfox Wrote: Well, when in doubt, test with something we know the answer for. If I recall correctly, this method can be adapted to find the Abel function of cx, namely the logarithm base c?
It is still an open question whether the intuitive slog of cx is log_c!
The method is too complicated to get a theoretical grip on.
Reply
#18
(08/25/2009, 08:12 PM)bo198214 Wrote:
(08/25/2009, 07:56 PM)jaydfox Wrote: Well, when in doubt, test with something we know the answer for. If I recall correctly, this method can be adapted to find the Abel function of cx, namely the logarithm base c?
It is still an open question whether the intuitive slog of cx is log_c!
The method is too complicated to get a theoretical grip on.
Sorry, I was misremembering some old posts of yours, where you show that the Abel function of x+c is x/c, or something like that.

Edit: Hmm, experimenting with f(x) = e*(x+1)-1 seems to yield the power series of log(x+1). So it does indeed seem to be solvable by this approach. Obviously we can't solve it at x=0.

The first 10 terms of a 250-term solution are:
0.999350826516384*x
- 0.498825775056848*x^2
+ 0.336537863799800*x^3
- 0.258262994469142*x^4
+ 0.203808987395232*x^5
- 0.157090948593760*x^6
+ 0.121920358237637*x^7
- 0.103864057515499*x^8
+ 0.101694333870558*x^9
- 0.108441429864967*x^10
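Indeed, log(x+1) satisfies the Abel equation for this f exactly, since alpha(f(x)) = log(e*(x+1)) = 1 + log(x+1) = alpha(x) + 1; a quick sanity check (Python sketch):

```python
import math

def f(x):
    """The test map from above: f(x) = e*(x+1) - 1, with fixed point -1."""
    return math.e * (x + 1) - 1

def alpha(x):
    """Conjectured Abel function: alpha(f(x)) = alpha(x) + 1."""
    return math.log(x + 1)

# check the Abel equation at a few sample points
# (it degenerates only near x = -1, where log(x+1) is singular)
for x in (0.0, 0.3, -0.5, 2.0):
    assert abs(alpha(f(x)) - alpha(x) - 1.0) < 1e-9
```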
~ Jay Daniel Fox
Reply
#19
Now I corrected my code to get the constant right.
At 1.5 there is a similarly "good" islog as there is at 0.
Moreover, the effect indeed points toward a singularity of some isexp's at a certain radius. This can't be seen for the islog at 0, because the isexp is then developed at -1 and can only have radius 1.
The following two conformal plots - the first isexp at -1 (inverse of the islog at 0), the second isexp at 0.5 (inverse of the islog at 1.5); both continued to the left and right by log and exp - show this similar limitation of the convergence radius, which must be caused by a singularity.

[two attached images: conformal plots]
The first picture (isexp@-1) shows the conformal map of (-1,1)x(0,0.9).
The second picture (isexp@0.5) shows the conformal map of (-1,1)x(0,1.4).

The singularity of isexp@-1 is at -2 and causes the convergence radius to be 1.
The convergence radius of isexp@0.5 must be reduced by some singularity other than -2.
Reply
#20
We may also be talking about different things.
You talk about continuing the islog to points outside the convergence radius.
But I talk about the possibly different islogs (or isexps) that occur when developing the Carleman matrix at different points of the original function.
So the isexp@0.5 is perhaps not equal to the isexp@-1, and may hence also have different singularities; but perhaps their singularities are very close, who knows.
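For what it's worth, here is a minimal sketch (Python) of how I understand the "develop the Carleman system at a point" step, using Jay's toy map f(x) = e*(x+1)-1 developed at 0, where the exact Abel function is log(x+1). The name `intuitive_abel` is invented for the sketch, and the plain Gaussian elimination is only there to keep it self-contained:

```python
import math

def intuitive_abel(n):
    """Sketch of the 'intuitive' Abel function for f(x) = e*(x+1) - 1, developed
    at 0: truncate the infinite linear system  sum_j a_j*[x^k]f(x)^j = a_k + [k==0]
    (with a_0 normalized to 0) to n unknowns a_1..a_n and n rows k = 0..n-1.
    Here [x^k] f(x)^j = C(j,k) * e^k * (e-1)^(j-k)."""
    e = math.e
    A = [[math.comb(j, k) * e ** k * (e - 1) ** (j - k) if k <= j else 0.0
          for j in range(1, n + 1)] for k in range(n)]
    for k in range(1, n):
        A[k][k - 1] -= 1.0                  # the -a_k term (column j = k)
    rhs = [1.0] + [0.0] * (n - 1)
    # plain Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            rhs[r] -= m * rhs[col]
    a = [0.0] * (n + 1)                     # a[j] = coefficient of x^j
    for r in range(n - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * a[c + 1] for c in range(r + 1, n))
        a[r + 1] = s / A[r][r]
    return a

# the exact Abel function of this f is log(x+1): coefficients 1, -1/2, 1/3, ...
a = intuitive_abel(16)
```

Developing the same system at a different point x0 changes the [x^k] f(x0+x)^j entries, and it is exactly the open question above whether the truncations then converge to the *same* Abel function.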

But along these lines: when you continued the islog to the right (possibly several times), what happened to the convergence radius? Did it shrink? Are there more singularities of the islog (note that I previously talked about the singularities of the isexp) than just the two primary fixed points?
I think in Henrici's "Applied and Computational Complex Analysis" there are error bounds for the continuation. If I remember right, the more iterative steps you do for the continuation, the larger the original development must be. But then you can get the continued coefficients within arbitrary error bounds.
Reply

