08/03/2014, 12:06 PM
(This post was last modified: 08/15/2014, 09:54 PM by sheldonison.)

(08/03/2014, 08:46 AM)tommy1729 Wrote:
(08/03/2014, 04:54 AM)sheldonison Wrote:
(08/02/2014, 11:48 PM)tommy1729 Wrote: ....

f(x) is the half-iterate from the sinh method.

I assume that since sinh is an odd function, f(x), the asymptotic half-iterate, would also be an odd function, though this is not required. As I remember, the closest singularity to the origin for sinh^{0.5} is on the imaginary axis.

For f(x) asymptotic to exp^{0.5}, the branch cut is on the negative real axis. For the asymptotic to sinh^{0.5}, one possible branch cut is on the imaginary axis, in both directions, which would lead to an odd function, with the even Taylor series coefficients equal to zero. Is this what you had in mind?

Just as you use Kneser in post #9, I use the 2sinh here.

So just for numerical reasons.

I'm still on the real line.

It is convenient since it satisfies ln(f(exp(x))) = f(x), which simplifies the equations.

regards

tommy1729

post#16: the Gaussian approximation would also work for the 2sinh method. For the Gaussian approximation to exp^{0.5}, the error in the ratio to the "true" Taylor series coefficient varies from 0.02 for the a1 coefficient, falling to 0.00048 for a20, and to 0.000018 for a300. For large positive numbers, 2sinh^{0.5} behaves similarly to exp^{0.5}, but in the complex plane the similarity goes away as you approach the negative real axis.
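As a sanity check (my own sketch, not code from this thread), the Gaussian saddle-point approximation can be tested on f(x) = exp(x), where g(z) = ln(f(exp(z))) = exp(z) has the closed-form saddle h(n) = ln(n), and the estimate exp(g(h) - n*h)/sqrt(2*pi*g''(h)) reduces to Stirling's approximation of the exact coefficient 1/n!. The function name below is mine:

```python
import math

def gaussian_coeff_exp(n):
    """Gaussian (saddle-point) estimate of the nth Taylor coefficient of
    f(x) = exp(x).  Here g(z) = ln(f(exp(z))) = exp(z), so the saddle
    g'(h) = n gives h = ln(n), with g(h) = n and g''(h) = n."""
    h = math.log(n)
    return math.exp(n - n * h) / math.sqrt(2 * math.pi * n)

# Ratio to the exact coefficient 1/n! approaches 1 as n grows,
# mirroring the shrinking error ratios quoted above for exp^{0.5}.
for n in (5, 20, 100):
    print(n, gaussian_coeff_exp(n) * math.factorial(n))
```

The ratio exceeds 1 by roughly 1/(12n), which is Stirling's leading correction term.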

I wonder if I can come up with a general equation for interpolating f(x). It would be something along the lines of the Cauchy integral formula,

a_n = \frac{1}{2\pi i}\oint \frac{f(w)}{w^{n+1}}\, dw

conveniently, for g(z) = \ln(f(\exp(z))), this becomes

a_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\big( g(y+iz) - n(y+iz) \big)\, dz

Then g'(z) would be defined as the derivative of g, and its inverse would be h(z), where h(n) would be the optimal "numerical" point to calculate the nth derivative. If g(x) is real valued at pi i, you have the trivial case, and you take the integral from -pi i to +pi i. Otherwise, find the real minimum of g(y+iz), and that is what you use to take the integral, from post#70. Of course, zeros of f(z) get in the way, as do singularities of f(z).... But in some cases, like exp^{0.5}, the integral converges, and all of the Taylor series coefficients converge and are defined as y goes to infinity.
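As a numerical sketch of the contour integral (my own code and names, using f = exp as a test case where a_n = 1/n! is known): for an entire function the integral gives the same a_n at any radius, and the saddle radius e^{h(n)}, which is n for f = exp since h(n) = ln(n), is the numerically well-behaved choice.

```python
import cmath
import math

def cauchy_coeff(f, n, r, steps=4096):
    """Approximate a_n = (1/2pi) * Integral_{-pi}^{pi} f(r e^{it}) (r e^{it})^{-n} dt
    with a uniform sum over the circle of radius r (spectrally accurate
    for periodic analytic integrands)."""
    total = 0j
    for k in range(steps):
        t = -math.pi + 2 * math.pi * k / steps
        z = r * cmath.exp(1j * t)
        total += f(z) / z ** n
    return (total / steps).real

# For f = exp, the result is 1/n! at any radius; r = n (the saddle
# radius e^{h(n)}) keeps the integrand well scaled.
print(cauchy_coeff(cmath.exp, 5, 5.0))   # ~ 1/5! = 0.008333...
print(cauchy_coeff(cmath.exp, 5, 2.0))   # same value at a different radius
```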

Here is the generalized equation, modified, from post#70:

a_n = \lim_{y\to\infty} \frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\big( g(y+iz) - n(y+iz) \big)\, dz

If there are zeros, or if the limit does not converge as y goes to infinity, then we go back to using y = h(n), which would be defined. h(n) is the inverse of the derivative of g(z), and is an optimal value to use for y for the nth derivative. It would be interesting to analyze how the integral behaves for other functions, like 2sinh, or maybe even tet(z), but if it is not well behaved then we have the following, where the exp cancels the logarithmic singularities due to zeros in g:

a_n \approx \frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\big( g(h(n)+iz) - n(h(n)+iz) \big)\, dz
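When h(n) has no closed form, it can be computed numerically by inverting g'. A sketch of that step (my own construction), using f = 2sinh itself as the test function, so g(z) = ln(2 sinh(exp(z))) and g'(z) = exp(z)*coth(exp(z)); since 2sinh(x) behaves like exp(x) for large x, h(n) should be nearly ln(n):

```python
import math

def gprime(z):
    """g(z) = ln(2*sinh(exp(z))) for f = 2sinh, so g'(z) = exp(z)*coth(exp(z))."""
    x = math.exp(z)
    return x / math.tanh(x)

def h(n, lo=-5.0, hi=20.0, iters=200):
    """Invert g' by bisection; g'(z) = exp(z)*coth(exp(z)) is increasing in z."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gprime(mid) < n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For large n the correction coth(exp(h)) - 1 is vanishingly small, so
# h(n) is numerically indistinguishable from ln(n), just as for f = exp.
print(h(50.0), math.log(50.0))
```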

- Sheldon