Borel summation
#11
(08/30/2022, 02:30 AM)JmsNxn Wrote: \[
\begin{align}
\int_0^\infty e^{-x} F(\sqrt{t}x)\,dx &= \sum_{k=0}^\infty b_k t^k\\
&= t^{-1/2}\int_0^\infty e^{-ut^{-1/2}} F(u)\,du
\end{align}
\]

That's a relief, James, this formula finally works very well numerically!
This is the function under the integral for t=2:
And at a first glance also gives very good results for the half-iterate.
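The identity is easy to check numerically for a toy choice of coefficients. Here is a minimal sketch (my own, not from the thread), assuming \(b_k = 1/k!\) so that the Borel-type integral \(\int_0^\infty e^{-x} F(\sqrt{t}x)\,dx\) with \(F(u) = \sum_k b_k u^{2k}/(2k)!\) should reproduce \(\sum_k t^k/k! = e^t\):

```python
import math

# Toy coefficients b_k = 1/k!, so sum_k b_k t^k = e^t, and the identity
# int_0^inf e^{-x} F(sqrt(t) x) dx = sum_k b_k t^k should reproduce e^t,
# since int_0^inf e^{-x} x^(2k) dx = (2k)!.
K = 60
coef = [1.0 / (math.factorial(k) * math.factorial(2 * k)) for k in range(K)]

def F(u):
    # F(u) = sum_k b_k u^(2k) / (2k)!  (truncated at K terms)
    return sum(c * u ** (2 * k) for k, c in enumerate(coef))

def borel_integral(t, X=60.0, n=6000):
    # composite trapezoid rule for int_0^X e^{-x} F(sqrt(t) x) dx;
    # the integrand decays like e^{-x}, so truncating at X = 60 is harmless
    h = X / n
    st = math.sqrt(t)
    s = 0.5 * (F(0.0) + math.exp(-X) * F(st * X))
    for i in range(1, n):
        x = i * h
        s += math.exp(-x) * F(st * x)
    return s * h

print(borel_integral(0.5), math.exp(0.5))  # both close to 1.6487...
```

With these convergent toy coefficients nothing divergent is actually being resummed, of course; the point is just that the integral transform reproduces the power series term by term.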
#12
(08/30/2022, 08:45 AM)bo198214 Wrote:
(08/30/2022, 02:30 AM)JmsNxn Wrote: \[
\begin{align}
\int_0^\infty e^{-x} F(\sqrt{t}x)\,dx &= \sum_{k=0}^\infty b_k t^k\\
&= t^{-1/2}\int_0^\infty e^{-ut^{-1/2}} F(u)\,du
\end{align}
\]

That's a relief, James, this formula finally works very well numerically!
This is the function under the integral for t=2:

And at a first glance also gives very good results for the half-iterate.

YES!

I plan to do a write up soon; I'll add some more flavour to that formula. We've opened the mellin transform/fourier transform/laplace transform flood gates!!!

Also, note that this form of the answer is much better for \(t \approx 0\), and in the attracting petal. We want \(e^{-ut^{-1/2}}\) to be as small as possible--and \(t\) to be within the region of the asymptotic expansion. To make this work "globally" is trickier, because we need to analytically continue the laplace transform--which sounds harder than it is Big Grin
#13
So I just thought of something offhand, and it determines how I parameterize things. All of these cases are identical, but it helps me choose my language if you find one of them more reasonable than the others. We can take three \(F\) functions. Let's define:

\[
F_1(x) = \sum_{k=0}^\infty b_k \frac{x^k}{k!^2}
\]

\[
F_2(x) = \sum_{k=0}^\infty b_k \frac{x^{2k}}{(2k)!}
\]

Or; we can go full Bessel function:

\[
F_3^v(x) = \sum_{k=0}^\infty b_k \frac{x^{2k + v}}{2^{2k+v}k!\Gamma(1+k+v)}
\]


Each of these forms always produces the same "Borel sum"--but they do so with different flavours of integral transforms. The first one we can write as an entire function, and we can prove that the coefficients create a new function which satisfies appropriate bounds; and therefore an integral transform exists. (This is how I wrote my original Mellin transform proof that \(b_k = O(c^kk!)\), for Gottfried's problem.) The second one is what I just posted. The third one is definitely the most interesting, and it's how I attempted to first solve this problem.
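As a quick sanity check on how these forms hang together (my own sketch, again assuming toy coefficients \(b_k = 1/k!\)): at \(v = 0\) we have \(\Gamma(1+k) = k!\), so the Bessel-flavoured series collapses onto the first form, \(F_3^0(x) = F_1(x^2/4)\):

```python
import math

b = [1.0 / math.factorial(k) for k in range(40)]  # toy coefficients b_k = 1/k!

def F1(x):
    # F_1(x) = sum_k b_k x^k / k!^2
    return sum(bk * x ** k / math.factorial(k) ** 2 for k, bk in enumerate(b))

def F3(x, v=0):
    # F_3^v(x) = sum_k b_k x^(2k+v) / (2^(2k+v) k! Gamma(1+k+v))
    return sum(bk * x ** (2 * k + v)
               / (2 ** (2 * k + v) * math.factorial(k) * math.gamma(1 + k + v))
               for k, bk in enumerate(b))

# at v = 0, Gamma(1+k) = k! and the series collapses: F_3^0(x) = F1(x^2/4)
print(F3(2.0), F1(1.0))  # the two values agree
```

So the three forms really are repackagings of the same coefficient data; what changes is which integral transform undoes the damping.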

Personally, I think we should go the Bessel function route, because we get a whole bunch of dope ass \(\Gamma\) functions everywhere; and Gamma functions are super well behaved and studied; they're like a comfy home. Plus! We get to work with \(\Gamma\) functions.

This also lets us have the best behaved Mellin transforms. Laplace transforms are great, but they are actually more restrictive than Mellin transforms. Laplace transforms are easier to perform; but they tend to only be interesting when you actually approach the boundary of what converges and what doesn't, while Mellin transforms only talk about that boundary. No one Laplace transforms a zeta function (though you can analytically continue the \(\zeta\) function using Borel sums); it takes ten minutes to describe the zeta function acted on by a Laplace transform. There's a reason we look at the Mellin transform. They have more complex rules. Laplace is good for differential stuff--but we have no differential stuff here.

I think we need to be looking at Euler's expression:

\[
\int_0^\infty \frac{e^{-t}}{1+tz}\,dt = \sum_{k=0}^\infty (-1)^k k! z^k
\]

And just as much, Euler's expression:

\[
\int_0^\infty e^{-t}t^{z-1}\,dt = \Gamma(z)
\]

And I think we need to draw our basis there, rather than solely relying on Laplace transforms (which are really just the poor man's Mellin transform, lol).

So, I think if we want to really discuss this problem mathematically, we're going to have to refer heavily to Mellin transforms, despite Laplace transforms "seeming easier" and being "more efficient".


THE MELLIN TRANSFORM WILL BE NEEDED TO EXPAND THIS THING GLOBALLY AND/OR NEAR THE BOUNDARY!
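To make the first Euler expression concrete: here is a small numerical sketch (mine, not from the post) comparing the integral against the divergent series \(\sum_k (-1)^k k! z^k\) truncated near its smallest term, at the hypothetical sample point \(z = 0.1\):

```python
import math

def euler_integral(z, X=60.0, n=12000):
    # trapezoid rule for int_0^inf e^{-t} / (1 + t z) dt,
    # truncated at X (the tail beyond X is of order e^{-X})
    h = X / n
    f = lambda t: math.exp(-t) / (1.0 + t * z)
    s = 0.5 * (f(0.0) + f(X))
    s += sum(f(i * h) for i in range(1, n))
    return s * h

def euler_series(z, N):
    # partial sum of the divergent series sum_k (-1)^k k! z^k
    return sum((-1) ** k * math.factorial(k) * z ** k for k in range(N))

z = 0.1
# truncating the divergent series near its smallest term (k ~ 1/z) matches
# the integral -- Euler's "sum" of the series -- to a few parts in 10^4
print(euler_integral(z), euler_series(z, 10))
```

The integral here is exactly the Borel sum of the factorially divergent series, which is the whole game being played with the half-iterate coefficients above.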
#14
I let my computer run overnight to calculate 1295 coefficients of the \(e^x-1\) half iterate.
But even with the new \((2k)!\) factorial, one has to be very careful about which t is suitable for calculating.
E.g. with 
\[ B(t,u) = e^{-u}\sum_{k=0}^{1295} \frac{c_k}{(2k)!}t^k u^{2k} \]
and t=0.05 one gets dangerously large values like
\(B(0.05,1500)=-3.50337267212933e600\)
Only for t=0.01 do I seem to get small values, which I checked up to u=1000000.
This means we can only directly use Borel summation for values \(|z|\le 0.01\), so I did this for the half-iterate \(h\) of \(f(z)=e^z-1\).
Comparing \(h(h(z))-f(z)\) on the circle with radius \(|z|=0.01\) gave a precision of around 17 digits.

This sounds not bad; however, if we compare the positive half-iterate \(h_+\) and the negative half-iterate \(h_-\) - calculated with traditional means - we get:
\begin{align}
h_-(0.01i)&=-0.000024999999999924043598696143166 + 0.0099999791666927083178232643487\;i\\
h_+(0.01i)&=-0.000024999999999924043599282740330 + 0.0099999791666927083178232889256\;i
\end{align}
i.e. they only differ at the 24th digit ... ! The Borel summed value was just too imprecise to decide whether it is the positive or the negative half-iterate at 0.01i:
\begin{align}
h(0.01i)&=-0.000024999999999924035895201845925 + 0.0099999791666927091010608208421\; i
\end{align}

Very disappointing for a whole night's computation ...
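For anyone wanting to reproduce a toy version of this experiment: the following sketch (my own, not Bo's actual code) computes the formal half-iterate coefficients of \(f(z)=e^z-1\) order by order in exact rational arithmetic, using the fact that once the lower coefficients are fixed, the new coefficient enters \([z^k]\,h(h(z))\) linearly with factor 2. Bo's run used 1295 coefficients; N here is kept tiny.

```python
from fractions import Fraction
from math import factorial

N = 14  # truncation order; a toy-scale stand-in for Bo's 1295 coefficients

def compose(outer, inner, n):
    # truncated composition outer(inner(z)); both series have zero constant term
    out = [Fraction(0)] * n
    pw = [Fraction(0)] * n
    pw[0] = Fraction(1)          # running power inner(z)^k, starting at k = 0
    for k in range(n):
        for i, c in enumerate(pw):
            out[i] += outer[k] * c
        nxt = [Fraction(0)] * n  # pw <- pw * inner, truncated at order n
        for i, c in enumerate(pw):
            if c:
                for j in range(1, n - i):
                    nxt[i + j] += c * inner[j]
        pw = nxt
    return out

f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]  # e^z - 1
h = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)              # h(z) = z + ...

for k in range(2, N):
    c = compose(h, h, N)[k]  # z^k coefficient of h(h(z)) while h[k] is still 0
    h[k] = (f[k] - c) / 2    # h[k] enters that coefficient linearly, twice

assert compose(h, h, N) == f   # h really is a half-iterate of e^z - 1 to order N
print(h[2], h[3], h[4])        # 1/4, 1/48, 0
```

These exact rationals are the \(c_k\) that then get \((2k)!\)-damped in \(B(t,u)\); the hard part Bo describes is not producing them but summing them stably for usefully large \(|z|\).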
#15
(09/12/2022, 06:07 PM)bo198214 Wrote: I let my computer run overnight to calculate 1295 coefficients of the \(e^x-1\) half iterate.
But even with the new \((2k)!\) factorial, one has to be very careful about which t is suitable for calculating.
E.g. with 
\[ B(t,u) = e^{-u}\sum_{k=0}^{1295} \frac{c_k}{(2k)!}t^k u^{2k} \]
and t=0.05 one gets dangerously large values like
\(B(0.05,1500)=-3.50337267212933e600\)
Only for t=0.01 do I seem to get small values, which I checked up to u=1000000.
This means we can only directly use Borel summation for values \(|z|\le 0.01\), so I did this for the half-iterate \(h\) of \(f(z)=e^z-1\).
Comparing \(h(h(z))-f(z)\) on the circle with radius \(|z|=0.01\) gave a precision of around 17 digits.

This sounds not bad; however, if we compare the positive half-iterate \(h_+\) and the negative half-iterate \(h_-\) - calculated with traditional means - we get:
\begin{align}
h_-(0.01i)&=-0.000024999999999924043598696143166 + 0.0099999791666927083178232643487\;i\\
h_+(0.01i)&=-0.000024999999999924043599282740330 + 0.0099999791666927083178232889256\;i
\end{align}
i.e. they only differ at the 24th digit ... ! The Borel summed value was just too imprecise to decide whether it is the positive or the negative half-iterate at 0.01i:
\begin{align}
h(0.01i)&=-0.000024999999999924035895201845925 + 0.0099999791666927091010608208421\; i
\end{align}

Very disappointing for a whole night's computation ...

Thanks for the effort though!


regards

tommy1729

