Bessel functions and the iteration of $$e^z - 1$$

JmsNxn — 08/19/2022, 06:14 AM

So, this is something I've come across which includes three things we've all seen, and one new thing. That new thing being Bessel functions. I will remind the reader that the Bessel functions are given by:

$$J_v(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\Gamma(1+k+v)} \left( \frac{x}{2}\right)^{2k+v}\\$$

We only care about the $$v = 0$$ version, for which:

$$J_0(2\sqrt{x}) = \sum_{k=0}^\infty \frac{(-1)^k x^k}{k!^2}\\$$

This function has the awesome property that:

$$\Lambda(s) = \int_0^\infty J_0(2\sqrt{x})x^{s-1}\,dx\\$$

Which is analytically continuable to:

$$\Lambda(s) = \sum_{k=0}^\infty \frac{(-1)^k}{k!^2(s+k)} + \int_1^\infty J_0(2\sqrt{x})x^{s-1}\,dx\\$$

Which is meromorphic for $$\Re(s) < \delta$$, where $$\delta$$ is found from the asymptotic that $$J_0(2\sqrt{x})$$ is bounded by $$x^{-\delta}$$ (here $$\delta = 1/4$$, from the standard decay $$J_0(t) = O(t^{-1/2})$$).

We're going to start with the function $$g(x)$$ such that $$g : (-\infty, 0] \to (-\infty , 0]$$ and $$g(g(x)) = e^{x} - 1$$. We notice instantly that $$e^{-\infty} - 1 = -1$$, so $$g(g(-\infty)) = -1$$, and thereby $$g(-\infty) = g^{-1}(-1)$$ is a finite constant, because $$g$$ is injective. By which we have that $$g$$ is a bounded function. Therefore:

$$\int_0^\infty g(-x)x^{s-1}\,dx\\$$

Converges for $$-1 < \Re(s) < 0$$, because $$g(x) \sim x$$ as $$x \to 0$$ and $$g$$ is bounded. This is the Mellin transform of $$g(-x)$$; normalized by $$\Gamma(s)$$, we'll write:

$$\partial g(s) = \frac{1}{\Gamma(s)}\int_0^\infty g(-x)x^{s-1}\,dx\\$$

Where beautifully, if we take the asymptotic expansion of $$g$$ about $$0$$, we get that:

$$\partial g(-k) = k! g_k\\$$

Where $$g_k$$ are the coefficients of the asymptotic expansion $$g(x) = \sum_{k=1}^\infty g_k x^k$$ (Gottfried's idea from before). We can analytically continue $$\partial g(s)$$ to $$\Re(s) < 0$$ (using standard residue arguments), and here is where the fun begins.
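As a quick numerical sanity check (a sketch of my own using mpmath; the sample point and tolerances are arbitrary choices), note that the series $$\sum_{k\ge 0}(-1)^k x^k/k!^2$$ is $$J_0(2\sqrt{x})$$, since $$\left(\tfrac{2\sqrt{x}}{2}\right)^{2k} = x^k$$ in the defining Bessel series, and that the residues of the continuation at $$s=-k$$ match the closed form $$\Gamma(s)/\Gamma(1-s)$$ worked out later in the thread:

```python
import mpmath as mp

mp.mp.dps = 30  # 30 significant digits

# Check 1: sum_{k>=0} (-1)^k x^k / (k!)^2 equals J_0(2*sqrt(x)).
x = mp.mpf('0.7')
series_val = mp.fsum((-1) ** k * x ** k / mp.factorial(k) ** 2
                     for k in range(60))
bessel_val = mp.besselj(0, 2 * mp.sqrt(x))

# Check 2: the continuation of Lambda(s) exhibits simple poles at
# s = -k with residues (-1)^k / (k!)^2; compare numerically against
# the closed form Lambda(s) = Gamma(s)/Gamma(1-s).
def residue_closed_form(k, eps=mp.mpf(10) ** -15):
    """Numeric residue of Gamma(s)/Gamma(1-s) at s = -k."""
    s = -k + eps
    return eps * mp.gamma(s) / mp.gamma(1 - s)

residues_ok = all(
    abs(residue_closed_form(k) - (-1) ** k / mp.factorial(k) ** 2)
    < mp.mpf(10) ** -10
    for k in range(6)
)
```

At $$x = 0.7$$ the partial sum and $$J_0(2\sqrt{x})$$ agree to working precision, and the first six residues match $$(-1)^k/k!^2$$.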
Using Mellin inversion (the Fourier transform in disguise), for $$-1 < c < 0$$ we have:

$$\frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(s) \partial g(s) x^{-s} \,ds = g(x) = \sum_{k=1}^\infty g_k x^k\\$$

Which is valid for $$\Re(x) < 0$$. Now we can totally change the game by introducing Bessel functions. If I write:

$$\frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \Lambda(s) \partial g(s) x^{-s} \,ds = \mathcal{B}g(x) = \sum_{k=1}^\infty g_k \frac{x^k}{k!}\\$$

We are now asking that this object converges for $$x \approx 0$$, and that this is not merely an asymptotic series; that would ultimately show $$g_k = O(c^k k!)$$. Well, wouldn't you know that the Bessel function's Mellin transform $$\Lambda(s)$$ is a standard kind of function:

$$\Lambda(s) = \frac{\Gamma(s)}{\Gamma(1-s)}\\$$

So I have reduced Gottfried's problem to showing that, for $$-1 < c < 0$$, the following function is holomorphic for $$|x| < \delta$$ for some $$\delta > 0$$:

$$\sum_{k=1}^\infty g_k \frac{x^k}{k!} = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\Gamma(s)}{\Gamma(1-s)} \partial g(s) x^{-s}\,ds\\$$

Where:

$$\partial g(s) = \frac{1}{\Gamma(s)}\int_0^\infty g(-x)x^{s-1}\,dx\\$$

Now to solve this problem, we note that $$\partial g(s)$$ is bounded in the left half plane, and that $$\Gamma(s)$$ cancels out $$\Gamma(1-s)$$, where at best we are left with a decay like $$1/|\Im(s)|^{1+\delta}$$. This would follow from standard bounds on Mellin transforms, and Gamma function asymptotics (a la Stirling). This integral converges absolutely for $$|x| < \delta$$, and does so uniformly, showing that Gottfried's coefficients are $$O(c^k k!)$$.

EDIT: Okay, so I worked the actual asymptotics out for $$\Lambda$$.
But:

$$|\Gamma(x+iy)| \sim \sqrt{2\pi} |y|^{x-\frac{1}{2}}e^{-\frac{1}{2}\pi |y|}\\$$

Thereby, if we choose $$-1 < c < 0$$, then:

$$\left|\frac{\Gamma(c+iy)}{\Gamma(1-c-iy)}\right| \sim \frac{|y|^{c-\frac{1}{2}}}{|y|^{1-c-\frac{1}{2}}} \sim |y|^{2c-1}\\$$

Therefore, so long as $$c < 0$$, the above integral converges, provided $$\partial g(s)$$ is bounded in the left half plane, which should be provable using a similar argument.

JmsNxn — 08/21/2022, 12:17 AM

Okay, let's start this over with more rigor, by which I'll work in steps.

We begin by denoting $$f(z) = e^{z}-1$$, which has the neutral fixed point at $$0$$ with multiplier $$1$$. We can construct an Abel function in the left half plane $$\Re(z) < 0$$, because this domain is within the attracting petal of $$0$$; which is to say that $$\lim_{n\to\infty}f^{\circ n}(z) = 0$$ for all $$\Re(z) < 0$$. This is seen by just observing the orbit. The Abel function $$\alpha(z)$$ is holomorphic in the left half plane, and satisfies:

$$\alpha(e^z-1) = \alpha(z) + 1\\$$

By which we have the identity $$\alpha(z+2\pi i) = \alpha(z)$$--it must inherit $$f$$'s period. We can write:

$$\alpha(z) = \alpha(f^{\circ n}(z)) - n\\$$

Which extends $$\alpha$$ to the whole left half plane. This function is holomorphic in a petal near zero, where it is non-singular, so we can take an inverse function, which we write as $$\alpha^{-1}(z)$$, satisfying $$f(\alpha^{-1}(z)) = \alpha^{-1}(z+1)$$. We can then construct a holomorphic square root of $$f$$ under composition, which we write:

$$g(z) = \alpha^{-1}\left(\frac{1}{2} + \alpha(z)\right)\\$$

As per David E. Speyer's comment, this function is extendable to $$\mathbb{C}\setminus[0,\infty)$$--as referenced here https://mathoverflow.net/questions/4347/...ar-and-exp  Which is stated as a result of Baker (too lazy to find an English copy of the paper, the original is in German).
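The asymptotic coefficients of this functional square root at $$0$$ can be computed term by term from $$g(g(z)) = e^z - 1$$ alone. Here is a minimal sketch of mine in Python (exact rational arithmetic; all names are my own), which matches a hand computation of the first few coefficients $$g(z) = z + \tfrac{z^2}{4} + \tfrac{z^3}{48} + 0\cdot z^4 + \tfrac{z^5}{3840} + \cdots$$:

```python
from fractions import Fraction
from math import factorial

def compose(a, b, N):
    """Coefficients of a(b(z)) modulo z^(N+1); a, b are Taylor
    coefficient lists with b[0] == 0."""
    res = [Fraction(0)] * (N + 1)
    power = [Fraction(0)] * (N + 1)
    power[0] = Fraction(1)                      # b(z)^0 = 1
    for k in range(N + 1):
        for i in range(N + 1):
            res[i] += a[k] * power[i]
        nxt = [Fraction(0)] * (N + 1)           # power <- power * b
        for i in range(N + 1):
            if power[i]:
                for j in range(1, N + 1 - i):
                    nxt[i + j] += power[i] * b[j]
        power = nxt
    return res

def half_iterate_coeffs(N):
    """Solve g(g(z)) = e^z - 1 for the formal series g, order by order."""
    f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]
    g = [Fraction(0), Fraction(1)]              # g(z) = z + ...
    for n in range(2, N + 1):
        trial = (g + [Fraction(0)] * (n + 1))[:n + 1]   # g_n set to 0
        c = compose(trial, trial, n)[n]
        # g_n enters g(g) linearly with weight 2 (once from the outer
        # linear term, once from the inner series, since g'(0) = 1)
        g.append((f[n] - c) / 2)
    return g

g = half_iterate_coeffs(8)
```

Since the $$g_n$$ are solved exactly, recomposing `g` with itself reproduces the coefficients $$1/k!$$ of $$e^z-1$$ through the computed order; the recursion extends to any order one has patience for.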
This function then has two properties we are interested in. Firstly, $$g : \mathbb{C}_{\Re(z) < 0} \to \mathbb{C}_{\Re(z) < 0}$$, where additionally, when we attach the point at infinity to this space, we get that:

$$g(\infty) = g^{-1}(-1)\\$$

Because, on this domain, $$f(\infty) = -1$$. The function $$g$$ is non-singular, so a local inverse always exists. The second thing we need is that this function is bounded on the line $$i\mathbb{R}$$. This holds because $$g(g(i\mathbb{R})) = f(i\mathbb{R})$$ is bounded, and in turn so must be the first arc $$g(i\mathbb{R})$$. So we can strengthen our statement to: $$g(z)$$ is bounded for $$\Re(z) \le 0$$.

Now the function $$g$$ is not holomorphic at $$0$$, but can be expanded into an asymptotic series we will write as:

$$g(z) \sim \sum_{k=1}^\infty g_k z^k\\$$

And the goal is to show that there is some $$c > 0$$ such that $$g_k = O(c^kk!)$$. Or equivalently, that:

$$\mathcal{B} g(z) = \sum_{k=1}^\infty g_k \frac{z^k}{k!}\\$$

Has a nontrivial radius of convergence.

----------------------------------------

The first theorem we'll write is as follows:

Theorem: $$F(z) = \int_0^\infty g(-x)x^{z-1}\,dx\\$$

Converges for $$-1 < \Re(z) < 0$$ and has decay like:

$$|F(x + iy)| < M e^{-\frac{\pi}{2}|y|}\\$$

For some constant $$M$$.

Proof: The integral:

$$F_\theta(z) = \int_0^\infty g(-e^{i\theta} x) (e^{i\theta}x)^{z-1}e^{i\theta}\,dx\\$$

converges for all $$-\pi /2 \le \theta \le \pi/2$$ and $$-1 < \Re(z) < 0$$. The integral converges at the endpoint $$0$$, because $$g(z) \sim g_1 z$$ at $$0$$, and it converges at $$\infty$$, because $$g(-e^{i\theta}x)$$ is bounded, so the integrand is bounded by $$|x|^{\Re(z) - 1}$$. Now, by contour integration, we have that $$F_\theta$$ is constant in $$\theta$$.
To see this, consider the contour:

$$C = [0,R] + \gamma_R - [0,Re^{i\theta}]\\$$

Then:

$$\int_C g(-x) x^{z-1}\,dx = 0\\$$

The integral along the arc $$\gamma_R$$ is bounded by a constant times $$R^{\Re(z)}$$, which tends to $$0$$ as $$R \to \infty$$ since $$\Re(z) < 0$$, and therefore:

$$\int_0^\infty g(-x)x^{z-1}\,dx - \int_0^\infty g(-e^{i\theta} x)(e^{i\theta}x)^{z-1} e^{i\theta}\,dx = 0\\$$

Now consider $$y = \Im(z) > 0$$, and $$\theta = \pi/2$$. We have that:

$$F(x+iy) = e^{i\frac{\pi}{2}(x+iy)}\int_0^\infty g(-it)t^{x+iy-1}\,dt\\$$

But this is bounded as $$|F(x+iy) | \le M^+ e^{-\frac{\pi}{2} y}$$. A similar procedure can be done for $$\Im(z) < 0$$ and $$\theta = - \pi/2$$, and we are given:

$$|F(z)| \le M e^{-\frac{\pi}{2} |y|}\\$$

Where $$M$$ depends on the real part of $$z$$ only. This is a Stein & Shakarchi argument, which they develop for the Fourier transform; ours is just a change of variables of theirs. We point to Stein & Shakarchi, Complex Analysis. QED

---------------------------------------

Our second theorem will be that $$F(z)$$ is actually meromorphic for $$\Re(z) < 0$$. This is a bit more tricky.

Theorem: The function $$F(z)$$ is meromorphic for $$\Re(z) < 0$$, with simple poles at $$k \in \mathbb{Z}_{<0}$$ and residues $$(-1)^k g_k$$.

Proof: This is more of a "just check the result" argument. I could go by induction, but I want to be quick. Begin by splitting the integral defining $$F$$ as follows:

$$F(z) = \int_0^1 g(-x)x^{z-1}\,dx + \int_1^\infty g(-x)x^{z-1}\,dx\\$$

You will note instantly that the second integral is holomorphic for $$\Re(z) < 0$$--while the first integral is only holomorphic for $$\Re(z) > -1$$.
So let's add in our asymptotic expansion, which is given as:

$$\int_0^1 \left(g(-x) - \sum_{k=1}^N g_k (-x)^k + \sum_{k=1}^N g_k (-x)^k\right) x^{z-1}\,dx\\$$

Where the term by term integral of this can be written:

$$\int_0^1 g(-x)x^{z-1} \,dx = \sum_{k=1}^N g_k\frac{(-1)^k}{k+z} + \int_0^1\left(g(-x) - \sum_{k=1}^N g_k (-x)^k\right) x^{z-1}\,dx\\$$

But now the integrand on the right has an $$(N+1)$$'th order zero at $$0$$, and therefore the left-hand side is meromorphic for $$\Re(z) > -N-1$$. Combining this together, and since $$N$$ is arbitrary, we have that $$F(z)$$ is meromorphic for $$\Re(z) < 0$$, and has simple poles at $$z = -k$$, with residues:

$$\mathop{\mathrm{Res}}_{z=-k} F(z) = (-1)^k g_k\\$$

QED

-----------------------------------------------

Now we will place here a theorem on Gamma function asymptotics in the complex plane, partially attributed to Stirling and named for him:

$$\Gamma(z) \sim \sqrt{2\pi}z^{z-1/2}e^{-z}\,\,\text{as}\,\,|z| \to \infty\,\,\text{while}\,\,|\arg(z)| < \pi\\$$

Which, for our purposes, in the imaginary direction, can be written:

$$|\Gamma(x+iy)| \sim \sqrt{2\pi} |y|^{x-1/2} e^{-\frac{\pi}{2}|y|}\,\,\text{as}\,\,|y|\to\infty\\$$

From which we get that:

$$\frac{|F(x+iy)|}{|\Gamma(1-x-iy)|} \le M |y|^{x-1/2}\\$$

Which gives our third theorem:

Theorem: Let $$-1 < c <-1/2$$; then the integral:

$$h(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{F(z)}{\Gamma(1-z)}x^{-z}\,dz\\$$

Converges absolutely, and is continuous for $$x > 0$$.

Proof: Just take absolute values and use the above asymptotics, noting that $$|x^{-z}| \le x^{-\Re(z)}$$; the integrand is then bounded by $$M|y|^{c-1/2}$$, which is integrable since $$c - 1/2 < -1$$. QED

-----------------------------

And now we enter the hard part!!! Essentially, we wish to show that $$h(x) = \mathcal{B} g(-x)$$. And to do that we need a lemma that is deceptively simple, but may be confusing:

$$h(x) = \sum_{k=1}^\infty \mathop{\mathrm{Res}}_{z=-k}\frac{F(z)}{\Gamma(1-z)}x^{-z}\\$$

To calculate these residues isn't very hard.
We know that $$F(z)$$ has a simple pole at each $$-k$$, with residue $$(-1)^kg_k$$, so by running Cauchy's residue formula we get:

$$\mathop{\mathrm{Res}}_{z=-k}\frac{F(z)}{\Gamma(1-z)}x^{-z} = \frac{(-1)^k g_k}{\Gamma(1+k)}x^k = g_k \frac{(-x)^k}{k!}\\$$

Which is indeed the expansion of $$\mathcal{B}g(-x)$$. The trouble is showing that the contour integral actually equals the sum of these residues. We write this below:

Theorem: The function $$h(x)$$ from the previous theorem satisfies:

$$h(x) = \sum_{k=1}^\infty g_k \frac{(-x)^k}{k!}\\$$

For some value $$\delta > 0$$ and \(0 < x < \delta\).

----------------------------------------

$$B$$ is the modified Borel transform, $$L$$ the modified Laplace transform. Let's see:

$$LB(g(x)) = \text{the modified Borel summation of}\,\, g(x) = \int_0^\infty e^{-vt}\, B(g(xt))\,dt\\$$

This integral converges iff $$B(g(xt))$$ is bounded by $$C e^{vt}$$. Now $$g(xt) \le e^{xt} - 1$$. Also, for some $$v$$: $$B(g(xt)) \le g(xt)$$, thus $$B(g(xt)) \le e^{xt} - 1$$. So if $$e^{xt} - 1 < C e^{vt}$$, then $$B(g(xt))$$ is bounded by $$C e^{vt}$$, as desired.

In general, beyond a certain size, the larger $$v$$ is, the smaller $$B(g(xt))$$ becomes and the larger $$C e^{vt}$$. At the boundary we have $$B(g(xt))$$ asymptotic to $$C e^{vt}$$, which holds when $$v$$ and $$x$$ are about equal; let's call that positive number $$S$$. So if $$x < S$$, then $$B(g(xt))$$ is bounded by $$C e^{vt}$$ as desired, and that $$x$$ implies a positive radius of convergence for $$B(g(x))$$.

Also, $$B(g(xt))$$ for $$x > S$$ is still smaller than $$e^{xt} - 1$$, hence $$B(g(xt)) \le e^{xt} - 1$$, and $$e^{xt} - 1 < C e^{vt}$$ for some $$C$$ equal or larger than $$1$$, if $$x < v$$ or $$x = v$$. Now take $$v/10 < x < v$$; this works. QED, something like that.

The thing is that $$B(g(xt))$$ might still be a divergent series at $$0$$. However, when considered as expanded at $$xt > 0$$, then $$g$$ and $$B(g)$$ are not divergent anymore! This however requires showing $$B(g(A)) = B(g(Z))$$; in other words, the expansion point of $$B(g)$$ should not matter and should give the same Taylor series by continuation. This is true for all expansion points larger than $$0$$, because analytic functions have analytic Borel transforms.
But at $$0$$ it is problematic. This is somewhat running in circles... need to think.

Regards

tommy1729
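As a concrete illustration of the Borel-Laplace mechanism described above (a sketch of mine that swaps in the classical Euler series $$\sum_{k\ge 0} k!(-x)^k$$ for $$g$$, since its Borel transform has the closed form $$\sum_k (-xt)^k = 1/(1+xt)$$): the wildly divergent series becomes a convergent Laplace integral.

```python
import mpmath as mp

mp.mp.dps = 25

def borel_sum_euler(x):
    """Laplace integral of the Borel transform of sum_{k>=0} k! (-x)^k.
    The Borel transform sum_k (-x t)^k continues to 1/(1 + x t)."""
    x = mp.mpf(x)
    return mp.quad(lambda t: mp.e ** (-t) / (1 + x * t), [0, mp.inf])

val = borel_sum_euler(1)
closed = mp.e * mp.e1(1)   # known closed form at x = 1: e * E_1(1)
```

The partial sums of $$\sum k!(-1)^k$$ diverge, yet the integral evaluates to $$e\,E_1(1) \approx 0.5963$$; the scheme sketched in the post asks the analogous question for $$\mathcal{B}g$$, where no closed form for the Borel transform is available.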

