Bessel functions and the iteration of \(e^z -1 \)
#1
So, this is something I've come across which combines three things we've all seen with one new thing. That new thing being Bessel functions.

I will remind the reader of the Bessel functions:

$$
J_v(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\Gamma(1+k+v)} \left( \frac{x}{2}\right)^{2k+v}\\
$$

We only care about the \(v = 0\) version, for which:

$$
J_0(2\sqrt{x}) = \sum_{k=0}^\infty \frac{(-1)^k x^k}{k!^2}\\
$$
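As a quick numerical sanity check on this specialization (the code, the truncation order, and the quadrature are my own ad hoc choices), the series \(\sum_{k=0}^\infty \frac{(-1)^k x^k}{k!^2}\) can be compared against the classical integral representation \(J_0(y) = \frac{1}{\pi}\int_0^\pi \cos(y\sin\theta)\,d\theta\) evaluated at \(y = 2\sqrt{x}\):

```python
import math

# Check the v = 0 specialization J_0(2*sqrt(x)) = sum (-1)^k x^k / k!^2 against
# the classical integral representation J_0(y) = (1/pi) int_0^pi cos(y sin(theta)) dtheta.
def j0_series(x, terms=40):
    return sum((-1) ** k * x ** k / math.factorial(k) ** 2 for k in range(terms))

def j0_integral(y, n=20000):
    # midpoint rule on [0, pi]; plenty accurate for this smooth integrand
    h = math.pi / n
    return sum(math.cos(y * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

for x in (0.3, 1.7, 9.0):
    assert abs(j0_series(x) - j0_integral(2 * math.sqrt(x))) < 1e-6
```

If the argument is instead taken as \(\sqrt{2x}\), each term of the series picks up an extra \(2^{-k}\), which is why \(2\sqrt{x}\) is the normalization that matches the right-hand side.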

This function has the awesome property that:

$$
\Lambda(s) = \int_0^\infty J_0(2\sqrt{x})x^{s-1}\,dx\\
$$

Which is analytically continuable to:

$$
\Lambda(s) = \sum_{k=0}^\infty \frac{(-1)^k}{k!^2(s+k)} + \int_1^\infty J_0(2\sqrt{x})x^{s-1}\,dx\\
$$

Which is meromorphic for \(\Re(s) < \frac{1}{4}\): this follows from the asymptotic that \(J_0(2\sqrt{x})\) is bounded by \(x^{-1/4}\), so the integral over \([1,\infty)\) converges on that half plane.




We're going to start with a function \(g(x)\) such that \(g : (-\infty, 0] \to (-\infty , 0]\) and \(g(g(x)) = e^{x} -1\). Notice instantly that \(e^x - 1 \to -1\) as \(x \to -\infty\), and thereby \(g(-\infty) = g^{-1}(-1)\), a finite constant, because \(g\) is injective. From this, \(g\) is a bounded function. Therefore:

$$
\int_0^\infty g(-x)x^{s-1}\,dx\\
$$

Converges for \(-1 < \Re(s) < 0\): at \(0\) because \(g(x) \sim x\) as \(x \to 0\), and at \(\infty\) because \(g\) is bounded. This integral is the Mellin transform of \(g(-x)\), which, normalized by a Gamma factor, we'll write:

$$
\partial g(s) = \frac{1}{\Gamma(s)}\int_0^\infty g(-x)x^{s-1}\,dx\\
$$

Where beautifully, if we take the asymptotic expansion of \(g\) about \(0\), we get that:

$$
\partial g(-k) = k! g_k\\
$$

Where \(g_k\) are the coefficients of the asymptotic expansion \(g(x) = \sum_{k=0}^\infty g_k x^k\) (Gottfried's idea from before).
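For concreteness, the \(g_k\) can be computed exactly, degree by degree, from the functional equation \(g(g(x)) = e^x - 1\): the \(x^m\) coefficient of \(g(g(x))\) is \(2g_m\) plus terms involving only lower-order coefficients. A minimal sketch with exact rational arithmetic (the truncation order \(N\) and the helper names are my own):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order (arbitrary)
# f(x) = e^x - 1 as a truncated power series; index k holds the x^k coefficient
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]

def compose(a, b, n):
    # truncated composition a(b(x)) for series with a[0] = b[0] = 0
    out = [Fraction(0)] * (n + 1)
    power = [Fraction(1)] + [Fraction(0)] * n  # running powers b^j
    for j in range(1, n + 1):
        power = [sum(power[i] * b[k - i] for i in range(k + 1)) for k in range(n + 1)]
        for k in range(n + 1):
            out[k] += a[j] * power[k]
    return out

# g_1 = 1 since the multiplier of the fixed point is 1; solve the rest linearly:
# the x^m coefficient of g(g(x)) is 2*g_m + (terms using only g_1 .. g_{m-1})
g = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)
for m in range(2, N + 1):
    known = compose(g, g, m)[m]  # g[m] is still 0 here, so this is the known part
    g[m] = (f[m] - known) / 2

assert g[2] == Fraction(1, 4) and g[3] == Fraction(1, 48) and g[4] == 0
```

The first few values come out to \(g_2 = 1/4\), \(g_3 = 1/48\), \(g_4 = 0\).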

We can analytically continue \(\partial g(s)\) to \(\Re(s) < 0\) (using standard residue arguments), and here is where the fun begins. Using Mellin inversion (which is just a Fourier transform in disguise), for \( -1 < c < 0\) we have:

$$
\frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty}  \Gamma(s) \partial g(s) x^{-s} \,ds = g(-x) = \sum_{k=1}^\infty g_k (-x)^k\\
$$

Which is valid for \(x > 0\)--that is, it recovers \(g\) on the negative axis.




Now we can totally change the game by introducing Bessel functions. If I write:

$$
\frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty}  \Lambda(s) \partial g(s) x^{-s} \,ds = \mathcal{B}g(-x) = \sum_{k=1}^\infty g_k \frac{(-x)^k}{k!}\\
$$

We are now asking that this object converges for \(x \approx 0\), and that it is not merely an asymptotic series--which ultimately shows that \(g_k = O(c^k k!)\). Well, wouldn't you know, the Bessel function's Mellin transform \(\Lambda(s)\) is a standard kind of function:

$$
\Lambda(s) = \frac{\Gamma(s)}{\Gamma(1-s)}\\
$$
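A small numerical check of this identity (my own sketch; the offset \(\varepsilon\) is an ad hoc choice): the residues of \(\Gamma(s)/\Gamma(1-s)\) at \(s = -k\) should be \((-1)^k/k!^2\), exactly the residues visible in the series form of \(\Lambda(s)\) above:

```python
import math

# The residues of Lambda(s) = Gamma(s)/Gamma(1-s) at s = -k should equal
# (-1)^k / k!^2, the residues read off the series form of Lambda above.
eps = 1e-7  # ad hoc offset from the pole
for k in range(6):
    numeric = eps * math.gamma(-k + eps) / math.gamma(1 + k - eps)
    exact = (-1) ** k / math.factorial(k) ** 2
    assert abs(numeric / exact - 1) < 1e-4
```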



So I have reduced Gottfried's problem to showing that:

For \(-1 < c < 0\), the following function is holomorphic in \(|x| < \delta\) for some \(\delta > 0\):

$$
\sum_{k=1}^\infty g_k \frac{(-x)^k}{k!} = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\Gamma(s)}{\Gamma(1-s)} \partial g(s) x^{-s}\,ds\\
$$

Where:

$$
\partial g(s) = \frac{1}{\Gamma(s)}\int_0^\infty g(-x)x^{s-1}\,dx\\
$$



Now to solve this problem, we note instantly that \(\partial g(s)\) is bounded in the left half plane, and that the \(\Gamma(s)\) in \(\Lambda(s)\) cancels the \(\frac{1}{\Gamma(s)}\) in \(\partial g(s)\), leaving a factor \(\frac{1}{\Gamma(1-s)}\). At worst we are left with decay like \(1/|\Im(s)|^{1+\delta}\). This follows from standard bounds on Mellin transforms, and Gamma function asymptotics (à la Stirling).

This integral absolutely converges for \(|x| < \delta\), and does so uniformly. Showing that Gottfried's coefficients are \(O(c^k k!)\).


EDIT:

Okay, so I worked out the actual asymptotics for \(\Lambda\). We have:

$$
|\Gamma(x+iy)| \sim \sqrt{2\pi} |y|^{x-\frac{1}{2}}e^{-\frac{1}{2}\pi |y|}\\
$$

Thereby, if we choose \(-1 < c < 0\), then:

$$
\left|\frac{\Gamma(c+iy)}{\Gamma(1-c-iy)}\right| \sim \frac{|y|^{c-\frac{1}{2}}}{|y|^{1-c-\frac{1}{2}}} \sim |y|^{2c-1}\\
$$

Therefore, so long as \(c < 0\), the above integral converges--provided that \(\partial g(s)\) is bounded in the left half plane, which is provable using a similar argument.
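Both asymptotics here can be checked in closed form at \(c = -1/2\), with no complex Gamma routine needed: the recursion \(\Gamma(z+1) = z\Gamma(z)\) and the exact identity \(|\Gamma(\tfrac{1}{2}+iy)|^2 = \pi/\cosh(\pi y)\) reduce everything to elementary functions. A small sketch (the test points are arbitrary choices of mine):

```python
import math

# At c = -1/2 the ratio is elementary: Gamma(-1/2+iy) = Gamma(1/2+iy)/(-1/2+iy),
# Gamma(3/2-iy) = (1/2-iy)*Gamma(1/2-iy), and |Gamma(1/2+iy)| = |Gamma(1/2-iy)|,
# so |Gamma(c+iy)/Gamma(1-c-iy)| = 1/(1/4 + y^2), which is ~ |y|^{2c-1} = y^{-2}.
c = -0.5
for y in (10.0, 50.0, 500.0):
    ratio = 1.0 / (0.25 + y * y)
    assert abs(ratio / y ** (2 * c - 1) - 1) < 1e-2

# The e^{-pi|y|/2} decay of |Gamma(x+iy)| itself, on the line x = 1/2, where
# |Gamma(1/2+iy)| = sqrt(pi/cosh(pi*y)) exactly:
y = 10.0
exact = math.sqrt(math.pi / math.cosh(math.pi * y))
stirling = math.sqrt(2 * math.pi) * math.exp(-math.pi * y / 2)
assert abs(exact / stirling - 1) < 1e-9
```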
#2
Okay, let's start this over with more rigor. By which I'll work in steps.

We begin by denoting \(f(z) = e^{z}-1\), which has a neutral fixed point at \(0\) with multiplier \(1\). From this we can construct an Abel function in the left half plane \(\Re(z) < 0\), because this domain is within the attracting petal of \(0\)--which is to say \(\lim_{n\to\infty}f^{\circ n}(z) = 0\) for all \(\Re(z) < 0\). This is seen by just observing the orbit.

The Abel function \(\alpha(z)\) is holomorphic in the left half plane, and satisfies:

$$
\alpha(e^z-1) = \alpha(z) + 1\\
$$

By which we have the identity \(\alpha(z+2\pi i) = \alpha(z)\)--it must inherit \(f\)'s period. We can write:

$$
\alpha(z) = \alpha(f^{\circ n}(z)) - n\\
$$

Which ensures the extension to the left half plane. This function is holomorphic in a petal near zero, where it is non-singular, so we can take an inverse function, which we write as \(\alpha^{-1}(z)\), satisfying \(f(\alpha^{-1}(z)) = \alpha^{-1}(z+1)\). We can then construct a holomorphic square root of \(f\) under composition, which we write:

$$
g(z) = \alpha^{-1}\left(\frac{1}{2} + \alpha(z)\right)\\
$$

As per David E. Speyer's comment, this function is extendable to \(\mathbb{C}\setminus[0,\infty)\)--as referenced here https://mathoverflow.net/questions/4347/...ar-and-exp which is stated as a result of Baker (too lazy to find an English copy of the paper, the original is in German).

This function then has two properties we are interested in. Firstly, \(g : \mathbb{C}_{\Re(z) < 0} \to \mathbb{C}_{\Re(z) < 0}\), where additionally, when we assign the point at infinity in this space, we get that:

$$
g(\infty) = g^{-1}(-1)\\
$$

Because, on this domain, \(f(\infty) = -1\). The function \(g\) is non-singular, so a local inverse always exists. The second thing we need is that this function is bounded on the line \(i\mathbb{R}\). This follows because \(g(g(i\mathbb{R})) = f(i\mathbb{R})\) is bounded; and in part, so must be the first arc \(g(i\mathbb{R})\).

So we can strengthen our statement to: \(g(z)\) is bounded for \(\Re(z) \le 0\).

Now the function \(g\) is not holomorphic at \(0\), but can be expanded into an asymptotic series we will write as:

$$
g(z) \sim \sum_{k=1}^\infty g_k z^k\\
$$

And the goal we are trying to show is that there is some \(c > 0\) such that \(g_k = O(c^kk!)\). Or equivalently, that:

$$
\mathcal{B} g(z) = \sum_{k=1}^\infty g_k \frac{z^k}{k!}\\
$$

Has a non-trivial radius of convergence.

----------------------------------------

The first theorem we'll write is as follows:

Theorem:

$$
F(z) = \int_0^\infty g(-x)x^{z-1}\,dx\\
$$

Converges for \( -1 < \Re(z) < 0\) and has decay like:

$$
|F(x + iy)| < M e^{-\frac{\pi}{2}|y|}\\
$$

For some constant \(M\).

Proof:

The integral:

$$
F_\theta(z) = \int_0^\infty g(-e^{i\theta} x) (e^{i\theta}x)^{z-1}e^{i\theta}\,dx\\
$$

converges for all \(-\pi /2 \le \theta \le \pi/2\) and \(-1 < \Re(z) < 0\). The integral converges at the endpoint \(0\) because \(g(z) \sim g_1 z\) at \(0\), and converges at \(\infty\) because \(g(-e^{i\theta}x)\) is bounded and the integrand is bounded by \(|x|^{\Re(z) - 1}\).

Now, by contour integration, we have that \(F_\theta\) is constant in \(\theta\). To see this, consider the contour:

$$
C = [0,R] + \gamma_R - [0,Re^{i\theta}]\\
$$

Then:

$$
\int_C g(-x) x^{z-1}\,dx = 0\\
$$

The integral along the arc \(\gamma_R\) is bounded by \(R^{\Re(z)}\), which tends to \(0\) as \(R \to \infty\) because \(\Re(z) < 0\); and therefore the two integrals satisfy:

$$
\int_0^\infty g(-x)x^{z-1}\,dx - \int_0^\infty g(-e^{i\theta} x)(e^{i\theta}x)^{z-1} e^{i\theta}\,dx = 0\\
$$

Now consider \(y = \Im(z) > 0\), and \(\theta = \pi/2\). We have that:

$$
F(x+iy) = e^{i\frac{\pi}{2}(x+iy)}\int_0^\infty g(-it)t^{z-1}\,dt\\
$$

But this is bounded as \(|F(x+iy)| \le M^+ e^{-\frac{\pi}{2} y}\). A similar procedure can be done for \(\Im(z) < 0\) with \(\theta = -\pi/2\), and we are given:

$$
|F(z)| \le M e^{-\frac{\pi}{2} |y|}\\
$$

Where \(M\) depends on the real part of \(z\) only. This is a Stein & Shakarchi argument, which they develop for the Fourier transform; ours is just a change of variables of theirs. We point to Stein & Shakarchi, Complex Analysis.

QED

---------------------------------------

Our second theorem will be that \(F(z)\) is actually meromorphic for \(\Re(z) < 0\). This is a bit more tricky.

Theorem:

The function \(F(z)\) is meromorphic for \(\Re(z) < 0\) with simple poles at \(k \in \mathbb{Z}_{<0}\), with residues \((-1)^k g_k\).

Proof:

This is more of a check-the-result argument. I could go by induction, but I want to be quick.

Begin by expanding the integral defining \(F\) as follows:

$$
F(z) = \int_0^1 g(-x)x^{z-1}\,dx + \int_1^\infty g(-x)x^{z-1}\,dx\\
$$

You will note instantly that the second integral is holomorphic for \(\Re(z) < 0\)--while the first integral is only holomorphic for \(\Re(z) > -1\). So let's add in our asymptotic expansion, which is given as:

$$
\int_0^1 \left(g(-x) - \sum_{k=1}^N g_k (-x)^k + \sum_{k=1}^N g_k (-x)^k\right) x^{z-1}\,dx\\
$$

Where the term by term integral of this can be written:

$$
\int_0^1 g(-x)x^{z-1} \,dx = \sum_{k=1}^N g_k\frac{(-1)^k}{k+z} + \int_0^1\left(g(-x) - \sum_{k=1}^N g_k (-x)^k\right) x^{z-1}\,dx\\
$$

But now the integral on the right has an integrand with an \((N+1)\)-th order zero at \(0\), and therefore the integral on the left is meromorphic for \(\Re(z) > -(N+1)\).

Combining this together, and since \(N\) is arbitrary, we have that \(F(z)\) is meromorphic for \(\Re(z) < 0\), and has simple poles at \(z = -k\), with residues:

$$
\mathop{\mathrm{Res}}_{z=-k} F(z) = (-1)^k g_k\\
$$

QED
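This continuation scheme can be sanity-checked numerically on a model case where everything is known in closed form: take \(g(z) = e^z - 1\) itself (the full function, not the half iterate). Then \(g_k = 1/k!\), and \(F(s) = \int_0^\infty (e^{-x}-1)x^{s-1}\,dx = \Gamma(s)\) on \(-1 < \Re(s) < 0\) (the Cauchy-Saalschütz representation), whose continuation has exactly the poles and residues the theorem predicts. A rough quadrature sketch (the cutoffs, step counts, and truncation order \(N\) are ad hoc choices of mine):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def F(s, N=8):
    # model case g(z) = e^z - 1: g_k = 1/k!, and the continuation should be Gamma(s)
    def tail(x):  # sum_{k=1}^N g_k (-x)^k
        return sum((-x) ** k / math.factorial(k) for k in range(1, N + 1))
    # near 0 the subtracted integrand is O(x^{N+1}); starting at 1e-3 omits O(1e-3^{N+s+1})
    head = simpson(lambda x: (math.exp(-x) - 1 - tail(x)) * x ** (s - 1), 1e-3, 1.0, 2000)
    poles = sum((-1) ** k / (math.factorial(k) * (k + s)) for k in range(1, N + 1))
    # on [X, oo) approximate e^{-x} - 1 by -1 and integrate x^{s-1} in closed form
    X = 60.0
    far = simpson(lambda x: (math.exp(-x) - 1) * x ** (s - 1), 1.0, X, 4000) + X ** s / s
    return head + poles + far

assert abs(F(-1.5) - math.gamma(-1.5)) < 1e-3   # continued value between the poles
assert abs(F(-0.5) - math.gamma(-0.5)) < 1e-3   # agrees inside the original strip too
```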

-----------------------------------------------

Now we will place here a theorem on Gamma function asymptotics in the complex plane. Partially attributed to Stirling, and named for him:

$$
\Gamma(z) \sim \sqrt{2\pi}z^{z-1/2}e^{-z}\,\,\text{as}\,\,|z| \to \infty\,\,\text{while}\,\,|\arg(z)| < \pi\\
$$

Which, for our purposes, in the imaginary asymptotic, can be written:

$$
|\Gamma(x+iy)| \sim \sqrt{2\pi} |y|^{x-1/2} e^{-\frac{\pi}{2}|y|}\,\,\text{as}\,\,|y|\to\infty\\
$$

Of which, we get that:

$$
\frac{|F(x+iy)|}{|\Gamma(1-x-iy)|} \le M |y|^{x-\frac{1}{2}}\\
$$

Which gives our third theorem:

Theorem: Let \(-1 < c < -1/2\). Then the integral:

$$
h(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \frac{F(z)}{\Gamma(1-z)}x^{-z}\,dz\\
$$

Converges absolutely, and is continuous for \(x > 0\).

Proof:

Just take absolute values and use the above asymptotics, and note that \(|x^{-z}| < |x|^{-\Re(z)}\).

QED

-----------------------------

And now we enter the hard part!!! Essentially, we wish to show that \(h(x) = \mathcal{B} g(-x)\). And to do that we need a lemma that is deceptively simple, but may be confusing:

$$
h(x) = \sum_{k=1}^\infty \mathop{\mathrm{Res}}_{z=-k}\frac{F(z)}{\Gamma(1-z)}x^{-z}\\
$$

To calculate these residues isn't very hard. We know that \(F(z)\) has a simple pole at each \(z = -k\), with residue \((-1)^kg_k\), so by running Cauchy's integral formula we get:

$$
\mathop{\mathrm{Res}}_{z=-k}\frac{F(z)}{\Gamma(1-z)}x^{-z} = g_k \frac{(-x)^k}{k!}\\
$$

Which is indeed the expansion of \(\mathcal{B}g(-x)\). The trouble is, we need to show that the contour integral actually equals the sum of these residues. We write this below:

Theorem:

The function \(h(x)\) from the previous theorem, satisfies:

$$
h(x) = \sum_{k=1}^\infty g_k \frac{(-x)^k}{k!}\\
$$


For some value \(\delta > 0\), and \(0 < x < \delta\).

Proof:

Take a contour \(C_R\) such that:

$$
C_R = [c-iR,c+iR] - \gamma_R\\
$$

Where \(\gamma_R\) is the semi-circle to the left of this vertical line. Then we write:

$$
\frac{1}{2\pi i} \int_{C_R} \frac{F(z)}{\Gamma(1-z)} x^{-z}\,dz = \sum \mathop{\mathrm{Res}}\\
$$

Which means it's the sum of the residues of the poles inside the contour. So if we can show that:

$$
\lim_{R \to \infty} \int_{\gamma_R} \frac{F(z)}{\Gamma(1-z)} x^{-z}\,dz \to 0\\
$$

Then the integral equals the sum of the residues; and additionally, the series converges. To show this, all we need is to show that:

$$
\lim_{|z|\to\infty} \frac{F(z)}{\Gamma(1-z)}x^{-z} \to 0\,\,\text{as}\,\,|z| \to \infty\,\,\text{while}\,\,\Re(z) < 0\\
$$

This needs to be said now: \(0 < x < \delta\) with \(\delta\) arbitrarily small, by which \(x^{-z}\) has exponential decay as \(\Re(z) \to -\infty\) (since \(0 < x < 1\)). The limit is immediately true as \(\Im(z) \to \pm \infty\), as we've already shown when discussing the validity of this integral. So all that's left is to show that:

$$
\left|\frac{F(z)}{\Gamma(1-z)}\right| = o(x^{z})\,\, \text{as}\,\, \Re(z) \to - \infty\\
$$

Now to solve this, we're going to use integration by parts:

$$
F(z-1) = \int_0^\infty g(-x)x^{z-2}\,dx = \frac{1}{z-1}\int_0^\infty g'(-x)x^{z-1}\,dx\\
$$

And iterating this, we get:

$$
F(z-N) = \frac{1}{(z-1)(z-2)\cdots (z-N)}\int_0^\infty g^{(N)}(-x)x^{z-1}\,dx\\
$$

And this clearly has Gamma function growth, whereby we must have the appropriate decay.

QED.
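In the same model case \(g(z) = e^z - 1\), where \(F\) is exactly \(\Gamma\), the claimed balance between the growth of \(F(z-N)\) and the decay of \(1/\Gamma(1+N-z)\) can be seen directly: by the reflection formula, \(|\Gamma(z-N)/\Gamma(1+N-z)| = \pi/(|\sin \pi z|\,\Gamma(1+N-z)^2)\), which dies off super-exponentially in \(N\). A quick check (the sample point \(z = -1/2\) is arbitrary):

```python
import math

# Model case g(z) = e^z - 1, where F(z) = Gamma(z) exactly; then
# F(z-N)/Gamma(1+N-z) = Gamma(z-N)/Gamma(1+N-z), which collapses by reflection
# to pi/(|sin(pi z)| * Gamma(1+N-z)^2) -- rapidly decaying in N, as needed.
z = -0.5
vals = [abs(math.gamma(z - n) / math.gamma(1 + n - z)) for n in range(1, 8)]
assert abs(vals[0] - math.pi / math.gamma(2.5) ** 2) < 1e-9  # reflection-formula value
assert all(later < earlier / 4 for earlier, later in zip(vals, vals[1:]))
```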

--------------------------

Thereby, we've shown that:

$$
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{F(z)}{\Gamma(1-z)} x^{-z}\,dz = \sum_{k=1}^\infty g_k \frac{(-x)^k}{k!}\\
$$

For small enough \(0 < x < \delta\). And additionally, since the residue theorem can be rigorously applied, and the integral converges, we have that the series on the right has a non-trivial radius of convergence.

This is enough to state that \(g_k\) looks like \(O(c^kk!)\) for \(c = \frac{1}{\delta}\).

Which doesn't give as tight a bound as Gottfried wanted, but does supply a bound.


I tried not to muddy any of the details. If you have any questions please ask! I knew Fractional Calculus was good for something, lol.

#3
I'll remind readers of something additional that is very valuable, the function:

$$
\mathcal{B} g(x) : \mathbb{R}_{<0} \to \mathbb{R}_{<0}\\
$$

And has the expansion:

$$
\mathcal{B} g(-x) = \sum_{k=1}^\infty g_k \frac{(-x)^k}{k!}\\
$$

For \( |x| < \delta\), so it is analytic in a neighborhood of \(x =0\). This means we can write:

$$
\int_0^\infty e^{-t} \mathcal{B} g(-tx)\,dt \sim \sum_{k=1}^\infty g_k (-x)^k \,\,\text{as}\,\,x\to 0\\
$$

But, term by term, this also equals the expansion of the iterated function, so that:

$$
\int_0^\infty e^{-t}\mathcal{B} g(tx)\,dt = g(x)\\
$$

For \(\Re(x) < 0\). This effectively shows that the existence of the half iterate is enough for an integral construction. What's even more fascinating is that:

$$
\mathcal{L} \mathcal{B} g(x) = g(1/x)\\
$$

Where \(\mathcal{L}\) is the Laplace transform. And this construction is solely dependent on \(g\) being bounded in a closed half plane.
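The Borel-Laplace pair itself can be seen in action on a convergent toy (not the half iterate, whose summability is the whole point; the choice \(g(x) = x/(1-x)\), with \(g_k = 1\), and the quadrature parameters are mine): then \(\mathcal{B}g(x) = \sum_{k\ge 1} x^k/k! = e^x - 1\), and the Laplace integral should hand back \(g\):

```python
import math

# Toy Borel-Laplace pair: g(x) = x/(1-x) has g_k = 1, so Bg(x) = e^x - 1;
# the Laplace integral int_0^oo e^{-t} Bg(t*x) dt should return g(x) for x < 1.
def laplace_of_borel(x, T=60.0, n=60000):
    h = T / n  # midpoint rule on [0, T]; the tail beyond T is O(e^{-T})
    return sum(math.exp(-t) * (math.exp(x * t) - 1.0) * h
               for t in ((i + 0.5) * h for i in range(n)))

x = -2.0  # a point on the negative axis, where the identity above is used
assert abs(laplace_of_borel(x) - x / (1 - x)) < 1e-4
```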

There's a much more general theorem at play here. Which is that, if the half plane is invariant, and \(g\) is a contraction mapping on that half-plane--we get a Borel summable expression for said iterate. Additionally, it is nearly completely determined by an asymptotic expansion at \(0\).

If we have an asymptotic series:

$$
g(z) \sim \sum_{k=1}^\infty g_k z^k\\
$$

And we are guaranteed that \(g : \mathcal{H} \to \mathcal{H}\), where \(g\) is bounded on \(\mathcal{H}\), and additionally bounded on the closure of \(\mathcal{H}\), for \(\mathcal{H}\) a half plane. We can do all the heavy lifting from this last result. So this result is certainly definable on a larger domain.

It can probably be done for all fractional iterates about a parabolic point; but I'm not sure how to drop the Half plane requirement. And I think the fact that our original \(g\) sent a half plane to itself is very important--especially in how we define asymptotics in the Mellin transform.

But most (all?) parabolic points (probably just roots-of-unity multipliers) have an asymptotic series at said fixed point for the iterations. The question is whether we can construct an iteration in a half plane from there (and only within the attracting petal; the repelling petal would require us to look at the inverse).

If there exists a stable iteration \(f^{\circ t_0}(z) : \mathcal{H} \to \mathcal{H}\) with a parabolic point on its boundary (or something conjugate to an iteration like this), I think we can Borel sum much, much broader functions.

------------------------------------

I should also, further prove one thing from above.

When I wrote:

(08/21/2022, 12:17 AM)JmsNxn Wrote: ...

And iterating this, we get:

$$
F(z-N) = \frac{1}{(z-1)(z-2)\cdots (z-N)}\int_0^\infty g^{(N)}(-x)x^{z-1}\,dx\\
$$

And this clearly has Gamma function growth, where by we must have the appropriate decay.

I realize it might not be clear to you guys how I showed this will grow like \(c^N N!\). The first thing you can note is that:

$$
|g^{(N)}(x)| \le N! |g(x)| p(x)^N\\
$$

for some \(p(x)\). The value \(p(x) \to 0\) as \(x \to \infty\) because the radius of convergence continues to grow when we center about \(x\). So the integral ends up looking like:

$$
F(z-N) \approx \frac{N!}{(z-1)(z-2)\cdots (z-N)}\\
$$

Where:

$$
\Gamma(z) = \lim_{N\to\infty} \frac{N! N^z}{z(z+1)(z+2) \cdots (z+N)}\\
$$

This approximation for \(F(z-N)\) grows just about like the Gamma function, and our function will experience really similar growth. And then the \(\frac{1}{\Gamma(1+N-z)}\) will level everything out. I can further justify this if you'd like, and additionally I can find some sources related to Ramanujan summation and the like which use all these techniques.

Edit: This is sort of a prima facie thing in fractional calculus. If we are able to make a Mellin expansion this good, the worst this integral can explode is \(O(N!)\). I'll try to find an old paper I remember that covered these asymptotic expansions--including the theorems I used above. It's somewhere out there.
#4
Some papers filling in the gaps of my integral arguments are found here. Note they are more geared towards analytic number theory, and more general asymptotic constructions.

You can find that my analytic continuation of \(F\) is a naive idea compared to what's presented here (which includes my result):

http://pcmap.unizar.es/~chelo/investig/p...sality.pdf

You can also see, more generally, that this is related to Mellin convolution (which I didn't bother touching on, because I'm sure it would confuse everyone; I Mellin convolved to pull out the Borel sum). Also note, this paper even approaches the analytic number theory \(\approx\) quantum physics realm. I apologize, but that's much more my mathematical history.

https://www.sciencedirect.com/science/ar...759500002E

You can download the full pdf of this source from there too. This is more related to exponential sums and signal theory, but it covers the basics which I tried to speedrun at the beginning of my post. It helps to see some of the strong asymptotics you can get from Mellin transforms. You'll notice, if you read about 10-15 pages in, that they start to discuss expansions near \(0\), and how we can better explain the Mellin transform.

Also, we have used the amazing book https://www.fing.edu.uy/~cerminar/Complex_Analysis.pdf by Stein & Shakarchi; specifically their description of bounds on Fourier transforms in the complex plane, which, through a variable change, gave us our \(|F(x+iy)| < M e^{-\frac{\pi}{2}|y|}\).

I'll try to find a specific paper that covered much more of the work towards the end of my result, but was phrased in a more restrictive manner. It was a statement that:

$$
F(z-N) \sim c^N N!\\
$$

So the worst this object can grow as \(\Re(z) \to -\infty\) is factorially (minus an exponential); which means \(\frac{1}{\Gamma(1-z)}\) provides the appropriate decay to ensure our contour integral produces a convergent series.
#5
I have to admit, I am out of my depth here - unfortunately I have zero background in integral transforms (or divergent summability either, lol) - so I really cannot verify what you wrote - not even with your improved explanations. So I would appeal to maybe MphLee or Leo.W, maybe? ...

But one question I would have: what specific properties of \(e^x-1\) did you use in your argumentation? Would it also work with \(xe^x\) or \(x+x^2\)?
#6
Hey, Bo. I was just as surprised as you that this would work.

The main property I used was mentioned by David E. Speyer; which is that:

$$
g(z) : \mathbb{C}\setminus[0,\infty) \to \mathbb{C}\\
$$

And which, for our purposes, was reduced to:

$$
g(z) : \{\Re(z) < 0\} \to \{\Re(z) < 0\}\\
$$

We asked that \(g(z)\) is bounded on this domain as well. And additionally we asked that \(g(i\mathbb{R})\) was bounded.

From there, we asked that \(g(0) = 0\), and additionally had an asymptotic expansion at zero.

This gave us that \(F(z)\) is meromorphic for \(\Re(z) < 0\), with simple poles at \(z = -k\) and residues \((-1)^k g_k\); and it ensured the decay in the imaginary argument \(F(x+iy) = O(e^{-\frac{\pi}{2}|y|})\).

From here, we are just taking a Fourier transform/inverse Mellin transform of \(F(x+iy)/\Gamma(1-x-iy)\).

The really hard part, which I'm not sure about when generalizing, is that \(g^{(N)}(x)\) looks like \(M N! x^{-N}\). This should definitely happen in some form in the general case, but it might be a difficult argument to follow through generally. I kind of drew from fractional calculus here: if a function is differintegrable, it must grow at worst like the Gamma function. This is probably the only weak point in my proof; but it is correct. It's just not as clean as everything else.



So to answer your question. Can you make a half iterate \(g(g(x)) = xe^x\) that satisfies all of the above? If so, yeah it could work. I think \(e^z-1\) was kind of a perfect storm though. You're going to have to do more legwork for other functions.


EDIT: Personally, I would say that we can always construct a function \(g\) on a petal of \(f\) about a parabolic fixed point; and then we can map that domain to the left half plane with the fixed point on the boundary--whereupon you just do everything I did above, but on a different space of functions.

I mean, up to a change of variables, I believe this is generalizable.
#7
I know you don't like my local iteration idea, but if:

$$
h(z) = \phi(g(\phi^{-1}(z)))\\
$$

Where \(g\) is holomorphic in a half plane, and \(\phi\) maps the half plane to the unit disk \(\mathbb{D}\). If we ask that \(h(1) = 1\) while \(g(0) = 0\)--we can solve for any petal about a parabolic fixed point. We just have to "map to a half plane".

Petals are always simply connected, if you localize them enough. To the point they look like:

$$
f : \mathbb{D} \to \mathbb{D}\\
$$

And \(f(1) = 1\)--where there are no more fixed points within \(\mathbb{D}\). The fixed point lies on the boundary.

-----------------------------------------------

So if we conjugate arbitrary iterations, such that the petal they are holomorphic on is mapped to \(\mathbb{C}_{\Re(z) < 0}\), where it satisfies the same asymptotics as Gottfried's \(g\)--then we're cooking with fire!
#8
I just wanted to note that I found the paper I wanted to find. This explains much of the construction of \(F\) and its bounds.

It's a paper by D. Zagier (when I was searching I was writing Zaszlov--don't know why my brain made that mistake). This is about the various ways of expanding a Mellin transform using asymptotic expansions. It does so across \(z = \infty\) and \(z=0\)--where both are relatable by the mapping \(z \mapsto 1/z\).

https://people.mpim-bonn.mpg.de/zagier/f...lltext.pdf

It has a lot of the flavour of number theory (which is where Mellin transforms appear more naturally)--but this paper acts as a more casual introduction to the theorems I speedran.

Honestly; all of my results can be traced to this paper. Zagier does it way better though!


Rereading this paper...

Honestly this paper answers everything. It even goes on to show that the worst these objects can grow is \(O(m!)\)--which means they are Borel summable. This is a beautiful paper and absolutely vital to the discussion at hand. I think it answers every one of Gottfried's questions.

..............

Oh yeah baby!

Quote:......

In summary, if φ(t) is a function of t with asymptotic expansions as a sum of powers of t (or of powers of t multiplied by integral powers of log t) at both zero and infinity, then we can define in a canonical way a Mellin transform φ̃(s) which is meromorphic in the entire s-plane and whose poles reflect directly the coefficients in the asymptotic expansions of φ(t). This definition is consistent with and has the same properties (2) as the original definition (1). We end this section by giving two simple examples, while Sections 2 and 3 will give further applications of the method.

......


This is basically saying: if \(g(\infty)\) has an asymptotic expansion (for us, \(g(\infty) = g^{-1}(-1)\)), and \(g(0)\) has an asymptotic expansion, then we can define a meromorphic function on a larger domain than I even claimed. Additionally, in order to belong to the space they are talking about, the asymptotic expansions grow at worst like \(O(m!)\). So, if the Mellin transform converges, the asymptotic series at \(0,\infty\) must be at worst \(O(m!)\)--because otherwise the Mellin transform diverges.
#9
James is probably correct, but I'm still a bit confused or unsatisfied.

One of the ideas is that g(-x) is the same function as g(x) for x close to 0.

The Taylor expansion implies that ... but the radius is 0.

On the other hand I guess analysing g(-x) or g(x) should be the same, since there is only ONE Taylor expansion at 0.

Then again there are two petals.

hmm

I prefer the positive side and modified Laplace.

Maybe I'm being silly or tired lol.


Modified Borel: f(x) = a_n x^n becomes a_n x^n / ( v^n n! )

for some v > 0. 


B is the modified Borel, L is the modified Laplace.
Let's see:

LB(g(x)) = modified Borel summation of g(x) =

integral from 0 to +oo
exp(-v t) B( g(x t) ) dt

This integral converges IFF

B( g(x t) ) is bounded by C exp(v t).

Now g(x t) <= exp(x t) - 1.

Also for some v:

B( g(x t) ) <= g(x t)

thus

B( g(x t) ) <= exp(x t) - 1.

So if

exp(x t) - 1 < C exp(v t)

then

B( g(x t) ) is bounded by C exp(v t) as desired.

In general, beyond a certain size, the larger v is, the smaller B( g(x t) ) becomes and the larger C exp(v t).

At the boundary we have

B( g(x t) ) asymptotic to C exp(v t),

which holds as v and x are about equal; let's call that positive number S.

So if x < S then B( g(x t) ) is bounded by C exp(v t) as desired,

and that x implies a positive radius for B( g(x) ).
Also B( g(x t) ) for x > S is still smaller than exp(x t) - 1.

Hence

B( g(x t) ) <= exp(x t) - 1

and

exp(x t) - 1 < C exp(v t) for some C EQUAL OR LARGER THAN 1,

if x < v or x = v.

Now take

v/10 < x < v

and this works.

QED

Something like that.



Thing is that B( g(x t) ) might still be a divergent series at 0.

However, when considered as expanded at x t > 0, then g or B(g) are not divergent anymore!

This however requires showing

B( g(A) ) = B( g(Z) )

in other words, the expansion point of B(g) should not matter and should give the same Taylor series... by continuation.

This is true for all expansion points larger than 0, because analytic functions have analytic Borels.
But at 0 it is problematic.

This is somewhat running in circles ...

Need to think.

Regards,

tommy1729

