Searching for an asymptotic to exp[0.5]
(05/14/2014, 05:54 AM)sheldonison Wrote:
(05/13/2014, 04:23 AM)sheldonison Wrote: \( \text{dexphalf}(x)=\frac{d}{dx} \exp^{0.5}(x) \;\; h_n = \text{dexphalf}^{-1}(n) \)

\( a_n = \exp(\exp^{0.5}(h_n) - n h_n)\;\;\; \ln(a_n) = \exp^{0.5}(h_n) - n h_n \)

I have an improved equation for a_n:
\( a_n = \frac{\exp(\exp^{0.5}(h_n) - n h_n)}{\sqrt{2 \pi \frac{d^2}{dx^2}\exp^{0.5}(h_n) }} \; \; a_0 = \exp^{0.5}(0) \)

This is from post#16. Start with a generic function f(x), with f(x) and f'(x) increasing and positive for all x>0, and eventually increasing faster than any x^n. f(x) may or may not be entire. Then we can also use this approximation. Surprisingly, this approximation leads to Stirling's approximation for \( a_n \) when used for f(x)=exp(x). For a generic f(x), first we generate g(x), which changes the contour integral to a line integral between \( \pm \pi i \).

\( g(x) = \ln(f(e^x)) \;\;\; \text{dg}(x)=\frac{d}{dx} g(x) \;\;\; h_n = \text{dg}^{-1}(n) \)

\( a_n \; = \; \frac{1}{2\pi i} \oint \frac{f(x)}{x^{n+1}}\,dx \; = \; \frac{1}{2\pi i} \int_{x-\pi i}^{x+\pi i} \exp\left( g(x) - n\cdot x \right)\,dx \)

The line integral is exact, for any value of x, if f(x) is entire. We may have to cancel out some logarithmic singularities in g(x). The minimum of \( \frac{f(x)}{x^{n}} \) occurs at \( x=e^{h_n} \). Then the Gaussian approximation for \( a_n \), for a generic function f(x), is as follows.
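
For the record, here is the saddle-point step spelled out (my paraphrase, not verbatim from post#16). The exponent \( g(x) - nx \) is stationary where \( g'(h_n)=n \), and expanding to second order along the vertical line through \( h_n \) turns the line integral into a Gaussian integral:

\( g(x) - nx \approx g(h_n) - n h_n + \frac{g''(h_n)}{2}(x-h_n)^2 \)

\( a_n \approx \frac{e^{g(h_n)-n h_n}}{2\pi i}\int_{-i\infty}^{i\infty} e^{g''(h_n)\, t^2/2}\, dt \; = \; \frac{e^{g(h_n)-n h_n}}{\sqrt{2\pi g''(h_n)}} \)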

\( a_n \approx \frac{\exp(g(h_n) - n h_n)}{\sqrt{2 \pi g''(h_n) }} \;\;\; \) This is the generic approximation for \( a_n \) using \( h_n = \text{dg}^{-1}(n) \) from above

\( f(x) \approx f(0) + \sum_{n=1}^{\infty} a_n \cdot x^n\;\;\; \) You can refer back to post#16 for the derivation of this approximation if interested. This is the best approximation I've found without using integration. If f(x) is entire, then integration gives an exact value of \( a_n \). If f(x) is not entire, then variants of the integration technique can also be used, which leads to some of the more accurate approximations I generated for \( \exp^{0.5}(x) \) later in this thread, after post#16.
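
To make the recipe concrete, here is a minimal numeric sketch (my own, not from the thread): it inverts g' with a root-finder to get h_n and applies the Gaussian formula. The function name, the brackets, and the test orders are my assumptions; the test function f(x)=exp(x) has exact coefficients 1/n!, so the last column should stay close to 1.

import math
from scipy.optimize import brentq

def gaussian_an(g, dg, d2g, n, lo=-50.0, hi=50.0):
    h = brentq(lambda x: dg(x) - n, lo, hi)        # h_n = dg^{-1}(n)
    return math.exp(g(h) - n * h) / math.sqrt(2 * math.pi * d2g(h))

# Test with f(x) = exp(x): g(x) = ln f(e^x) = e^x, so g = g' = g'' = exp,
# and the exact coefficients are 1/n!.
for n in (2, 5, 10, 20):
    a = gaussian_an(math.exp, math.exp, math.exp, n)
    print(n, a, 1 / math.factorial(n), a * math.factorial(n))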

So, what do you think the Gaussian approximation would be for \( a_n \) for \( f(x)=\exp(x) \)? If you go through the arithmetic, then you get Stirling's approximation!

\( g(x) = \ln(f(e^x)) = \ln(\exp(e^x)) = \exp(x) \;\; g'(x)=\exp(x) \;\; g''(x)=\exp(x) \)
\( h_n = \ln(n) \)

\( \frac{1}{a_n} \; = \; \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \; \approx \; n! \;\;\; \) The Gaussian approximation for exp(x) turns out to be exactly Stirling's approximation for n!

This value of a_n works rather well, since Stirling's approximation has a known relative error term of \( \frac{1}{12n} \). It does not converge quite as quickly as the Gaussian approximation for \( \exp^{0.5}(x) \), which was the original problem motivating this thread. This leads to the conjecture that, in general, the Gaussian approximation for the Taylor series coefficients probably works better for functions that grow more slowly than exponential functions. One can conjecture that for many functions the ratio of f(x) to the Gaussian approximation of f(x) gets arbitrarily close to 1 as x gets arbitrarily large, but it is an open question for which classes of entire functions the ratio does not converge to 1.
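
Here is a quick numeric check of that claim, a sketch of mine rather than anything from the thread: working in logs to avoid overflow, n! times the Gaussian a_n should track 1 + 1/(12n).

import math
# n! * a_n(Gaussian) for f = exp, computed via logs; compare with 1 + 1/(12n)
for n in (10, 100, 1000):
    log_ratio = math.lgamma(n + 1) + (n - n * math.log(n)) - 0.5 * math.log(2 * math.pi * n)
    print(n, math.exp(log_ratio), 1 + 1 / (12 * n))
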
- Sheldon
I was mostly aware of that, Sheldon; in fact those were my thoughts when first reading post 16.
But thank you for sharing your ideas and making those implicit ideas explicit (that's how I see it), thereby making the thread more complete and readable.

Speaking of post 16, I have paid a lot of attention to post 9 and the correcting factor sqrt(n), but post 16 has a different correcting factor.

But things are subtle.

In TPID 17 I added conditions on the derivatives of the original function or of the best fake.
One of the reasons is partially related to post 16.

In post 16 the fake derivatives depend a lot on f''.

But if f''' and f'''' can be negative, and a few other unfortunate properties hold, then the second derivative might not always behave as you want.


I'm not going into details here.

However, we can combine TPID 17 and post 16 and thereby arrive at a nice conjecture.

When this conjecture is stated in terms of a contour integral, we get

Conjecture B

I talked about Conjecture B with mick and advised him to put it on MSE.
More precisely, to edit his recent question there and add Conjecture B.

By the time I finish this post it should be there.

So you can read Conjecture B on MSE:

http://math.stackexchange.com/questions/...r-dx-n-n-0




---

Am I the only one who wonders what happens if we add the positivity condition f'''(x) > 0 and then see whether we can improve the post 16 estimate with an f''' term?

Although Sheldon argues that that term has little influence, and for exp(x) it has NO influence.

But still.

Regards

Tommy1729
(09/21/2015, 10:53 AM)tommy1729 Wrote: ....
In post 16 the fake derivatives depend a lot on f''.

But if f''' and f'''' can be negative, and a few other unfortunate properties hold, then the second derivative might not always behave as you want.
... see whether we can improve the post 16 estimate with an f''' term?

Although Sheldon argues that that term has little influence, and for exp(x) it has NO influence.

Thanks for the comments, Tommy. g''' is probably the error term; I don't know how the error term behaves, or how to approximate it. For the exp "fake function", Stirling's approximation has a multiplicative error term of (1 + 1/(12n)), which is a fairly large, slowly converging error term. I could try some numeric integrals to see how much the error comes down if the x^3 term is included.
\( \int_{-\infty i}^{\infty i} \exp(\frac{g''}{2}x^2)\,dx\;\; \) Gaussian approximation without the x^3 term
\( \int_{-\pi i}^{\pi i} \exp(\frac{g''}{2}x^2+\frac{g'''}{6}x^3)\,dx\;\; \) integral approximation with the x^3 term. For f(x)=exp(x), g'''=g''
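
A numeric version of that experiment (my own sketch, not Sheldon's code): I restore the prefactor exp(g(h_n) - n h_n)/(2 pi i) that the two displayed integrals suppress, and take f(x) = exp(x), so that g'' = g''' = n at h_n = ln n. Substituting x = i t makes both integrands real (the imaginary part is odd in t and cancels).

import math
from scipy.integrate import quad

def one_over_an(n, cubic):
    log_pre = n - n * math.log(n)              # g(h_n) - n*h_n = n - n*ln(n)
    if cubic:
        # (1/2pi) * Int_{-pi}^{pi} exp(-n t^2/2) cos(n t^3/6) dt   (x = i t)
        val, _ = quad(lambda t: math.exp(-0.5 * n * t * t) * math.cos(n * t**3 / 6),
                      -math.pi, math.pi)
        val /= 2 * math.pi
    else:
        # Gaussian over the full imaginary axis gives exactly 1/sqrt(2 pi n)
        val = 1 / math.sqrt(2 * math.pi * n)
    return 1 / (math.exp(log_pre) * val)

# ratio of each approximate 1/a_n to the exact n!
for n in (5, 10, 20):
    print(n, one_over_an(n, False) / math.factorial(n),
             one_over_an(n, True) / math.factorial(n))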

There are other interesting cases. If f(x) is an even function with all odd derivatives = 0, that is fine with me, even though this is different from Tommy's TPID 17. The interpolating fake(x) function will have all positive derivatives, filling in the "zero" derivatives to smooth out the function. Perhaps Tommy's suggestion is correct and fake functions work best when g'''(x) >= 0? Also, what should be done for the Gaussian approximation when g''(x) is arbitrarily small?

\( a_n \approx \frac{\exp(g(h_n) - n h_n)}{\sqrt{2 \pi g''(h_n) }}\;\;\; \) if \( g''(h_n)<\frac{1}{2\pi} \), then the Gaussian approximation misbehaves; \( a_n \approx \exp(g(h_n) - n h_n) \)
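
Read literally, that small-g'' rule might be coded like this (a sketch; the 1/(2 pi) cutoff and the fallback are exactly as stated above, while the function name and the argument packaging are mine):

import math

def guarded_gaussian_an(log_top, g2):
    # log_top = g(h_n) - n*h_n ;  g2 = g''(h_n)
    if g2 < 1 / (2 * math.pi):        # sqrt(2*pi*g2) < 1 would inflate a_n
        return math.exp(log_top)      # so drop the Gaussian denominator
    return math.exp(log_top) / math.sqrt(2 * math.pi * g2)
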
- Sheldon
Min (f(x) / x^n) = 1/n!

My intuition suggests f(x) ~ exp(x) sqrt(x+1) / sqrt(2 pi).

Or in other notation, S9^[-1](exp(x)) ~ O( exp(x) sqrt(x) ).

More generally, S9^[r](exp(x)) ~ O( C^r exp(x) x^(-r/2) ).

Also wondering about lim Gauss^[+oo](any(x)): does the limit even exist?

I could not help noticing the resemblance between the semi-derivative of exp and S9(exp(x)). Coincidence? Or does the semi-derivative play a role in fake function theory?

For large n, r, does
a_n(r) a_n(-r) ~ Gaussian a_n ?
( when the Gaussian is good )

Regards

Tommy1729
To solve the problem of a too-small g'' I always used a kind of fake second derivative.

So g''(h_n) becomes min[g(h_n + x)/(h_n + x)^2].
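
Taken at face value, that substitution could look like the sketch below (mine; the post leaves the minimization range open, so the x > 0 window is an assumption, and I read the bracket exactly as written):

from scipy.optimize import minimize_scalar

def fake_second_derivative(g, h_n):
    # "g''(h_n) becomes min[ g(h_n + x) / (h_n + x)^2 ]", read literally
    res = minimize_scalar(lambda x: g(h_n + x) / (h_n + x) ** 2,
                          bounds=(1e-9, 50.0), method='bounded')
    return res.fun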

I also consider iterations done infinitely often.

I call this method 25.
Method 9 = S9 = post 9, no rescaling.
Method 16 = Gaussian.

9, 16, 25 makes sense :)

Also, that's an old idea that occurred to me around post 25.

Iterating method 25 seems interesting.
Stating that in terms of contour integrals that are practical to compute is another thing! But I'm thinking about "Conjecture C" ...

Regards

Tommy1729
\( g(x) = \ln(f(e^x)) \;\;\; h_n = \left(g'\right)^{-1}(n) \)

\( a_n \; = \; \frac{1}{2\pi i} \oint \frac{f(x)}{x^{n+1}}\,dx \; = \; \frac{1}{2\pi i} \int_{x-\pi i}^{x+\pi i} \exp\left( g(x) - n\cdot x \right)\,dx \)

\( a_n \approx \frac{\exp(g(h_n) - n h_n)}{\sqrt{2 \pi g''(h_n) }} \;\;\; h_n = \left(g'\right)^{-1}(n)\;\;\; \) The Gaussian approximation for the Taylor series of f(x)

Now consider another, better approximation, where m is the point at which \( \Re(g(h_n + ix)) \) is minimized; m might be less than \( \pi \), or, in the case of \( f(x)=\exp^{0.5}(x)\; \), m is actually greater than \( \pi \).
\( a_n \approx \frac{1}{2\pi i} \int_{h_n-mi}^{h_n+mi} \exp\left( g(x) - n\cdot x \right)\,dx\;\;\; a_0 = f(0)\;\; \) This is pretty much version V of the half iterate approximation first mentioned in post#31

\( f2(z) = \sum_{n=0}^{\infty} a_n z^n \)

Since this approximation actually involves an integral, it should be much more accurate than the Gaussian approximation. It works for generic analytic functions, not just entire functions. And if f(z) is entire but happens to be an even or an odd function, then f2(z) is an approximation with all positive derivatives. For example, take \( f(x)=e^x + e^{-x} \); then I think \( f2(x)=e^x+1\; \), or at least something very close to that.
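
As a sanity check (my own sketch, not Sheldon's code): for an entire f with m = pi the line integral is just Cauchy's formula after the substitution x = e^z, so it should reproduce the exact coefficients. Writing x = h_n + i t makes the imaginary part cancel by symmetry, leaving a real integrand. Shown for f(x) = exp(x), where g(x) = e^x, h_n = ln n, and the exact a_n is 1/n!:

import cmath, math
from scipy.integrate import quad

def an_line_integral(g, h_n, n, m=math.pi):
    # a_n = (1/(2 pi i)) Int_{h_n - i m}^{h_n + i m} exp(g(x) - n x) dx
    integrand = lambda t: (cmath.exp(g(h_n + 1j * t) - n * (h_n + 1j * t))).real
    val, _ = quad(integrand, -m, m)
    return val / (2 * math.pi)

for n in (5, 10):
    print(n, an_line_integral(cmath.exp, math.log(n), n), 1 / math.factorial(n))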

Let's assume f is entire, with g'(x) an increasing function for x>0 on the real axis that gets arbitrarily large; then what is the ratio \( \frac{f2(x)}{f(x)}\; \) as x gets arbitrarily large? How well does it converge to 1?
How about \( \|f2(x)-f(x)\| \) as \( \Re(x) \) gets arbitrarily large? The difference could be dominated by the error terms in the smaller derivatives.
- Sheldon
I considered replacing \( \int \exp(f''(0)\, x^2)\, dx \) with \( \int \exp(f''(x)\, x^2)\, dx \) for method III. Not sure.

Regards

Tommy1729

I think exp^[3] might fail TPID 17.

But I guess I need asymptotics for the inverse and the derivatives of

d exp^[3],

like with Lambert and Bell.

Regards

Tommy1729
I considered exp(exp(x)) by studying the Bell numbers and Lambert W.
If my calculations are correct, the fake coefficients and the derivatives match very well.

In fact the correcting factor is sqrt( 2 pi n ) (1 + log(n)^C).

I have yet to determine C, but it will be within -4 < C < 4.

Very similar to exp(x)!

And to exp^[1/2].

The difficulty with these computations is having good enough estimates for the analogues of Bell and Lambert W.

Fake function ideas seem to show how these estimates relate, without computing the estimates first!

I noted that the Gaussian method gives a slightly different result. More specifically, it adds a log factor.

The value of C will determine whether the Gaussian beats S9, but my bet is that it does.
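
This is checkable numerically; here is a sketch of mine (the Bell triangle recurrence and W(n) as the S9 minimizer are standard facts, while the orders tested and the output format are my choices). The true coefficients of exp(exp(x)) are e*B_n/n! with B_n the Bell numbers, and the S9 minimum of exp(exp(x))/x^n sits at x = W(n), since d/dx [e^x - n ln x] = 0 gives x e^x = n. The last column shows the leftover factor beyond sqrt(2 pi n):

import math
from scipy.special import lambertw

def bell_numbers(nmax):
    # Bell recurrence: B_{n+1} = sum_k C(n,k) B_k
    B = [1]
    for n in range(nmax):
        B.append(sum(math.comb(n, k) * B[k] for k in range(n + 1)))
    return B

B = bell_numbers(40)
for n in (10, 20, 40):
    x = lambertw(n).real                       # x*e^x = n minimizes exp(e^x)/x^n
    s9 = math.exp(math.exp(x) - n * math.log(x))
    true = math.e * B[n] / math.factorial(n)
    print(n, s9 / true, (s9 / true) / math.sqrt(2 * math.pi * n))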

This also explains the ln part of TPID 17.
Let me explain:

Notice that any function f bounded above by exp^[A] and below by exp^[B] will satisfy, on average,

f''(x) / f(x) < O( ln(f(x)) )^(2+eps)

Hence we get the ln part.

In fact, by this argument we need

O( sqrt(n) ln(n) ln^2(n) ln^3(n) ... )

as the error in TPID 17 (here ln^2 = ln ln is the iterated logarithm).
And add the bounds too.

Regards

Tommy1729
It turns out the correcting factor is exactly sqrt( 2 pi n ), just like for exp.

So the Gaussian is a powerful idea.

In the past I expressed doubt, due to the existence of functions with a more complicated Riemann surface; in particular because of the contour.

Neither that doubt nor its dismissal has been considered enough, hence they need more study.

No lack of work to do in fake function theory.

I'm not an expert in steepest descent methods, but they might be interesting here.

I have many more ideas, but I am most confident in this one:

Tommy-Sheldon iterations

( first order )

---

The dot product ( • ) for Taylor series:

Z(x) = z_0 + z_1 x + ...

Z(x) • u(n) = z_0 u_0 + z_1 u_1 x + ...

---

Given our valid function f(x) of which we want a fake:

F_0(x) = f(x)

G_0(x) = ln( f(exp(x)) )

G_k(x) = ln( F_k(exp(x)) )

Now use the min(f / x^n) method from post 9 = S9.
No rescaling.

F_1(x) = S9(F_0(x))

F_2(x) = F_1(x) • [ sqrt( 2 pi G_1 '' (h_n) ) ]^ -1

F_3(x) = F_1(x) • [ sqrt( 2 pi G_2 '' (h_n) ) ]^ -1

F_4(x) = F_1(x) • [ sqrt( 2 pi G_3 '' (h_n) ) ]^ -1

...

F_oo(x) = ts( f(x) ) = tsf(0) + tsf(1) x + ...
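
A rough numeric sketch of the recursion (mine, not a thread artifact): I read "•" as the coefficient-wise product from the definition above, use f(x) = exp(x) as the test case, and difference G numerically. The truncation length, brackets, step sizes, and the choice to correct only the low orders are all my assumptions. For exp, the exact answer would make the printed values equal 1.

import math
from scipy.optimize import brentq, minimize_scalar

N = 40                                            # series truncation (my choice)
f = math.exp

def series_eval(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def G(c, t):                                      # G(t) = ln( F(e^t) )
    return math.log(series_eval(c, math.exp(t)))

def dG(c, t, h=1e-4):
    return (G(c, t + h) - G(c, t - h)) / (2 * h)

def d2G(c, t, h=1e-3):
    return (G(c, t + h) - 2 * G(c, t) + G(c, t - h)) / h**2

# F_1 = S9(f): a_n = min_{x>0} f(x)/x^n, with a_0 = f(0)
s9 = [f(0.0)]
for n in range(1, N):
    r = minimize_scalar(lambda x: f(x) / x**n, bounds=(1e-6, 500.0), method='bounded')
    s9.append(r.fun)

coeffs = list(s9)
for _ in range(2):                                # F_2, F_3 from the recursion
    new = list(coeffs)
    for n in range(1, 12):                        # low orders only, my choice
        h_n = brentq(lambda t: dG(coeffs, t) - n, -5.0, 3.0)
        new[n] = s9[n] / math.sqrt(2 * math.pi * d2G(coeffs, h_n))
    coeffs = new

for n in range(1, 8):
    print(n, coeffs[n] * math.factorial(n))       # exact coefficients give 1.0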


I believe that if €f(x) is the best fake for f, with coefficients €(n), then tsf(n)/€(n) <= 1 + O(1/n).

As for higher orders, those are LIKELY both convergence accelerators for F_n(x) AND give higher precision [ 1 + O(1/n^2), I guess ], probably by also adding higher derivatives.

Notice the Tommy-Sheldon iterations do not require f'''(y) > 0 for all y > 0.

I assumed it is not possible to increase convergence speed without also increasing precision or complexity ( higher derivatives ).

This recursion reminds me of numerical methods used for differential equations.

It's weird how this nonstandard idea connects to classical ideas.
But I guess we are used to that on the tetration forum.

As far as I know this is the best method.
Sheldon's latest methods IV, V, ... are, I assume, only good/valid for tetration-type functions, not for general functions.

Notice the Tommy-Sheldon iterations solve the issue of "too small f''(h_n)".

I hope you guys do not mind me ignoring this Roman numerals hype here.

Regards

Tommy1729

