Searching for an asymptotic to exp[0.5]
(05/14/2014, 05:54 AM)sheldonison Wrote:
(05/13/2014, 04:23 AM)sheldonison Wrote:

I have an improvement in the equation for a_n

This is from post #16. Start with a generic function f(x), with f(x) and f'(x) increasing and positive for all x>0, and with f eventually increasing faster than any x^n; f(x) may or may not be entire. Then we can also use this approximation. Surprisingly, when used for f(x)=exp(x), this approximation leads to Stirling's approximation for 1/n!. For a generic f(x), first we generate g(x) = ln(f(exp(x))), which changes the contour integral for a_n into a line integral between h_n - pi i and h_n + pi i.

The line integral is exact for any value of x if f(x) is entire. We may have to cancel out some logarithmic singularities in g(x). The minimum of g(x) - n x occurs at x = h_n, where g'(h_n) = n. Then we have the Gaussian approximation for a_n for a generic function f(x), as follows.

This is the generic approximation for a_n, using h_n from above: a_n ~ exp( g(h_n) - n h_n ) / sqrt( 2 pi g''(h_n) ).
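As a numeric sanity check (my sketch, not Sheldon's code): assuming g(x) = ln(f(exp(x))) and h_n solving g'(h_n) = n, the approximation can be evaluated for f(x) = exp(x), where g = g' = g'' = exp:

```python
import math

def gaussian_coeff(g, dg, d2g, n, lo=-10.0, hi=50.0):
    """Gaussian (saddle point) approximation to the n-th Taylor
    coefficient a_n of f, where g(x) = ln f(exp(x)).
    h_n solves g'(h_n) = n (bisection, since g' is increasing)."""
    for _ in range(200):          # bisection for g'(h) = n
        mid = (lo + hi) / 2
        if dg(mid) < n:
            lo = mid
        else:
            hi = mid
    h = (lo + hi) / 2
    return math.exp(g(h) - n * h) / math.sqrt(2 * math.pi * d2g(h))

# f(x) = exp(x): g(x) = exp(x), so g' = g'' = exp(x)
n = 20
approx = gaussian_coeff(math.exp, math.exp, math.exp, n)
exact = 1 / math.factorial(n)
print(approx / exact)   # close to 1 (about 1 + 1/(12 n))
```

The ratio reproduces the leading Stirling error term mentioned later in the thread.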

You can refer back to post #16 to see the derivation of this approximation if interested. This is the best approximation I've found without using integration. If f(x) is entire, then integration allows for an exact value of a_n. If f(x) is not entire, then variants on the integration technique can also be used, which leads to some of the other, more accurate approximations for a_n that I generated later in this thread, after post #16.

So, what do you think the Gaussian approximation for a_n would be for f(x) = exp(x)? If you go through the arithmetic, you get Stirling's approximation!

The Gaussian approximation for exp(x) turns out to be exactly Stirling's approximation: a_n ~ exp(n) n^(-n) / sqrt(2 pi n) ~ 1/n!.

This value of a_n works rather well, since Stirling's approximation has a known multiplicative error term of 1 + 1/(12n). Still, it does not converge quite as quickly as the Gaussian approximation for the half-iterate exp^[1/2](x), which was the original problem motivating this thread. This leads to the conjecture that, in general, the Gaussian approximation for the Taylor series coefficients probably works better for functions that grow slower than exponential functions. One can conjecture that for many functions the ratio of f(x) over the Gaussian approximation for f(x) gets arbitrarily close to 1 as x gets arbitrarily large, but it is an open question for which class of entire functions the ratio does not converge to 1.
- Sheldon
I was mostly aware of that, Sheldon; in fact, those were my thoughts when first reading post 16.
But thank you for sharing your ideas and making those implicit ideas explicit (that's how I see it), thereby making the thread more complete and readable.

Speaking of post 16: I have paid a lot of attention to post 9 and its correcting factor sqrt(n), but post 16 has a different correcting factor.
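For f(x) = exp(x) the post-9 minimization can be done in closed form, so the correcting factor can be checked directly (a sketch under the assumption that S9 means a_n = min over x>0 of f(x)/x^n):

```python
import math

# Post-9 / S9 coefficients for f(x) = exp(x): a_n = min_{x>0} exp(x)/x^n,
# attained at x = n, so a_n = (e/n)^n.  Compare (in logs, to avoid
# underflow) with the true coefficient 1/n!.
for n in (10, 100, 1000):
    log_s9 = n * (1 - math.log(n))            # ln (e/n)^n
    log_ratio = log_s9 + math.lgamma(n + 1)   # ln( s9_coeff * n! )
    print(n, math.exp(log_ratio), math.sqrt(2 * math.pi * n))
# the S9 overshoot factor tracks sqrt(2 pi n)
```

So for exp the post-9 correcting factor grows like sqrt(n), matching the factor tommy refers to (up to the constant sqrt(2 pi)).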

But things are subtle.

In TPID 17 I added conditions on the derivatives of the original or best fake.
One of the reasons is partially related to post 16.

In post 16 the fake derivatives depend heavily on f''.

But if f''' and f'''' can be negative, and a few other unfortunate properties hold, then the second derivative might not always behave as you want.

I'm not going into details here.

However, we can combine TPID 17 and post 16 and thereby arrive at a nice conjecture.

When this conjecture is stated in terms of a contour integral, we get:

Conjecture B

I talked about Conjecture B with mick, and I advised him to put it on MSE.
More precisely: to edit his recent question there by adding Conjecture B.

By the time I finish this post, it should be there.

So you can read Conjecture B on MSE.


Am I the only one who wonders what happens if we add the positivity condition f'''(x) > 0 and then see whether we can improve the post 16 estimate with an f''' term!?

Although Sheldon argues that that term has little influence, and for exp(x) it has no influence.

But still.


(09/21/2015, 10:53 AM)tommy1729 Wrote: ....
In post 16 the fake derivatives depend heavily on f''.

But if f''' and f'''' can be negative, and a few other unfortunate properties hold, then the second derivative might not always behave as you want.
... see whether we can improve the post 16 estimate with an f''' term!?

Although Sheldon argues that that term has little influence, and for exp(x) it has no influence.

Thanks for the comments, Tommy. The g''' term is probably the error term; I don't know how the error term behaves, or how to approximate it. For the exp "fake function", Stirling's approximation has a multiplicative error term of 1 + 1/(12n), which is a fairly large, slowly converging error term. I could try some numeric integrals to see how much the error comes down if the x^3 term is included.
Gaussian approximation, without the x^3 term: a_n ~ exp( g(h_n) - n h_n ) / sqrt( 2 pi g''(h_n) ).
Integral approximation, with the x^3 term: a_n ~ exp( g(h_n) - n h_n ) * (1/(2 pi)) Integral exp( -g''(h_n) y^2/2 ) cos( g'''(h_n) y^3/6 ) dy. For f(x)=exp(x), g''' = g''.
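A rough numeric version of that experiment (my own sketch; the symmetric contour and the cos form of the cubic term are assumptions based on the standard saddle-point expansion, not Sheldon's exact integral):

```python
import math

# Numeric check of the post-16 line integral for f(x) = exp(x),
# with and without the cubic g''' term.  Here g(x) = exp(x),
# h_n = ln n, and g'' = g''' = n at the saddle.
n = 20
prefac = math.exp(n - n * math.log(n))     # exp(g(h_n) - n h_n)
a2, a3 = n, n                              # g''(h_n), g'''(h_n)

# integrate exp(-a2 y^2/2) * cos(a3 y^3/6) over y (trapezoid rule);
# the sin part is odd and integrates to zero
N, L = 200001, math.pi
dy = 2 * L / (N - 1)
total = 0.0
for i in range(N):
    y = -L + i * dy
    total += math.exp(-a2 * y * y / 2) * math.cos(a3 * y ** 3 / 6) * dy

gauss = prefac / math.sqrt(2 * math.pi * a2)
cubic = prefac * total / (2 * math.pi)
exact = 1 / math.factorial(n)
print(gauss / exact, cubic / exact)
```

Printing both ratios lets one see directly how much (or whether) the cubic term helps at a given n.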

There are other interesting cases. If f(x) is an even function with all odd derivatives = 0, that is fine with me, even though this differs from Tommy's TPID 17. The interpolating fake(x) function will have all positive derivatives, filling in the "zero" derivatives to smooth out the function. Perhaps Tommy's suggestion is correct, and fake functions work best when g'''(x) >= 0? Also, what should be done for the Gaussian approximation when g''(x) is arbitrarily small?

If g''(h_n) gets arbitrarily small, then the Gaussian approximation misbehaves.
- Sheldon
Min (f(x) / x^n) = 1/n!

My intuition suggests f(x) ~ exp(x) sqrt(x+1) / sqrt(2 pi).

Or in other notation: S9^[-1]( exp(x) ) ~ O( exp(x) sqrt(x) ).

More generally: S9^[r]( exp(x) ) ~ O( C^r exp(x) x^(-r/2) ).

Also wondering about lim Gauss^[+oo]( any(x) ) = ?? If the limit even exists!?

I could not help noticing the resemblance between the semi-derivative of exp and S9( exp(x) ). Coincidence? Or does the semi-derivative play a role in fake function theory?

Does, for large n, r:

a_n(r) a_n(-r) ~ gaussian a_n ?
(when the Gaussian is good)


To solve that problem I always used a kind of fake second derivative.

So g''(h_n) becomes min[ g(h_n + x) / (h_n + x)^2 ].

I also consider these iterations done infinitely often.

I call this method 25.
Method 9 = S9 = post 9, no rescaling.
Method 16 = the Gaussian.

9, 16, 25 makes sense :)

Also, that's an old idea that occurred to me around post 25.

Iterating method 25 seems interesting.
Stating that in terms of contour integrals that are practical to compute is another thing! But I'm thinking about "Conjecture C" ...



The Gaussian approximation for the Taylor series of f(x)

Now consider another, better approximation, where m is the minimum of the integrand, which might be less than h_n; or, in some cases, m is actually greater than h_n.
This is pretty much version V of the half-iterate approximation first mentioned in post #31.

Since this approximation actually involves an integral, it should be much more accurate than the Gaussian approximation. It works for generic analytic functions, not just entire functions. And if f(z) is entire, but f(z) happens to be an even or an odd function, then f2(z) is an approximation with all positive derivatives. For the example I tried, I think f2(z) matches, or at least comes very close to that.

Let's assume f is entire, with g'(x) an increasing function for x>0 at the real axis that gets arbitrarily large. Then what is the ratio of f(x) over its Gaussian approximation as x gets arbitrarily large? How well does it converge to 1?
How about the ratio of a_n to its Gaussian approximation as n gets arbitrarily large? The difference could be dominated by the error terms in the smaller derivatives.
- Sheldon
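Sheldon's ratio question can be probed numerically in the simplest case f(x) = exp(x), where the Gaussian coefficients are (e/n)^n / sqrt(2 pi n) (my sketch; the true a_0 = 1 is substituted since the Gaussian formula degenerates at n = 0):

```python
import math

# For f(x) = exp(x) the Gaussian coefficients are
# a_n = (e/n)^n / sqrt(2 pi n); sum the resulting series and compare
# with exp(x) to watch the ratio approach 1.
def fake_exp(x, N=400):
    s = 1.0                                   # use the true a_0 = 1
    for n in range(1, N + 1):
        s += (math.e * x / n) ** n / math.sqrt(2 * math.pi * n)
    return s

for x in (2.0, 5.0, 10.0, 20.0):
    print(x, fake_exp(x) / math.exp(x))
```

The ratio drifts toward 1 as x grows, consistent with the conjecture for this one example.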
I considered a replacement along those lines for method III. Not sure.



I think exp^[3] might fail TPID 17.

But I guess I need asymptotics for the inverse and the derivatives of exp^[3], like those known via Lambert W and the Bell numbers.


I considered exp(exp(x)) by studying the Bell numbers and Lambert W.
If my calculations are correct, the fake coefficients and the derivatives match very well.

In fact, the correcting factor is sqrt( 2 pi n ) (1 + log(n)^C).

I have yet to determine C, but it will lie within -4 < C < 4.

Very similar to exp(x)!

And to exp^[1/2].

The difficulty with these computations is having good enough estimates for the analogues of the Bell numbers and Lambert W.

Fake function ideas seem to show how these estimates relate, without computing the estimates first!
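A rough check of the exp(exp(x)) claim (my sketch; it assumes S9 means a_n = min over x>0 of f(x)/x^n, and uses exp(exp(x)) = e * sum B_n x^n / n! with Bell numbers B_n):

```python
import math

# Bell numbers via the standard recurrence B_{n+1} = sum C(n,k) B_k;
# exp(exp(x)) = e * sum_n B_n x^n / n!
def bell(N):
    B = [1]
    for n in range(N):
        B.append(sum(math.comb(n, k) * B[k] for k in range(n + 1)))
    return B

def s9_coeff(n):
    # ln of min over x > 0 of exp(exp(x)) / x^n, via a crude grid search
    f = lambda x: math.exp(x) - n * math.log(x)
    x0 = min((0.01 * i for i in range(1, 2000)), key=f)
    return f(x0)

B = bell(40)
for n in (10, 20, 30):
    log_true = 1 + math.log(B[n]) - math.lgamma(n + 1)   # ln( e * B_n / n! )
    print(n, math.exp(s9_coeff(n) - log_true) / math.sqrt(2 * math.pi * n))
```

The printed column is the S9 overshoot divided by sqrt(2 pi n); whatever is left over is the log-factor (1 + log(n)^C) tommy is estimating.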

I noted that the Gaussian method gives a slightly different result; more specifically, it adds a log factor.

The value of C will determine whether the Gaussian beats S9, but my bet is that it does.

This also explains the ln part of TPID 17.
Let me explain:

Notice that any function f bounded above by exp^[A] and below by exp^[B] will, on average, satisfy

f''(x) / f(x) < O( ln(f(x)) )^(2 + eps)

Hence we get the ln part.
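The bound can be illustrated on a function squeezed between exponential iterates, e.g. f(x) = exp(exp(x)), where f''/f = e^x + e^(2x) and ln f = e^x (a sketch of the closed-form check, not a proof of the averaged bound):

```python
import math

# Check f''(x)/f(x) against (ln f(x))^2 for f(x) = exp(exp(x)):
# f'' / f = e^x + e^(2x)  and  ln f = e^x,
# so f''/f = ln f + (ln f)^2 < (ln f)^(2 + eps) for large x.
for x in (1.0, 2.0, 3.0, 4.0):
    lnf = math.exp(x)                       # ln f(x)
    ratio = math.exp(x) + math.exp(2 * x)   # f''(x)/f(x)
    print(x, ratio / lnf ** 2)              # approaches 1 from above
```

Here ratio / (ln f)^2 = 1 + e^(-x), so the exponent 2 is sharp for this example.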

In fact, by this argument we need

O( sqrt(n) ln(n) ln^2(n) ln^3(n) ... )

as the error term in TPID 17. And the bounds should be added too.


It turns out the correcting factor is exactly sqrt( 2 pi n ), just like for exp.

So the Gaussian is a powerful idea.

In the past I expressed doubt, due to the existence of functions with a more complicated Riemann surface; in particular because of the contour.

Neither that doubt nor its dismissal has been considered enough, hence both need more study.

No lack of work to do in fake function theory.

I'm not an expert in steepest descent methods, but they might be interesting here.

I have many more ideas but I am most confident in this one :

Tommy-Sheldon iterations

( first order )


The dot product ( • ) for Taylor series :

Z(x) = z_0 + z_1 x + ...

Z(x) • u(n) = z_0 u_0 + z_1 u_1 x + ...


Given our valid function f(x), of which we want a fake:

F_0(x) = f(x)

G_0(x) = ln( f(exp(x)) )

G_k(x) = ln( F_k(exp(x)) )

Now use the min(f / x^n) method from post 9 = S9.
No rescaling.

F_1(x) = S9(F_0(x))

F_2(x) = F_1(x) • [ sqrt( 2 pi G_1 '' (h_n) ) ]^ -1

F_3(x) = F_1(x) • [ sqrt( 2 pi G_2 '' (h_n) ) ]^ -1

F_4(x) = F_1(x) • [ sqrt( 2 pi G_3 '' (h_n) ) ]^ -1


F_oo(x) = ts( f(x) ) = tsf(0) + tsf(1) x + ...

I believe that if €f(x) is the best fake for f, with coefficients €(n), then tsf(n)/€(n) <= 1 + O(1/n).

As for higher orders: those are likely both convergence accelerators for F_n(x) AND give higher precision [ 1 + O(1/n^2), I guess ], probably by adding higher derivatives.

Notice the Tommy-Sheldon iterations do not require f ''' (y) > 0 for all y > 0.

I assume it is not possible to increase convergence speed without paying in precision or complexity (higher derivatives).

This recursion reminds me of numerical methods used for differential equations.

It's weird how this nonstandard idea connects to classical ideas.
But I guess we are used to that on the tetration forum.

As far as I know, this is the best method.
Sheldon's latest methods IV, V, ... are, I assume, only good/valid for tetration-type functions, not for general functions.

Notice the Tommy-Sheldon iterations also solve the issue of "too small g''(h_n)".

I hope you guys do not mind me ignoring the Roman-numerals hype here.


