It turns out the correcting factor is exactly sqrt( 2 pi n ), just like for exp.
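As a sanity check (my own sketch, not part of the thread): for f(x) = exp(x), the naive fake coefficient min over x > 0 of exp(x)/x^n is attained at x = n and equals (e/n)^n, while the true Taylor coefficient is 1/n!. Stirling's formula then says their ratio is asymptotically sqrt( 2 pi n ), which the following verifies numerically (working in logs to avoid overflow):

```python
import math

# For f(x) = exp(x): min over x > 0 of exp(x)/x^n is attained at x = n,
# giving the "fake" coefficient a_n = (e/n)^n; the true coefficient is 1/n!.
# By Stirling's formula, a_n / (1/n!) = n! * e^n / n^n -> sqrt(2*pi*n).
for n in (10, 100, 1000):
    # log a_n = n - n*log(n); log(n!) = lgamma(n + 1)
    log_ratio = (n - n * math.log(n)) + math.lgamma(n + 1)
    # this quotient tends to 1 as n grows
    print(n, math.exp(log_ratio) / math.sqrt(2 * math.pi * n))
```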

So the Gaussian is a powerful idea.

In the past I expressed doubt due to the existence of functions with a more complicated Riemann surface, in particular because of the contour.

Neither that doubt nor its dismissal has been considered enough, hence both need more study.

No lack of work to do in fake function theory.

I'm not an expert in steepest descent methods, but this might be interesting here.

I have many more ideas, but I am most confident in this one:

Tommy-Sheldon iterations

( first order )

---

The dot product ( • ) for Taylor series:

Z(x) = z_0 + z_1 x + ...

Z(x) • u(n) = z_0 u(0) + z_1 u(1) x + ...
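In code this coefficient-wise product (a Hadamard product of series) just multiplies the n-th Taylor coefficient by u(n); a minimal sketch on truncated series, with `taylor_dot` being my own helper name:

```python
def taylor_dot(z, u):
    """Coefficient-wise product Z(x) • u(n) on a truncated Taylor series:
    the n-th coefficient z_n is multiplied by u(n)."""
    return [z_n * u(n) for n, z_n in enumerate(z)]

# Z(x) = 1 + 2x + 3x^2 with u(n) = 2^n gives 1 + 4x + 12x^2
print(taylor_dot([1, 2, 3], lambda n: 2 ** n))  # [1, 4, 12]
```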

---

Given our valid function f(x) of which we want a fake:

F_0(x) = f(x)

G_0(x) = ln( f(exp(x)) )

G_k(x) = ln( F_k(exp(x)) )

Now use the min( f(x) / x^n ) method from post 9, denoted S9.

No rescaling.

F_1(x) = S9(F_0(x))

F_2(x) = F_1(x) • [ sqrt( 2 pi G_1''(h_n) ) ]^(-1)

F_3(x) = F_1(x) • [ sqrt( 2 pi G_2''(h_n) ) ]^(-1)

F_4(x) = F_1(x) • [ sqrt( 2 pi G_3''(h_n) ) ]^(-1)

...

F_oo(x) = ts( f(x) ) = tsf(0) + tsf(1) x + ...
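Here is a minimal numerical sketch of the first step (the S9 coefficient plus one Gaussian correction), under my own assumptions: the minimum of f(x)/x^n is unimodal on the search range, G'' is approximated by a central difference, and the helper names are mine, not from the thread. Checking against f = exp, where the corrected coefficient should be close to the true 1/n!:

```python
import math

def s9_coefficient(f, n, lo=1e-3, hi=500.0, iters=200):
    """S9 coefficient a_n = min over x > 0 of f(x)/x^n, found by
    golden-section search on t = log x (assumes the minimum is
    unimodal on [lo, hi]).  Returns (a_n, h_n), h_n the minimizer."""
    phi = (math.sqrt(5) - 1) / 2
    g = lambda t: f(math.exp(t)) / math.exp(n * t)
    a, b = math.log(lo), math.log(hi)
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if g(c) < g(d):
            b = d
        else:
            a = c
    t = (a + b) / 2
    return g(t), math.exp(t)

def corrected_coefficient(f, n):
    """One Gaussian correction step: divide a_n by sqrt(2*pi*G''(y_n)),
    where G(y) = ln f(exp(y)) and y_n = ln h_n (G'' by central difference)."""
    a_n, h_n = s9_coefficient(f, n)
    G = lambda y: math.log(f(math.exp(y)))
    y_n, eps = math.log(h_n), 1e-4
    G2 = (G(y_n + eps) - 2 * G(y_n) + G(y_n - eps)) / eps ** 2
    return a_n / math.sqrt(2 * math.pi * G2)

# Sanity check on f = exp: corrected a_n * n! should be close to 1.
n = 20
print(corrected_coefficient(math.exp, n) * math.factorial(n))
```

This only sketches the first correction; the later F_k would repeat the step with the updated G_k as in the scheme above.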

I believe that if €f(x) is the best fake for f, with coefficients €(n), then tsf(n)/€(n) <= 1 + O(1/n).

As for higher orders: those are LIKELY both convergence accelerators for F_n(x) AND give higher precision [ 1 + O(1/n^2), I guess ], probably by adding higher derivatives.

Notice the Tommy-Sheldon iterations do not require f'''(y) > 0 for all y > 0.

I assumed it is not possible to increase convergence speed without also increasing precision or complexity (higher derivatives).

This recursion reminds me of numerical methods used for differential equations.

It's weird how this nonstandard idea connects to classical ideas.

But I guess we are used to that on the tetration forum.

As far as I know this is the best method.

Sheldon's latest methods IV, V, ... are, I assume, only good/valid for tetration-type functions, not for general functions.

Notice the Tommy-Sheldon iterations solve the issue of "too small f''(h_n)".

I hope you guys do not mind me ignoring this Roman-numerals hype here.

Regards

Tommy1729
