(10/09/2015, 08:15 AM)sheldonison Wrote:

(10/08/2015, 11:08 PM)tommy1729 Wrote: I edited post 150, where I mentioned the tommy-sheldon iterations.

Typos, mistakes, and confusion should be gone.

Although the convergence conjecture disagrees with Sheldon's recent ratio that does not tend to 1 ... Maybe ...

Things should be clear now.

Regards

Tommy1729

Start with

g''(x) for f(x) is conjectured to approach exactly g''(x) for J(x), as x gets arbitrarily large, which is the initial reason for choosing this particular f(x). But f(x) is interesting on its own.

I generate the fake function for f(x) using the Gaussian method. I assume that this is what Tommy means in post#150 by F_2(x).

> F_2(x) = F_1(x) * sqrt(2 pi G_1''(h_n))

where for f(x) optionally

Is this the same as Tommy's F_2? Tommy has the g'' in the numerator, which is a typo. It looks like Tommy's F_3 would then be the fake function for F_2(x)? I'm not sure if that's what Tommy intended or not.

f2(x) is the fake Gaussian approximation for f(x)
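A minimal sketch of the Gaussian-method coefficient formula discussed in this thread, a_n ≈ exp(g(h_n) - n h_n) / sqrt(2 pi g''(h_n)) with g'(h_n) = n. The thread's actual f(x) is not reproduced here, so this uses f(x) = exp(x) as a stand-in, and the function name `fake_coefficient` is my own; for this choice the formula reduces to Stirling's approximation of the true coefficient 1/n!.

```python
import math

# For f(x) = exp(x): g(x) = ln f(e^x) = e^x, so the saddle condition
# g'(h_n) = e^{h_n} = n gives h_n = ln n, and g(h_n) = g''(h_n) = n.
def fake_coefficient(n):
    h_n = math.log(n)        # solves g'(h_n) = n
    g_val = n                # g(h_n)   = e^{h_n} = n
    g_pp = n                 # g''(h_n) = e^{h_n} = n
    return math.exp(g_val - n * h_n) / math.sqrt(2 * math.pi * g_pp)

# Compare against the true Taylor coefficient 1/n!; the ratio tends
# to 1 from above (the classic 1 + 1/(12n) + ... Stirling correction).
for n in (2, 5, 10, 20):
    print(n, fake_coefficient(n) * math.factorial(n))
```

The ratio approaching 1 from above is just the higher-order Stirling correction; it illustrates why the fake coefficients are close to, but not exactly, the true ones.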

Now, starting with f2(x), let's generate its fake function.

From there, we can generate a new set of h_n values, which should be nearly identical to the original set of h_n values, and a new set of coefficients b_n, which should be nearly identical to the a_n values.

I did this numerically. I think I might be able to generate a closed-form equation for this new ratio result, using the equations from post#85. I guess this ratio not going to a limiting value of 1 is a contradiction for your conjecture. It's actually rather interesting, especially if you consider the ratio for non-integer values of n (e.g. n = 20.5) vs the equation above. I can post more later if interested; it turns out we have a sine wave oscillating around the limiting value.

Anyway, assuming g''(x) for f(x) is asymptotically the same as g''(x) for J(x) as x gets arbitrarily large, I expect the limiting ratio for J(x) to be the same as the limiting ratio below, as n gets arbitrarily large.

Code:
ratio of b_n over a_n where f2(x) is the function from above
 1   1.13160761703913345046
 2   1.03756115378093045262
 3   1.00584054797835817399
 4   1.00042570875240853058
 5   1.00001412678446418263
 6   1.00000021835990293497
 7   1.00000000163026326669
 8   1.00000000003096900943
 9   1.00000000002529304393
10   1.00000000002528326249
11   1.00000000002528325425
12   1.00000000002528325425
13   1.00000000002528325425
14   1.00000000002528325425
15   1.00000000002528325425
16   1.00000000002528325425
17   1.00000000002528325425
18   1.00000000002528325425
19   1.00000000002528325425
20   1.00000000002528325425
21   1.00000000002528325425
22   1.00000000002528325425
23   1.00000000002528325425
24   1.00000000002528325425
25   1.00000000002528325425
26   1.00000000002528325425
27   1.00000000002528325425
28   1.00000000002528325425
29   1.00000000002528325425
30   1.00000000002528325425
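The iterated construction (generate the fake coefficients a_n, assemble f2(x) from them, then apply the same Gaussian method to f2 and compare b_n against a_n) can be sketched as follows. Since the thread's actual f(x) is not reproduced here, this substitutes f(x) = exp(x), and all names (`gaussian_coeff`, `N_TERMS`, etc.) are my own; the limiting constant will differ from the table above, but the qualitative behavior of the ratio is the same.

```python
import math

N_TERMS = 80  # truncation order for the fake series f2(x)

def gaussian_coeff(g, gp, gpp, n, lo, hi):
    """a_n = exp(g(h_n) - n h_n) / sqrt(2 pi g''(h_n)), g'(h_n) = n,
    with h_n found by bisection on the monotone g' over [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if gp(mid) < n:
            lo = mid
        else:
            hi = mid
    h = (lo + hi) / 2
    return math.exp(g(h) - n * h) / math.sqrt(2 * math.pi * gpp(h))

# Step 1: fake coefficients a_n of f(x) = e^x, where g(x) = e^x and
# g'(h_n) = n gives h_n near ln n.  (a_0 is set to the true value 1.)
a = [1.0] + [gaussian_coeff(math.exp, math.exp, math.exp, n,
                            math.log(n) - 3, math.log(n) + 3)
             for n in range(1, N_TERMS)]

# Step 2: f2(u) = sum a_k u^k and its derivatives, then
# g2(x) = ln f2(e^x) and its first two derivatives via u = e^x.
def f2(u, d=0):  # d-th derivative of the truncated series at u
    return sum(math.prod(range(k - d + 1, k + 1)) * a[k] * u ** (k - d)
               for k in range(d, N_TERMS))

def g2(x):
    return math.log(f2(math.exp(x)))

def g2p(x):
    u = math.exp(x)
    return u * f2(u, 1) / f2(u)

def g2pp(x):
    u = math.exp(x)
    r = u * f2(u, 1) / f2(u)
    return r + u * u * f2(u, 2) / f2(u) - r * r

# Step 3: b_n from the same Gaussian method applied to f2,
# and the ratio b_n / a_n.
for n in range(1, 11):
    b_n = gaussian_coeff(g2, g2p, g2pp, n,
                         math.log(n) - 3, math.log(n) + 3)
    print(n, b_n / a[n])
```

Under this stand-in the new h_n values land very close to the originals and the ratio b_n / a_n settles near 1, mirroring the shape of the table above.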

Wait.

If a_n ≈ exp(g(h_n) - n h_n)

is an underestimate (as you say),

and the Gaussian is better, then

Gaussian > exp(g(h_n) - n h_n).

Right?

So the Gaussian cannot be exp(g(h_n) - n h_n) / sqrt(2 pi g''(h_n)).

It has to be exp(g(h_n) - n h_n) * sqrt(2 pi g''(h_n)).

Or am I crazy today?
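A quick numeric check can settle which placement of the square-root factor is consistent, at least for a test function where the true coefficients are known. Using f(x) = e^x as a hypothetical stand-in (true coefficients 1/n!, g(x) = e^x, h_n = ln n):

```python
import math

n = 10
h = math.log(n)
base = math.exp(n - n * h)         # exp(g(h_n) - n h_n), g(h_n) = n
root = math.sqrt(2 * math.pi * n)  # sqrt(2 pi g''(h_n)), g''(h_n) = n
true = 1 / math.factorial(n)

print("divided:   ", (base / root) / true)  # near 1
print("multiplied:", (base * root) / true)  # off by a factor ~ 2 pi n
```

For this f, the division form reproduces 1/n! (the saddle-point derivation), and note that exp(g(h_n) - n h_n) alone actually overestimates 1/n! here, so the "underestimate" premise may not hold in general.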

Regards

Tommy1729