Searching for an asymptotic to exp[0.5]
(09/29/2015, 12:28 PM)tommy1729 Wrote: I think exp^[3] might fail TPID 17.

But I guess I need asymptotics for the inverse and derivatives ...
(09/30/2015, 12:25 PM)tommy1729 Wrote: I considered exp(exp(x)) ... If my calculations are correct, the fake coefficients and the derivatives match very well.

In fact the correcting factor is sqrt( 2 pi n ) (1 + log(n)^C).

The Gaussian approximation for a_n converges more slowly than it does for exp(x). I printed the coefficients and the ratio over the correct coefficients for up to n=100000. Notice the ratio for n=100000 is off from 1 by about 1/100000, whereas for exp(x) it would be about 1/(12*100000); convergence is slower, but still pretty good. All of the Gaussian approximations are over-approximations of the correct values of a_n, since the g''' term reduces the exact integral as compared with the Gaussian approximation. Though one can imagine that for some functions the Gaussian approximation could be an under-approximation, due to g'''' and other higher even derivatives.


Code:
n     ratio fake(a_n)/a_n  fake(a_n)               integration(a_n)  formal power series(a_n)
1      1.15723940985387  47.6708060587910          41.1935556747161  41.1935556747161
2      1.09160388348818  106.083596475327          97.1814025948145  97.1814025948145
3      1.06898921269797  210.627119908936          197.033905868277  197.033905868277
4      1.05666212689989  382.207817995256          361.712422793656  361.712422793656
5      1.04867080297229  646.826803671434          616.806343647696  616.806343647696
6      1.04298401653642  1035.13141987781          992.471028765439  992.471028765439
7      1.03868925837461  1581.79911314889          1522.88001478340  1522.88001478340
8      1.03530915542802  2324.59752888503          2245.31727233106  2245.31727233106
9      1.03256648485568  3303.12956524844          3198.95097671131  3198.95097671131
10     1.03028799721656  4557.31116284059          4423.33713986059  4423.33713986059
11     1.02835933568202  6125.65311242020          5956.72436654410  5956.72436654410
12     1.02670169498546  8043.43440974102          7834.24674277465  7834.24674277465
13     1.02525883461920  10340.8641944161          10086.1010363855  10086.1010363855
14     1.02398943694716  13041.3319542067          12735.8071125100  12735.8071125100
15     1.02286239855711  16159.8416414750          15798.6466843152  15798.6466843152
16     1.02185381394878  19701.7152521193          19280.3657266644  19280.3657266644
17     1.02094497801528  23661.6362600396          23176.2110295482      
18     1.02012102477752  28023.0843635677          27470.3527159234      
19     1.01936997635353  32758.1917261114          32135.7235213982      
20     1.01868206390559  37828.0287629513          37134.2837017470      
21     1.01804923336751  43183.3059512592          42417.6990030404      
22     1.01746477945636  48765.4583716498          47928.3994456354      
23     1.01692224628397  54508.0627487729          53600.9697597178      
24     1.01643597047673  60338.5233957121          59363.8089204820      
25     1.01596323715813  66179.9541423454          65140.9868320276      
26     1.01551188867641  71953.1782153540          70854.2209831832      
27     1.01509167840415  77578.7670479892          76424.8947662030      
28     1.01470415091722  82979.0418156307          81776.0418053547      
29     1.01434378855430  88079.9676237661          86834.2266019472      
30     1.01400369621485  92812.8790957969          91531.2604520085      
31     1.01367956763000  97115.9869213079          95805.7022429442      
32     1.01337016704666  100935.627007882          99604.1056694341      
33     1.01307576287902  104227.226526516          102881.986926900      
34     1.01279649575461  106955.973702653          105604.499396962      
35     1.01253168208835  109097.190115455          107746.813668894      
36     1.01227998862538  110636.415062925          109294.211980315      
37     1.01203994540756  111569.220878326          110241.915456957      
38     1.01181034360760  111900.785707435          110594.670146729      
39     1.01159035416808  111645.256050902          110366.123649459      
40     1.01137943310853  110824.935317003          109578.028113050      
41     1.01117715812270  109469.336780894          108259.307559422      
42     1.01098310615209  107614.139842365          106445.028051436      
43     1.01079680921432  105300.087504960          104175.308301416      
44     1.01061777085196  102571.860792664          101494.206177202      
45     1.01044550585280  99476.9626243454          98448.6134283820      
46     1.01027957304367  96064.6397358890          95087.1870859465      
47     1.01011958833353  92384.8668237719          91459.3416295928      
48     1.00996521991299  88487.4124140736          87614.3213990070      
49     1.00981617441352  84421.0012454865          83600.3680555582      
50     1.00967218272330  80232.5833724650          79463.9933569487      
100    1.00578453692724  187.156981791671          186.080591737324      
200    1.00341299501394  1.74510894750342 E-8      1.73917315818616 E-8  
300    1.00249185889752  1.16810780755257 E-21     1.16520428289282 E-21
400    1.00198840302522  1.11337311130476 E-36     1.11116367010162 E-36
500    1.00166677366322  5.49504600939546 E-53     5.48590225200280 E-53
600    1.00144175611352  2.86723734773177 E-70     2.86310944218981 E-70
700    1.00127462579363  2.47946198504931 E-88     2.47630562203056 E-88
800    1.00114510607748  4.83232538668920 E-107    4.82679819074619 E-107
900    1.00104149537643  2.65156196967975 E-126    2.64880325333832 E-126
1000   1.00095654201347  4.84563528819201 E-146    4.84100466384362 E-146
2000   1.00054343575353  4.44050028010782 E-361    4.43808846415907 E-361
3000   1.00038874082092  1.26848456971695 E-595    1.26799164960216 E-595
4000   1.00030596222398  7.79342676222892 E-842    7.79104299738634 E-842
5000   1.00025385632356  2.84197593479513 E-1096   2.84125466433174 E-1096
6000   1.00021781130899  5.16340561932230 E-1357   5.16228121609325 E-1357
7000   1.00019128096097  7.80231979579972 E-1623   7.80082764599120 E-1623
8000   1.00017087568485  6.81651188733738 E-1893   6.81534731019828 E-1893
9000   1.00015465721180  1.41352928698151 E-2166   1.41331070828795 E-2166
10000  1.00014143364941  2.03221558548151 E-2443   2.03192820246049 E-2443
11000  1.00013043013483  4.69049191953417 E-2723   4.68988021782504 E-2723
12000  1.00012112023776  3.41159235878872 E-3005   3.41117919595346 E-3005
13000  1.00011313328559  1.35925315762782 E-3289   1.35909939824744 E-3289
14000  1.00010620042585  4.70370063713979 E-3576   4.70320115517424 E-3576
15000  1.00010012172476  2.08773817808774 E-3864   2.08752917106664 E-3864
16000  1.00009474529279  1.65935538247062 E-4154   1.65919818125315 E-4154
17000  1.00008995359053  3.15216667696599 E-4446   3.15188315375939 E-4446
18000  1.00008565418274  1.84141780147091 E-4739   1.84126008984270 E-4739
19000  1.00008177333722  4.12970846278349 E-5034   4.12937079035335 E-5034
20000  1.00007825149451  4.32839677072003 E-5330   4.32805809370588 E-5330
40000  1.00004302510030  2.00939366799719 E-11449  2.00930721735263 E-11449
60000  1.00003024269340  1.05575695570235 E-17808  1.05572502773402 E-17808
80000  1.00002352422901  1.90343379776152 E-24314  1.90338902200228 E-24314
100000 1.00001934754707  8.30238861997517 E-30925  8.30222799222832 E-30925
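To make the comparison reproducible, here is a minimal sketch of the Gaussian (saddle-point) approximation a_n ≈ exp(g(h_n) - n h_n) / sqrt(2 pi g''(h_n)), with g(u) = ln f(e^u) and the saddle h_n solving g'(h_n) = n. It tests the easy case f(x) = exp(x), where a_n = 1/n! and the ratio should approach the Stirling factor 1 + 1/(12n) mentioned above; the function names and the bisection bracket are my own choices, not from the thread.

```python
import math

def log_gaussian_fake_coeff(g, gp, gpp, n, u_lo=-50.0, u_hi=50.0):
    """Log of the Gaussian (saddle-point) approximation to the n-th Taylor
    coefficient of f, where g(u) = ln f(e^u) and gp, gpp are g', g''."""
    for _ in range(200):               # solve g'(u) = n by bisection
        mid = (u_lo + u_hi) / 2
        if gp(mid) < n:
            u_lo = mid
        else:
            u_hi = mid
    u = (u_lo + u_hi) / 2
    return g(u) - n * u - 0.5 * math.log(2 * math.pi * gpp(u))

# Check on f(x) = exp(x): g(u) = e^u, a_n = 1/n!, and the ratio
# fake(a_n)/a_n should be about 1 + 1/(12 n), the Stirling correction.
for n in [10, 100, 1000, 100000]:
    log_ratio = log_gaussian_fake_coeff(math.exp, math.exp, math.exp, n) \
                + math.lgamma(n + 1)   # lgamma(n+1) = ln(n!) = -ln(a_n)
    print(n, math.exp(log_ratio), 1 + 1 / (12 * n))
```

In principle the same routine applies to f(x) = exp(exp(x)) with g(u) = exp(e^u), though the bisection bracket must then be kept much smaller to avoid floating-point overflow.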
- Sheldon
In post 150 I wrote the formula for the Tommy-Sheldon iterations.

I edited that post to use the dot product, to avoid confusion.
It is currently the most accurate general method that does not use an integral.

Since edits are not marked on this forum, I needed to say this.

The Tommy-Sheldon iterations might be associated with some superfunction.

The method, together with series reversion, gives a simple algorithm for producing a fake; simple enough for a basic environment such as Excel or a small DOS-style C program.

----

But the main reason I am making this post is to ask:

For what f(x) does fake(a_n)/a_n converge to its limit ( 1 ? ) quadratically?

Here "fake" means the Gaussian fake or the Tommy-Sheldon iterations.

Originally I thought this could be easily answered by a type of contour integral ...

Regards

Tommy1729
In principle we can test how well our fake approximates the original for large x.

Let's assume our convergence is slower than exp^[m](x) for some positive integer m. Then consider

Fake[ exp^[m](x) ( f(x) - fake(f(x)) ) ] exp( -exp^[m-1](x) ).

So now we can use this to get a better asymptotic.

---

Another idea is to consider

F(x) = Sum a(n) x^n

A(n) = Sum b(n) x^n (1)

Now compute the fake a(n) from (1).

Call them a&(n).

Now consider fake f(x) versus Sum a&(n) x^n.

Regards

Tommy1729
If the conditions on the derivatives are not met, we can still use fake function theory to find entire asymptotics.

For instance, as in the previous post.

Or, as considered before, an asymptotic to sqrt(x),

given by

Fake( exp(x) sqrt(x) ) exp(-x).

Maybe I mentioned this before, but I think we should consider

Fake( exp(x) + sqrt(x) ) - sqrt(x).

And in general compare

Fake(a * b) / a versus Fake(a + b) - a.

Regards

Tommy1729
Fake(a + b) - a seems to always be better than Fake(a b) / a.
I can prove this for most functions.

Regards

Tommy1729
Since the beginning of this thread I have noticed a problem that has not been addressed.

The problem occurs with all methods.

I mentioned it, but I guess it was not clear enough.
Maybe because I anticipated the problem before we had the actual methods.

So now that we have a dozen methods or so, it is time to bring it up again.

This might also explain why MSE and MO users are ignoring this ( mick's posts ).

In a way it resembles irrationality issues; the irrationality of a sum of fractions is independent of the first n fractions. And by induction n+1, and by further induction ... you get the idea.

This looks cryptic, so let's give a clarifying example:

Fake( exp^[1/2](x) + 10 x ) is off by a factor n^10 or n^10.5, depending on the method used.

Check it yourself.

Fake ( exp(x) + x^2 ) is even worse.

Regards

Tommy1729

-----

This appears to be a mistake and is being considered for deletion!
(10/07/2015, 10:17 PM)tommy1729 Wrote: As promised in the binary partition function thread:

http://math.eretrandre.org/tetrationforu...70#pid8070

I will explain the connection between fake function theory and the binary partition function.

Let's use Jay's function J(x).

It is clear that J should be a good fake for the binary partition function.

Also fake(J(x)) should be close to J(x).

Notice J(x) satisfies all necessary conditions, even those for conjecture B and TPID 17.

All conditions for any method used so far.

J(x) grows much slower than exp but much faster than any polynomial.

The growth order of J(x) is 0. ( this is how tetration relates )

J(x) is close to exp( ln^2 (x) ).
But we do not even need this here.

J ' (x) = t(x) = J(x/2).
....
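The coefficient recurrence behind the quoted functional equation can be checked directly. Writing t(x) = Sum c_n x^n, the equation t '(x) = t(x/2) matches powers of x as (n+1) c_{n+1} = c_n / 2^n, which telescopes to c_n = 2^(-n(n-1)/2) / n!. This is a sketch in exact arithmetic; the normalization c_0 = 1 is my assumption, not stated in the thread.

```python
from fractions import Fraction
import math

# t'(x) = t(x/2) with t(x) = sum c_n x^n gives (n+1) c_{n+1} = c_n / 2^n.
N = 12
c = [Fraction(1)]                      # c_0 = 1 (assumed normalization)
for n in range(N):
    c.append(c[n] / ((n + 1) * 2**n))

# Closed form implied by telescoping the recurrence: c_n = 2^(-n(n-1)/2) / n!
for n in range(N + 1):
    assert c[n] == Fraction(1, 2**(n * (n - 1) // 2) * math.factorial(n))

print(c[:4])  # [Fraction(1, 1), Fraction(1, 1), Fraction(1, 4), Fraction(1, 48)]
```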

Remember, we're interested in the function g(x) = ln(f(exp(x))). Here, g(x) = ln(J(exp(x))). Then g'(h_n) = n. That's why you're getting nonsense results. If you do it correctly, using the approximation function:

Then you'll find the entire "fake function" approximation gives ideal results, using the Gaussian approximation. Of course, the partition function itself is non-analytic, so there's a limit to how well you can approximate it. I haven't done the calculations for the partition function itself in a while, but I did them once; I could post the Taylor series again. I find this particular function really interesting for fake functions precisely because it has such a nice closed form, and because the Gaussian approximation for this function is exactly correct! It also turns out this function has a convergent infinite Laurent series, and an exactly defined error term too! See the very closely related post #85: http://math.eretrandre.org/tetrationforu...3&pid=7413
- Sheldon
I have been on medication since post 153.

It seems to have its effect. :)

My apologies for the last two or few more posts.

However, the Tommy-Sheldon iterations and the quadratic-convergence question still seem solid.

And I still wonder about the zeros of J(x).
Are they like - 2^n (n-1)/4n ?

I should probably post less until I'm cured.

Sorry.

Regards

Tommy1729
(10/08/2015, 03:41 AM)sheldonison Wrote:

Yes, but this exp( -n^2 / 4 ) is far from Jay's 2^( -n(n-1) ) / n!

It's a different base; exp(-1/4) =/= 2^(-1).

So is this the worst fit, rather than the best?

It seems to disprove the conjectures?!

Or do I need less or more medication?

Regards

Tommy1729
Sheldon, in your link you apparently considered similar things.

But what is that about the Laurent series?
You mention Laurent and then you drop the negative terms??

Or maybe it is related to the fake ln in the MSE threads.
I assume that.


Anyway.

As said in the previous post, we seem to have a base problem.

So I reconsider.

I believe exp( ln(x)^u ) ~ J is optimal for u = 2.

And I wonder about fake( d(x) ) = J(x).

So I consider, for a > 1:

Fa = exp( ln^a(x) ).

Ga = x^a

Ga ' = a x^(a-1)

Ga ' ^[-1] = (x/a)^(1/(a-1))

So a_n = exp( (n/a)^(a/(a-1)) - n (n/a)^(1/(a-1)) ).

So we get

1 + 1/(a-1) = 2
=> a = 2.
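The derivation above can be checked numerically. This sketch assumes the standard saddle point h_n = (n/a)^(1/(a-1)) solving Ga'(h_n) = n, verifies the stated formula for the exponent of a_n, and confirms that its leading power of n is a/(a-1) = 1 + 1/(a-1), so demanding an n^2 exponent forces a = 2. The function name is mine.

```python
import math

def saddle_exponent(a, n):
    """g(h) - n*h for g(u) = u^a, at the saddle h = (n/a)^(1/(a-1))."""
    h = (n / a) ** (1 / (a - 1))
    assert abs(a * h ** (a - 1) - n) < 1e-9 * n   # h really solves g'(h) = n
    return h ** a - n * h

for a in [1.5, 2.0, 3.0]:
    # The formula above for the exponent of a_n:
    for n in [10.0, 1000.0]:
        formula = (n / a) ** (a / (a - 1)) - n * (n / a) ** (1 / (a - 1))
        assert abs(saddle_exponent(a, n) - formula) < 1e-6 * abs(formula)
    # Estimate the power of n in the exponent; it should equal 1 + 1/(a-1).
    p = math.log(abs(saddle_exponent(a, 1e6)) / abs(saddle_exponent(a, 1e3))) \
        / math.log(1e3)
    print(a, round(p, 2), 1 + 1 / (a - 1))
```

For a = 2 the exponent is exactly -n^2/4, consistent with the exp(-n^2/4) shape discussed below.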

I assume t '(x) = t(x/w) gives

t(x) ~ exp( ln(x)^a(w) ),

where 1 + 1/(a(w) - 1) = w.

Or t(x) ~ exp( ln(x)^(1 + 1/(a-1)) )

( I'm running out of time to decide ).

So does a better estimate for J give a better fake !?
Or not ?

Regards

Tommy1729

