Searching for an asymptotic to exp[0.5]

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/14/2014, 05:07 PM

(09/13/2014, 11:49 PM) jaydfox Wrote:

    $ \exp(x) \approx \sum_{k=-\infty}^{\infty}\frac{x^{k+\beta}}{\Gamma(k+\beta+1)}$

    Notice that I put "approximately equal". I haven't checked, but I assume it's exactly equal, but only in the sense that it should satisfy the functional equation exp'(x) = exp(x). Now, set beta = 1/2 and truncate the negative powers of x:

    $ \exp(x) \approx \sum_{k=0}^{\infty}\frac{x^{k+1/2}}{\Gamma(k+3/2)}$

Hey, I wanted to post that!

By the way, the equation is f'(x) = f(x) + o(1) for x sufficiently large. I had the exact same proof... I knew I was running out of time, since it looked so familiar!

Did's answer on MSE is the classical way to prove it. There are many proofs of this, actually, but Jay posted the shortest, I think. Gauss and Euler must have known this already.

However, I still notice that nobody has given a justification for Sheldon's integral. This is the second time it gives a correct solution. It would also give the same solution for exp(x) ln(x+1) + exp(x) sqrt(x).

In general, using Sheldon's integral, we can say: if +fake(f) and +fake(g) exist, then

+fake(f + g) = +fake(f) + +fake(g),

which makes sense. One could generalize: if +fake(ln f) and +fake(ln g) exist, then

+fake(f * g) = exp( +fake(ln f) + +fake(ln g) ),

although the RHS is then not optimal and not equal to the integral.

I wonder what post 9 means for sums and products of f and g. Still thinking.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/14/2014, 09:34 PM (This post was last modified: 11/06/2014, 12:00 AM by tommy1729.)

WARNING: CORRECTED IN POST 117!

Using post 9 for exp(x) sqrt(x):

ln(a_n x^n) < ln(exp(x) sqrt(x))
ln(a_n) = min over x of ( exp(x) - (n - 1/2) x )

d/dx [ exp(x) - (n - 1/2) x ] = exp(x) - (n - 1/2) = 0
=> exp(x) = n - 1/2
=> x = ln(n - 1/2)

ln(a_n) = exp(ln(n - 1/2)) - (n - 1/2) ln(n - 1/2)
=> a_n = exp( (n - 1/2) - (n - 1/2) ln(n - 1/2) )
=> a_n = exp( (n - 1/2) (1 - ln(n - 1/2)) )

Now compare with the true coefficients:

Gamma(n + 1/2) vs exp( - (n - 1/2) (1 - ln(n - 1/2)) )
=> loggamma(n + 1/2) vs - (n - 1/2) (1 - ln(n - 1/2))
=> loggamma(z) vs - z (1 - ln(z)) = z (ln(z) - 1)
=> is lim loggamma(z) / ( z (ln(z) - 1) ) < constant?

From the Stirling series we know that the limit equals 1. This implies that using post 9 also gives the correct solution, up to a multiplicative constant! So the post 9 method does a good job of estimating the fake exp(x) sqrt(x).

regards

tommy1729

WARNING: CORRECTED IN POST 117!

---

sheldonison (Long Time Fellow, Posts: 683, Threads: 24, Joined: Oct 2008)
09/15/2014, 03:53 AM (This post was last modified: 09/15/2014, 04:33 AM by sheldonison.)

(09/13/2014, 11:49 PM) jaydfox Wrote:

    ... Treating Gamma(k) at non-positive integers as infinity, and the reciprocal of such as zero, we can take the limit from negative to positive infinity. And we can replace k with (k+b), where b is zero in the original solution but can now be treated as any real...

    $ \exp(x) \approx \sum_{k=-\infty}^{\infty}\frac{x^{k+\beta}}{\Gamma(k+\beta+1)}$

    Notice that I put "approximately equal". I haven't checked, but I assume it's exactly equal, but only in the sense that it should satisfy the functional equation exp'(x) = exp(x).

So we also have a fakeexp(z) function, and I especially like the simplicity of f'(x) = f(x) in the non-converging limit to explain why f(x) ~ exp(x). Here, going back to beta = 0.5 for simplicity... We have to put some bounds on k, since the infinite Laurent series does not converge anywhere.
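As an aside on the post-9 computation earlier in the thread: the Stirling limit loggamma(z) / ( z (ln(z) - 1) ) -> 1 is easy to sanity-check numerically. A minimal sketch (the Python code and spot-check values are mine, not from the thread):

```python
import math

# By the Stirling series, loggamma(z) = z*(ln z - 1) + O(ln z),
# so the ratio loggamma(z) / (z*(ln z - 1)) should tend to 1.
for z in (10.0, 100.0, 1000.0, 10000.0):
    ratio = math.lgamma(z) / (z * (math.log(z) - 1.0))
    print(z, ratio)
```

Already at z = 10 the ratio is about 0.98, and it keeps approaching 1, consistent with the claim that post 9 recovers the coefficients of the fake exp(x) sqrt(x) up to a multiplicative constant.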
$ \exp(x) \sim f(x) = \sum_{k=0}^{\infty}\frac{x^{k+0.5}}{\Gamma(k+1.5)}$

$\frac{d}{dx}f(x)=\sum_{k=-1}^{\infty} \frac{x^{k+0.5}}{\Gamma(k+1.5)} = f(x) + \frac{x^{-0.5}}{\Gamma(0.5)}$

As $\Re(x)$ gets arbitrarily large, the error term becomes more and more insignificant relative to the exp(x) term, and the number of terms you can include also increases, until the terms $\frac{x^{-n+0.5}}{\Gamma(-n+1.5)}$ start growing in magnitude as n gets more negative...

- Sheldon

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/16/2014, 12:14 PM

Gösta Mittag-Leffler. That is this post in a nutshell.

When I said it looked familiar, it was the Mittag-Leffler function. And that is also used for Mittag-Leffler summation. The Mittag-Leffler theorem is also relevant.

Mittag-Leffler has occurred on this forum before, for instance when trying to get a series expansion valid beyond a radius of convergence, with, for instance, rational functions. This resembles my next idea (which will be explained in the next posts). Actually, my next idea came first, and that is how I suddenly remembered Mittag-Leffler.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/16/2014, 12:27 PM

OK, my next idea. I am having ideas to prove the validity of the Cauchy integral for fake function theory.

Some conditions first. Let us call our real-analytic function f(z), of which we want a +fake:

1) there needs to be an annulus around the origin that contains at most one branch.
2) the Riemann surface needs to be "well-connected". As an example: log(z^3) log(z^5) is not well-connected; plot it near the origin to see it.
3) f(z) has no essential singularity.
4) f(x), f'(x), f''(x) > 0 for x > 0.

Now we use an old idea of mine:

f(z) = +Taylor_1(z) + +Taylor_2(z/(z+a_1)) + +Taylor_3(z/(z+a_2))

where +Taylor means a Taylor series with positive real coefficients, and a_1, a_2 are selected positive reals. These series expansions MUST have a +fake described by the Cauchy integral.
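Sheldon's derivative identity earlier in the thread can be verified term by term on the truncated series: differentiating the truncation at k = N reproduces f(x) + x^{-1/2}/Gamma(1/2) up to a single boundary term x^{N+1/2}/Gamma(N+3/2), which is tiny for moderate x. A hedged stdlib sketch (truncation depth and test point are my choices):

```python
import math

N = 60  # truncation depth (my choice)

def f(x):
    # truncated series: sum_{k=0}^{N} x^{k+1/2} / Gamma(k+3/2)
    return sum(x**(k + 0.5) / math.gamma(k + 1.5) for k in range(N + 1))

def f_prime(x):
    # exact term-by-term derivative of the same truncation
    return sum((k + 0.5) * x**(k - 0.5) / math.gamma(k + 1.5) for k in range(N + 1))

x = 5.0
lhs = f_prime(x) - f(x)
rhs = x**-0.5 / math.gamma(0.5)   # Gamma(1/2) = sqrt(pi)
print(lhs, rhs)                   # agree up to the tiny boundary term
print(f(x) / math.exp(x))         # already close to 1 at x = 5
```

The second printout illustrates the asymptotic f(x) ~ exp(x) directly: the ratio is within a percent of 1 already at x = 5, and the defect f'(x) - f(x) is exactly the x^{-1/2}/Gamma(1/2) term from the index shift.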
to be continued.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/18/2014, 10:20 PM

Sheldon, when discussing post 9, you started using second derivatives as well. What is the motivation, reasoning and justification for that?

Now I think: if x > 0 implies f(x), f'(x) > 0, then

a_n x^n < f(x) - f(0)

is a good equation. If we additionally have that x > 0 implies f''(x) > 0, then

a_n x^n < f(x) - f(0)
n a_n x^(n-1) < f'(x) - f'(0)

seems a good system of equations. Analogously if we also have f'''(x) > 0, etc. We can find the extrema of f'(x) - f'(0) by considering f''(x) (the analogue of the classical consideration of f'(x) to find the min, as done in post 9).

However, that does not seem to be what you had in mind, or was it? I feel a bit silly asking this question. It is probably trivial.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/18/2014, 11:07 PM (This post was last modified: 09/18/2014, 11:23 PM by tommy1729.)

Forgive me for not using TeX again.

On my to-do list are different solutions to fake(x^x), fake(x^ln(x)), fake(exp(x) ln(x)^2) and fake(x^sqrt(x)). But for now I was mainly interested in fake(Gamma(x+2)). In particular, we can use the fake log or fake sqrt results here!

fake Gamma(x+2) = integral_0^oo fake( exp(t) t^(x+1) ) exp(-2t) dt

where fake( exp(t) t^(x+1) ) = Mittag(t,1,x+1), as obtained before (for example, fake exp(x) sqrt(x) = Mittag(x,1,1/2)).

Notice the almost self-similarity: Mittag depends on the gamma function! (A tempting idea is to replace the gamma in the Mittag function with the fake gamma itself?!)

Remember that there is analytic continuation for the integral! Also, if we use the Cauchy integral on

integral_0^oo fake( exp(t) t^(x+1) ) exp(-2t) dt

then we have some fine-looking calculus expression, imho.
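If the fake in the integrand above is replaced by the exact function exp(t) t^(x+1), the representation collapses to the ordinary Euler integral, since exp(t) t^(x+1) exp(-2t) = t^(x+1) exp(-t). A numeric sanity check of that baseline identity (the quadrature parameters are my choices, not from the thread):

```python
import math

def gamma_via_integral(x, T=60.0, n=120000):
    # Gamma(x+2) = integral_0^oo exp(t) * t^(x+1) * exp(-2t) dt
    # midpoint rule on [0, T]; the tail beyond T is negligible for moderate x
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(t) * t**(x + 1.0) * math.exp(-2.0 * t)
    return total * h

x = 1.5
print(gamma_via_integral(x), math.gamma(x + 2.0))  # both close to Gamma(3.5) ~ 3.3234
```

With the true fake( exp(t) t^(x+1) ) in place of the exact function, the same quadrature would produce the corresponding candidate for fake Gamma(x+2).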
Many tricks from standard calculus could probably be used, such as interchanging the integrals, interchanging a sum and an integral, Feynman integration, etc., and of course contour integration techniques and theorems.

Combining integral transforms with fake function theory seems interesting, both for computation and for theory. Many complicated functions can be given by contour integration of simpler ones, and those contours can be written as simpler integral transforms. Those transforms can then contain a sqrt or a ln or such, and therefore we finally arrive at a fake function of the original complicated one. (For instance g(z), the functional inverse of a function f(z), such that g(z) grows fast enough (faster than polynomial) and g(z) cannot be given by elementary functions.)

Probably fake(Gamma(x+2)) has a closed form, or its derivatives at 0 have a closed form. There has already been research into asymptotics of the gamma function, but not like this. Probably the Gamma function will popularize fake function theory. Generalizing recursion and/or functional equations into fake function theory might be the next step. The number of roads this leads to is uncountable!

Next on the list is fake( exp(x) x / ln^2(x) ) exp(-x), without using a fake ln.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/19/2014, 12:23 PM (This post was last modified: 11/03/2014, 10:07 PM by tommy1729.)

Time for tommy's Q9 method. The term Q refers to q-analogue, as will be clear soon. The 9 refers to post 9.

In post 9 we tried to find a good fake(f(x)) by using a_n x^n < f(x). We arrived at a solution with the property

a_0 >= a_1 >= a_2 >= a_3 >= ... >= 0    (*)

We can use this property to find a better method. We have

a_0 + a_1 x + a_2 x^2 + ... + a_n x^n < f(x)

and using (*),

a_n + a_n x + a_n x^2 + ... + a_n x^n < f(x)

Simplify:

a_n (x^(n+1) - 1) / (x - 1) < f(x)
a_n x^(n+1) / (x - 1) < f(x) + a_n / (x - 1)

For large x we can ignore the a_n/(x-1) term on the RHS; that might give a worse fake for small x, but it has almost no effect for large x:

a_n x^(n+1) / (x - 1) < f(x)
a_n x^n * x/(x - 1) < f(x)
a_n x^n < f(x) (x - 1)/x
ln(a_n) + n ln(x) < ln(f(x)) + ln(x - 1) - ln(x)
ln(a_n) < ln(f(x)) + ln(x - 1) - (n + 1) ln(x)

=> ln(a_n) < min over x of ( ln(f(x)) + ln(x - 1) - (n + 1) ln(x) )

This is an improvement. Celebrate.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
09/29/2014, 11:40 PM

I am considering fake function theory for parabolic fixpoints.

regards

tommy1729

---

tommy1729 (Ultimate Fellow, Posts: 1,493, Threads: 356, Joined: Feb 2009)
10/19/2014, 04:02 PM (This post was last modified: 11/03/2014, 01:25 PM by tommy1729.)

An idea that is very, very old: the connection between series multisection and fake function theory. An example says more than a thousand pictures.

Consider

f(x) = 1 + x + x^2/2! + x^3/3! - x^4/4! + ...

where the sign pattern continues as +,+,+,-, such that every positive multiple of 4 gives a minus sign.

If you ask someone to estimate f(x) for x > 0, they will likely say f(x) ~ 1/4 + 1/2 exp(x) + C for some small real C. Now the logical question is how good this estimate really is; in other words, a deeper study.

Clearly this relates to the Mittag-Leffler function and to the classic formula for series multisection that uses roots of unity. But more relevant here:

fake f(x) ~ 1/4 + 1/2 exp(x) + C ??

How close to the truth is that? How well does fake function theory estimate here? Is fake function theory the ultimate method for this, or is it weak?

Also notice the alternative estimates 1 + sinh(x) and cosh(x), which also have positive derivatives.

The differential equations d^n f / dx^n = f(x) are also often considered, because of the natural connection.

These questions seem very reasonable and solvable.
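The estimate above can be pinned down. By the roots-of-unity multisection formula, the sum of x^n/n! over n divisible by 4 is (cosh x + cos x)/2, which gives the closed form f(x) = sinh(x) - cos(x) + 2 (my derivation, not stated in the thread). So the guessed (1/2) exp(x) is exactly the leading term, while the additive part is 2 - cos(x) - (1/2) exp(-x) rather than a constant. A quick numeric confirmation:

```python
import math

def f_trunc(x, N=80):
    # the signed series: minus sign at every positive multiple of 4
    s = 0.0
    for n in range(N + 1):
        sign = -1.0 if (n > 0 and n % 4 == 0) else 1.0
        s += sign * x**n / math.factorial(n)
    return s

for x in (1.0, 5.0, 10.0):
    closed = math.sinh(x) - math.cos(x) + 2.0  # multisection closed form
    print(x, f_trunc(x), closed, 0.5 * math.exp(x) + 2.0)
```

This gives a concrete benchmark against which a fake f(x) can be measured: any candidate of the form 1/4 + (1/2) exp(x) + C is off by a bounded oscillation, never by more than O(1).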
Generalized questions and answers are therefore very likely to exist.

regards

tommy1729

" Together we can do more " - tommy1729
