Posts: 1,176
Threads: 123
Joined: Dec 2010
05/26/2011, 02:50 AM
(This post was last modified: 05/26/2011, 04:48 PM by JmsNxn.)
This proof starts out by considering the differential operator $D_t f(x) = \frac{d}{dt}\frac{d^t}{dx^t} f(x)$, which distributes across addition: $D_t[f(x) + g(x)] = D_t f(x) + D_t g(x)$. And
$D_t e^x = \frac{d}{dt}\frac{d^t}{dx^t} e^x = \frac{d}{dt} e^x = 0$, which is important for this proof.
And next, using the traditional fractional calculus law $\frac{d^t}{dx^t} x^n = \frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t}$:
$D_t x^n = \frac{d}{dt}\left[\frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t}\right]$
which comes to (if you want me to show you the long workout, just ask, I'm trying to be brief):
$D_t x^n = \frac{\Gamma(n+1)\, x^{n-t}}{\Gamma(n+1-t)}\left(\psi_0(n+1-t) - \ln(x)\right)$, where $\psi_0$ is the digamma function.
So now we do the fun part:
$0 = D_t e^x = D_t\left[\sum_{n=0}^{\infty}\frac{x^n}{n!}\right] = \sum_{n=0}^{\infty}\frac{D_t x^n}{n!}$
So we just plug in our formula for $D_t x^n$ and divide it by $n!$:
$0 = \sum_{n=0}^{\infty}\frac{\Gamma(n+1)\, x^{n-t}}{n!\,\Gamma(n+1-t)}\left(\psi_0(n+1-t) - \ln(x)\right)$
We expand these, separate, and rearrange:
$\ln(x)\sum_{n=0}^{\infty}\frac{\Gamma(n+1)\, x^{n-t}}{n!\,\Gamma(n+1-t)} = \sum_{n=0}^{\infty}\frac{\Gamma(n+1)\, x^{n-t}}{n!\,\Gamma(n+1-t)}\,\psi_0(n+1-t)$
And now, if you're confused about what $t$ represents, you'll be happy to hear we eliminate it now by setting it equal to 0. Therefore all our gammas become factorials, the left-hand side becomes $e^x\ln(x)$, and since the digamma function for integer arguments can be expressed through harmonic numbers, $\psi_0(n+1) = \sum_{k=1}^{n}\frac{1}{k} - \gamma$, where $\gamma$ is the Euler-Mascheroni constant:
$e^x\ln(x) = \sum_{n=0}^{\infty}\frac{x^n}{n!}\,\psi_0(n+1) = \sum_{n=0}^{\infty}\frac{x^n}{n!}\left(\sum_{k=1}^{n}\frac{1}{k} - \gamma\right)$
I've been unable to properly do the ratio test, but using PARI/GP it seems to converge for values $x > e$; it failed at 1000 but worked for 900.
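(For anyone who wants to redo the numerics: here is a rough Python/mpmath sketch of the same check. It is only my illustration, not the PARI/GP code I actually ran; the cutoff N = 300 and the sample points are arbitrary choices.)
Code:
# compare e^x * ln(x) with the partial sums of  sum_{n=0}^{N} x^n/n! * psi_0(n+1)
# (illustrative Python/mpmath sketch only; N = 300 and the sample x are arbitrary)
from mpmath import mp, mpf, exp, log, factorial, digamma

mp.dps = 50  # 50 digits of working precision

def digamma_series(x, N=300):
    # partial sum of sum_{n=0}^{N} x^n / n! * psi_0(n+1)
    return sum(mpf(x)**n / factorial(n) * digamma(n + 1) for n in range(N + 1))

for x in (3, 20, 100):
    print(x, exp(x) * log(x), digamma_series(x))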
I decided to multiply through by the series for $e^{-x}$ and I got:
$\ln(x) = \sum_{n=0}^{\infty} x^n \left(\sum_{k=0}^{n} (-1)^k\, \frac{\sum_{c=1}^{n-k}\frac{1}{c} - \gamma}{k!\,(n-k)!}\right)$
Using PARI/GP nothing seems to converge, but that may be the fault of my coding.
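(And the corresponding check for the multiplied-out series; again just an illustrative Python/mpmath sketch with arbitrary cutoffs, not my original PARI/GP.)
Code:
# partial sums of  sum_n x^n * sum_{k=0}^n (-1)^k * psi_0(n-k+1) / (k! (n-k)!),
# printed at two cutoffs next to log(x) to see whether/where they settle
# (illustrative sketch; x = 5 and the cutoffs 40, 80 are arbitrary)
from mpmath import mp, mpf, factorial, digamma, log

mp.dps = 50

def product_series(x, N):
    total = mpf(0)
    for n in range(N + 1):
        c_n = sum((-1)**k * digamma(n - k + 1) / (factorial(k) * factorial(n - k))
                  for k in range(n + 1))
        total += c_n * mpf(x)**n
    return total

x = 5
print(product_series(x, 40), product_series(x, 80), log(x))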
I'm wondering, has anybody seen this before? And I am also mainly wondering how I can prove the radius of convergence of this. Also, I wonder if this could further suggest the gamma function as the natural extension of the factorial function, since these values do converge.
Posts: 1,616
Threads: 102
Joined: Aug 2007
05/27/2011, 09:00 AM
(This post was last modified: 05/27/2011, 09:02 AM by bo198214.)
I am really wondering whether one can achieve something with fractional differentiation with respect to tetration; it sounds quite promising. However,
(05/26/2011, 02:50 AM)JmsNxn Wrote: I decided to multiply through by the series for $e^{-x}$ and I got:
$\ln(x) = \sum_{n=0}^{\infty} x^n \left(\sum_{k=0}^{n} (-1)^k\, \frac{\sum_{c=1}^{n-k}\frac{1}{c} - \gamma}{k!\,(n-k)!}\right)$
Using PARI/GP nothing seems to converge, but that may be the fault of my coding.
this would mean that you can develop the logarithm at 0 into a power series, which is not possible.
I guess the problem in your derivations occurs after this line:
(05/26/2011, 02:50 AM)JmsNxn Wrote: $0 = \sum_{n=0}^{\infty}\frac{\Gamma(n+1)\, x^{n-t}}{n!\,\Gamma(n+1-t)}\left(\psi_0(n+1-t) - \ln(x)\right)$
I assume this line is still convergent; however, if you separate the difference into two sides, you work with two divergent series.
Remember that $\sum_{n=0}^{\infty}(a_n + b_n) = \sum_{n=0}^{\infty} a_n + \sum_{n=0}^{\infty} b_n$ holds *only* if all (or at least two of the three) limits exist.
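A toy example of the failure mode: with $a_n = \frac{1}{n}$ and $b_n = -\frac{1}{n}$ the series $\sum_{n=1}^{\infty}(a_n + b_n) = 0$ converges perfectly well, but $\sum_{n=1}^{\infty} a_n$ and $\sum_{n=1}^{\infty} b_n$ are both divergent, so you cannot split the convergent series into two separate sums.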
Posts: 1,176
Threads: 123
Joined: Dec 2010
05/27/2011, 08:06 PM
(This post was last modified: 05/28/2011, 12:34 AM by JmsNxn.)
(05/27/2011, 09:00 AM)bo198214 Wrote: I guess the problem in your derivations occurs after this line:
(05/26/2011, 02:50 AM)JmsNxn Wrote: $0 = \sum_{n=0}^{\infty}\frac{\Gamma(n+1)\, x^{n-t}}{n!\,\Gamma(n+1-t)}\left(\psi_0(n+1-t) - \ln(x)\right)$
I assume this line is still convergent; however, if you separate the difference into two sides, you work with two divergent series.
Well, the next line must be true, because, using PARI/GP,
$\ln(x) = e^{-x}\left(\sum_{n=0}^{\infty} \frac{x^n}{n!}\,\psi_0(n+1)\right)$
this series converges, at least for values like $\ln(20)$, $\ln(100)$, and for numbers just greater than $e$; the two sides agree very closely, to maybe 4 to 6 decimal places.
It was this series, $\ln(x) = \sum_{n=0}^{\infty} x^n \left(\sum_{k=0}^{n} (-1)^k\, \frac{\sum_{c=1}^{n-k}\frac{1}{c} - \gamma}{k!\,(n-k)!}\right)$, that doesn't converge at all.
(05/27/2011, 09:00 AM)bo198214 Wrote: Remember that $\sum_{n=0}^{\infty}(a_n + b_n) = \sum_{n=0}^{\infty} a_n + \sum_{n=0}^{\infty} b_n$ holds *only* if all (or at least two of the three) limits exist.
I never knew this. I think this would produce an error unless we let $t = 0$ sooner, because in
$\sum_{n=0}^{\infty}\frac{x^n}{n!}\,\psi_0(n+1) = \ln(x)\sum_{n=0}^{\infty}\frac{x^n}{n!}$
the right series converges to $\ln(x)\, e^x$ absolutely;
the left series' convergence depends on the digamma function as it approaches infinity... Actually, writing this, I just thought of a neat proof.
and
therefore:
plug in our formula for the Euler-Mascheroni constant:
And now I'm stuck... So I guess it comes down to evaluating this to see if the series diverges, at least the first one. I'm still stumped as to why the second one doesn't converge if the first one does.
Is there a similar type of law for products of infinite series that I'm overlooking?
Posts: 1,616
Threads: 102
Joined: Aug 2007
05/28/2011, 09:35 AM
(This post was last modified: 05/28/2011, 09:54 AM by bo198214.)
(05/27/2011, 08:06 PM)JmsNxn Wrote: Well, the next line must be true, because, using PARI/GP,
$\ln(x) = e^{-x}\left(\sum_{n=0}^{\infty} \frac{x^n}{n!}\,\psi_0(n+1)\right)$
this series converges, at least for values like $\ln(20)$, $\ln(100)$, and for numbers just greater than $e$; the two sides agree very closely, to maybe 4 to 6 decimal places.
This would mean that
$\ln(x)\, e^x$ could be developed into a power series at 0.
But obviously it still has a singularity there.
This would imply that $\ln(0)\, e^0 = \psi_0(1)$,
and that the first derivative of $\ln(x)\, e^x$ at 0 is $\psi_0(2)$.
And both are wrong AFAIK: the digamma function is finite at 1 and 2, while all derivatives of $\ln(x)e^x$ at 0 are infinite.
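(Concretely: the constant term of the right-hand series is $\psi_0(1) = -\gamma \approx -0.577$, a finite number, while $\lim_{x\to 0^+} \ln(x)\, e^x = -\infty$, so the two sides cannot agree at 0.)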
But then it seems there must be some other error in the derivations.
Posts: 1,176
Threads: 123
Joined: Dec 2010
I see where you're coming from, but then, why is it converging? Can't we just say:
$\ln(x) = e^{-x}\left(\sum_{n=0}^{\infty} \frac{x^n}{n!}\,\psi_0(n+1)\right)\quad\{x \mid x > a > 0,\ x, a \in \mathbb{R}\}$? I'm betting $a$ is somewhere in the $[e, 6]$ range.
I was a little perplexed myself when it converged, because I know that
$\frac{d^t}{dx^t} e^x = \sum_{n=0}^{\infty} \frac{\Gamma(n+1)}{n!\,\Gamma(n+1-t)}\, x^{n-t}$ converges only for integer values of $t$. This is why I made sure to set $t = 0$, and not a real value, but it still makes you wonder whether a derivative of the growth of something that doesn't converge will itself converge... but then it does, at least for $x > a$.
Perhaps I'm making an error somewhere and the convergence is right for the wrong reasons.
Posts: 1,616
Threads: 102
Joined: Aug 2007
(05/29/2011, 02:25 AM)JmsNxn Wrote: I see where you're coming from, but then, why is it converging? Can't we just say:
$\ln(x) = e^{-x}\left(\sum_{n=0}^{\infty} \frac{x^n}{n!}\,\psi_0(n+1)\right)\quad\{x \mid x > a > 0,\ x, a \in \mathbb{R}\}$? I'm betting $a$ is somewhere in the $[e, 6]$ range.
No, we can't say that. See, if we multiply by $e^x$ on both sides, then the right side must be a power series development of $\ln(x)e^x$ at 0. This means that at least the coefficients (the ones that come divided by $n!$) on the right side must be the derivatives of the function $\ln(x)e^x$ at 0. But the derivatives are all infinite, while the coefficients on the right side are finite. So there can be no equality.
I don't know whether the right side converges or not. But even if it converges, it is not equal to the left side. And if it converges for some $x$, then it is a power series development of some function at 0. This implies (by standard complex analysis) that it converges in an open disk around 0 of a certain radius (in the complex plane; on the real axis just in an interval $(-r, r)$), and outside the closed disk it cannot converge. That means if it converges on $[e, 6]$ then it must converge on $(-6, 6)$.
Quote:I was a little perplexed myself when it converged, because I know that
$\frac{d^t}{dx^t} e^x = \sum_{n=0}^{\infty} \frac{\Gamma(n+1)}{n!\,\Gamma(n+1-t)}\, x^{n-t}$ converges only for integer values of $t$. This is why I made sure to set $t = 0$, and not a real value, but it still makes you wonder whether a derivative of the growth of something that doesn't converge will itself converge... but then it does, at least for $x > a$.
Are you sure that it doesn't converge? Proof?
But if it doesn't converge, then that is another problem. You take the derivative with respect to $t$ of a series that does not converge in a neighborhood of 0. That's not safe.
But even then it's strange that you arrive at an equation that is infinite on the left side and finite on the right side.
Posts: 1,176
Threads: 123
Joined: Dec 2010
The series converges for large values of $x$; somewhere in $[e, 6]$ is where it begins to converge.
I do not have a proof of its convergence; that's why I posted it here. I only managed to make it this far (I'm not good with limits):
$L = \lim_{n\to\infty}\left|\frac{x\,\psi_0(n+2)}{(n+1)\,\psi_0(n+1)}\right|$
The series should converge as long as $L < 1$, but I'm pretty sure that limit works out to $L = 0$, for at least positive real $x$. So I must have made a mistake somewhere.
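(A quick numeric look at that ratio, just an mpmath sketch of mine with arbitrary sample values:)
Code:
# the ratio |a_{n+1}/a_n| for a_n = x^n * psi_0(n+1) / n!
# (illustrative sketch; x = 10 and the sample n are arbitrary)
from mpmath import mp, mpf, digamma

mp.dps = 30
x = mpf(10)
for n in (10, 100, 1000, 10000):
    print(n, abs(x * digamma(n + 2) / ((n + 1) * digamma(n + 1))))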
Perhaps this is a quasi-power series:
$f(x) = e^x \ln(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}\,\psi_0(n+1)$, even though the coefficients on the right are not the derivatives of $e^x \ln(x)$ at 0.
Right now I'm not sure what else to suggest. This leaves me a little in wonderment.
Posts: 1,616
Threads: 102
Joined: Aug 2007
05/31/2011, 10:30 AM
(This post was last modified: 05/31/2011, 10:30 AM by bo198214.)
(05/31/2011, 03:02 AM)JmsNxn Wrote: I do not have a proof of its convergence
...
$L = \lim_{n\to\infty}\left|\frac{x\,\psi_0(n+2)}{(n+1)\,\psi_0(n+1)}\right|$
Hm, let's see, we make a very rough estimation:
$|\psi_0(n+1)| \le n \quad \text{for } n \ge 1$
Then the radius of convergence must be at least the one of the series obtained by replacing $\psi_0(n+1)$ with $n$:
$\sum_{n=0}^{\infty} \frac{n\, x^n}{n!} = x\, e^x$
But this series has infinite convergence radius, hence the same is true for the series $\sum_{n=0}^{\infty}\frac{x^n\,\psi_0(n+1)}{n!}$.
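For what it's worth, a quick numeric sanity check at the values where the computation reportedly failed (my own mpmath sketch, not anything from the original PARI/GP session; the precision and the cutoffs 1300/1600 are arbitrary choices):
Code:
# partial sums of  sum_{n=0}^{N} x^n/n! * psi_0(n+1)  at large x, for two cutoffs,
# to see whether the failure around x = 1000 was only a truncation/precision effect
from mpmath import mp, mpf, factorial, digamma

mp.dps = 50

def partial(x, N):
    return sum(mpf(x)**n / factorial(n) * digamma(n + 1) for n in range(N + 1))

for x in (900, 1000):
    print(x, partial(x, 1300), partial(x, 1600))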
Posts: 368
Threads: 44
Joined: Sep 2009
07/04/2011, 04:14 AM
(This post was last modified: 07/04/2011, 04:20 AM by mike3.)
@JmsNxn,
I fed the given series
$\sum_{n=0}^{\infty} \frac{\psi_0(n+1)}{n!}\, x^n$
to the Wolfram Alpha calculator. It says that it does not give the left-hand side, but instead it gives
$\sum_{n=0}^{\infty} \frac{\psi_0(n+1)}{n!}\, x^n = e^x\left(\ln(x) + \Gamma(0, x)\right)$
(I have no idea how it managed to derive that formula -- any suggestions?)
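For what it's worth, here is a quick numeric cross-check of that closed form (a Python/mpmath sketch of my own; the truncation N = 300 and the sample points are arbitrary, and I'm assuming mpmath's gammainc(0, x) for the upper incomplete gamma):
Code:
# check  sum_{n=0}^{N} psi_0(n+1)/n! * x^n  against  e^x * (ln(x) + Gamma(0, x))
# (illustrative sketch; N = 300 and the sample x values are arbitrary)
from mpmath import mp, mpf, exp, log, factorial, digamma, gammainc

mp.dps = 50

def series(x, N=300):
    return sum(digamma(n + 1) / factorial(n) * mpf(x)**n for n in range(N + 1))

for x in (2, 10, 50):
    closed_form = exp(x) * (log(x) + gammainc(0, x))  # gammainc(0, x) = upper incomplete gamma
    print(x, series(x), closed_form)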
But this shows the problem. There is an extra term present, which involves the upper incomplete gamma function. It decays to 0 as $x \rightarrow \infty$.
So, your series is asymptotic to, but not equal to, $\ln(x)\, e^x$. I.e. the first "equation" is really
$\ln(x)\, e^x \sim \sum_{n=0}^{\infty} \frac{\psi_0(n+1)}{n!}\, x^n \quad (x \rightarrow \infty)$.
So obviously, there must be something wrong in the derivation. I think I found it.
In the beginning, you assume $\frac{d^t}{dx^t} e^x = e^x$, which, given the definition of your $D_t$ operator, would mean that $D_t e^x = 0$. But that's a problem. There's a catch when working with fractional derivatives: namely, that they are "non-local". This is similar to how the integral requires a "lower (or upper) bound". Another way to think of choosing the bound is "choosing a branch" of the inverse function. The fractional derivative is a continuous and real-indexed iteration of the derivative, which means it must also include all negative iterates as well -- and those are integrals, so the "non-local" property of the integrals must show up somewhere, and it shows up at every order of differentiation that is not a nonnegative integer. In general, fractional-iterate functions that allow real and complex iterates will be multi-valued if whatever is being iterated is not injective, and the derivative is definitely not injective (differentiate a constant function).
In the theory of fractional derivatives, when you took $\frac{d^t}{dx^t} e^x = e^x$, you took the lower bound as $-\infty$. (See how you have to take this as the lower bound for an integral of the exponential function if you want it to return the exponential function back? Mmm-hmm. Same here.) But, when you took
$\frac{d^t}{dx^t} x^n = \frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t},$
which you used to build $\frac{d^t}{dx^t} e^x$, and thus the power series, you were taking the lower bound to be 0. That's a funny thing about these formulas -- and in some cases it is not said what the lower bound is. When you then took the power series, it was kind of like saying
${}_{-\infty}D_x^t\, e^x = {}_{0}D_x^t\, e^x,$
and here, the error becomes clear.
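To make the lower-bound dependence concrete, here is a small numeric sketch (my own illustration, not part of the derivation above). It uses a fractional *integral* of order 1/2 in Riemann-Liouville form, so everything is an ordinary convergent integral; the order and the sample points are arbitrary choices. With the bound at $-\infty$ the integral reproduces $e^x$; with the bound at 0 it does not, which is exactly the discrepancy described above.
Code:
# Riemann-Liouville fractional integral of order s of e^u with two different lower bounds:
#   I_a^s e^x = 1/Gamma(s) * integral_a^x (x-u)^(s-1) e^u du
# (illustrative sketch; s = 1/2 and the sample x are arbitrary choices)
from mpmath import mp, mpf, exp, gamma, quad, inf

mp.dps = 30
s = mpf(1) / 2

def frac_integral(x, a):
    return quad(lambda u: (x - u)**(s - 1) * exp(u), [a, x]) / gamma(s)

for x in (1, 5, 10):
    print(x, frac_integral(x, -inf), frac_integral(x, 0), exp(x))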
Posts: 1,176
Threads: 123
Joined: Dec 2010
07/04/2011, 09:08 PM
(This post was last modified: 07/04/2011, 09:14 PM by JmsNxn.)
Oh! very very fascinating!
I'd read an article on the lower bounds in fractional differentiation and I had been wondering if it would affect this proof. It seems it does.
Now it all makes sense.
But still, I do believe this is something interesting. I would like to figure out how Wolfram got that equation...
By re-substituting into the original proof and keeping the lower bound at 0, we get:
if
$D_t e^x = \frac{d}{dt}\frac{d^t}{dx^t} e^x \neq 0$, where we take the lower bound at zero,
then, given your Wolfram result, this seems to imply:
$\left.\frac{d}{dt}\frac{d^t}{dx^t} e^x\right|_{t=0} = e^x\,\Gamma(0, x)$
What is the second argument for in this gamma function?
And if I may ask, what happens with the convergence of that product series for $\ln(x)$: do we have the same asymptotic development for large $x$?
If so, perhaps we can do something similar to Tommy's 2sinh method?