# Tetration Forum

Full Version: What is the convergence radius of this power series?
This proof starts out by considering the differential operator \(D^t\), which is spreadable across addition (i.e., linear). And:

\[ D^t e^x = e^x, \]

which is important for this proof.

And next, using traditional fractional calculus laws:

\[ D^t x^n = \frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t}, \]

which comes to (if you want me to show you the long workout, just ask; I'm trying to be brief):

\[ \frac{d}{dt}\, D^t x^n = \frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t} \bigl( \psi(n+1-t) - \ln x \bigr), \]

where \(\psi\) is the digamma function.
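A minimal Python sanity check of this power rule (the helper name `frac_deriv_power` is just for illustration): at integer orders \(t\) it reproduces the ordinary derivatives:

```python
import math

def frac_deriv_power(n, t, x):
    """D^t x^n = Gamma(n+1)/Gamma(n+1-t) * x^(n-t):
    the fractional-derivative power rule quoted above."""
    return math.gamma(n + 1) / math.gamma(n + 1 - t) * x ** (n - t)

x = 2.0
# Integer orders recover the usual derivatives of x^3:
print(frac_deriv_power(3, 1, x))    # d/dx x^3     = 3x^2 -> 12.0
print(frac_deriv_power(3, 2, x))    # d^2/dx^2 x^3 = 6x   -> 12.0
# A non-integer order interpolates between them:
print(frac_deriv_power(3, 1.5, x))
```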

So now we do the fun part:

\[ \frac{d}{dt}\, D^t e^x = 0, \qquad e^x = \sum_{n=0}^\infty \frac{x^n}{n!}. \]

So we just plug in our formula for \(\frac{d}{dt}\, D^t x^n\) and divide it by \(n!\):

\[ 0 = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \bigl( \psi(n+1-t) - \ln x \bigr). \]

we expand these and separate and rearrange:

\[ \ln(x) \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} = \sum_{n=0}^\infty \frac{\psi(n+1-t)}{\Gamma(n+1-t)}\, x^{n-t}. \]

And now, if you're confused about what \(t\) represents, you'll be happy to hear we eliminate it now by setting it equal to 0. Therefore all our gammas are factorials, the left-hand side becomes \(e^x \ln(x)\), and since the digamma function at integer arguments can be expressed through harmonic numbers, \(\psi(n+1) = H_n - \gamma\), where \(\gamma\) is the Euler-Mascheroni constant:

\[ e^x \ln(x) = \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n = \sum_{n=0}^\infty \frac{H_n - \gamma}{n!}\, x^n. \]

I've been unable to properly do the ratio test, but using Pari/GP it seems to converge for values \(x > e\); it failed at 1000 but worked at 900.
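The same experiment can be sketched outside Pari/GP; a minimal Python version (the helper `series` and the term recurrence are mine) sums \(\sum_n \psi(n+1)x^n/n!\) via \(\psi(n+1) = H_n - \gamma\) and compares it with \(e^x \ln x\):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def series(x, terms=120):
    """Partial sum of sum_{n>=0} psi(n+1) x^n / n!,
    using psi(n+1) = H_n - gamma (with H_0 = 0)."""
    total, H, term = 0.0, 0.0, 1.0  # term tracks x^n / n!
    for n in range(terms):
        if n > 0:
            H += 1.0 / n
            term *= x / n
        total += (H - EULER_GAMMA) * term
    return total

for x in (0.5, 10.0):
    print(x, series(x), math.exp(x) * math.log(x))
# For large x the two values are close (but not equal);
# for small x they disagree badly.
```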

I decided to multiply the infinite series by \(e^{-x}\) and I got:

\[ \ln(x) = \sum_{n=0}^\infty \left( \sum_{k=0}^n \frac{(-1)^{n-k}\, \psi(k+1)}{k!\, (n-k)!} \right) x^n. \]

Using Pari/GP nothing seems to converge, but that may be the fault of my coding.
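If the multiplied-out series is the Cauchy product of \(e^{-x}\) with \(\sum_k \psi(k+1)x^k/k!\) (my assumption of what was computed), its coefficients come out finite and rapidly shrinking, so whatever it converges to cannot be \(\ln x\), which blows up at 0. A Python sketch (helper names mine):

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi_int(k):
    """psi(k+1) = H_k - gamma for integer k >= 0."""
    return sum(1.0 / j for j in range(1, k + 1)) - EULER_GAMMA

def cauchy_coeff(n):
    """n-th coefficient of the Cauchy product e^{-x} * sum_k psi(k+1) x^k / k!."""
    return sum((-1) ** (n - k) * psi_int(k)
               / (math.factorial(k) * math.factorial(n - k))
               for k in range(n + 1))

for n in range(6):
    print(n, cauchy_coeff(n))
# -> -gamma, 1, -1/4, 1/18, -1/96, 1/600: finite, rapidly shrinking.
```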

I'm wondering, has anybody seen this before? I am also mainly wondering how I can prove the radius of convergence of this. Also, I wonder if this could further suggest the gamma function as the natural extension of the factorial function, since these values do converge.
I am really wondering whether one can achieve something with fractional differentiation with respect to tetration; it sounds quite promising, however.

(05/26/2011, 02:50 AM)JmsNxn Wrote: I decided to multiply the infinite series by \(e^{-x}\) and I got:

\[ \ln(x) = \sum_{n=0}^\infty \left( \sum_{k=0}^n \frac{(-1)^{n-k}\, \psi(k+1)}{k!\, (n-k)!} \right) x^n. \]

Using Pari/GP nothing seems to converge, but that may be the fault of my coding.

this would mean that you can develop the logarithm at 0 into a power series, which is not possible.

I guess the problem in your derivations occurs after this line:

(05/26/2011, 02:50 AM)JmsNxn Wrote:

\[ 0 = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \bigl( \psi(n+1-t) - \ln x \bigr) \]

I assume this line is still convergent; however, if you separate the difference into two sides, you work with two divergent series.

Remember: \(\lim_{n\to\infty}(a_n + b_n) = \lim_{n\to\infty} a_n + \lim_{n\to\infty} b_n\) *only* if all (or at least two of the three) limits exist.
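A tiny illustration of that limit law (Python, with \(a_n = 1/n\), \(b_n = -1/n\) as the classic counterexample): the combined sum exists while the separated ones do not:

```python
# sum(a_n + b_n) = sum(a_n) + sum(b_n) requires the separated limits to exist.
# Counterexample: a_n = 1/n, b_n = -1/n.
N = 10 ** 5
combined = sum(1.0 / n - 1.0 / n for n in range(1, N + 1))  # identically 0
left = sum(1.0 / n for n in range(1, N + 1))  # harmonic numbers, grow like ln N
print(combined, left)  # combined stays 0, left keeps growing
```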
(05/27/2011, 09:00 AM)bo198214 Wrote: I guess the problem in your derivations occurs after this line:

(05/26/2011, 02:50 AM)JmsNxn Wrote:

\[ 0 = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \bigl( \psi(n+1-t) - \ln x \bigr) \]

I assume this line is still convergent; however, if you separate the difference into two sides, you work with two divergent series.

Well, the next line must be true, because, using Pari/GP,

\[ e^x \ln(x) = \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n, \]

this series converges, at least for values like \(\ln(20)\) and \(\ln(100)\); and for numbers just greater than \(e\) they have very close convergence, maybe 4 to 6 decimal places.

(05/27/2011, 09:00 AM)bo198214 Wrote: Remember: \(\lim_{n\to\infty}(a_n + b_n) = \lim_{n\to\infty} a_n + \lim_{n\to\infty} b_n\) *only* if all (or at least two of the three) limits exist.

I never knew this. I think this would produce an error except if we let \(t = 0\) sooner, because, letting \(t = 0\) in

\[ \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n = \ln(x) \sum_{n=0}^\infty \frac{x^n}{n!}, \]

the right series converges to \(e^x \ln(x)\) absolutely; the left series' convergence depends on the digamma function as it approaches infinity... Actually, writing this, I just thought of a neat proof.

Since

\[ \psi(n+1) = H_n - \gamma \]

and

\[ H_n = \sum_{k=1}^n \frac{1}{k}, \]

therefore, plugging in our formula for the Euler-Mascheroni constant, \(\gamma = \lim_{n\to\infty}(H_n - \ln n)\):

\[ \psi(n+1) \sim \ln n \quad (n \to \infty). \]

And now I'm stuck... So I guess it comes down to evaluating this to see if the series diverges. At least the first one; I'm still stumped as to why the second one doesn't converge if the first one does.

Is there a similar type of law for products of infinite series that I'm overlooking?
(05/27/2011, 08:06 PM)JmsNxn Wrote: Well, the next line must be true, because, using Pari/GP,

\[ e^x \ln(x) = \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n, \]

this series converges, at least for values like \(\ln(20)\) and \(\ln(100)\); and for numbers just greater than \(e\) they have very close convergence, maybe 4 to 6 decimal places.

This would mean that \(\ln(x)\,e^x\) could be developed into a power series at 0.
But obviously it still has a singularity there.
This would imply that

\[ \ln(x)\, e^x \big|_{x=0} = \psi(1) \]

and that the first derivative of \(\ln(x)\,e^x\) at 0 is \(\psi(2)\).

And both are wrong, AFAIK; the digamma function is finite at 1 and 2, while all derivatives of \(\ln(x)\,e^x\) at 0 are infinite.
But then it seems there must be some other error in the derivations.
I see where you're coming from, but then, why is it converging? Can't we just say

\[ \ln(x) = e^{-x} \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n \qquad \text{for } x > a\,? \]

I'm betting \(a\) is somewhere in the \([e, 6]\) range.

I was a little perplexed myself when it converged, because I know that

\[ D^t e^x = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \]

converges only for integer values of \(t\). This is why I made sure to make \(t = 0\) and not a real value. But still, it makes you wonder whether the derivative of the growth of something that doesn't converge will converge... but then it does, at least for \(x > a\).

Perhaps I'm making an error somewhere and the convergence is right for the wrong reasons.
(05/29/2011, 02:25 AM)JmsNxn Wrote: I see where you're coming from, but then, why is it converging? Can't we just say

\[ \ln(x) = e^{-x} \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n \qquad \text{for } x > a\,? \]

I'm betting \(a\) is somewhere in the \([e, 6]\) range.

No, we can't say that. See, if we multiply by \(e^x\) on both sides, then the right side must be a power series development of \(\ln(x)\,e^x\) at 0. This means that at least the coefficients (that are divided by \(n!\)) on the right side must be the derivatives of the function \(\ln(x)\,e^x\) at 0. But the derivatives are all infinite, while the coefficients on the right side are finite. So there is no way equality can hold.

I don't know whether the right side converges or not. But even if it converges, it is not equal to the left side. And if it converges for some \(x\), then it is a power series development of some function at 0. This implies (by standard complex analysis) that it converges in an open disk around 0 of a certain radius (in the complex plane; on the real axis, just in an interval \((-r, r)\)), and outside the closed disk it cannot converge. That means if it converges in \([e, 6]\), then it must converge in \((-6, 6)\).

Quote: I was a little perplexed myself when it converged, because I know that

\[ D^t e^x = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \]

converges only for integer values of \(t\). This is why I made sure to make \(t = 0\) and not a real value. But still, it makes you wonder whether the derivative of the growth of something that doesn't converge will converge... but then it does, at least for \(x > a\).

Are you sure that it doesn't converge? Proof?
But if it doesn't converge, then that is another problem. You take the derivative with respect to \(t\) of a series that does not converge in a neighborhood of 0. That's not safe.

But even then, it's strange that you arrive at an equation that is infinite on the left side and finite on the right side.

The series converges for large values of \(x\); \([e, 6]\) was the range where the convergence seems to begin.

I do not have a proof of its convergence, that's why I posted it here; I only managed to make it this far (I'm not good with limits).

The series should converge as long as

\[ L = \lim_{n\to\infty} \left| \frac{\psi(n+2)}{(n+1)\, \psi(n+1)}\, x \right| < 1, \]

but I'm pretty sure that limit works out to \(L = 0\), for at least positive real \(x\). I must've made a mistake somewhere, therefore.
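That ratio can be checked numerically; a Python sketch (helper names mine) of \(|a_{n+1}/a_n|\) for \(a_n = \psi(n+1)x^n/n!\):

```python
EULER_GAMMA = 0.5772156649015329

def psi_int(m):
    """psi(m) = H_{m-1} - gamma for integer m >= 1."""
    return sum(1.0 / j for j in range(1, m)) - EULER_GAMMA

def ratio(n, x):
    """|a_{n+1} / a_n| for a_n = psi(n+1) x^n / n!."""
    return abs(psi_int(n + 2) * x / ((n + 1) * psi_int(n + 1)))

for n in (10, 100, 1000):
    print(n, ratio(n, 100.0))
# Even for x = 100 the ratio falls well below 1 once n >> x, so L = 0.
```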

Perhaps this is a quasi-power series: one that represents \(\ln(x)\,e^x\) only for \(x > a\), even though \(L = 0\).

Right now I'm not sure what else to suggest. This leaves me a little in wonderment.

(05/31/2011, 03:02 AM)JmsNxn Wrote: I do not have a proof of its convergence
...

Hm, let's see, we make a very rough estimation:

\[ \psi(n+1) = H_n - \gamma \le n \qquad (n \ge 1). \]

Then the radius of convergence must be bigger than the one for the series with \(\psi(n+1)\) replaced with \(n\):

\[ \sum_{n=0}^\infty \frac{n}{n!}\, x^n = x\, e^x. \]

But this series has infinite convergence radius, hence the same is true for the series \(\sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n\).
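Numerically (a Python sketch, variable names mine): \(\psi(n+1) \le n\) holds for \(n \ge 1\), and the dominating series sums to \(x\,e^x\) exactly:

```python
import math

EULER_GAMMA = 0.5772156649015329

x = 3.0
H = 0.0          # harmonic number H_n
term = 1.0       # x^n / n!
s = 0.0          # partial sum of sum_n n x^n / n!
ok = True        # checks psi(n+1) = H_n - gamma <= n
for n in range(1, 200):
    H += 1.0 / n
    term *= x / n
    ok = ok and (H - EULER_GAMMA) <= n
    s += n * term
print(ok, s, x * math.exp(x))  # the comparison series equals x * e^x
```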

@JmsNxn,
I fed the given series

\[ \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n \]

to the Wolfram Alpha calculator. It says that it does not give the left-hand side; instead, it gives

\[ e^x \left( \ln(x) + \Gamma(0, x) \right). \]
(I have no idea how it managed to derive that formula -- any suggestions?)

But this shows the problem. There is an extra term present, which is the upper incomplete gamma function. It decays to 0 as \(x \to \infty\).

So, your series is asymptotic to, but not equal to, \(e^x \ln(x)\). I.e., the first "equation" is really

\[ \sum_{n=0}^\infty \frac{\psi(n+1)}{n!}\, x^n = e^x \left( \ln(x) + \Gamma(0, x) \right) \sim e^x \ln(x) \qquad (x \to \infty). \]
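The size of that extra term is easy to check numerically. A Python sketch (the continued-fraction evaluator is mine, using the standard Legendre continued fraction for \(\Gamma(0,x)\)); the classic bounds \(1/(x+1) < e^x\,\Gamma(0,x) < 1/x\) show the decay:

```python
import math

def upper_gamma0(x, depth=80):
    """Gamma(0, x) = int_x^oo e^{-u}/u du, via the Legendre continued
    fraction e^{-x} / (x+1 - 1^2/(x+3 - 2^2/(x+5 - ...))); good for x >~ 1."""
    cf = 0.0
    for k in range(depth, 0, -1):
        cf = k * k / (x + 2 * k + 1 - cf)
    return math.exp(-x) / (x + 1 - cf)

for x in (5.0, 10.0, 20.0):
    print(x, math.exp(x) * upper_gamma0(x))  # roughly 1/x, shrinking to 0
```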

So obviously, there must be something wrong in the derivation. I think I found it.

In the beginning, you assume \(D^t e^x = e^x\), which, given the definition of your operator, would mean that \(D^{-1} e^x = \int_{-\infty}^x e^u\, du = e^x\). But that's a problem. There's a catch when working with fractional derivatives. Namely, that they are "non-local". This is similar to how the integral requires a "lower (or upper) bound". Another way to think of choosing the bound is "choosing a branch" of the inverse function. The fractional derivative is a continuous and real-indexed iteration of the derivative, which means it must also include all negative iterates as well -- and those are integrals, so the "non-local" property of the integrals must show up somewhere, and it shows up at every order of differentiation that is not a nonnegative integer. In general, fractional-iterate functions that allow real and complex iterates will be multi-valued if whatever is being iterated is not injective, and the derivative is definitely not injective (differentiate a constant function).
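The bound-dependence is visible already at the \(-1\)st iterate (one integration); a small numeric sketch in Python (the midpoint integrator stands in for the exact integrals):

```python
import math

def integrate(f, a, b, steps=200000):
    """Plain midpoint rule; accurate enough for this demonstration."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

x = 2.0
# One integration of e^u, with two different lower bounds:
print(integrate(math.exp, -40.0, x))  # ~ e^x       (lower bound "-infinity")
print(integrate(math.exp, 0.0, x))    # ~ e^x - 1   (lower bound 0)
```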

In the theory of fractional derivatives, when you took \(D^t e^x = e^x\), you took the lower bound as \(-\infty\). (See how you have to take this as the lower bound for an integral of the exponential function if you want it to return the exponential function back? Mmm-hmm. Same here.) But when you took

\[ D^t x^n = \frac{\Gamma(n+1)}{\Gamma(n+1-t)}\, x^{n-t}, \]

which you used to build \(D^t e^x\) term by term, and thus the power series, you were taking the lower bound to be 0. That's a funny thing about these formulas -- and in some cases it is not said what the lower bound is. When you then took the power series, it was kind of like saying

\[ \int_{-\infty}^x e^u\, du = \sum_{n=0}^\infty \int_0^x \frac{u^n}{n!}\, du = e^x - 1, \]
and here, the error becomes clear.
Oh! very very fascinating!

I'd read an article on the lower bounds in fractional differentiation and I had been wondering if it would affect this proof. It seems it does.

Now it all makes sense.

But still, I do believe this is something interesting. I would like to figure out how Wolfram got that equation...

By re-substituting the original proof and keeping the lower bound at 0, we get:

\[ \frac{d}{dt}\, D_0^t\, e^x = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)} \bigl( \psi(n+1-t) - \ln x \bigr) \]

if

\[ D_0^t\, e^x = \sum_{n=0}^\infty \frac{x^{n-t}}{\Gamma(n+1-t)}, \]

where we take the lower bound to be zero.

Which, given your Wolfram result, seems to imply:

\[ \frac{d}{dt}\, D_0^t\, e^x \Big|_{t=0} = e^x\, \Gamma(0, x). \]

What is the second argument for in this gamma function?

And if I may ask, what occurs with the convergence of this series:

Do we have the same asymptotic development for large x?

If so perhaps we can do something similar to Tommy's 2sinh method?