10/05/2010, 11:40 AM

Hi.

I've recently discovered what may be a big problem with the whole continuum sum thing.

Namely, it has to do with singularities. It is possible to prove that the Fourier continuum sum is well-behaved in the case of entire functions: it is linear and unique, i.e. independent of the sequence of periodic functions used to approximate the aperiodic entire function.

However, weird things start to happen with singularities, and they complicate the matter. We don't need tetration to explore this; we can do it with the simple, seemingly very innocent-looking reciprocal function, $\frac{1}{z}$, which has a single, simple little pole at $z = 0$.

Let us consider continuum summation of this. There is a well-known formula that gives the indefinite continuum sum. It is:

$$\sum_{n=1}^{z} \frac{1}{n} = \psi(z + 1) + \gamma,$$

where $\psi$ is the digamma function and $\gamma$ is the Euler-Mascheroni constant. Yet there is another function of equally "good" analytic behavior that is not a mere displacement of the above by a constant (but instead by a 1-cyclic function!). By "good", I mean it's not something weird like the above plus some contrived, wobbly oscillating term (yes, I know these aren't rigorous terms). Instead, it's just:

$$\psi(-z) + \gamma.$$

Numerically, it seems we can get both of these solutions with the Fourier method by choosing the Fourier series for the periodic approximations to either the "left" or "right" of the singularity.

The difference in behavior between these two is striking: the former is defined at all nonnegative integers and has singularities at all negative integers; the latter is precisely the opposite.
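These claims are easy to check numerically. Here is a minimal sketch (assuming SciPy is available, and taking the two solutions to be $\psi(z+1) + \gamma$ and $\psi(-z) + \gamma$): both satisfy the defining recurrence $S(z) - S(z-1) = \frac{1}{z}$, and they differ by the 1-cyclic function $\pi \cot(\pi z)$, not by a constant.

```python
import math
from scipy.special import digamma

g = 0.5772156649015329  # Euler-Mascheroni constant

def F(z):
    # "standard" solution: defined at nonnegative integers, poles at negative integers
    return digamma(z + 1) + g

def G(z):
    # alternative solution: defined at negative integers, poles at nonnegative integers
    return digamma(-z) + g

z = 2.3  # any non-integer test point
print(F(z) - F(z - 1), 1 / z)   # both satisfy S(z) - S(z-1) = 1/z
print(G(z) - G(z - 1), 1 / z)
print(G(z) - F(z), math.pi / math.tan(math.pi * z))  # they differ by pi*cot(pi*z)
```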

You might think: well, OK, then we just choose a convention -- say, that we use the one above instead of the one below. But then we get into problems. Consider the function

$$f(z) = \frac{1}{z} + \frac{1}{z - 2}.$$

We can sum $\frac{1}{z - 2}$ using the same digamma function in two ways -- just translate one of the given solutions by 2. Namely,

$$\psi(z - 1) + \gamma \quad \text{and} \quad \psi(2 - z) + \gamma.$$

So then we could build up stuff like

$$(\psi(z + 1) + \gamma) + (\psi(z - 1) + \gamma),$$
$$(\psi(z + 1) + \gamma) + (\psi(2 - z) + \gamma),$$
$$(\psi(-z) + \gamma) + (\psi(z - 1) + \gamma),$$
$$(\psi(-z) + \gamma) + (\psi(2 - z) + \gamma).$$

Now the middle two look to be equal, so that gives three solutions. The first has singularities going to the left, while the middle has them going both left and right, and the last has them going only to the right.
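Numerically (again assuming SciPy, and writing the four candidates as sums of the translated digamma solutions for $\frac{1}{z}$ and $\frac{1}{z-2}$), all four satisfy the recurrence for $f(z) = \frac{1}{z} + \frac{1}{z-2}$, and the middle two really do coincide:

```python
from scipy.special import digamma

g = 0.5772156649015329  # Euler-Mascheroni constant

F  = lambda z: digamma(z + 1) + g   # sums 1/z, poles trailing left
G  = lambda z: digamma(-z) + g      # sums 1/z, poles trailing right
F2 = lambda z: digamma(z - 1) + g   # sums 1/(z-2), poles trailing left
G2 = lambda z: digamma(2 - z) + g   # sums 1/(z-2), poles trailing right

# the four ways to continuum-sum f(z) = 1/z + 1/(z-2)
sums = [lambda z: F(z) + F2(z),
        lambda z: F(z) + G2(z),
        lambda z: G(z) + F2(z),
        lambda z: G(z) + G2(z)]

f = lambda z: 1 / z + 1 / (z - 2)
z0 = 0.7  # non-integer test point
for S in sums:
    print(abs(S(z0) - S(z0 - 1) - f(z0)) < 1e-10)  # all satisfy the recurrence
print(abs(sums[1](z0) - sums[2](z0)) < 1e-10)      # the middle two coincide
```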

Now this is where it gets bad. It would seem "natural" to consider that if we started the lower bound of the definite continuum sum so that the singularity is "ahead" of it, the singularities should trail off to the right, while if it was the other way around, they should trail off to the left. But if we consider, say, $\frac{1}{z - a}$ and vary $a$ so the singularity moves through the complex plane from the "left" of the continuum sum's lower bound to the "right", then we'd have to say that at some arbitrary point the line of singularities "jumps" from left to right. Furthermore, it would destroy the translation property

$$\sum_{n=c}^{z} f(n + d) = \sum_{n=c+d}^{z+d} f(n),$$

and the sum operator may not even be linear anymore.

I also made an interesting observation here. The sum operator that we know is actually just the (an) inverse of the unit forward difference operator $\Delta f(z) = f(z + 1) - f(z)$. But there is no reason the difference operator need be unit-step. Consider $\Delta_h f(z) = f(z + h) - f(z)$. In this case, for $f(z) = \frac{1}{z}$, we have $\Delta_h^{-1}\left[\frac{1}{z}\right] = \frac{1}{h} \psi\!\left(\frac{z}{h}\right)$ (up to a constant). Now consider the difference quotient operator $\frac{f(z + h) - f(z)}{h}$. The inverse will be $h \Delta_h^{-1}$. For our function, this gives $\psi\!\left(\frac{z}{h}\right)$.
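A quick sanity check of this (assuming SciPy): with $F_h(z) = \psi(z/h)$, the difference quotient $(F_h(z+h) - F_h(z))/h$ collapses, via the digamma recurrence $\psi(w+1) - \psi(w) = \frac{1}{w}$, to exactly $\frac{1}{z}$:

```python
from scipy.special import digamma

def F_h(z, h):
    # candidate inverse of the difference-quotient operator applied to 1/z
    return digamma(z / h)

z, h = 1.6, 0.25
# (psi(z/h + 1) - psi(z/h)) / h = (h/z) / h = 1/z
print((F_h(z + h, h) - F_h(z, h)) / h, 1 / z)
```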

As $h \to 0^+$, we see that this last function approaches something much more familiar: $\log(z)$ (up to an additive constant that diverges like $-\log h$, which is harmless since the indefinite sum is only defined up to a constant anyway), which is the integral of $\frac{1}{z}$ on the complex $z$-plane. This is, of course, what we would expect. But the purpose of this exercise isn't so much to see it turn into the integral as it is to see what happens to all those extra singularities in the continuum sum as it transitions into the integral. What do they do? They coalesce into a branch cut: the branch cut from $0$ to $-\infty$ of the familiar principal logarithm.
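We can watch the limit happen numerically (assuming SciPy; the renormalization $+\log h$ is added because $\psi(z/h) \approx \log(z/h) = \log z - \log h$ for small $h$): $\psi(z/h) + \log h \to \log z$ as $h \to 0^+$.

```python
import math
from scipy.special import digamma

z = 2.0
for h in [0.1, 0.01, 0.001]:
    # psi(z/h) ~ log(z/h) = log z - log h, so renormalize by +log h
    print(h, digamma(z / h) + math.log(h), math.log(z))
```

Meanwhile the poles of $\psi(z/h)$ sit at $z = 0, -h, -2h, \ldots$, packing ever more densely into the negative real axis as $h$ shrinks.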

What if we did the above but with the other solution, where the singularities go to the right? Then it would once again approach a logarithm, only this time one offset by the constant $\pm \pi i$, and the poles coalesce into a branch cut going to the right, from $0$ to $+\infty$. Thus in the limiting case of the integral, the result is truly unique, being essentially the same multivalued object, just translated or otherwise modified in a trivial manner. Note, however, that we can also do this by taking $h$ to be a negative real, and we can even make the singularities go in any direction we want by taking $h$ to be a complex number -- but then these no longer correspond to continuum sums. It seems that for continuum sums, only left or right, parallel to the real axis, makes any sense.

This little exploration suggests interpreting the infinite sequence of poles in the continuum sum as an "immature" branch cut, or "proto-branch cut" -- indeed, the entire continuum sum could be thought of as a sort of "immature", or perhaps "deliberately left imperfect", version of the integral, which it is in a way (and in like manner, the difference operator itself is a sort of imperfect derivative) -- it is an approximation of the integral by rectangles of unit width when its input is an integer. However, in this "immature" state the poles cannot be moved freely: moving them would change the values of the function everywhere.

Anyway, this problem seems difficult to handle. One could make an argument that if the singularities of a function are confined to half of the $z$-plane -- that is, if $f$ is purely holomorphic for $\Re(z) > r$ for some real $r$ -- then we should take the singularities as going "backwards". Perhaps this should even be the case in general: since the difference operator "looks ahead" by 1, any singularity of the sum at a given point should come from one "in the future" in the function being continuum-summed, that is, from one toward the right, implying lines of singularities extending to the left. But ultimately it is still somewhat arbitrary -- an unfortunate disappointment. It also would seem to imply that the tangent function does not have a principal continuum sum, since its poles run along the entire real axis in both directions.

As an aside, looking up "coalescence of poles" yielded this interesting page; I don't know if it has any relevance here, but it also illustrates this phenomenon:

http://www.math.ohio-state.edu/~gerlach/...de111.html

and also, Wikipedia has something:

http://en.wikipedia.org/wiki/Branch_poin...m_of_poles

and this example does indeed demonstrate this in a quite fascinating way.
