Continuum sums -- a big problem and some interesting observations
#1
Hi.

I've recently discovered what may be a big problem with the whole continuum sum thing.

Namely, it has to do with singularities. It is possible to prove the Fourier continuum sum is well-behaved in the case of entire functions, that is, it is linear and unique, i.e. independent of the sequence of periodic functions used to approximate the aperiodic entire function.

However, weird things start to happen with singularities that complicate the matter. We don't need tetration to explore this, we can do it with the simple, seemingly very innocent-looking reciprocal function, $\frac{1}{z}$, which has a single, simple little pole.

Let us consider continuum summation of this. There is a well-known formula that gives the indefinite continuum sum. It is:

$$\sum_z \frac{1}{z} = \psi(z+1) + \gamma$$

where $\psi$ is the digamma function and $\gamma$ is the Euler-Mascheroni constant. Yet there is another function of equally "good" analytic behavior that is not a mere displacement of the above by a constant (but instead by a 1-cyclic function!). By "good", I mean it's not something weird or contrived and wobbly and all that (yes, I know these aren't rigorous terms). Instead, it's just:

$$\sum_z \frac{1}{z} = \psi(-z) + C.$$
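For concreteness, here is a quick numerical check that both candidates really do satisfy the difference equation $S(z) - S(z-1) = \frac{1}{z}$. (This is just a sketch in Python with a hand-rolled complex digamma, built from the recurrence plus an asymptotic series; the implementation details are mine, not part of the argument.)

```python
import cmath

def digamma(z):
    """psi(z) via the recurrence psi(z) = psi(z+1) - 1/z,
    then an asymptotic series once Re(z) is large."""
    z = complex(z)
    s = 0
    while z.real < 10:          # shift into the region where the series is accurate
        s -= 1 / z
        z += 1
    return s + cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4) - 1/(252*z**6)

S1 = lambda z: digamma(z + 1)   # poles at the negative integers
S2 = lambda z: digamma(-z)      # poles at the nonnegative integers

for z in [2.5, -3.7, 0.25 + 1.5j]:
    assert abs(S1(z) - S1(z - 1) - 1/z) < 1e-8
    assert abs(S2(z) - S2(z - 1) - 1/z) < 1e-8
```

Both pass, yet the two functions differ by $-\pi\cot(\pi z)$, which is 1-cyclic, not constant.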

Numerically, it seems we can get both of these solutions with the Fourier method by choosing the Fourier series for the periodic approximations to either the "left" or "right" of the singularity.

The difference in behavior between these two is striking: the former is defined at all nonnegative integers and has singularities at all negative integers; the latter is precisely the opposite.

You might think, well, OK, then we just choose a convention: we use the one above, instead of the one below. But then we get into problems. Consider the function

$$f(z) = \frac{1}{z-2}.$$

We can sum this using the same digamma function in two ways -- just translate one of the given solutions by 2. Namely,

$$\sum_z \frac{1}{z-2} = \psi(z-1) + C$$

$$\sum_z \frac{1}{z-2} = \psi(2-z) + C.$$

So then we could build up, for the continuum sum of $\frac{1}{z} + \frac{1}{z-2}$, stuff like

$$\psi(z+1) + \psi(z-1)$$
$$\psi(z+1) + \psi(2-z)$$
$$\psi(-z) + \psi(z-1)$$
$$\psi(-z) + \psi(2-z).$$

Now the middle two look to be equal, so that gives three solutions. The first has singularities going only to the left, the middle has them going both left and right, and the last has them going only to the right.
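Here is a quick numerical confirmation of all of this (a sketch; the hand-rolled digamma is my own scaffolding):

```python
import cmath

def digamma(z):
    """psi(z) via downward recurrence plus an asymptotic series."""
    z = complex(z)
    s = 0
    while z.real < 10:
        s -= 1 / z
        z += 1
    return s + cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4) - 1/(252*z**6)

f = lambda z: 1/z + 1/(z - 2)

cands = [
    lambda z: digamma(z + 1) + digamma(z - 1),   # singularities only to the left
    lambda z: digamma(z + 1) + digamma(2 - z),   # both directions
    lambda z: digamma(-z) + digamma(z - 1),      # both directions (same as above)
    lambda z: digamma(-z) + digamma(2 - z),      # only to the right
]

z = 0.3 + 0.4j
for S in cands:
    assert abs(S(z) - S(z - 1) - f(z)) < 1e-8    # each solves the difference equation
assert abs(cands[1](z) - cands[2](z)) < 1e-8     # the middle two coincide
```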

Now this is where it gets bad. It would seem "natural" to consider that if we chose the lower bound of the definite continuum sum such that the singularity is "ahead" of it, the singularities should trail off to the right, while if it was the other way, they should trail off to the left. But if we consider, say, $\frac{1}{z-a}$ and vary $a$ so the singularity moves through the complex plane from the "left" of the continuum sum lower bound to the "right", then we'd have to say that at some arbitrary point the line of singularities "jumps" from left to right. Furthermore, it would destroy the translation property

$$\sum_z f(z+a) = \left[\sum_z f(z)\right]_{z \to z+a} + C$$

and the sum operator may not even be linear anymore.

I also made an interesting observation here. The sum operator that we know is actually just the (an) inverse of the unit forward difference operator $\Delta f(z) = f(z+1) - f(z)$. But there is no reason the difference operator need be unit-step. Consider $\Delta_h f(z) = f(z+h) - f(z)$. In this case, for $f(z) = \frac{1}{z}$, we have $\Delta_h^{-1} \frac{1}{z} = \frac{1}{h} \psi\!\left(\frac{z}{h}\right) + C$. Now consider the difference quotient operator $\frac{\Delta_h}{h}$. The inverse will be $h \Delta_h^{-1}$. For our function, this gives $\psi\!\left(\frac{z}{h}\right) + C$.

As $h \to 0^+$, we see that this last function approaches something much more familiar: $\log(z)$ (after absorbing the divergent constant $-\log h$ into $C$), which is the integral of $\frac{1}{z}$ on the complex $z$-plane. This is, of course, what we would expect. But the purpose of this exercise isn't so much to see it turn into the integral, as it is to see what happens to all those extra singularities in the continuum sum, as the continuum sum transitions into the integral. What do they do? They coalesce into a branch cut, the branch cut from $0$ to $-\infty$ of the familiar principal logarithm.
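The limit can be watched numerically; choosing the constant $C = \log h$ to absorb the divergence (my normalization), $\psi(z/h) + \log h \to \log z$:

```python
import cmath

def digamma(z):
    """psi(z) via recurrence plus an asymptotic series."""
    z = complex(z)
    s = 0
    while z.real < 10:
        s -= 1 / z
        z += 1
    return s + cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4) - 1/(252*z**6)

# inverse difference quotient of 1/z, normalized: psi(z/h) + log(h) -> log(z) as h -> 0+
z = 2 + 1j
errs = [abs(digamma(z / h) + cmath.log(h) - cmath.log(z)) for h in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]   # steady approach to log(z)
assert errs[2] < 1e-3
```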

What if we did the above but with the other solution, where the singularities go to the right? Then it would once again approach a logarithm, only this time one offset by the constant $\pi i$, and the poles coalesce into a branch cut going to the right from $0$ to $+\infty$. Thus in the limiting case of the integral, the result is truly unique, being essentially the same multivalued object just translated or otherwise modified in a trivial manner. Note, however, that we can also do this by taking $h$ to be a negative real, and also make the singularities go in any direction we want by taking $h$ to be a complex number, but then these no longer correspond to continuum sums. It seems that for continuum sums, only left or right, parallel to the real axis, makes any sense.

This little exploration suggests interpreting the infinite sequence of poles in the continuum sums as an "immature" branch cut or "proto-branch cut" -- indeed, the entire continuum sum could be thought of as a sort of "immature" or perhaps "deliberately left imperfect" version of the integral, which it is in a way (and in like manner, the difference operator itself is a sort of imperfect derivative) -- it is an approximation of the integral by rectangles of unit width when its input is an integer. However, in this "immature" state, moving the poles would result in a change of the values of the function everywhere.

Anyway, this problem seems difficult to handle. One could make an argument that, if the singularities of a function $f$ are confined to half the $z$-plane, that is, if $f$ is purely holomorphic for $\Re(z) > r$ for some real $r$, then we should take the singularities as going "backwards", and perhaps this should be the case in general: since the difference operator "looks ahead" by 1, at a given point any singularity that results should come from one "in the future" in the function being continuum summed, that is, from one toward the right, implying lines of singularities extending to the left. But ultimately it is still somewhat arbitrary, an unfortunate disappointment. It also would seem to imply that the tangent function does not have a principal continuum sum :(

As an aside, looking up "coalescence of poles" yielded this interesting page; I don't know if it has any relevance here, but it also illustrates this phenomenon:

http://www.math.ohio-state.edu/~gerlach/...de111.html

and also, Wikipedia has something:

http://en.wikipedia.org/wiki/Branch_poin...m_of_poles

and this example does indeed demonstrate this in a quite fascinating way.
#2
quote " periodic approximations to either the "left" or "right" of the singularity "

a function is periodic in 2 directions ?? i dont know what you mean by the above.

if f(x+k) = f(x) then that function also satisfies f(x-k) = f(x) and is at least periodic in direction k and -k ...

im sure you meant something meaningfull but its a mystery to me.

plz be more formal.
#3
Yeah, that part wasn't right. What I meant was Fourier expansions of periodic approximations, like taking this:

$$f_P(z) = \frac{\pi}{P} \coth\!\left(\frac{\pi z}{P}\right)$$

(a periodic approximation function for the given function $\frac{1}{z}$)

which has imaginary period $iP$, then expanding it either as a Fourier series along a line like $\Re(z) = 1$, which is to the "right" of the singularity (or singularities, when dealing with the approximations), or along one like $\Re(z) = -1$, which is to the "left", then continuum-summing one of those Fourier series and taking the limit at infinite period. When the resulting functions are analytically continued by the continuum-sum recurrence equations to the whole plane, they should yield continuum sums with singularities going to the left and right, respectively.
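Such an approximation can be sanity-checked numerically. Taking, for illustration, $f_P(z) = \frac{\pi}{P}\coth\left(\frac{\pi z}{P}\right)$ (my concrete choice of a periodic approximation to $\frac{1}{z}$ with imaginary period $iP$):

```python
import cmath

def coth(z):
    return cmath.cosh(z) / cmath.sinh(z)

def approx(z, P):
    """(pi/P) * coth(pi*z/P): imaginary period i*P, tends to 1/z as P grows."""
    return (cmath.pi / P) * coth(cmath.pi * z / P)

z = 0.7 + 0.3j
errs = [abs(approx(z, P) - 1/z) for P in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]               # error shrinks as the period grows
assert errs[2] < 1e-5

P = 10
assert abs(approx(z + 1j * P, P) - approx(z, P)) < 1e-12   # imaginary period i*P
```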
#4
continuum-sum recurrence equations?

You mean f(z+1) = f(z) + delta f(z), where delta is the antisum?

--


Why not turn the Fourier series into a Taylor series and do ordinary analytic continuation? Mittag-Leffler?

--

Or do they give different results? I don't think so.

--

What if we take the Fourier series at Im(z) = 0?

--


So the problems occur when we have two Fourier series, expanded on different lines, that are not entire and require continuation?

Is it true that if the radii of convergence intersect, the problem cannot occur?

Is it false that when both are entire, they have to agree?

--

When are there complex continuous solutions that satisfy both Fourier expansions?

--

I don't get your digamma(-z) argument...


Sorry if I ask trivial questions.
#5
You ask a ton of questions but I'll try my best...

(10/06/2010, 12:44 PM)tommy1729 Wrote: continuum-sum recurrence equations?

You mean f(z+1) = f(z) + delta f(z), where delta is the antisum?

Yes. Or $F(z+1) = F(z) + f(z+1)$, where $F$ is the continuum sum of $f$. And of course, the opposite, i.e. that $F(z-1) = F(z) - f(z)$.

(10/06/2010, 12:44 PM)tommy1729 Wrote: Why not turn the Fourier series into a Taylor series and do ordinary analytic continuation? Mittag-Leffler?

Or do they give different results? I don't think so.

The Fourier series is only used to sum the approximations -- the limit is the continuum sum of the aperiodic function. But if you want to extend those approximations to the whole plane, or even the limiting continuum sum, why not use the difference equation? The Fourier series converges in a strip, the Taylor series only in a disk. And the method by which the continuation is accomplished does not really matter from the theoretical side, where only the continuation itself is relevant.

(10/06/2010, 12:44 PM)tommy1729 Wrote: What if we take the Fourier series at Im(z) = 0?

You mean like taking a sequence of periodic approximations with real period (which is what that is equivalent to)? There are some problems there, namely that the continuum sum fails for a harmonic that is 1-periodic. This means all integer periods are out of the question. Though sequences of periodic approximations of increasing period for which no subsequence of the periods approaches an integer might work. That last requirement can be formulated as "there must be some $\epsilon > 0$ for which there is no $n$ and integer $m$ such that $|P_n - m| < \epsilon$".
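To see concretely why a 1-periodic harmonic fails: a closed form for the continuum sum of a single harmonic $e^{uz}$ is $S(z) = \frac{e^{uz}}{1 - e^{-u}}$ (a sketch of the usual geometric-type formula; one can check it satisfies $S(z) - S(z-1) = e^{uz}$), and its denominator vanishes exactly when the harmonic has period 1:

```python
import cmath

def S(z, u):
    """Continuum sum of e^{u z}: satisfies S(z) - S(z-1) = e^{u z} when e^{-u} != 1."""
    return cmath.exp(u * z) / (1 - cmath.exp(-u))

u = 2j * cmath.pi / 7.3          # harmonic of (non-integer) period 7.3
z = 0.4 + 0.1j
assert abs(S(z, u) - S(z - 1, u) - cmath.exp(u * z)) < 1e-12

# as the harmonic's period approaches 1 (u -> 2*pi*i), the sum blows up:
mags = [abs(S(0.5, 2j * cmath.pi * (1 + eps))) for eps in (1e-2, 1e-4, 1e-6)]
assert mags[0] < mags[1] < mags[2]
```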

(10/06/2010, 12:44 PM)tommy1729 Wrote: So the problems occur when we have two Fourier series, expanded on different lines, that are not entire and require continuation?

Yeah, when the function is not entire. Take a periodic approximation (or a periodic function with singularities), expand it on one side of the singularity, then expand it on the other, and compare the continuum sums.

(10/06/2010, 12:44 PM)tommy1729 Wrote: Is it true that if the radii of convergence intersect, the problem cannot occur?

Is it false that when both are entire, they have to agree?

For a given periodic function, such as a periodic approximation, a Fourier series can only be generated along the line of periodicity (that is, lines in the plane with slope equal to the slope of the complex period vector, parameterized by $z = z_0 + Pt$ for real $t$, where $P$ is the period and $z_0$ is an arbitrary complex constant). If the strips of convergence intersect, then that means there was no singularity between the lines of expansion, i.e. they were all on one or the other side of the singularity. In that case, both will give the same result. It's if they're on different sides that there's a problem. This means the continuum sum of the given periodic function is ambiguous in a way that is not easily resolved (the continuum sum is ambiguous by nature, but the Fourier series provides a "natural" definition for it applied to periodic functions, at least ones that are entire or otherwise singularity-free).

(10/06/2010, 12:44 PM)tommy1729 Wrote: When are there complex continuous solutions that satisfy both Fourier expansions?

What does that mean?

(10/06/2010, 12:44 PM)tommy1729 Wrote: I don't get your digamma(-z) argument...

Sorry if I ask trivial questions.

It shows the two equally "good" continuum sums that come from considering "summing to the right" of the singularities and "summing to the left".
#6
The same is true for integrals: if you take

$$\int \frac{dx}{x}$$

you can get different functions which differ not only by a constant. The convention here is to count the solution as $\ln x$, but this is only one of the possible solutions. Another, for example, is $\ln(-x)$.

You can also claim that there are two solutions, $\ln x$ and $\ln(-x)$, each suitable only for one half of the domain of definition. In fact, neither of them is the solution for the function on the whole real axis.

By analogy with the logarithm, it is reasonable to count that

$$\sum_x \frac{1}{x} = \psi(x) + C \quad \text{for } x > 0$$

and

$$\sum_x \frac{1}{x} = \psi(-x) + C \quad \text{for } x < 0.$$

This is a graph of how it looks:
[Image: h_1286684374_10064fb295.png]

Just use absolute values and you'll get a compact form for the solution. For integrals the conventional form is determined by the Cauchy principal value.
#7
(10/05/2010, 11:40 AM)mike3 Wrote: I mean it's not something weird or contrived and wobbly and all that (yes, I know these aren't rigorous terms). Instead, it's just:

$$\sum_z \frac{1}{z} = \psi(-z) + C.$$

And note also that

$$\psi(-z) = \psi(z+1) + \pi \cot(\pi z)$$

so the two solutions only differ by a periodic function (which can be filtered out on each continuous range separately in the process of finding the natural value, by requiring monotonic derivatives of higher order, for example).
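That the gap between the two solutions is exactly $\pi\cot(\pi z)$, hence 1-periodic, is easy to verify numerically (a sketch; the hand-rolled digamma is my own scaffolding):

```python
import cmath

def digamma(z):
    """psi(z) via recurrence plus an asymptotic series."""
    z = complex(z)
    s = 0
    while z.real < 10:
        s -= 1 / z
        z += 1
    return s + cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4) - 1/(252*z**6)

def gap(z):
    """Difference between the two candidate continuum sums of 1/z."""
    return digamma(-z) - digamma(z + 1)

z = 0.3 + 0.2j
cot = cmath.cos(cmath.pi * z) / cmath.sin(cmath.pi * z)
assert abs(gap(z) - cmath.pi * cot) < 1e-8   # reflection formula
assert abs(gap(z + 1) - gap(z)) < 1e-8       # the gap is 1-periodic
```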

#8
(10/09/2010, 03:13 PM)Ansus Wrote: The same is true for integrals: if you take

$$\int \frac{dx}{x}$$

you can get different functions which differ not only by a constant. The convention here is to count the solution as $\ln x$, but this is only one of the possible solutions. Another, for example, is $\ln(-x)$.

However, I have an eye toward the complex plane, with holomorphic functions (or multi-functions). It is true that, say, $\int \frac{dx}{x}$ could equal $\log(-x)$, which is akin to the $\psi(-z)$ solution for the sum, but this function analytically continues in the complex plane to what is, essentially, just another branch of $\log(x)$, just shifted by a constant shift of $\pi i$. When continued to a multifunction, there is no difference between $\log(x)$ and $\log(-x)$ except a constant shift.

#9
In this case also, the solutions differ only by a periodic function. For the purposes of complex analysis, you can choose the solution which is monotonic and continuous, along with its derivatives, along the positive real axis (but keep in mind that for the negative axis it is not the "best" solution).
#10
This is like choosing the starting point (lower limit) for the summation. If the start is on the right of the singularity, you have a continuous function on the right but a discontinuous one once you have jumped over the singularity (as you sum it up in unit steps, it is natural that once you have added up the infinity, it should now appear on each of your steps).

If you start left of the zero, you'll have a continuous function until you cross zero, where you add up the singularity again.

So it would be more precise to say that we choose not a constant term but the lower limit for the sum. If

$$\sum_{x=u}^{z} \frac{1}{x} = \psi(z)$$

then it means we fixed $u$ at the point that solves

$$\psi(u) = 0,$$

$u = 1.461632144\ldots$

For the $\psi(-z)$ solution there is no such real $u$.
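Indeed, $u = 1.461632144\ldots$ is the positive zero of the digamma function, i.e. the location of the minimum of the Gamma function; a simple bisection recovers it (a sketch with a hand-rolled real digamma, my own scaffolding):

```python
import math

def digamma(x):
    """Real psi(x) for x > 0, via recurrence plus an asymptotic series."""
    s = 0.0
    while x < 10:
        s -= 1 / x
        x += 1
    return s + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4) - 1/(252*x**6)

# psi is increasing on (0, inf), with psi(1) = -gamma < 0 < psi(2) = 1 - gamma,
# so bisect psi(u) = 0 on [1, 2]:
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if digamma(mid) < 0:
        lo = mid
    else:
        hi = mid
u = (lo + hi) / 2
assert abs(u - 1.461632144) < 1e-8
```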

