@ mike 3 (sum)
#1
Hey Mike,

I wonder what you think about the link below about summing tan(x).

I have trouble viewing the page myself, but I guess it will work for you.

I can't read the math symbols, but I was wondering if you got the same result with your continuum sum method?

http://mathoverflow.net/questions/41011/...um-of-tanx

Maybe this has been discussed on this forum before; sorry if I missed that.
#2
(11/16/2010, 11:44 PM)tommy1729 Wrote: Hey Mike,

I wonder what you think about the link below about summing tan(x).

I have trouble viewing the page myself, but I guess it will work for you.

I can't read the math symbols, but I was wondering if you got the same result with your continuum sum method?

http://mathoverflow.net/questions/41011/...um-of-tanx

Maybe this has been discussed on this forum before; sorry if I missed that.

Yes. This appears to be true. Consider the "pole expansion" of tan(x):

\[ \tan(x) \;=\; \sum_{n=0}^{\infty} \frac{8x}{(2n+1)^2\pi^2 - 4x^2} \;=\; \sum_{n=0}^{\infty} \left[ \frac{1}{\left(n+\tfrac{1}{2}\right)\pi - x} - \frac{1}{\left(n+\tfrac{1}{2}\right)\pi + x} \right]. \]
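As a quick numerical check (a sketch of my own, not part of the original post), the truncated pole expansion can be compared against the built-in tangent; note the tail of the series decays only like O(1/N), so many terms are needed:

```python
import math

def tan_pole_expansion(x, terms=200000):
    """Truncated Mittag-Leffler ("pole") expansion of tan(x):
    tan(x) = sum_{n>=0} 8x / ((2n+1)^2 pi^2 - 4 x^2).
    The tail decays like O(1/terms), so convergence is slow."""
    return sum(8 * x / ((2 * n + 1) ** 2 * math.pi ** 2 - 4 * x ** 2)
               for n in range(terms))

print(tan_pole_expansion(1.0))  # close to math.tan(1.0), about 1.557
```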

Now, consider the (bidirectional) continuum sum of the offset reciprocal function \( \frac{1}{x+a} \), which by numerical testing appears to be:

\[ \sum_{n=0}^{x-1} \frac{1}{n+a} \;=\; \psi(x+a) - \psi(a), \]

where \( \psi \) is the digamma function, satisfying \( \psi(z+1) - \psi(z) = \frac{1}{z} \).

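To make the digamma formula concrete, here is a small numerical sketch (my own illustration, using a hand-rolled digamma so that only the standard library is needed): the candidate sum satisfies the defining difference equation of a continuum sum and reproduces the ordinary partial sums at integer upper limits.

```python
import math

def digamma(x):
    """Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x,
    followed by the standard asymptotic series once the argument is large."""
    r = 0.0
    while x < 10:          # shift the argument up until the series is accurate
        r -= 1 / x
        x += 1
    f = 1 / (x * x)
    return r + math.log(x) - 1 / (2 * x) - f * (1/12 - f * (1/120 - f / 252))

def csum_recip(x, a):
    """Conjectured continuum sum of 1/(t+a) for t = 0 .. x-1."""
    return digamma(x + a) - digamma(a)

a = 0.7
x = 2.3
# defining property of a continuum sum: S(x+1) - S(x) = f(x)
print(csum_recip(x + 1, a) - csum_recip(x, a), 1 / (x + a))
# at integer upper limits it matches the ordinary partial sum
print(csum_recip(5, a), sum(1 / (n + a) for n in range(5)))
```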
I am not sure of the formal proof yet, though I have some ideas as to how one might be constructed. The easiest route appears to be an indirect proof: first show that a different continuum sum method yields this result, then prove the Fourier method is equivalent, since the Fourier method is difficult to apply directly. I'll try to work out the details... but the tangent function can be summed now:

\[ \sum_{k=0}^{x-1} \tan(k) \;=\; \sum_{n=0}^{\infty} \Big[ \psi(c_n) + \psi(c_n+1) - \psi(c_n+x) - \psi(c_n-x+1) \Big], \qquad c_n = \left(n+\tfrac{1}{2}\right)\pi. \]

This is essentially the same formula as given on the site, though the terms may be arranged and factored differently. Numerically, it agrees well with the Fourier method if we use the approximation of tangent that is periodic along the imaginary axis. Again, the rigorous proof will have to be indirect -- a direct attack (trying to directly sum periodic approximations of tangent in a non-real direction) looks quite difficult.
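As a sanity check on this kind of formula (again just a sketch; the termwise normalization below, forcing each bracket to vanish at x = 0, is my own reading), summing the pole expansion of tangent term by term with digammas should reproduce tan(x) under the forward difference:

```python
import math

def digamma(x):
    """Digamma psi(x) via psi(x) = psi(x+1) - 1/x plus an asymptotic tail."""
    r = 0.0
    while x < 10:
        r -= 1 / x
        x += 1
    f = 1 / (x * x)
    return r + math.log(x) - 1 / (2 * x) - f * (1/12 - f * (1/120 - f / 252))

def csum_tan(x, terms=4000):
    """Termwise continuum sum of the pole expansion of tan(x),
    with each bracket normalized to vanish at x = 0."""
    total = 0.0
    for n in range(terms):
        c = (n + 0.5) * math.pi
        total += (digamma(c) + digamma(c + 1)
                  - digamma(c + x) - digamma(c - x + 1))
    return total

x = 0.4
# forward difference should give back tan(x), up to truncation error
print(csum_tan(x + 1) - csum_tan(x), math.tan(x))
```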

You may be wondering what the mark over the summation sign means. It has to do with a certain not-easily-addressed non-uniqueness issue in the continuum sum of analytic functions: namely, what happens in the presence of a singularity or singularities. The function \( \frac{1}{x+a} \) has two equally valid, "natural" continuum sums that are not just constant shifts of each other, but 1-cyclic shifts (nontrivial or nonremovable shifts, the kind that creates the whole general non-uniqueness problem in the first place). There is apparently no way to further "disambiguate" the continuum sum at this point, so we must make some kind of convention.

It appears that singularities in "canonical" continuum sums can only go off in directions parallel to the real axis (and I may have a proof -- all this stuff I've been collecting in a larger writeup about the continuum sum), which looks to have to do with the nature of the difference operator. The convention is based on a certain "intuition": the sum adds things up one at a time, starting from its lower index and going to the upper one, so we don't expect to see singularities until the summation "passes over" them. That is what the notation means: we start "summing" out in both directions from the lower bound until we hit a singularity, and only then take that singularity up for further "wallpapering" of the complex plane, but not before. That is, we take the solution in which the singularity lines are directed outward from, but do not cross, the vertical line passing through the lower summation bound.

There are also notations for the sums in which all singularity lines go in a single given direction (leftward or rightward), regardless of whether or not they cross the summation line. For the tangent function, those one-directional sums do not really exist, because the singularities tiling the real line at irrational increments in both directions would result in a dense set of singularities on the real line.
If we took Fourier expansions parallel to the real line but above or below it, I think the sum either does not converge or has a natural boundary of analyticity (a "wall" of singularities) at the real line (we could unite the two functions into a single one, but the natural boundary would mean it could not be defined on the real line). In general, the differently-directed sums all agree and all exist only when the function is entire and at least one of them exists; only in those cases can we unambiguously write the continuum sum without any directional mark.
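The 1-cyclic ambiguity can be seen concretely for \( \frac{1}{x+a} \) (a sketch under my reading of the post; the names S_left and S_right are my own): the reflection formula \( \psi(1-z) = \psi(z) + \pi\cot(\pi z) \) produces a second solution of the same difference equation whose poles run rightward instead of leftward, and the two solutions differ by a genuinely 1-cyclic function, not a constant.

```python
import math

def digamma(x):
    """Digamma via psi(x) = psi(x+1) - 1/x plus an asymptotic tail.
    The recurrence also handles negative non-integer arguments."""
    r = 0.0
    while x < 10:
        r -= 1 / x
        x += 1
    f = 1 / (x * x)
    return r + math.log(x) - 1 / (2 * x) - f * (1/12 - f * (1/120 - f / 252))

a = 0.3

def S_left(x):   # poles trail off leftward, at x = -a, -a-1, -a-2, ...
    return digamma(x + a) - digamma(a)

def S_right(x):  # poles trail off rightward, at x = 1-a, 2-a, 3-a, ...
    return digamma(1 - x - a) - digamma(1 - a)

cot = lambda t: math.cos(t) / math.sin(t)
x = 0.25
# both satisfy the same difference equation S(x+1) - S(x) = 1/(x+a) ...
print(S_left(x + 1) - S_left(x), S_right(x + 1) - S_right(x), 1 / (x + a))
# ... but they differ by a 1-cyclic (period-1) function of x:
print(S_right(x) - S_left(x),
      math.pi * cot(math.pi * (x + a)) - math.pi * cot(math.pi * a))
```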

And finally, here's a graph to show the shape of the continuum sum:

[Graph: the continuum sum of tan(x)]