Funny pictures of sexps
#41
(08/27/2009, 02:51 PM)bo198214 Wrote: I don't think one can understand this without having taken a proper look at the pictures.
I'm working on pictures of the isexp at the moment, will get back to this maybe early next week at the earliest.

Quote:Fortunately, what you describe (also previously) does not apply only to the intuitive slog; I remember the pictures that Dmitrii made of his Cauchy slog.
(They are scattered over the forum. This image in particular depicts, I think, L2 as the left border. It shows cslog(G), where G is the region bounded by L1 and exp(L1), though here L1 extends also to the lower fixed point. This region is also used by Kneser in his construction, which, however, is not computational in nature.)
I'm not particularly familiar with Dmitrii's "Cauchy slog", though looking at the images I'm not really seeing a noticeable difference between that and the intuitive slog.

As for Kneser's construction, I gave an initial look and was quickly overwhelmed; it will take me some time to properly decipher it, so I can't really comment on it yet.

Quote:So, remembering these pictures, I can follow you at least a bit.
If I believe what you write, I ask myself where the specific *intuitive* slog is relevant. I guess you could do that construction with every slog and always get the regular slog as a limit. Because you would continue every slog defined on G to the whole plane (except the primary fixed points and cuts), and somehow it seems as if these coarse properties suffice for your construction, instead of it relying on the fine-grained structure directly on G.
Well, here's where I think that the intuitive slog must be uniquely defensible: as I've mentioned before, any cyclic shift from the islog on the real axis (which would be necessary if two slogs are not the same) would necessarily destroy the exact approximation of the regular slog at either the upper or lower fixed points, if not both.

Thus, there is no other slog which properly approximates the regular slogs at both the upper and lower fixed points, and which takes on real values without singularities on its principal branch. This can be seen by a simple thought experiment about a Fourier series representing the cyclic wobble function: necessarily it cannot be smooth as we travel infinitely far in both imaginary directions. Perhaps in one imaginary direction, but not the other.

As an example, consider . Notice that admits a proper Fourier series representation for our purposes: a function that is cyclic on the real axis, being equal to 0 at the integers.

This wobble function will be very small on the real axis, though it will admittedly take on non-real values. But it's small enough that perhaps we don't at first notice it as we work numerically.

As we move towards the upper fixed point, the islog will go towards , and thus the qslog will differ from the islog by smaller and smaller amounts, such that the regular slog is properly approximated at the upper fixed point.

However, as we move towards the lower fixed point, there comes a point where the cyclic shift becomes unstable, and the regular slog is not well approximated at the lower fixed point, even though we have not introduced new singularities.

Note that in order to pick a cyclic function which takes only real values on the real axis, it must be symmetric, and then the regular slog behavior is destroyed at both fixed points.

To my knowledge, one cannot construct a Fourier series that is smooth in both imaginary directions. Only one direction can be smooth, and only if one allows complex results for real inputs. Otherwise, neither direction is smooth.
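A quick numerical illustration of this claim (my own sketch; the specific wobble functions from this post are not reproduced here): the single positive-frequency term exp(2 pi i z) has period 1 on the real axis and decays as Im z goes to +infinity, but blows up as Im z goes to -infinity. Making the wobble real-valued on the real axis forces conjugate pairs of frequencies, e.g. sin(2 pi z), which then blows up in both imaginary directions:

```python
import cmath
import math

def one_sided(z):
    # single positive-frequency term: period 1 on the real axis,
    # decays as Im z -> +inf, blows up as Im z -> -inf
    return cmath.exp(2j * math.pi * z)

def real_on_axis(z):
    # real-valued on the real axis (conjugate frequency pair),
    # but blows up in BOTH imaginary directions
    return cmath.sin(2 * math.pi * z)

for y in (5.0, -5.0):
    z = complex(0.3, y)
    print(f"Im z = {y:+.0f}: |exp(2 pi i z)| = {abs(one_sided(z)):.2e}, "
          f"|sin(2 pi z)| = {abs(real_on_axis(z)):.2e}")
```

So a 1-cyclic wobble that is real on the real axis cannot decay toward both i-infinity and -i-infinity, which is exactly the dichotomy described above.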

If you know otherwise, then my argument would seem to fall apart.


Edit: Replaced with , to make clear my point about the periodicity.
~ Jay Daniel Fox
#42
(08/27/2009, 04:38 PM)jaydfox Wrote: To my knowledge, one cannot construct a Fourier series that is smooth in both imaginary directions. Only one direction can be smooth, and only if one allows complex results for real inputs. Otherwise, neither direction is smooth.

If you know otherwise, then my argument would seem to fall apart.
Actually, I take that back. One could construct an infinite sequence of polar singularities at the integers. For example, . As we move infinitely far from the real axis, the effects of these singularities diminish rapidly enough that, despite the cumulative effect of an infinite number of them, the function should tend to 0. And being cyclic with a period of 1 on the real axis, this function would probably admit a Fourier series expansion.
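The formula in the original post was lost in transcription; a plausible stand-in with exactly the stated properties (a pole at every integer, period 1, decay to 0 in both imaginary directions) is 1/sin^2(pi z), used here purely as an illustration:

```python
import cmath
import math

def comb(z):
    # stand-in example (the original formula is lost): poles at every
    # integer, period 1, and -> 0 as |Im z| -> infinity
    return 1.0 / cmath.sin(math.pi * z) ** 2

z0 = complex(0.37, 0.2)
print(abs(comb(z0 + 1) - comb(z0)))   # periodicity: ~0
print(abs(comb(complex(0.5, 6.0))))   # tiny far above the real axis
print(abs(comb(complex(0.5, -6.0))))  # tiny far below as well
print(abs(comb(1 + 1e-6)))            # huge near the pole at z = 1
```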

However, this method introduces singularities on the real axis, so I stand by my original assertion (which included a proscription against introducing new singularities in the principal branch). You can't create a non-constant function which goes to a constant value at infinity without creating a singularity somewhere... Can you? (Not being sarcastic, this is an honest question. As far as I can remember, this is true, but I welcome being shown that I'm mistaken on this point.)
~ Jay Daniel Fox
#43
(08/27/2009, 04:38 PM)jaydfox Wrote: I'm not particularly familiar with Dmitrii's "Cauchy slog", though looking at the images I'm not really seeing a noticeable difference between that and the intuitive slog.
That's why I am pointing it out. You also mentioned these singularities hidden under the principal branch. Dmitrii describes a very similar structure for his cslog here:
http://www.ils.uec.ac.jp/~dima/PAPERS/2009fractal.pdf
page 10, figure 4.

Quote:As for Kneser's construction, I gave an initial look and was quickly overwhelmed; it will take me some time to properly decipher it, so I can't really comment on it yet.
Did you read the original article? You know I summarized his article here. Kneser showed exactly the construction of a real-valued superlogarithm from the regular one. The regular slog maps the upper half-plane to some infinite region D (because it has singularities on the real axis at ). Now, by the Riemann mapping theorem, for any two simply connected regions (except the whole plane) there is a biholomorphic mapping between them. Kneser uses this biholomorphic mapping to map D back to the upper half-plane, and shows that the result again satisfies the Abel equation slog(exp(z)) = slog(z) + 1.
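As a small numerical aside (my own sketch, not part of Kneser's paper): at the primary fixed point L of exp, where L = e^L, approximately 0.3181 + 1.3372i, and hence log L = L, the regular Abel function behaves to leading order like log(z - L)/L, and the Abel equation can be checked directly near L:

```python
import cmath

# the primary fixed point L of exp (L = e^L) is attracting for the
# principal log (|1/L| < 1), so iterating log converges to it
L = complex(1.0, 1.0)
for _ in range(200):
    L = cmath.log(L)
print(L)                      # roughly 0.318132 + 1.337236i
print(abs(cmath.exp(L) - L))  # essentially 0

def alpha(z):
    # leading term of the regular Abel function at L (using log L = L)
    return cmath.log(z - L) / L

z = L + 0.01
print(alpha(cmath.exp(z)) - alpha(z))  # close to 1: the Abel equation
```

The difference tends to 1 as z approaches L, which is the sense in which the regular slog is "exact" at the fixed point.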

kslog maps G (the region bounded by L1 and exp(L1)) biholomorphically to a region unbounded in the imaginary direction. I showed here that this condition (about which we would agree that the islog also satisfies it) is a uniqueness criterion. So why am I arguing against its uniqueness? Because in the end I accept statements only with proof!
#44
(08/27/2009, 04:58 PM)jaydfox Wrote: You can't create a non-constant function which goes to a constant value at infinity without creating a singularity somewhere... Can you? (Not being sarcastic, this is an honest question. As far as I can remember, this is true, but I welcome being shown that I'm mistaken on this point.)

I can't, because of Liouville's theorem: every bounded entire function is constant. A non-constant function that tends to a constant value at infinity is bounded near infinity; if it also had no singularities, it would be a bounded entire function, hence constant.
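A concrete instance of the trade-off Jay asks about: f(z) = 1/(1 + z^2) does tend to 0 at infinity in every direction, but only at the price of poles at z = +i and z = -i, exactly as Liouville's theorem demands:

```python
def f(z):
    # non-constant, -> 0 at infinity in every direction,
    # but necessarily singular: poles at z = +i and z = -i
    return 1.0 / (1.0 + z * z)

for z in (1e6, complex(1e6, 1e6), complex(0.0, 1e6 + 1.0)):
    print(abs(f(z)))          # all tiny

print(abs(f(1.000001j)))      # huge: we are next to the pole at z = i
```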
#45
(08/27/2009, 05:36 PM)bo198214 Wrote:
(08/27/2009, 04:38 PM)jaydfox Wrote: As for Kneser's construction, I gave an initial look and was quickly overwhelmed; it will take me some time to properly decipher it, so I can't really comment on it yet.
Did you read the original article? You know I summarized his article here. Kneser showed exactly the construction of a real-valued superlogarithm from the regular one. The regular slog maps the upper half-plane to some infinite region D (because it has singularities on the real axis at ). Now, by the Riemann mapping theorem, for any two simply connected regions (except the whole plane) there is a biholomorphic mapping between them. Kneser uses this biholomorphic mapping to map D back to the upper half-plane, and shows that the result again satisfies the Abel equation slog(exp(z)) = slog(z) + 1.

kslog maps G (the region bounded by L1 and exp(L1)) biholomorphically to a region unbounded in the imaginary direction. I showed here that this condition (about which we would agree that the islog also satisfies it) is a uniqueness criterion. So why am I arguing against its uniqueness? Because in the end I accept statements only with proof!
I haven't read the original (I think I've maybe skimmed over it, but my German isn't fluent). I'm still going through your description, and while I seem to have a rudimentary understanding of each step in the construction, I apparently don't understand them well enough to string the steps together mentally and have one of those "Ah-ha!" moments. I'm close, but I need to read through it again and work some of the steps out myself to get a better feel for them.

But it is interesting: I'm convinced of the uniqueness of the islog, and given that Kneser's construction seems to fit as well, I'm assuming they are numerically equivalent. I'm assuming it maintains the approximation of the regular slog near the fixed point? This is key: if it does, then it has the properties of the islog, and by the uniqueness criterion it should match the islog. And if so, I think I rather prefer this approach, as to me it's more intuitive (ironically!) than the intuitive slog (or rather, than the matrix solution method, i.e. solving an infinite system of linear equations). If, that is, I'm understanding it correctly, which, as I admit, I might not be.

You mentioned somewhere having a means to compute values for Kneser's slog: perhaps we should see how well they match up with the islog? That, or derive a power series expansion for Kneser's solution and see how it matches the islog.

And you mention the uniqueness criterion that you outlined in that paper: are you saying, then, that you don't consider this criterion proven?
~ Jay Daniel Fox
#46
(08/27/2009, 06:21 PM)jaydfox Wrote:
I apparently don't understand them well enough to string the steps together mentally and have one of those "Ah-ha!" moments.
Well, it took quite a while until I got what he is talking about. I hoped my description would make it easier; perhaps, however, it's just a write-up for my own understanding. But the basic idea is really: we already have some nice superlogarithm (the rslog), which however is not real on the real line. So we make it real on the real line with the help of the Riemann mapping theorem (which he does not mention once in his article; he just assumed that everyone of course knows what he means!).
The real kslog should map the upper half-plane H again to H. That's why we put .

Quote:I'm assuming it maintains the approximation of the regular slog near the fixed point?
Unfortunately I don't know. I have to understand better what this approximation looks like. Perhaps I can then derive that it is true for Kneser's slog as well.

Quote:You mentioned somewhere having a means to compute values for Kneser's slog: perhaps we should see how well they match up with the islog?
No. The Riemann mapping theorem is computationally hard, i.e., for two given regions (and how are they even given?) it is hard to compute the conformal map between them. I once gave it a vague try (there is indeed a constructive/approximative way to compute those conformal maps, also given in Henrici, but I think it is still way beyond current computer performance for our problem).
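For contrast with the hard general case: when the regions are simple enough, the biholomorphic map guaranteed by the Riemann mapping theorem is explicit. The Cayley transform w = i(1 + z)/(1 - z) takes the open unit disk onto the upper half-plane H; this is only an illustration of the kind of map involved, since Kneser's region D admits no such closed form:

```python
import cmath

def cayley(z):
    # Moebius map taking the open unit disk biholomorphically onto
    # the upper half-plane H
    return 1j * (1 + z) / (1 - z)

# spot-check: a ring of points inside the disk all land in H
for k in range(12):
    z = 0.9 * cmath.exp(2j * cmath.pi * k / 12)
    print(cayley(z).imag > 0)   # True for every sample
```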

However, Kneser also suggests a certain series expansion (with fractional exponents) at the fixed point, but I didn't find a way to compute the coefficients yet.

Quote:And you mention the uniquess criterion that you outlined in that paper: are you saying then that you don't consider this criterion proven?
I gave a proof, verify it yourself! But honestly, there are so many cases where I found errors in my derivations that I want to wait for the reviewers' opinions before I believe it myself. Though I have a good feeling about it :)