Attempting to compute the kslog numerically (i.e., Kneser's construction)
#11
(09/25/2009, 03:24 PM)jaydfox Wrote: Armed with the referenced paper, I began experimenting with the first phase, which involves creating a parameterized closed curve around a point of interest, and then solving an integration problem numerically, via matrix methods. I initially performed the test with functions I could compute exactly, in order to verify the results against a theoretically exact answer. The method is quite stable: indeed, in my tests, I can solve a 400x400 matrix with merely 53-bit (double) precision, and retain 46 bits (measured versus a solution using 1024 bits). That means I lose only 7 bits during the matrix solution!
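The precision claim above (only 7 bits lost in a 400x400 double-precision solve) can be sanity-checked with a small sketch: solve a deliberately well-conditioned system in float64 against a known exact solution and count the bits of agreement. The test matrix and names below are invented for illustration and are not the actual mapping matrix from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# A deliberately well-conditioned 400x400 test system (condition number <= 3):
q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
a = q + 2.0 * np.eye(n)                           # singular values lie in [1, 3]
x_true = rng.standard_normal(n)
b = a @ x_true

x = np.linalg.solve(a, b)                         # 53-bit (double) solve
rel_err = np.max(np.abs(x - x_true)) / np.max(np.abs(x_true))
bits_retained = -np.log2(rel_err)
print(f"bits retained: {bits_retained:.1f}")
```

For a well-conditioned system like this one, a double-precision solve keeps well over 40 of the 53 available bits, consistent with the behavior described above.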
I should point out an interesting quirk with this method. The authors of the paper describe a fairly rapid rate of convergence as the system size increases. However, based on numerous tests I performed, it seems that such rapid convergence can only be expected when two conditions are met:
1) The first and second derivatives of the parameterized boundary function must be fairly "tame", by which I mean that they do not grow too large compared to their average values. In particular, singularities seem to prevent rapid convergence. This condition is of course not met with the kslog, but was easily met by the epitrochoid function used by the authors.
2) The parameterization must be the "correct" one, such that the mapping to the unit disk yields a boundary with constant magnitude derivatives (moving around the disk at a constant rate). Any cyclic deviation from the "correct" parameterization results in poor convergence, though of course the smaller the deviation, the smaller the effect. So the better the initial "prior" parameterization, the better! As far as I could tell, the authors used the "correct" parameterization, but they weren't very explicit on the point.
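Condition 2 can be illustrated with a small sketch: given any initial parameterization of a closed curve, one can reparameterize it to constant speed (equal arc length per unit of parameter), which is the sense in which the mapped boundary moves around the disk at a constant rate. The function and example curve below are hypothetical, purely for illustration.

```python
import numpy as np

def constant_speed(curve, n=256, samples=4096):
    """Resample a closed curve z(t), t in [0,1), at ~equal arc-length steps."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    z = curve(t)
    dz = np.abs(np.diff(np.concatenate([z, z[:1]])))  # chord lengths
    s = np.concatenate([[0.0], np.cumsum(dz)])        # cumulative arc length
    s /= s[-1]                                        # normalize to [0, 1]
    t_ext = np.concatenate([t, [1.0]])
    # Invert s(t): parameter values that advance arc length uniformly
    return np.interp(np.linspace(0.0, 1.0, n, endpoint=False), s, t_ext)

# Example: an ellipse, whose linear parameterization is far from constant speed
ellipse = lambda t: np.cos(2 * np.pi * t) + 0.5j * np.sin(2 * np.pi * t)
t_new = constant_speed(ellipse)
```

Evaluating the curve at `t_new` gives points that are (approximately) equally spaced in arc length, so the derivative of the reparameterized curve has nearly constant magnitude.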

For the kslog, condition 2 could be met by parameterizing with the ksexp function, assuming we knew it ahead of time. I was tempted to use the islog, but I settled for testing a few different "naive" approaches. The simplest is to parameterize the (0,1) interval linearly. Surprisingly, the results with the linear parameterization were only slightly worse than those using fairly good approximations. I think this is because the effect of the singularities is the overriding hindrance to convergence, so other sources of inaccuracy are rather muted.

However, the mapping integral relies on the first and second derivatives, and the linear parameterization does not have a continuous second derivative. So I decided in the end to use the aforementioned cubic polynomial parameterization, which yields a continuous second derivative for the sexp "prior".
~ Jay Daniel Fox
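One standard way to obtain a cubic parameterization with a continuous second derivative, in the spirit of the cubic "prior" mentioned above, is a periodic cubic spline through boundary samples. This sketch uses SciPy's `CubicSpline`; the sample points are a stand-in circle, not the actual kslog boundary.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0.0, 1.0, 33)                     # 32 segments on [0, 1]
pts = np.column_stack([np.cos(2 * np.pi * t),     # stand-in boundary samples
                       np.sin(2 * np.pi * t)])
pts[-1] = pts[0]                                  # exact closure for periodicity

spline = CubicSpline(t, pts, bc_type='periodic')  # C^2 closed curve
# First and second derivatives, as needed by the mapping integral:
d1, d2 = spline.derivative(1), spline.derivative(2)
```

By construction, a cubic spline is C^2: the second derivative matches from both sides of every knot, which is exactly the continuity requirement the mapping integral imposes.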
#12
Jay, you once said
(08/27/2009, 06:21 PM)jaydfox Wrote: I apparently don't understand them well enough to string the steps together mentally and have one of those "Ah-ha!" moments. I'm close, but I need to read through it again and work some of the steps out myself to get a better feel for them.

I am curious whether those Ah-ha moments came later, or whether it was more a continuous increase of understanding. I guess there are not many people on the forum who have become acquainted with Kneser's construction, let alone computing the kslog.

