x↑↑x = -1 jaydfox Long Time Fellow Posts: 440 Threads: 31 Joined: Aug 2007 06/05/2014, 06:51 PM (06/05/2014, 01:53 PM)sheldonison Wrote: There are equations to show the mathematical equivalence of the theta mappings used in kneser.gp for real bases to Kneser's Riemann mapping, assuming convergence; I posted the relevant equations below. ... $\alpha(\text{tet}_b(z)) = z + \theta(z)\;\;\;\theta(z) = \sum_{n=0}^{\infty} a_n (\exp(2\pi i z))^n$ For real bases, the z+theta(z) mapping is mathematically equivalent to Kneser's Riemann mapping. Kneser's Riemann mapping function would be generated from the $a_n$ coefficients of the $\theta(z)$ mapping as follows. The Riemann mapping starts with $z + \theta(z)\;$, which is modified to $\exp(2\pi i (z + \theta(z)))$, and then we map $z \mapsto \frac{\ln(z)}{2\pi i}$: $\text{RiemannMapping}(z)= \exp(2\pi i \times( \frac{\ln (z)}{2\pi i} + \sum_{n=0}^{\infty} a_n z^n))$ $\text{RiemannMapping}(z)= z \times \exp( \sum_{n=0}^{\infty} 2\pi i a_n z^n)\;\;$, which generates Kneser's Riemann mapping from Sheldon's theta(z) coefficients.

I feel like Tommy sometimes (in a good way; I hope he takes this as a compliment). I feel like sometimes I can intuitively see what's going on, but I lack the rigorous mathematical tools either to "prove" what I believe intuitively or to put my intuition to work. (I see Tommy complaining a lot about people going back to Cauchy integrals, and I can understand his frustration. I'm still a newb with Cauchy integrals myself.) Intuitively, I have a good understanding of Kneser's method. But turning that intuition into calculations is the hard part. I don't know how to do Riemann mappings in general. I followed the steps outlined in the paper by Murid, Ali H. M. and Mohamed, Nurul Akmal (which I discussed in post #10 from this old thread). I tested it with basic functions at first, like x+x^2, just to make sure that I could make it work.
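The two equivalent forms of the Riemann mapping above can be checked numerically. Here is a minimal sketch in Python; the coefficients `a` are made up purely for illustration (the real $a_n$ come out of the kneser.gp theta(z) iteration), and the check only verifies the algebraic identity between the two displayed forms:

```python
import cmath

# Made-up stand-ins for the theta(z) coefficients a_n (illustration only;
# the real ones are produced by the kneser.gp iteration).
a = [0.1 + 0.05j, -0.02 + 0.01j, 0.003 - 0.002j]

def riemann_map_unsimplified(z):
    # exp(2*pi*i * (ln(z)/(2*pi*i) + sum_n a_n z^n)), the form before simplification
    theta_hat = sum(an * z**n for n, an in enumerate(a))
    return cmath.exp(2j * cmath.pi * (cmath.log(z) / (2j * cmath.pi) + theta_hat))

def riemann_map(z):
    # z * exp(sum_n 2*pi*i * a_n z^n), the simplified form
    theta_hat = sum(an * z**n for n, an in enumerate(a))
    return z * cmath.exp(2j * cmath.pi * theta_hat)

# The two forms agree away from z = 0, since exp(ln(z)) = z for the principal log.
z = 0.3 + 0.4j
print(abs(riemann_map_unsimplified(z) - riemann_map(z)))  # tiny (rounding only)
```

The agreement is exact up to floating-point rounding, because the only difference between the two expressions is composing exp with the principal logarithm.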
Then I tested with functions with simple singularities on the boundary of a disc, like ln(x), or ln(ln(x)), or ln^[3](x), etc. Once I was sure that I had the right set of equations, I proceeded to analyze the Kneser construction. Even though the math worked, and even though I understood the Riemann mapping in principle, I didn't understand how/why the math worked. It's like knowing that A -> B -> C, and you understand A, and you can confirm C, but you don't understand B. That's where I'm at with the Kneser method. So I still have some basic complex analysis to study and digest and internalize, before I can really analyze your method. I'd love to be able to understand it, so I can compare it to the way I constructed it, and see what the differences are, and see if/how they converge on the same solution. Sadly, I've lost my Kneser construction code. Once I finish my slog library, that's my next big project, to rebuild the Kneser construction the way I did last time. ~ Jay Daniel Fox

sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 06/05/2014, 08:25 PM (This post was last modified: 06/05/2014, 08:32 PM by sheldonison.) (06/05/2014, 06:51 PM)jaydfox Wrote: ....Intuitively, I have a good understanding of Kneser's method. But turning that intuition into calculations is the hard part. I don't know how to do Riemann mappings in general. I followed the steps outlined in the paper by Murid, Ali H. M. and Mohamed, Nurul Akmal (which I discussed in post #10 from this old thread) ... Sadly, I've lost my Kneser construction code. Once I finish my slog library, that's my next big project, to rebuild the Kneser construction the way I did last time.

Jay, I read and re-read those threads of yours at least 50 times; I wanted sooo desperately to understand Kneser's construction. And this was after I had accidentally stumbled on my z+theta(z) approach, which I assumed was Kneser's construction, until it slowly dawned on me that it wasn't.
I'm embarrassed to admit that it was many, many months later before I finally understood Kneser's construction well enough to generate the mathematical equations relating it to the algorithm I used in the program I published here, which I blithely called "kneser.gp". Here's my opinion now. Kneser's construction is useful primarily because it gives a theoretical framework to prove the existence of the tetration solution. Numerically, it can't possibly give results of any reasonable precision, due to the really nasty singularity of the Abel function at 0. I'm amazed you got Riemann mapping results as good as you did! But the algorithm used in the kneser.gp program is really useful, since it is probably the fastest and best-converging solution we have for tetration, and it has the added bonus of having a theoretical equivalence to Kneser's Riemann mapping. The only thing is, there is this small detail that it's hard to prove the algorithm actually converges..... But assuming it converges, then it is equivalent to Kneser's Riemann mapping. One day, I need to actually publish some of this stuff in one of those math journals or something. Thanks for your kind posts, both now, and for your incredibly well written posts in the past! - Sheldon

jaydfox Long Time Fellow Posts: 440 Threads: 31 Joined: Aug 2007 06/05/2014, 10:26 PM (06/05/2014, 08:25 PM)sheldonison Wrote: Numerically, it can't possibly give results of any reasonable precision, due to the really nasty singularity of the Abel function at 0.

Ha! Don't I know it! You can see the convergence analysis in my post. Using a 2835x2835 system, and using 11-point Newton-Cotes interpolation, I still barely managed to get convergence on the order of 10^-7, or about 20-25 bits of accuracy.
For comparison, the intuitive solution with a matrix that large would already get about 30-50 bits of accuracy (depending on where you measure it), and my accelerated version tends to get twice the accuracy (in bits) of the intuitive solution, for a given system size. However, you mentioned that the singularity is "really nasty". I used to think so too, but when I was analyzing it last year, I discovered it's actually not that bad. I did some Riemann mapping tests (after rewriting my code), and I used iterated logarithms as a test subject. The Kneser singularities are basically an infinitely iterated logarithm. If you're curious about the method, I took a disk with a unit circumference, centered directly above an iterated logarithmic singularity. It's numerically unstable for a single logarithm, but even by the second iteration [i.e., ln(ln(x))], it starts to behave. So, start with a disk of radius 1/(2*pi), centered at I/(2*pi). Thus, as you go around the disk, at unit intervals, you hit an iterated logarithmic singularity at 0. I then used this as my "known solution", and used it to analyze the Riemann mapping. I would parametrize the circle (the boundary of the disk), using the "correct" parametrization to test precision. By "correct", I mean: $0 \leq \tau < 1\\ t(\tau) = \tau\\ z=\frac{\exp(2 \pi i t(\tau))-1}{2\pi i}$ Note that at t=0, we get z=0. At t=1/2, we get z=i/pi. At t=1, we are back to z=0. Once I had the equations working stably with the "correct" parametrization, I would change t(tau) to other 1-cyclic parametrizations (with or without continuous second or first derivatives). For example: $\begin{array}{lll} C^{\infty} & : & t(\tau) = \tau + \frac{\sin(2\pi\tau)}{100}\\ C^{2} & : & t(\tau) = \tau + \tau^2 - 2\tau^3 + \tau^4\\ C^{1} & : & t(\tau) = 1.5\tau - 1.5\tau^2 + \tau^3 \end{array}$ With these alternate parametrizations (simulating the unknown parametrization of tetration), I could test convergence to a known solution.
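Jay's test setup above is easy to reproduce. A small sketch of the boundary parametrization and the three alternate t(τ) maps follows; nothing here is from his lost code, it simply transcribes the formulas in the post:

```python
import math
import cmath

def boundary_z(t):
    # Circle of radius 1/(2*pi) centered at I/(2*pi): z(0) = 0, z(1/2) = i/pi,
    # z(1) = 0 again, so the iterated-log singularity is hit once per unit of t.
    return (cmath.exp(2j * math.pi * t) - 1) / (2j * math.pi)

def t_smooth(tau):  # C-infinity perturbation of the identity parametrization
    return tau + math.sin(2 * math.pi * tau) / 100

def t_c2(tau):      # first and second derivatives match at the seam tau = 0, 1
    return tau + tau**2 - 2 * tau**3 + tau**4

def t_c1(tau):      # first derivative matches at the seam; second derivative jumps
    return 1.5 * tau - 1.5 * tau**2 + tau**3

# Sanity checks from the post: z(0) = 0 and z(1/2) = i/pi
print(abs(boundary_z(0.0)), abs(boundary_z(0.5) - 1j / math.pi))
```

As a check on the smoothness labels: for t_c1, the second derivative is -3 + 6τ, which jumps from -3 at τ=0 to +3 at τ=1, so the seam is only C¹; for t_c2 the second derivative 2 - 12τ + 12τ² takes the value 2 at both endpoints, so the seam is C².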
Convergence was similar to the analysis in my old post (i.e., slow), but to my surprise (and delight!), convergence was greatly enhanced with Fejer/Clenshaw-Curtis/Gauss-Legendre quadrature. Also, discontinuous first and second derivatives didn't prevent convergence. Anyway, I'll have my new Kneser library finished and published in the forum, hopefully some time in the next few months. With code and more pretty pictures, hopefully this will be easier to understand. ~ Jay Daniel Fox

tommy1729 Ultimate Fellow Posts: 1,370 Threads: 335 Joined: Feb 2009 06/05/2014, 10:29 PM (06/05/2014, 06:51 PM)jaydfox Wrote: I feel like Tommy sometimes (in a good way, I hope he takes this as a compliment). I feel like sometimes I can intuitively see what's going on, but I lack the rigorous mathematical tools to either "prove" what I believe intuitively, or to be able to put my intuition to work. (I see Tommy complaining a lot about people going back to Cauchy integrals, and I can understand his frustration. I'm still a newb with Cauchy integrals myself.)

I guess the compliment is that I have "intuition" or something. So euh, thanks or something. But what you say about Cauchy is - I think - actually the other way around. That is to say, people use Cauchy-type ideas that are not as formal as they think they are. They lack proof rather than me. Just because some integrals are equal to 0 does not imply a function is analytic. In other words, they turn the theorems "upside down" and do other handwaving stuff. My opinion is subtle: 1) I believe in the uniqueness and existence. 2) I do not believe in the computation; the method does not work. 3) I'm not completely sure about 2), or whether this sexp is a new one. But I need evidence, and I claim there is none. The impression that I leave might be unbalanced in the sense that I post and tend not to delete stuff like half-baked ideas or even mistakes. I do not "polish" my math or ideas. I hand in the "rough draft" at the exam.
I do not "believe in hiding and forgetting mistakes". Quite the opposite, actually. This is a forum and not a paper, if you know what I mean. Sometimes I wonder if teachers should point out mistakes they made themselves more often to students. On the other hand, I suspect they "warn about mistakes often made by students" which were actually mistakes they (once) made themselves, but that sounds better. Although one does not exclude the other. Anyway, it is not that I could not "follow up" the Cauchy ideas; rather, my arguments against the method were not so well understood by the majority. Despite being skeptical, I still mention Cauchy's method because, once again, they did compute something! And there has been no proof for either the optimists or the pessimists. regards tommy1729

sheldonison Long Time Fellow Posts: 639 Threads: 22 Joined: Oct 2008 06/06/2014, 01:26 PM (This post was last modified: 06/06/2014, 02:59 PM by sheldonison.) (06/05/2014, 10:26 PM)jaydfox Wrote: (06/05/2014, 08:25 PM)sheldonison Wrote: Numerically, it can't possibly give results of any reasonable precision, due to the really nasty singularity of the Abel function at 0. Ha! Don't I know it! You can see the convergence analysis in my post. Using a 2835x2835 system, and using 11-point Newton-Cotes interpolation, I still barely managed to get convergence on the order of 10^-7, or about 20-25 bits of accuracy. For comparison, the intuitive solution with a matrix that large would already get about 30-50 bits of accuracy (depends on where you measure it), and my accelerated version tends to get twice the accuracy (in bits) of the intuitive solution, for a given system size. However, you mentioned that the singularity is "really nasty". I used to think so too, but when I was analyzing it last year, I discovered it's actually not that bad. I did some Riemann mapping tests (after rewriting my code), and I used iterated logarithms as a test subject.
The Kneser singularities are basically an infinitely iterated logarithm.

I have no idea how you got results that are that good for a Kneser Riemann mapping. I just generated a 2000 term Taylor series for the Riemann circle mapping, from the theta(z) mapping, using the equations I posted. If $\Im(z)>0.002$, then a 2000 term series is sufficient for an accuracy of approximately 10^-15 (about 15 decimal digits) for the end result sexp(z), and I verified that. But for a 2000 term series, at the real axis, I get an error term a little bit bigger than 10^-4 when compared to sexp(z). That's nowhere near 10^-7. Anyway, here are my error terms, followed by the first 60 terms of the Riemann mapping Taylor series. If Jay gets his code working, perhaps he can post his series.

Code:
error term at 0.002*I
1.48539974604412 E-15   -0.500000000000000 + 0.00200000000000000*I
1.47489069472385 E-15   -0.450000000000000 + 0.00200000000000000*I
1.46121503057057 E-15   -0.400000000000000 + 0.00200000000000000*I
1.44395154499657 E-15   -0.350000000000000 + 0.00200000000000000*I
1.42240825152162 E-15   -0.300000000000000 + 0.00200000000000000*I
1.39551703699474 E-15   -0.250000000000000 + 0.00200000000000000*I
1.36161960857917 E-15   -0.200000000000000 + 0.00200000000000000*I
1.31799885508938 E-15   -0.150000000000000 + 0.00200000000000000*I
1.25966680569473 E-15   -0.100000000000000 + 0.00200000000000000*I
1.17508969251408 E-15   -0.0500000000000000 + 0.00200000000000000*I
1.05484803687254 E-15    0.E-67 + 0.00200000000000000*I
1.42448041167858 E-15    0.0500000000000000 + 0.00200000000000000*I
1.57521942898691 E-15    0.100000000000000 + 0.00200000000000000*I
1.70167669559400 E-15    0.150000000000000 + 0.00200000000000000*I
1.81712691376729 E-15    0.200000000000000 + 0.00200000000000000*I
1.92663904167934 E-15    0.250000000000000 + 0.00200000000000000*I
2.03279073901222 E-15    0.300000000000000 + 0.00200000000000000*I
2.13706715820979 E-15    0.350000000000000 + 0.00200000000000000*I
2.24034990997028 E-15    0.400000000000000 + 0.00200000000000000*I
2.34311301352602 E-15    0.450000000000000 + 0.00200000000000000*I
2.44549405599774 E-15    0.500000000000000 + 0.00200000000000000*I

error term at the real axis
0.000122279589843505    -0.500000000000000 + 0.E-67*I
0.000121410233128676    -0.450000000000000 + 0.E-67*I
0.000120278572589665    -0.400000000000000 + 0.E-67*I
0.000118849428912911    -0.350000000000000 + 0.E-67*I
0.000117064967417645    -0.300000000000000 + 0.E-67*I
0.000114835709026505    -0.250000000000000 + 0.E-67*I
0.000112022145936091    -0.200000000000000 + 0.E-67*I
0.000108393990959996    -0.150000000000000 + 0.E-67*I
0.000103522306328356    -0.100000000000000 + 0.E-67*I
0.0000963767408324917   -0.0500000000000000 + 0.E-67*I
0.0000486991561415855    0.E-67 + 0.E-67*I
0.000117229100049087     0.0500000000000000 + 0.E-67*I
0.000129647505486836     0.100000000000000 + 0.E-67*I
0.000140069525671477     0.150000000000000 + 0.E-67*I
0.000149581759266669     0.200000000000000 + 0.E-67*I
0.000158602201407647     0.250000000000000 + 0.E-67*I
0.000167343698487192     0.300000000000000 + 0.E-67*I
0.000175928857373946     0.350000000000000 + 0.E-67*I
0.000184430363192777     0.400000000000000 + 0.E-67*I
0.000192887150257634     0.450000000000000 + 0.E-67*I
0.000201310257945835     0.500000000000000 + 0.E-67*I

Here is the Riemann mapping series:

Code:
{RiemannMap=
 x^ 1* (-327.7900737499296362245223 - 23.90211663662640951715732*I)
+x^ 2* (-185.9456815203626111385993 - 21.53372742928080246263683*I)
+x^ 3* (-130.5021375740268760564963 - 18.40398494742714600499322*I)
+x^ 4* (-100.7554518126267253224986 - 16.01165962890605241739091*I)
+x^ 5* (-82.14559469605831107094789 - 14.19251002958850769839935*I)
+x^ 6* (-69.38293686758304392583564 - 12.77142709975804454542933*I)
+x^ 7* (-60.07614697028410621039645 - 11.63101188033707572674404*I)
+x^ 8* (-52.98403795063564504697256 - 10.69456145922465454160790*I)
+x^ 9* (-47.39745732289376085443843 - 9.910756323465195981465028*I)
+x^10* (-42.88143943638113746945703 - 9.244178237858093337857055*I)
+x^11* (-39.15419551272454233078916 - 8.669646366305387333487256*I)
+x^12* (-36.02505166964368981700397 - 8.168776352931145519089286*I)
+x^13* (-33.36032741603822724385750 - 7.727829338414107315386952*I)
+x^14* (-31.06349373678005276058250 - 7.336326554807253921625497*I)
+x^15* (-29.06308871264384926429700 - 6.986131482096766468044178*I)
+x^16* (-27.30506627723899429124894 - 6.670825910226712052166535*I)
+x^17* (-25.74778878750429139698420 - 6.385275832487105490648721*I)
+x^18* (-24.35865491006778095837144 - 6.125323046096789509916112*I)
+x^19* (-23.11177149208846981595736 - 5.887561907396471181236202*I)
+x^20* (-21.98631058341126369713516 - 5.669174971195849915165872*I)
+x^21* (-20.96532720981577588630165 - 5.467810116389373439472789*I)
+x^22* (-20.03489378164942227447907 - 5.281487401183351085916574*I)
+x^23* (-19.18345636223462436180240 - 5.108527555489849135347319*I)
+x^24* (-18.40134912867140525038615 - 4.947496445373594090916479*I)
+x^25* (-17.68042342683887284539206 - 4.797161481768521336801208*I)
+x^26* (-17.01376104147164494207010 - 4.656457068689557924111830*I)
+x^27* (-16.39545017500698181535013 - 4.524456968352042340209790*I)
+x^28* (-15.82040868780648458805033 - 4.400352013216143946839389*I)
+x^29* (-15.28424335548893485935653 - 4.283431990561650601367138*I)
+x^30* (-14.78313685745141748617509 - 4.173070811871996871871108*I)
+x^31* (-14.31375632090695905289909 - 4.068714289432567183874581*I)
+x^32* (-13.87317876882008282248580 - 3.969869998212339262685917*I)
+x^33* (-13.45882993354894505984508 - 3.876098817567044447093479*I)
+x^34* (-13.06843372019828813662486 - 3.787007835259772100614620*I)
+x^35* (-12.69997021692735288669247 - 3.702244363303377394209595*I)
+x^36* (-12.35164061114453141461855 - 3.621490866598529579255885*I)
+x^37* (-12.02183772117189722174859 - 3.544460645181984030579682*I)
+x^38* (-11.70912112148618838671693 - 3.470894141964731753824588*I)
+x^39* (-11.41219604687566690125555 - 3.400555772229562073761104*I)
+x^40* (-11.12989542195876179277337 - 3.333231190431880029342656*I)
+x^41* (-10.86116448862628774538233 - 3.268724925173703485610012*I)
+x^42* (-10.60504760334181465513582 - 3.206858325479057669041820*I)
+x^43* (-10.36067685502226449503764 - 3.147467771358334465163649*I)
+x^44* (-10.12726221705588583477331 - 3.090403109621203160400691*I)
+x^45* (-9.904082997408298909140751 - 3.035526282376254789350546*I)
+x^46* (-9.690480391398340144605118 - 2.982710120945907913648327*I)
+x^47* (-9.485850974651652189477772 - 2.931837282265036591517706*I)
+x^48* (-9.289641000551706203919960 - 2.882799308407712700307342*I)
+x^49* (-9.101341388441679990739435 - 2.835495792845282582443991*I)
+x^50* (-8.920483306853164130678297 - 2.789833639497083468923510*I)
+x^51* (-8.746634270909107793359390 - 2.745726402685078925015440*I)
+x^52* (-8.579394685369404944291408 - 2.703093697819645491229504*I)
+x^53* (-8.418394775035100065267641 - 2.661860674085207203683352*I)
+x^54* (-8.263291852782031300976388 - 2.621957541609435214461432*I)
+x^55* (-8.113767882661989264022255 - 2.583319146627242910994307*I)
+x^56* (-7.969527301534856323984202 - 2.545884589022495921095650*I)
+x^57* (-7.830295067777372297714664 - 2.509596877372102599887321*I)
+x^58* (-7.695814909914436622394283 - 2.474402617250175412680592*I)
+x^59* (-7.565847751668658066339278 - 2.440251729091720368616231*I)
+x^60* (-7.440170293030741768660035 - 2.407097192380241168765835*I)
}

skipping to the 2000th term:

Code:
x^2000*(-0.2110504350274731041729573 - 0.1103488141396473954206754*I)

I went back and generated a 4000 term series too:

Code:
x^4000*(-0.1033386085272496424048194 - 0.05795348899039744714402679*I)

I would estimate that each doubling of the number of terms would give you one additional binary bit of accuracy. So if a 2000 term series is 10^-4, then a 2,000,000 term series might be 10^-7. Interestingly, you can calculate sexp(z) along a unit circle in the upper half of the complex plane very accurately with this series.
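Sheldon's one-bit-per-doubling estimate is what you would expect from a logarithmic-type singularity on the circle of convergence; the posted coefficients decay roughly like 1/n, which fits that picture. A toy illustration of the scaling, using -ln(1-z) rather than tetration (its Taylor tail on the unit circle behaves like z^N/(N(1-z)), so doubling the truncation point N roughly halves the error):

```python
import cmath

def log_series(z, N):
    # Partial sum of -ln(1 - z) = sum_{n>=1} z^n / n, truncated after N terms;
    # the 1/n coefficient decay mimics the Riemann mapping series above.
    return sum(z**n / n for n in range(1, N + 1))

z = cmath.exp(0.5j)          # on the unit circle, away from the singularity at z = 1
exact = -cmath.log(1 - z)
err_1000 = abs(log_series(z, 1000) - exact)
err_2000 = abs(log_series(z, 2000) - exact)
print(err_1000, err_2000)    # the second error is roughly half the first
```

This is only an analogy for the convergence rate; it says nothing about the actual Riemann mapping beyond the shared 1/n-type coefficient decay.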
If $\Im(z)>0.004$, this series is accurate to 25 decimal digits; and if you use it along with conj(f(conj(z))) in the lower half of the complex plane, you can recover an sexp(z) unit circle Taylor series accurate to 25 decimal digits.... - Sheldon

jaydfox Long Time Fellow Posts: 440 Threads: 31 Joined: Aug 2007 06/06/2014, 06:17 PM (06/06/2014, 01:26 PM)sheldonison Wrote: (06/05/2014, 10:26 PM)jaydfox Wrote: (06/05/2014, 08:25 PM)sheldonison Wrote: Numerically, it can't possibly give results of any reasonable precision, due to the really nasty singularity of the Abel function at 0. Ha! Don't I know it! You can see the convergence analysis in my post. Using a 2835x2835 system, and using 11-point Newton-Cotes interpolation, I still barely managed to get convergence on the order of 10^-7, or about 20-25 bits of accuracy. I have no idea how you got results that are that good for a Kneser Riemann mapping. I just generated a 2000 term Taylor series for the Riemann circle mapping, from the theta(z) mapping, using the equations I posted. If $\Im(z)>0.002$, then a 2000 term series is sufficient for an accuracy of approximately 10^-15 for the end result sexp(z), and I verified that. But for a 2000 term series, at the real axis, I get an error term a little bit bigger than 10^-4 when compared to sexp(z). That's nowhere near 10^-7.

Bear in mind, we're calculating the mapping in a different way. They should be equivalent in the infinite limit, but they will be different for finite truncations. I'm not sure why mine would have converged faster, though, since they should be equivalent to a first-order approximation.

Quote: I would estimate that each doubling of the number of terms would give you one binary bit additional accuracy.

If you look back at my original analysis, I stated "convergence is slow, yielding just under 3 bits of precision for each tripling of system size. This more or less corresponds to 2 bits per doubling".
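The conj(f(conj(z))) trick Sheldon mentions above is Schwarz reflection: a function that is real-valued on the real axis satisfies f(conj(z)) = conj(f(z)), so values in the lower half plane come for free from values in the upper half. A sketch using exp(z) as a stand-in for sexp(z) (both are real on the relevant part of the real axis), recovering unit-circle Taylor coefficients with a plain discrete Fourier transform:

```python
import cmath
import math

def f_upper(z):
    # Stand-in for a function we only evaluate in the closed upper half plane.
    assert z.imag >= 0
    return cmath.exp(z)

def f_full(z):
    # Schwarz reflection into the lower half plane: f(z) = conj(f(conj(z))).
    if z.imag >= 0:
        return f_upper(z)
    return f_upper(z.conjugate()).conjugate()

# Sample f on the unit circle, using the reflection for the lower-half samples.
N = 64
samples = [f_full(cmath.exp(2j * math.pi * j / N)) for j in range(N)]

def taylor_coeff(k):
    # Trapezoidal Cauchy integral on |z| = 1, i.e. a discrete Fourier transform.
    return sum(s * cmath.exp(-2j * math.pi * j * k / N)
               for j, s in enumerate(samples)) / N

print(taylor_coeff(3))  # close to 1/3! = 0.1666...
```

For exp the recovered coefficients are 1/k! to near machine precision, since the aliasing error of the N-point DFT involves only the negligible coefficients beyond order N.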
I'll try to wrap up my slog library in the next couple weeks so I can go back to working on my Kneser code. I have a partial Kneser library that I worked on last year. I hadn't gotten as far as testing it with the Kneser construction. The last time I worked on it, I was still testing the Riemann mapping code with the iterated logarithm, which I mentioned in post #33 of this thread. ~ Jay Daniel Fox