Cauchy Integral Experiment

09/28/2009, 10:33 PM
Aagh... it looks like I've noticed additional problems now... geez...
For A = 10 and 209 nodes, with 32 normalized iterations followed by 6 unnormalized ones, it works well, giving a residual at 0 of magnitude ~4.68 x 10^-7. Yet if I bump the node count to 801 and A to 24 it goes wrong: the residual degrades horrifically to magnitude 0.01 after 32 normalized iterations, and 6 unnormalized ones just make it worse, 0.02.

The graphs attached to this show what happens after 0, 8, 16, 24, and 32 iterations (normalized) respectively. It's interesting. It's like "waves" are being emitted from the center and rippling on out toward the ends. No, seriously, flip between the pics. You can see the "waves" actually moving. So now I have a nice wave tank. Which I suppose is useful as a neat gimmick, but not for calculating tetration.

Plotting code (v = node array):
Code: psploth(n=1,801,[real(v[floor(n+0.5)]), imag(v[floor(n+0.5)])])

What is wrong? Code for the newest program:
Code: \p 16;
09/30/2009, 06:18 AM
(09/28/2009, 10:33 PM)mike3 Wrote: ...It's like "waves" are being emitted from the center and rippling on out toward the ends...
Mike, there is nothing wrong; you seem to be doing well. You can make a beautiful animation with these waves. I also had similar waves in the first versions of my code. In order to get convergence and evaluate the tetrational, update first the even nodes, going from the center to the ends, and then the odd nodes, going from the ends to the center. (In my case, the number of nodes is even, so I can do both in a single loop, but it is the analogue of what I suggest above.) This damps the waves you mention. Please plot the first few iterations, and post them.
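The sweep order Kouznetsov describes (even nodes from the center outward, then odd nodes from the ends inward) can be sketched on a toy relaxation problem. A minimal sketch in Python rather than the thread's PARI/GP; the neighbor-averaging update here is a stand-in for the actual Cauchy-integral node update, not Mike's code:

```python
def relax_sweep(u):
    """One sweep: even-index interior nodes from the center outward,
    then odd-index interior nodes from the ends inward."""
    n = len(u)
    c = n // 2
    evens = sorted((i for i in range(1, n - 1) if i % 2 == 0),
                   key=lambda i: abs(i - c))
    odds = sorted((i for i in range(1, n - 1) if i % 2 == 1),
                  key=lambda i: -abs(i - c))
    for i in evens + odds:
        # toy update: each node relaxes toward the mean of its neighbors
        u[i] = 0.5 * (u[i - 1] + u[i + 1])
    return u

# fixed boundary values; the exact fixed point is the linear ramp i/(n-1)
n = 21
u = [0.0] * n
u[-1] = 1.0
for _ in range(800):
    relax_sweep(u)
err = max(abs(u[i] - i / (n - 1)) for i in range(n))
```

Because each node update already sees the freshly updated neighbors, this ordering behaves like a Gauss-Seidel sweep and suppresses the traveling "waves" a simultaneous (Jacobi-style) update can produce.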
10/01/2009, 08:57 AM
Well I improved the code with this technique:
Code: /* Do an iteration over a vector v representing values of tetration from -iA to +iA */

I update the nodes starting from the center one, then every other one out; then I go from the end nodes inward (or the next closest nodes to those end nodes if they were already done in the first pass). It seems to work well for node numbers like 211, 215, 219, etc., but not 209, 213, 217, etc., where it now diverges badly. Why is that?
10/01/2009, 01:42 PM
Mike, I looked at your code, and I do not understand how it works.
1. If you have calculated the new value at some point z, you have no need to calculate the same for z^*. I have no compiler for the language you use, but I suspect it has an operation of complex conjugation; why do you not use it?
2. Will you post the picture you got with this code? It is easier to look at than to read the code.
3. Quote: Well I improved the code with this technique
You do not know whether you improve it or make it worse while you do not plot the residual.
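The symmetry being referred to is F(z^*) = F(z)^*, which holds for real bases; when it holds, only the upper-half nodes need computing and the lower half can be filled by conjugation. A minimal sketch in Python (not the thread's PARI/GP); `update_node` is a hypothetical stand-in for whatever the iteration computes at one node, with exp used only because it shares the reflection symmetry:

```python
import numpy as np

def update_node(z):
    # hypothetical per-node computation; exp() satisfies the
    # reflection symmetry f(conj(z)) == conj(f(z))
    return np.exp(z)

# nodes on the imaginary axis, arranged so z[k] == conj(z[N-1-k])
N = 9
z = 1j * np.linspace(-2.0, 2.0, N)

# compute only the upper half (indices N//2 .. N-1), mirror the rest
F = np.empty(N, dtype=complex)
for k in range(N // 2, N):
    F[k] = update_node(z[k])
for k in range(N // 2):
    F[k] = np.conj(F[N - 1 - k])

# compare against computing every node directly
F_full = np.exp(z)
gap = np.max(np.abs(F - F_full))
```

This halves the per-iteration work, which is exactly why Kouznetsov asks about it; Mike's stated reason for not using it (complex bases lack the symmetry) is a legitimate trade-off.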
Well, I say "improved" because when it does converge, it will converge for a very high number of nodes. Perhaps, though, that isn't accurate, because it now doesn't work for numbers of nodes like 409, 413, 417, etc. I just realized that 209 is not enough; the node count needs to be higher than that.
For node counts like 411, 415, 419, etc., even very high ones, the code converges to the smooth graph, so I do not see the need to bother with graphing that case. At 811 nodes with A = 24 it converges exquisitely, with the residual after 32 normalized followed by 6 unnormalized iterations having a magnitude smaller than 10^-9. If you want, the graphs for the case of 409 nodes and A = 14 are shown below, after 6 to 10 iterations. After the first 5 it seems to be settling, then it does, well, you can see. Finally, as for not using the conjugate: I do this because I'm thinking about attempting to use it on certain complex-number bases eventually, which will not have the conjugate symmetry. So I'd like to see if it is possible to get it to work without that.
10/02/2009, 12:23 AM
(10/01/2009, 07:57 PM)mike3 Wrote: Well, I say "improved" because when it does converge it will converge for very high number of nodes.
After some tens of iterations, do your curves become smooth? You may have a problem related to the different weights of nodes in the Simpson rule: the even nodes are twice as "heavy" as the odd ones. Updating every third node could boost the convergence, but the code becomes cumbersome. With an even number of nodes I did not have such a problem; the weight of the even nodes is similar to that of the odd ones (except at the tips, but there the weight of each node is small, so it does not matter), and I could do it in a single loop.

Quote: ..At 811 nodes with A = 24 it converges exquisitely with the residual after 32 normalized followed by 6 unnormalized iterations having a magnitude smaller than 10^-9.
Congratulations! I see, in improving the precision you go by the "extensive" way: you increase the number of nodes instead of changing to some higher-order approximation; each 3 additional significant digits cost you an order of magnitude in CPU time. If you plan to get precision better than that of complex<double> (15 digits), try to finish your calculation in this century.

Another note: do you still evaluate the residual at a few single points? Could you plot the residual F(z) - exp(F(z-1)) in the complex z-plane?
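The weight imbalance mentioned above comes from the composite Simpson pattern (h/3)(1, 4, 2, 4, 2, ..., 4, 1): interior even-index nodes carry weight 2 while odd-index nodes carry weight 4. A minimal sketch in Python (not the thread's PARI/GP) building those weights and checking them on an integral Simpson handles exactly:

```python
def simpson_weights(n, a, b):
    """Composite Simpson weights for n equally spaced nodes on [a, b].
    n must be odd (so the number of subintervals is even)."""
    if n < 3 or n % 2 == 0:
        raise ValueError("composite Simpson needs an odd node count")
    h = (b - a) / (n - 1)
    w = [2.0 if i % 2 == 0 else 4.0 for i in range(n)]
    w[0] = w[-1] = 1.0          # the tips are lighter still
    return [h / 3.0 * wi for wi in w]

n, a, b = 9, 0.0, 1.0
w = simpson_weights(n, a, b)
xs = [a + (b - a) * i / (n - 1) for i in range(n)]
# composite Simpson is exact for cubics: integral of x^3 on [0,1] is 1/4
approx = sum(wi * x**3 for wi, x in zip(w, xs))
```

The alternating 4/2 pattern is what couples the iteration differently to odd- and even-index nodes, which is the mechanism Kouznetsov suspects behind the parity-dependent convergence.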
10/02/2009, 02:59 AM
Yes, it turns smooth if the number of nodes meets the requirement I mentioned; otherwise it fails, as the posted graphs show.
It's interesting that you mention the weights, though I do not update nodes during the Simpson procedure; and why does it seem that odd node counts that converge alternate with those that don't? As for using a higher-order quadrature: yeah, it would be better, but as you can see there are still a few issues to work out; then I'll switch to a more sophisticated method (cubic, maybe even Gauss-Legendre). That's what I'm after here, trying to get the bugs out so they don't bite me later.
10/02/2009, 07:12 AM
(10/02/2009, 02:59 AM)mike3 Wrote: Yes, it turns smooth if the number of nodes meets the requirement I mentioned, otherwise it fails as the posted graphs show.
1. Can you explicitly formulate the requirement? What value of A and how many nodes should one use in order to get 3 decimal digits, to get 9 decimal digits, to get 12 decimal digits, etc.?
2. Will you print the table of values of your approximation of tetration along the imaginary axis?

Quote: It's interesting you mention about the weight, though I do not update nodes during the Simpson procedure, and why does it seem that odd node numbers that converge alternate with those that don't?
This may be due to the jumping distribution of weights at the nodes of the Simpson formula. The Simpsons in the cartoon are better than in the precise evaluation of the tetrational.

Quote: ..are still a few issues to work out, then I'll switch to a more sophisticated method (cubic, maybe the Gauss-Legendre even).
You already use the cubic one. You may take the algorithm for the Gauss-Legendre quadrature at http://en.citizendium.org/wiki/GauLegExample/code ; the routine for the evaluation takes only 11 lines at the very beginning of the code, and it is easy to translate to any language.

Quote: That's what I'm after here, trying to get out the bugs so they don't bite me later
I do not understand how you treat the conjugation; some bug may be there. If you do not use F(z^*) = F(z)^*, then the simultaneous update of F(z) and F(z^*) does not make any sense.

3. In order to catch bugs, calculate the residual on some sufficiently dense mesh and plot it. The mesh is a really good tool to catch bugs.
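Gauss-Legendre nodes and weights are available ready-made in most numeric libraries; a minimal sketch in Python using NumPy's `leggauss` (the thread's code is PARI/GP, and the linked citizendium routine is a separate implementation of the same rule):

```python
import numpy as np

# 11-point Gauss-Legendre rule on [-1, 1]: exact for polynomials up to
# degree 2*11 - 1 = 21, far beyond composite Simpson's degree 3
x, w = np.polynomial.legendre.leggauss(11)

# sanity check: the weights integrate the constant 1 to the length of [-1, 1]
total = w.sum()

# integrate exp over [-1, 1]; exact value is e - 1/e
approx = np.sum(w * np.exp(x))
exact = np.e - 1.0 / np.e
```

This is the "intensive" alternative Kouznetsov advocates: raising the order of the rule instead of multiplying the node count.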
The explicit requirement, to be "guaranteed" convergence, appears to be that the node count N be odd with (N-1)/2 odd (or equivalently, (N-1)/2+1 even), i.e. numbers like 211, 215, etc. One may be able to get away with other numbers if they are sufficiently small (I think 209 works, but 409 does not). I'm not sure precisely where it starts to fail.
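The parity condition above can be checked mechanically; a quick sketch in Python (the thread's code is PARI/GP) listing which odd node counts near 209 satisfy it:

```python
def converges_by_rule(n):
    """Mike's empirical rule: n odd and (n - 1)/2 odd."""
    return n % 2 == 1 and ((n - 1) // 2) % 2 == 1

good = [n for n in range(209, 230, 2) if converges_by_rule(n)]
bad = [n for n in range(209, 230, 2) if not converges_by_rule(n)]
# good -> [211, 215, 219, 223, 227]
# bad  -> [209, 213, 217, 221, 225, 229]
```

Equivalently, the rule picks out N ≡ 3 (mod 4), which matches the reported good counts 211, 215, 219 and bad counts 209, 213, 217.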
I'm not sure how calculating the values on the imaginary axis, where it succeeds in converging, is useful for debugging this, though. But if you really want it, here's what I get using 811 nodes (as this meets the given requirement) and 32 normalized followed by 6 unnormalized iterations with A = 24:
Code: residual mag at 0: 0.0000000007856242620156700

And why does one need to bother with the residual, when one can plainly see the failure as the graph veers way off what it's supposed to be? Also, isn't Simpson's rule based on a quadratic, not a cubic? At least that's what my calc. text said.
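Kouznetsov's mesh suggestion amounts to evaluating the functional-equation residual F(z) - exp(F(z-1)) over a grid of complex z rather than at the single point 0. A minimal sketch in Python; since the true tetrational isn't at hand here, the analogous equation G(z) = b*G(z-1) with the known solution G(z) = b^z stands in for it, purely to illustrate the mesh check:

```python
import numpy as np

b = 2.0
def G(z):
    return np.exp(z * np.log(b))   # b^z satisfies G(z) = b * G(z - 1)

# mesh over a rectangle in the complex plane
xs = np.linspace(-1.0, 1.0, 41)
ys = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(xs, ys)
Z = X + 1j * Y

# residual of the functional equation on the whole mesh
R = G(Z) - b * G(Z - 1)
worst = np.max(np.abs(R))
# a correct solution leaves a residual at machine precision everywhere;
# for the tetrational one would plot |F(z) - exp(F(z - 1))| the same way
```

A single-point residual can look fine while the solution is wrong elsewhere, which is why the mesh version catches bugs the point check misses.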