diagonal vs natural
#1
I just realized that it is quite easy in Maple to compute the matrix power via diagonalization (the function is called "MatrixPower" and you can use float values as exponents), so I compared it with the natural tetration.

To get dsexp (the diagonalization super-exponential) I compute the Carleman matrix of the exponential, then take its t-th matrix power via "MatrixPower" and read off the entry in row 1, column 0, which gives the value of the t-th iterate; this defines the diagonalization tetration dsexp(t).
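Roughly, the computation looks like this in Maple (only a sketch of what I described: the truncation size, the precision and the choice of exp(x) as the function behind the Carleman matrix are illustrative assumptions, not necessarily the exact parameters used here):

    with(LinearAlgebra):
    Digits := 90:   # working precision in decimal digits
    N := 9:         # truncation size of the Carleman matrix

    # Truncated Carleman matrix of exp(x): row i (counting from 0) holds the
    # Taylor coefficients of exp(x)^i = exp(i*x), i.e. entry (i,j) = i^j/j!.
    C := Matrix(N, N, (i, j) ->
        `if`(i = 1, `if`(j = 1, 1, 0), (i-1)^(j-1)/factorial(j-1))):

    # Fractional matrix power via diagonalization; MatrixPower accepts
    # non-integer exponents.  Row 1, column 0 (Maple indices [2,1]) of C^t
    # holds the constant term of the t-th iterate.
    dsexp := t -> MatrixPower(evalf(C), t)[2, 1]:

    dsexp(1.0);   # constant term of exp(x) itself, i.e. 1
    dsexp(0.5);   # a "half-iterate" value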

For the comparison I compute the difference nslog(dsexp(x)) - x, which is always a periodic function with period 1.
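In Maple terms the comparison might be set up like this (again only a sketch: the linear system is my reading of how the natural slog coefficients are obtained from slog(exp(x)) = slog(x) + 1, dsexp is the procedure from the sketch above, and the truncation size and the normalisation nslog(1) = 0 are assumptions):

    # Truncated system for the natural slog: write slog(x) = c0 + c[1]*x + ... + c[N]*x^N
    # and compare the coefficients of x^0 .. x^(N-1) in slog(exp(x)) = slog(x) + 1.
    N := 9:
    M := Matrix(N, N, (j, k) -> k^(j-1)/factorial(j-1) - `if`(k = j-1, 1, 0)):
    v := Vector(N, j -> `if`(j = 1, 1, 0)):
    c := LinearSolve(evalf(M), evalf(v)):

    nslog := y -> -1 + add(c[k]*y^k, k = 1 .. N):   # c0 = -1 gives nslog(1) = 0

    # Difference of the natural Abel function and the diagonalization iteration;
    # if both belong to iterates of the same function it is 1-periodic.
    d := x -> nslog(dsexp(x)) - x:
    plot(d, 0 .. 1);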

And this is the resulting graph for a matrix size of 9 for both dsexp and nslog and a precision of 90 digits:

[plot attachment]
I think even at this low precision it is recognizable that they are not equal. However, I am currently preparing a plot at doubled precision, which takes some time, so I will add that graph to this post later.

edit: and here it is now:

[plot attachment]
hm, the amplitude decreased a lot, so I am again unsure ...
#2
Here are two plots with a different precision for the nslog, i.e. matrix size 50 and exact fractional arithmetic (a sketch of that variant follows below the plots). The precision of dsexp is the same as in the previous post.

[plot attachment]
[plot attachment]
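For the exact fractional arithmetic it suffices not to convert the system to floats; a sketch along the lines of the one above, with only the matrix size 50 taken from this post:

    N := 50:
    M := Matrix(N, N, (j, k) -> k^(j-1)/factorial(j-1) - `if`(k = j-1, 1, 0)):
    v := Vector(N, j -> `if`(j = 1, 1, 0)):
    c := LinearSolve(M, v):   # exact rational solution, no rounding error
    nslog := y -> -1 + add(c[k]*y^k, k = 1 .. N):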
As the peak is roughly around 0.3, but will probably move towards 0.2 as the matrix size of the matrix power approaches the matrix size of the natural Abel function, and as the computation is very time consuming, I continue computing only for the following:

[computed values]
The first two arguments of the difference are the precision (number of digits) and the matrix size. Note that the value degenerates if the precision is too low: each arithmetic operation in the computation of the matrix power introduces a small error, and these errors add up over the whole computation.
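To make the degeneration visible one could sweep precision and matrix size like this (a hypothetical loop; the precisions, sizes and the evaluation point 0.2 are illustrative and not the exact parameters used above):

    # Carleman matrix of exp(x), truncated to size n (same construction as above).
    carleman := n -> Matrix(n, n, (i, j) ->
        `if`(i = 1, `if`(j = 1, 1, 0), (i-1)^(j-1)/factorial(j-1))):

    for prec in [100, 200, 400] do
        for n in [10, 20, 30] do
            Digits := prec:
            val := MatrixPower(evalf(carleman(n)), 0.2)[2, 1]:
            printf("prec=%d  size=%d  dsexp(0.2)=%a\n", prec, n, val):
        end do:
    end do:
    # If prec is too small relative to n, the diagonalization inside MatrixPower
    # loses digits and the printed values degenerate.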

Actually I think we can see that the difference decreases with increasing precision and matrix size, very slowly but steadily. So it rather seems that the diagonalization method and Andrew's method are the same in the limit!
#3
However, here is some disillusioning news.
The nslog is now computed with the same matrix size as the dsexp, and I pushed the matrix size up to 35, computing with precision 400 for the smaller sizes and with precision 800 for the largest ones. Up to a certain matrix size everything looks nice:

[plot attachment]
But after that we see that the difference probably does not converge to 0:

[plot attachment]

edit: indeed this behaviour continues; with some effort (the memory used easily exceeds 2 GB) I computed a still larger case (precision 800), which seems quite close to the real limit.
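For completeness, the matched-size experiment can be sketched as follows (carleman as in the sketch above; the list of sizes, the precision 400 and the evaluation point 0.2 are again only illustrative):

    # Natural slog coefficients for truncation size n (same system as above).
    slogc := proc(n)
        local M, v;
        M := Matrix(n, n, (j, k) -> k^(j-1)/factorial(j-1) - `if`(k = j-1, 1, 0));
        v := Vector(n, j -> `if`(j = 1, 1, 0));
        LinearSolve(evalf(M), evalf(v));
    end proc:

    Digits := 400:
    for n in [10, 15, 20, 25, 30, 35] do
        c := slogc(n):
        val := MatrixPower(evalf(carleman(n)), 0.2)[2, 1]:   # dsexp at matrix size n
        printf("n=%d  difference at 0.2: %a\n", n, -1 + add(c[k]*val^k, k = 1 .. n) - 0.2):
    end do: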

