# Tetration Forum

Full Version: diagonal vs natural
I just realized that it is quite easy in Maple to compute a matrix power via diagonalization (the function is called "MatrixPower" and accepts float values as exponents), so I compared it with the natural tetration.

To get the dsexp (diagonalization super-exponential) I compute the Carleman matrix $E$ of $e^x$, take the $t$-th matrix power $E^t$ via "MatrixPower", and read off the entry in row 1, column 0, which is $\exp^{\circ t}(0)$. The diagonalization tetration is then $e[4]t=\text{dsexp}(t)=\exp^{\circ t+1}(0)=\exp^{\circ t}(1)$.
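For readers without Maple at hand, the construction above can be sketched in Python/NumPy as follows. The truncation size `n` and all function names here are my own choices, not the original code; `matrix_power` imitates what Maple's "MatrixPower" does internally (diagonalization):

```python
import numpy as np
from math import factorial

def carleman_exp(n):
    """Truncated n x n Carleman matrix of f(x) = e^x.

    Row j holds the power-series coefficients of f(x)^j = e^{jx},
    so E[j, k] = j^k / k!  (with 0^0 = 1, as in Python).
    """
    return np.array([[j**k / factorial(k) for k in range(n)]
                     for j in range(n)])

def matrix_power(E, t):
    """E^t via diagonalization: E = V diag(lam) V^{-1}, so
    E^t = V diag(lam**t) V^{-1} (principal branch for complex lam)."""
    lam, V = np.linalg.eig(E.astype(complex))
    return (V @ np.diag(lam**t) @ np.linalg.inv(V)).real

def exp_iter_at_0(t, n=8):
    """exp^{o t}(0): the entry in row 1, column 0 of E^t."""
    return matrix_power(carleman_exp(n), t)[1, 0]

def dsexp(t, n=8):
    """Diagonalization tetration: dsexp(t) = exp^{o (t+1)}(0)."""
    return exp_iter_at_0(t + 1.0, n)
```

In plain double precision the truncation size has to stay small; as noted further down in the thread, each arithmetic operation in the matrix power introduces rounding error, which is why the actual computations below use 90 or more digits.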

For the comparison I compute $\delta(t)=\text{dsexp}(\text{nslog}(t))-t$, which is always a periodic function with period 1; it vanishes identically exactly when the two tetrations agree.
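For completeness, here is a rough pure-Python sketch of how the nslog (Andrew's natural super-logarithm) can be obtained; the size `n` and all names are mine. The Abel equation $\text{slog}(e^x)=\text{slog}(x)+1$ is imposed on the first $n$ powers of $x$, giving a linear system for the series coefficients, solved here in exact rational arithmetic to sidestep the precision problems discussed below:

```python
from fractions import Fraction
from math import factorial

def natural_slog_coeffs(n):
    """Coefficients a_1..a_n of the truncated natural slog
    slog(x) ~ -1 + sum_{k=1}^{n} a_k x^k.

    Comparing coefficients of x^m in slog(e^x) = slog(x) + 1
    gives, for m = 0..n-1:
        sum_{k=1}^{n} a_k * k^m/m! - a_m = (1 if m == 0 else 0).
    """
    M = [[Fraction(k**m, factorial(m)) - (1 if m == k else 0)
          for k in range(1, n + 1)] for m in range(n)]
    b = [Fraction(int(m == 0)) for m in range(n)]
    # exact Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    # back substitution
    a = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(M[r][c] * a[c] for c in range(r + 1, n))
        a[r] = s / M[r][r]
    return a

def nslog(x, a):
    """Evaluate the truncated natural slog at x (|x| small)."""
    return -1.0 + sum(float(ak) * x ** (k + 1) for k, ak in enumerate(a))
```

Together with a dsexp routine this lets one sample $\delta(t)$ pointwise for small $t$ such as $0.2$ or $0.3$. By construction $\text{nslog}(1)=0$ holds exactly, since the $m=0$ row of the system forces $\sum_k a_k=1$.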

And this is the resulting $\delta(t)$ for matrix size 9 (for both dsexp and nslog) and a precision of 90 digits:
[attachment=324]
I think even at this low precision it is recognizable that they are not equal. However, I am currently preparing a plot at doubled precision, which takes some time, so I will add that graph to this post later.

edit: and here it is now:
[attachment=325]
hm, the amplitude decreased a lot, so I am again unsure ...
Here are two plots with a different precision for the nslog, i.e. matrix size 50 and exact fractional arithmetic. The precisions of the dsexp are equal to those in the previous post:
[attachment=326]
[attachment=327]
The peak is roughly around $t=0.3$, but it will probably move toward $t=0.2$ as the matrix size of the matrix power approaches the matrix size of the natural Abel function. Since the computation is very time consuming, I continue computing only for $t=0.2,0.3$:
$\delta(190,19)(0.3)\approx 0.53\times 10^{-5}$
$\delta(200,20)(0.3)\approx 0.43\times 10^{-5}$
$\delta(250,25)(0.3)\approx 0.15\times 10^{-5}$
$\delta(250,25)(0.2)\approx 0.16\times 10^{-5}$
$\delta(300,25)(0.2)\approx 0.16\times 10^{-5}$
$\delta(350,30)(0.2)\approx 0.65\times 10^{-6}$

The first two arguments of $\delta$ are the precision (number of digits) and the matrix size. Note that the value degenerates if the precision is too low: each arithmetic operation in the computation of the matrix power introduces a small error, and these errors accumulate over the whole computation.

Actually, I think we can see that $\delta$ decreases with increasing precision and matrix size, very slowly but steadily. So it rather seems that the diagonalization method and Andrew's method are the same in the limit case!
However, here is some disillusioning news.
The nslog is computed with the same matrix size $n$ as the dsexp, and I pushed $n$ up to 35, computing $\delta(0.2)$ with precision 400 up to $n=33$ and with precision 800 for $n\ge 34$. Up to $n=20$ everything looks nice:
[attachment=335]
But beyond $n=20$ we see that the difference probably does not converge to 0:
[attachment=336]

edit: indeed this behaviour continues. With some effort (the memory used easily exceeds 2 GB) I computed $n=45$ (precision 800): $\delta_{800,45}(0.2)\approx -2.11 \times 10^{-7}$, which seems quite close to the true limit.