# Tetration Forum

Full Version: Experimental tetration 2D plot
Hello,

I made an experimental plot of tetration x^^y, using complex color-wheel coloring: hue indicates phase, brightness indicates magnitude. The upper right quadrant is for positive x and y, so it gets whiter and whiter there, because tetration of positive numbers gets larger and larger.

In the negative regions it's pure experimental chaos, probably very wrong. I'm posting it because I don't think I've seen it rendered this way before, and it might be interesting to discuss what could be correct and what could be totally wrong.

I implemented what Wikipedia calls the "linear approximation for the extension to real heights", so it is of course not true tetration. There may also be programming and numerical errors. The source code is in JavaScript and can be found in the following file under "Jmat.Real.Tetration" and "Jmat.Complex.Tetration": https://github.com/lvandeve/jmat/blob/master/jmat.js

The coloring works as follows:
- white = very high magnitude or infinity
- black = zero
- red = positive real number (darker = lower, brighter = higher)
- cyan = negative real number (darker = closer to zero, brighter = more negative)
- any color other than red or cyan = complex or imaginary number; again, brighter means higher magnitude
- grey = NaN (not a number, calculation error, ...)
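For reference, the piecewise formulas of that linear approximation can be sketched in a few lines of JavaScript. This is my own minimal reading of the Wikipedia definition, with a hypothetical function name, not the actual Jmat.Real.Tetration code:

```javascript
// Linear approximation of x^^y for real heights y (sketch of the
// Wikipedia definition, not the actual Jmat code):
//   x^^y = 1 + y           for -1 < y <= 0   (the linear piece)
//   x^^y = x^(x^^(y-1))    for y > 0
//   x^^y = log_x(x^^(y+1)) for y <= -1
function tetLinear(x, y) {
  if (y > 0) return Math.pow(x, tetLinear(x, y - 1));
  if (y > -1) return 1 + y;
  return Math.log(tetLinear(x, y + 1)) / Math.log(x);
}

console.log(tetLinear(2, 2));    // 4 (= 2^2)
console.log(tetLinear(2, 0.5));  // ~1.41421 (= 2^(1 - 0.5 + ... ) = 2^0.5)
```

The recursion always steps the height into the interval (-1, 0], where the linear piece 1 + y applies; for complex x the real logarithm and power would of course have to be replaced by their complex versions.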

And then the features...

So the upper right is obvious: it gets whiter and whiter because the values there are just so extremely high. E.g. the upper-right-most pixel is 10^^10; it displays as plain white because the value is so huge.

The top left is very strange chaos. This is where x is negative but y is positive in x^^y.

Then the whole bottom has those white stripes of infinity; especially the slanted ones between 0 and 1 are weird. These may be programming or numerical errors: given the colors around them, it seems as if the limit from that region should have a similar finite color.

Does anyone recognise anything? Are there other plots like this? Did anyone ever plot x^^y? I'd love to see them. Or do you think the plot is just totally wrong?

Thanks.
How do I open "jmat.js"? I tried to run it directly as JavaScript but got an error: syntax error (line 6, char 1), code 800A03EA.

Can you calculate (10^10)^^0.5?
Hi,

You need to include the jmat.js file in an HTML file. Then you can use the JavaScript console to calculate things. I'm quite sure every browser has a JS console, but I can only describe how to use it in Chrome: load that HTML file (with jmat.js included) in the browser, press F12, and type the following to calculate (10^10)^^0.5:

Jmat.tetration(Math.pow(10, 10), 0.5).toString()

100000

Does that seem plausible? The function has problems with all kinds of extreme values; the plot only covers the region -10 to 10 for x and y. Please don't rely on it for numerically precise answers.
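To spare others the setup trouble, a minimal wrapper page could look like this (test.html is a hypothetical name; put it in the same directory as jmat.js):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- jmat.js must sit next to this file -->
    <script src="jmat.js"></script>
  </head>
  <body>
    Open the console (F12 in Chrome) and try:
    Jmat.tetration(2, 2).toString()
  </body>
</html>
```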

However, I was mostly interested in comments about the plot. Also, if you have software that calculates an approximation of tetration for real x and real y, could you please make such a plot as well? It's a "domain coloring" plot using hue to represent phase (Wikipedia has many plots of this type).

Thanks a lot!
(06/13/2014, 03:42 PM)Lode Wrote: [ -> ]Hi,

You need to include the jmat.js file in an HTML file. Then you can use the javascript console to calculate things. I'm quite sure every browser has a JS console, but I can only describe how to use it in Chrome: load that HTML file with jmat.js in it, then press F12, and type the following to calculate (10^10)^^0.5:

Jmat.tetration(Math.pow(10, 10), 0.5).toString()

100000
Ok. I have loaded "jmat.js" in Chrome, pressed F12, and clicked the "Console" tab...
I tried to run "Jmat.tetration(Math.pow(10, 10), 0.5).toString()" but got an error: ReferenceError: Jmat is not defined

Also, 10,000 doesn't seem correct; I think it should be ~41.

10^^0.5 ~= 2.4770 ("Kneser" code in Pari/GP). I would love to see the plot with a higher-order polynomial approximation of tetration.

I expect to see fractal patterns.

regards

tommy1729
(06/13/2014, 10:50 PM)tommy1729 Wrote: [ -> ]I would love to see the plot by using a higher polynomial approximation of tetration.

I expect to see fractal patterns.

regards

tommy1729
Has anyone worked with the Schroeder equation for negative bases, or bases < 1? The Kneser program works for real bases > exp(1/e), so that part of the graph of x^^y, for x > exp(1/e), wouldn't be too difficult.

For 1 < b < eta, one could use the inverse Abel function solution derived from the Schroeder function. There's another singularity at b = 1. I have only worked with one base < 1, namely exp(-e), which is a really, really hard case, since b = exp(-e) has a pseudo period of 2.
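For what it's worth, the Schroeder-function route for a base 1 < b < e^(1/e) can be sketched numerically in JavaScript (to match the thread's jmat code). The concrete toy base b = sqrt(2) with fixed point 2, the function names, and the limit-based evaluation are my own illustration, not the Kneser method:

```javascript
// Fractional iteration of f(x) = b^x for 1 < b < e^(1/e), via the
// Schroeder function at the attracting fixed point a (b^a = a).
var b = Math.sqrt(2);
var a = 2;                              // sqrt(2)^2 = 2
var lambda = a * Math.log(b);           // multiplier f'(a) = ln 2 < 1
var N = 40;                             // iteration depth

// sigma(x) = lim (f^N(x) - a) / lambda^N  (Koenigs' limit formula)
function schroeder(x) {
  for (var i = 0; i < N; i++) x = Math.pow(b, x);
  return (x - a) / Math.pow(lambda, N);
}
// Inverse: push a tiny deviation back out with N base-b logarithms.
function schroederInv(y) {
  var z = a + Math.pow(lambda, N) * y;
  for (var i = 0; i < N; i++) z = Math.log(z) / Math.log(b);
  return z;
}
// Fractional iterate: f^t(x) = sigma^-1(lambda^t * sigma(x))
function fracIter(x, t) {
  return schroederInv(Math.pow(lambda, t) * schroeder(x));
}

// Applying the half-iterate twice should reproduce f itself:
console.log(fracIter(fracIter(1, 0.5), 0.5), Math.pow(b, 1));
```

This only works inside the basin of attraction of the fixed point, and double precision limits the accuracy to a handful of digits; it is meant as a picture of the idea, not a replacement for the high-precision Pari/GP code.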
(06/12/2014, 12:04 PM)nuninho1980 Wrote: [ -> ]How do I open "jmat.js"? Tried to open in JavaScript but I got error: syntax error (line 6; car 1) with code "800A03EA".

Can you calculate (10^10)^^0.5?

With a very rough approximation I get 44.05666 (using an eigendecomposition of a truncated 32x32 matrix in Pari/GP; 2500 digits of internal precision are required for this).

If I repeat the half-iterate I arrive at something like (10^10)*1.004, so the value is presumably slightly too high, because of course I should arrive back at 10^10.
(Note that this method possibly approximates the Kneser solution when the matrix size is increased without bound.)

Gottfried
(08/04/2014, 11:10 PM)Gottfried Wrote: [ -> ]With a very rough approximation I get 44.05666 (using Eigendecomposition of truncated matrix 32x32, Pari/GP, 2500 digits internal precision required for this).

If I repeat that I arrive at something like (10^10)*1.004, so the value should be slightly too high, because of course I should arrive at (10^10).
(Note that this method possibly approximates the Kneser solution when the matrix-size is increased without bound)

Gottfried
I'm interested. You can get ~44... Are you seriously using Kneser!?

(08/05/2014, 12:35 PM)nuninho1980 Wrote: [ -> ]
(08/04/2014, 11:10 PM)Gottfried Wrote: [ -> ]With a very rough approximation I get 44.05666 (using Eigendecomposition of truncated matrix 32x32, Pari/GP, 2500 digits internal precision required for this).

If I repeat that I arrive at something like (10^10)*1.004, so the value should be slightly too high, because of course I should arrive at (10^10).
(Note that this method possibly approximates the Kneser solution when the matrix-size is increased without bound)

Gottfried
I'm interested. You can get ~44... Are you seriously using Kneser!? :-)

Well, I said the technique likely approximates Kneser if the matrix size goes to infinity. However, 32x32 already requires 2500 decimal digits of internal precision to compute the eigendecomposition, only to arrive at about 6 to 8 digits of accuracy in the result... That's why I said this is a fairly rough method.

The procedure is simple: let b be the base, so b = 10^10, and let bl = log(b) = 10*log(10).
Then define the coefficients of the exponential series with base b, which is bseries = sum(k=0,31, bl^k / k! * x^k) + O(x^32).
Having the (truncated) exponential series, you can use its coefficients, and those of the powers of bseries, to form the Carleman matrix C (see Wikipedia for this, or my early postings here, where I did not yet know the name "Carleman matrix" and simply called it a "matrix operator").
Then C provides the coefficients of the formal power series for b^x, its second power C^2 the coefficients of the formal power series for b^b^x, and so on. See the columns of the (top-left segment of the) matrix C:
[attachment=1113]
In the first column we have the coefficients for (b^x)^0, in the second those for (b^x)^1, in the third those for (b^x)^2, and so on. To check the coefficients in the columns, just expand (10^10)^x, ((10^10)^x)^2 and ((10^10)^x)^3 in Pari/GP.
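The construction up to the integer powers can be illustrated in JavaScript (to match the thread's jmat code). The helper names are my own and this is not Gottfried's Pari/GP routine; since plain doubles are used instead of 2500-digit precision, a small base stands in for 10^10:

```javascript
// Build the truncated Carleman matrix of f(x) = b^x and check that
// the second power of C gives the series of b^(b^x).
function factorial(k) {
  var r = 1;
  for (var i = 2; i <= k; i++) r *= i;
  return r;
}
// Product of two power series, truncated to n coefficients.
function polyMul(p, q, n) {
  var r = new Array(n).fill(0);
  for (var i = 0; i < n; i++)
    for (var j = 0; i + j < n; j++) r[i + j] += p[i] * q[j];
  return r;
}
// C[k][j] = coefficient of x^k in (b^x)^j, where bl = log(b).
function carleman(bl, n) {
  var f = [];
  for (var k = 0; k < n; k++) f.push(Math.pow(bl, k) / factorial(k));
  var C = [];
  for (var k = 0; k < n; k++) C.push(new Array(n).fill(0));
  var pow = new Array(n).fill(0);
  pow[0] = 1;                                  // (b^x)^0 = 1
  for (var j = 0; j < n; j++) {
    for (var k = 0; k < n; k++) C[k][j] = pow[k];
    pow = polyMul(pow, f, n);                  // next power of b^x
  }
  return C;
}
function matMul(A, B, n) {
  var R = [];
  for (var i = 0; i < n; i++) {
    R.push(new Array(n).fill(0));
    for (var k = 0; k < n; k++)
      for (var j = 0; j < n; j++) R[i][j] += A[i][k] * B[k][j];
  }
  return R;
}
// Column 1 of C holds the series of b^x; column 1 of C^2 that of b^(b^x).
var b = 1.1, n = 16;
var C2 = matMul(carleman(Math.log(b), n), carleman(Math.log(b), n), n);
var x = 0.5, s = 0;
for (var k = 0; k < n; k++) s += C2[k][1] * Math.pow(x, k);
console.log(s, Math.pow(b, Math.pow(b, x)));   // should nearly agree
```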

But while the integer powers of C (and thus the integer iterates of b^x) can easily be computed, fractional powers need an eigendecomposition C = M*D*W (where W = M^-1), and M, D and W must be found by some eigensystem solver (for instance in Pari/GP). The crazy observation is that, even at matrix size 32x32, you need 2500 digits of internal precision for the roots of the characteristic polynomial and the corresponding eigenvectors (size 16x16 can be done with 200 digits).

Luckily, because D is diagonal by construction of the eigensolver, you can take fractional powers of it by taking fractional powers of its (scalar) diagonal elements. For the half-iterate, for instance, you compute
C05 = M * D^0.5 * W

and C05 is the Carleman matrix of the half-iterate of b^x. (Usually we actually use x = 1 when we write b^^0.5, for instance.)
The top-left segment of C05, based on the 32x32 truncation, looks like:
[attachment=1114]
and in the second column you find the coefficients of the (approximate) power series (in fact now a polynomial of order 32) for the half-iterate of b^x; call this function g(x). The appearance suggests that the coefficients diminish with higher index, but this might be an artifact of the truncation; if I could compute 64x64, the coefficients might still grow. But it might also be that the approximation and the overall pattern are already good; this is backed by the observation that g(g(x)) ≈ b^x for small enough x.
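The M * D^0.5 * W step itself can be shown with a toy 2x2 example. The matrices are hand-picked (not an actual Carleman matrix), so the eigendecomposition is written down by hand instead of computed by a solver:

```javascript
// Toy illustration of C05 = M * D^0.5 * W: for a diagonalizable
// A = M*D*W (with W = M^-1), fractional powers of A come from
// fractional powers of the scalar diagonal entries of D.
var M = [[1, 1], [0, 1]];
var D = [[1, 0], [0, 4]];
var W = [[1, -1], [0, 1]];                  // = M^-1
function mul(A, B) {
  var R = [[0, 0], [0, 0]];
  for (var i = 0; i < 2; i++)
    for (var k = 0; k < 2; k++)
      for (var j = 0; j < 2; j++) R[i][j] += A[i][k] * B[k][j];
  return R;
}
function diagPow(D, t) {                    // power of a diagonal matrix
  return [[Math.pow(D[0][0], t), 0], [0, Math.pow(D[1][1], t)]];
}
var A = mul(mul(M, D), W);                  // [[1, 3], [0, 4]]
var A05 = mul(mul(M, diagPow(D, 0.5)), W);  // "half step" of A
var check = mul(A05, A05);                  // reproduces A
console.log(A05, check);
```

Squaring A05 gives back A exactly here; with the truncated 32x32 Carleman matrix the same identity only holds approximately, which is the truncation effect described above.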

If you're really interested, I could send you a small collection of Pari/GP routines for working with this method. Also, for an impression of how well the diagonalization of truncated matrices agrees with the Kneser method, see my small compilation:
http://go.helms-net.de/math/tetdocs/Comp...ations.pdf

Have fun -

Gottfried