Tetration Forum
Tensor power series - Printable Version




Tensor power series - andydude - 05/13/2008

I think a nice way to discuss the power series idea where x is a vector is within the context of tensor calculus, or multilinear algebra as it's called now. Roughly speaking, tensors are just multi-dimensional arrays that have additional properties. Here is a brief overview of my understanding of tensors.

\( X = x^i \) is a contravariant 1-tensor (or vector or column vector), which means that it transforms contravariantly (whatever that means), and it has 1 dimension of indices. As a mixed tensor, this is also called a (1,0)-tensor or a \( \left(\begin{tabular}{c}1 \\ 0\end{tabular}\right) \)-tensor.

\( f_i \) is a covariant 1-tensor (or covector or row vector), which means that it is like a function that takes a vector and gives a scalar (0-tensor) as output. As a mixed tensor, this is also called a (0,1)-tensor or a \( \left(\begin{tabular}{c}0 \\ 1\end{tabular}\right) \)-tensor.

\( F = f^j_i \) is a mixed variance 2-tensor, or a vector-valued function (a matrix), meaning it is both covariant (i) and contravariant (j). This is a combination of the two descriptions given above, because (like a matrix) it is a function that sends a vector (i.e. covariance) to a vector (i.e. contravariance). Although I have not seen anything to support this view of covariance/contravariance, it seems like it doesn't contradict any description either. So one way to think of it is that "contravariant" means "gives a vector" and "covariant" means "takes a vector". This is called a 2-tensor, but to distinguish between each kind of index, it can also be called a (1,1)-tensor.

\( {\nabla}_i \) is a covariant 1-tensor (known as the gradient), but it's different in that it is also an operator, which takes a function (covector) and gives a covector. In order to replace the concept of a derivative (or the Jacobian matrix) we will not be using it as an operator directly, but we will be multiplying it with the tensor product \( \otimes \), which means it will operate very much like the Jacobian when used on a (1,1)-tensor.

\( 0^i \) is a contravariant 1-tensor, because it is like a vector, that holds all zeros.

\( F(X) = \sum_i f^j_i(x^i) \) is the application of a 2-tensor function to a 1-tensor, which is actually a form of tensor contraction, or matrix multiplication in this case (but this is only true for linear functions; non-linear functions cannot be written this way). Notice that I don't use the Einstein notation (also known as index notation) because I think it is confusing when you can't see the summation \( \Sigma \). For linear functions, this is the power series of that function. However, what we want to do is generalize this so that it works for non-linear functions as well.
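
To make this contraction concrete, here is a minimal Mathematica sketch (the numbers are arbitrary, and Mathematica's Dot does not track upper versus lower indices):
Code:
(* a (1,1)-tensor F (a matrix) and a (1,0)-tensor X (a vector) *)
F = {{1, 2}, {3, 4}};
X = {5, 6};

F.X                                              (* {17, 39}: contraction over the shared index i *)
Table[Sum[F[[j, i]] X[[i]], {i, 2}], {j, 2}]     (* the same sum written out explicitly *)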


RE: Tensor power series - andydude - 05/13/2008

To illustrate this new derivative, we can denote the Jacobian by \( F' \) where \( F = f^j_i \), so that
\( F(X) = \sum_i
f^j_i (x^i) \)
is a 1-tensor (a vector), because the function is applied to the vector (and because the only remaining index is j); if the function were not applied to the vector, then it would be a matrix.
\( F'(X) = \sum_i {\nabla}_k \otimes
f^j_i (x^i) \)
is a 2-tensor (a matrix) just like the Jacobian is. But then we notice that
\( F''(X) = \sum_i {\nabla}_l \otimes {\nabla}_k \otimes
f^j_i (x^i) \)
is a 3-tensor (not a matrix), but it is just as useful as a derivative, because we can use it in a power series expansion (though it contributes nothing for linear functions). For linear functions \( F''(X) = 0^j_{kl} \), but for non-linear functions it can be non-zero (although then we also have to write f differently, and not as a summation).
So we have to rewrite the expressions above as:
\( F'(X) = {\nabla}_k \otimes F (X) \)
\( F''(X) = {\nabla}_l \otimes {\nabla}_k \otimes F (X) \)
but what we want is not the derivatives in general, but the derivatives at zero, for the power series expansion, so we substitute X=0:
\( F'(0^i) = {\nabla}_j \otimes F (0^i) \)
\( F''(0^i) = {\nabla}_k \otimes {\nabla}_j \otimes F (0^i) \)
now every time we do a tensor contraction of these with a vector, we effectively reduce the rank by 1, so \( \sum_j {\nabla}_j \otimes F (0^i) \otimes x^j \) is a 1-tensor (or vector), and \( \sum_k \sum_j {\nabla}_k \otimes {\nabla}_j \otimes F (0^i) \otimes x^j \otimes x^k \) is also a 1-tensor (or vector). Since they have the same dimensions, we can add them, which means we can use these to form a power series.
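
A quick way to see this rank-reducing effect in Mathematica (just a sketch with an arbitrary symbolic 3-tensor; TensorRank only counts the number of indices):
Code:
A = Array[a, {2, 2, 2}];   (* an arbitrary 3-tensor over a 2-dimensional space *)
X = {x1, x2};              (* a 1-tensor *)

TensorRank[A]       (* 3 *)
TensorRank[A.X]     (* 2: one contraction removes one index *)
TensorRank[A.X.X]   (* 1: two contractions leave a vector *)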


RE: Tensor power series - andydude - 05/13/2008

Before we do that, though, there are still a few things I would like to clear up. First, notation: let's start using Einstein's summation convention (because we need it now), and let
\( \nabla_{\otimes n}F(0)X^{\otimes n}
= \left(\prod_i^n {\nabla}_{(k_i)}\right) \otimes F(0)
\otimes \left(\prod_i^n x^{(k_i)} \right) \)
which allows expressing the power series as:
\( F(X)
= \sum_{n=0}^{\infty} \frac{1}{n!} \nabla_{\otimes n}F(0)X^{\otimes n}
\)
and the Bell "matrix" as: \( B[F] = \nabla_{\otimes j}(F^{\otimes k})(0) \)
and the Vandermonde "vector" as: \( V(X) = X^{\otimes k} \)

Secondly, we need an example, like the Mandelbrot set function:
\( M\left[\begin{tabular}{c}
z \\ c
\end{tabular}\right]
=
\left[\begin{tabular}{c}
z^2 + c \\ c
\end{tabular}\right] \)
we can see that the value at zero is zero:
\( M\left[\begin{tabular}{c}
0 \\ 0
\end{tabular}\right]
=
\left[\begin{tabular}{c}
0 \\ 0
\end{tabular}\right] \)
which means we have a fixed point. Also, the first derivative is:
\( M'\left[\begin{tabular}{c}
0 \\ 0
\end{tabular}\right]
=
\left[\begin{tabular}{cc}
2z & 1 \\
0 & 1 \\
\end{tabular}\right]_{(z,c)=(0,0)}
=
\left[\begin{tabular}{cc}
0 & 1 \\
0 & 1 \\
\end{tabular}\right] \)
and the second derivative is:
\( M''\left[\begin{tabular}{c}
0 \\ 0
\end{tabular}\right]
=
\left[\begin{tabular}{cc}
[2\ 0] & [0\ 0] \\
[0\ 0] & [0\ 0] \\
\end{tabular}\right] \)
and from this it should be clear that the third derivative is zero. Now we can form a power series for this function as follows:
\(
M(X) = \left[\begin{tabular}{cc}
0 & 1 \\
0 & 1 \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ c
\end{tabular}\right]
+
\frac{1}{2!}
\left[\begin{tabular}{cc}
[2\ 0] & [0\ 0] \\
[0\ 0] & [0\ 0] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ c
\end{tabular}\right]^{\otimes 2}
\)
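
As a sanity check, here is the same expansion as a Mathematica sketch, writing the tensors as nested lists with the output index outermost (the convention used in the code later in this thread):
Code:
{{0, 1}, {0, 1}}.{z, c} +
  {{{2, 0}, {0, 0}},
   {{0, 0}, {0, 0}}}.{z, c}.{z, c}/2!
(* evaluates to {c + z^2, c}, which is M(z, c) *)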

Another example we could try is the logistic map:
\( L\left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]
=
\left[\begin{tabular}{c}
r z (1-z) \\ r
\end{tabular}\right] \)
which has the power series expansion:
\(
L(X) = \left[\begin{tabular}{cc}
0 & 0 \\
0 & 1 \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]
+
\frac{1}{2!}
\left[\begin{tabular}{cc}
[0\ 0] & [1\ 0] \\
[1\ 0] & [0\ 0] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]^{\otimes 2}
+ \frac{1}{3!}
\left[\begin{tabular}{cc}
\left[\begin{tabular}{cc}
0 & 0 \\ -2 & 0
\end{tabular}\right] &
\left[\begin{tabular}{cc}
-2 & 0 \\ 0 & 0
\end{tabular}\right] \\
\left[\begin{tabular}{cc}
-2 & 0 \\ 0 & 0
\end{tabular}\right] &
\left[\begin{tabular}{cc}
0 & 0 \\ 0 & 0
\end{tabular}\right] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]^{\otimes 3}
\)
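
Checking this one the same way in Mathematica (a sketch; note that with the output index outermost the nesting differs from the printed block layout above, but the contractions give the same result):
Code:
{{0, 0}, {0, 1}}.{z, r} +
  {{{0, 1}, {1, 0}},
   {{0, 0}, {0, 0}}}.{z, r}.{z, r}/2! +
  {{{{0, -2}, {-2, 0}},
    {{-2, 0}, {0, 0}}},
   {{{0, 0}, {0, 0}},
    {{0, 0}, {0, 0}}}}.{z, r}.{z, r}.{z, r}/3!
(* evaluates to {r z - r z^2, r}, which is {r z (1 - z), r} = L(z, r) *)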

Now for the fun part. This kind of framework would be general enough to express the exponential factorial as an iterated function! This could be expressed as:
\( E\left[\begin{tabular}{c}
z \\ n
\end{tabular}\right]
=
\left[\begin{tabular}{c}
n^z \\ n + 1
\end{tabular}\right]
\)
and one thing I am curious about is whether regular iteration methods will work on this new power series ring, and whether they could help define the exponential factorial over the real numbers, as opposed to the integers.
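
As a quick illustration of the iteration (a sketch; I call the map E1 because E is a reserved symbol in Mathematica, and I start from (1, 1)):
Code:
E1[{z_, n_}] := {n^z, n + 1};   (* the map above, as a function on pairs *)
NestList[E1, {1, 1}, 4]
(* {{1, 1}, {1, 2}, {2, 3}, {9, 4}, {262144, 5}}: the top entries run through exponential factorial values *)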

Gottfried also brought up a good point. The Bell matrix and Vandermonde vector should be defined first, and well understood, before other iterational things. But this would cause much confusion, because the Vandermonde vector (V) would not even be a tensor! This is because \( V_1 \) would be a (1,0)-tensor, \( V_2 \) would be a (2,0)-tensor, and \( V_3 \) would be a (3,0)-tensor, which means whatever container you use to hold these (V) would not have a single rank (tensors must have a rank); this would have all ranks... very confusing... I must think about this.

Andrew Robbins

PS. I think I forgot a factorial somewhere...
PPS. [update]I fixed the factorials[/update]


RE: Tensor power series - andydude - 05/14/2008

At first, I thought you wouldn't need the factorials you usually need in normal power series, because all the coefficients in F' and F'' were less than the factorial, but then I realized that there were multiple coefficients in each tensor. So taking the logistic map as an example again:
\(
\left[\begin{tabular}{cc}
[0\ 0] & [1\ 0] \\
[1\ 0] & [0\ 0] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]^{\otimes 2}
= \left[\begin{tabular}{c}
zr + rz \\ 0
\end{tabular}\right]
= \left[\begin{tabular}{c}
2rz \\ 0
\end{tabular}\right]
\)
so
\(
\frac{1}{2!}
\left[\begin{tabular}{cc}
[0\ 0] & [1\ 0] \\
[1\ 0] & [0\ 0] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]^{\otimes 2}
= \left[\begin{tabular}{c}
rz \\ 0
\end{tabular}\right]
\)
as it should, and doing the same thing for the third term:
\(
\frac{1}{3!}
\left[\begin{tabular}{cc}
\left[\begin{tabular}{cc}
0 & 0 \\ -2 & 0
\end{tabular}\right] &
\left[\begin{tabular}{cc}
-2 & 0 \\ 0 & 0
\end{tabular}\right] \\
\left[\begin{tabular}{cc}
-2 & 0 \\ 0 & 0
\end{tabular}\right] &
\left[\begin{tabular}{cc}
0 & 0 \\ 0 & 0
\end{tabular}\right] \\
\end{tabular}\right] \left[\begin{tabular}{c}
z \\ r
\end{tabular}\right]^{\otimes 3}
= \frac{1}{3!}\left[\begin{tabular}{c}
-2z^2r -2zrz -2rz^2 \\ 0
\end{tabular}\right]
= \left[\begin{tabular}{c}
-rz^2 \\ 0
\end{tabular}\right]
\)
as it should.

Andrew Robbins


RE: Tensor power series - Gottfried - 05/20/2008

Hi Andrew -

this is a nice collection of work! However, the formalism is too dense for me; I need more examples. I tried to get a basic grip on tensors with some questions in the German newsgroup de.sci.mathematik, but unfortunately my basic questions were answered in a way that still didn't give me the basics. So I think I'll look at my "mathematical tables" to get a first understanding of the structure of the components in the tensors before I try to learn the formalisms/abbreviations for the mathematical operations between them. (I still haven't gotten a definitive answer to how many components a 2-D tensor (the generalization of a matrix) has: n×n? Or, because of the additional upper and lower index notation, (2n)×(2n)?)
There's obviously much to learn...

Anyway, thanks again for the compilation and extraction of important things. I think it will be very helpful once I have the basics.

Gottfried


RE: Tensor power series - andydude - 05/22/2008

Usually, most tensors are used for general relativity and other metric-space-related activities, so each index runs from 1 to 3 (or 1 to 4 for general relativity). Here, on the other hand, each index runs from 1 to 2, which is the number of variables we are using, or the dimension of the vector space that the big vector function is defined over. So for the 3-dimensional case, an (n, m)-tensor would have \( 3^{n+m} \) components, and for the 2-dimensional case (which I used earlier), an (n, m)-tensor would have \( 2^{n+m} \) components. However, this is still a special case, since tensor spaces are (in general) a Cartesian product (well, tensor product, technically) of vector spaces, so the above is only true when all of the vector spaces have the same dimension. It is possible to form the tensor product of a 2-D and a 4-D space, but I can't think of any examples of this.
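
To make the component count concrete (a small Mathematica sketch; Mathematica does not distinguish upper from lower indices, so this only counts slots):
Code:
T = Array[t, {2, 2, 2}];   (* a rank-3 tensor over a 2-dimensional space *)
Dimensions[T]              (* {2, 2, 2} *)
Length[Flatten[T]]         (* 8 = 2^3 components *)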

Relating matrix notation and tensor notation might also be helpful. In general, matrix multiplication is:
\( A B = C \) in matrix notation and \( A^i_k B^k_j = C^i_j \) in tensor notation.

Hmm, you said you needed more basics, so I will try and cover the basics. There are two major kinds of tensor multiplication. One is called tensor product (or "vector space" tensor product on MathWorld) and the other is called tensor contraction (same on MathWorld). In some respects, these are analogous to "column-row" matrix multiplication and "row-column" matrix multiplication respectively.

A "column-row" matrix multiplication (a special case of tensor product) can be visualized as:
\(
A \otimes D =
\left[\begin{tabular}{c}
a \\ b \\ c
\end{tabular}\right]
\left[\begin{tabular}{ccc}
d & e & f
\end{tabular}\right]
=
\left[\begin{tabular}{ccc}
ad & ae & af \\
bd & be & bf \\
cd & ce & cf
\end{tabular}\right]
\)
and a "row-column" matrix multiplication (a special case of tensor contraction) can be visualized as:
\(
A \cdot D =
\left[\begin{tabular}{ccc}
a & b & c
\end{tabular}\right]
\left[\begin{tabular}{c}
d \\ e \\ f
\end{tabular}\right]
=
ad + be + cf
\)
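
Both of these can be reproduced in Mathematica (a sketch; I call the second vector Dv only because D is a reserved symbol):
Code:
A = {a, b, c}; Dv = {d, e, f};

TensorProduct[A, Dv]   (* the "column-row" product: {{a d, a e, a f}, {b d, b e, b f}, {c d, c e, c f}} *)
A.Dv                   (* the "row-column" product: a d + b e + c f *)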

So we can see that in general, the tensor product of an (a, b)-tensor and a (c, d)-tensor will give an (a+c, b+d)-tensor, because none of the indices will go away. We can also see that tensor contraction will decrease the total rank of a single tensor by 2, so we can think of tensor contraction as the tensor product (putting everything in a single tensor, say A), with an extra step of summing indices of that tensor, like \( \sum_k A^{ik}_{kj} \) for example. Also, since tensor contraction always requires one subscript and one superscript, the tensor contraction of an (a, b)-tensor and a (c, d)-tensor will give an (a+c-1, b+d-1)-tensor, and the tensor contraction of a single (a, b)-tensor with itself will give an (a-1, b-1)-tensor. And just in case it isn't clear by now, an (a, b)-tensor has a "tensor rank" of (a + b), so some places will refer to tensor contraction as resulting in a tensor of rank (a + b - 2).
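
In Mathematica terms the rank bookkeeping looks like this (a sketch; TensorContract does not care which slot is upper or lower, it just sums over a pair of slots):
Code:
A2 = Array[a, {2, 2}];      (* a 2-tensor *)
B3 = Array[b, {2, 2, 2}];   (* a 3-tensor *)

TensorRank[TensorProduct[A2, B3]]                             (* 5 = 2 + 3: tensor product adds ranks *)
TensorRank[TensorContract[TensorProduct[A2, B3], {{2, 3}}]]   (* 3 = 5 - 2: one contraction removes two indices *)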

While I was thinking about this, I was modeling with the Mathematica dot operator (see under Properties & Relations), which is a generalization of the dot product and matrix multiplication, only for tensors. Formally, there is no "single" tensor contraction, because the convention is that whenever you find a repeated index variable (for example: k) used as both a superscript and a subscript, then there is a tensor contraction that sums both of those over all possible values of k (for example: 1, 2, 3 for space vectors). However, the Mathematica dot operator is a special case of tensor contraction that assumes that the inner-most indices are being summed over.

So for complicated tensors (and \( B = x^{\otimes 4} \)) this gives:
\( A^{k}_{nmji} B^{ijmn} = (((A \cdot x) \cdot x) \cdot x) \cdot x \)
which is actually 4 tensor contractions (because it uses 4 repeated indices), but
\( A^{k}_{abci} B^{idef} = A \cdot B \)
is only 1 tensor contraction (which uses 1 repeated index), and this is what is meant by the generalized "dot" operator. The only problem with this kind of tensor contraction is that it can only be applied once, and it is really the first case that we need for tensor power series, not the second case. But because the first case is so hard to write, I invented the \( x^{\otimes 4} \) notation to express this. The reason why I chose this notation is that it is similar to our iteration notation \( f^{\circ n} \), which is supposed to be a sort of "power" where composition \( (\circ) \) is the "multiplication". So noticing that \( B^{ijmn} = x \otimes x \otimes x \otimes x = x^{\otimes 4} \), it just seemed natural to use this notation. What do you think? Would \( x^{\otimes 4} = (\cdot x)^4 \) be a better notation? I think it is hard to find a good notation, because the process of "wrapping" up the x's into a tensor requires \( \otimes \) (tensor product), but as soon as you try to do tensor contraction with \( A^{k}_{nmji} \), it requires \( \cdot \) (the Mathematica dot) with each x separately. I'm really not sure what the best way to write this is. Since tensor notation was designed for these difficulties, it might be best to stay with it.
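
Here is a small Mathematica sketch of the two readings, with an arbitrary 5-index tensor standing in for \( A^{k}_{nmji} \); they agree because all four copies of x are the same vector:
Code:
x = {x1, x2};
A = Array[a, {2, 2, 2, 2, 2}];                       (* an arbitrary (1,4)-tensor *)

r1 = A.x.x.x.x;                                      (* four separate contractions, one per Dot *)
r2 = TensorContract[TensorProduct[A, x, x, x, x],    (* wrap the x's up first, ... *)
       {{5, 6}, {4, 7}, {3, 8}, {2, 9}}];            (* ... then contract all four pairs at once *)

Expand[r1 - r2]   (* {0, 0}: both readings give the same vector *)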

I think one place where the notation gets really confusing is evaluating the multiple-gradient at zero (or whatever the expansion point is). Re-reading my earlier posts, I get the impression that F(0) could be interpreted as applying F to the zero vector and then taking the multiple-gradients, but what is intended is to apply the multiple-gradients first, and then evaluate this "derivative"-like thing at the zero vector.


RE: Tensor power series - andydude - 05/22/2008

So now for more examples, well, how about the last example I gave, the exponential factorial. The function I gave was defined as: \( E(z, n) = (n^z, n+1) \) using tuple-notation as opposed to matrix-notation.

Whatever happens with this tensor power series stuff, it should produce the known power series for a function. So for the 2-variable case (actually, this only applies to one component), the power series should be the same as:
\(
\begin{tabular}{rlll}
E(X) & =
E(0, 1) &
+ \left[\frac{\partial E}{\partial n}\right]_{(z, n)=(0, 1)} (n-1) &
+ \left[\frac{\partial^2 E}{\partial n^2}\right]_{(z, n)=(0, 1)} \frac{(n-1)^2}{2!} \\ &
+ \left[\frac{\partial E}{\partial z}\right]_{(z, n)=(0, 1)} z &
+ \left[\frac{\partial^2 E}{\partial z \partial n}\right]_{(z, n)=(0, 1)} z (n-1) &
+ \left[\frac{\partial^3 E}{\partial z \partial n^2}\right]_{(z, n)=(0, 1)} z \frac{(n-1)^2}{2!} \\ &
+ \left[\frac{\partial^2 E}{\partial z^2}\right]_{(z, n)=(0, 1)} \frac{z^2}{2!} &
+ \left[\frac{\partial^3 E}{\partial z^2 \partial n}\right]_{(z, n)=(0, 1)} \frac{z^2}{2!} (n-1) &
+ \left[\frac{\partial^4 E}{\partial z^2 \partial n^2}\right]_{(z, n)=(0, 1)} \frac{z^2}{2!} \frac{(n-1)^2}{2!} \\ &
+ \cdots
\end{tabular}
\)

The first thing to notice is that we can't make a series about (0, 0), because \( 0^0 \) is indeterminate, and the next derivative involves \( 1/0 \), which is infinite. So what we really need is a series about (0, 1), since \( E(0, 1) = (1^0, 1+1) = (1, 2) \), which is finite; unfortunately, this is not a fixed point, so we can't use regular iteration. That's OK, though, because we're just talking about power series. Using the standard term for Taylor series \( (X - X_0)^n \) where each of the X's is a vector, we find that \( (z, n) - (0, 1) = (z, n-1) \), so the tensor power series will look like:
\(
E(z, n) = (1, 2) + \sum_{k=1}^{\infty} \frac{1}{k!}
\left([\nabla_{\otimes k}E(X)]_{X=(0, 1)} \right)*(z, n-1)^{\otimes k}
\)
where \( (*) \) actually represents k tensor contractions.

The second thing to notice is that using Taylor-Puiseux conversion (from my Abel function post), we can actually find the power series for the top part of the vector function \( n^z \) about (0, 1), which is:
\(
n^z = \sum_{k=0}^{\infty} \frac{(n-1)^k}{k!} \sum_{j=0}^{k} \left[ {k \atop j}\right] z^j
\)
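
A quick numerical check of this expansion (a sketch; I am reading the bracket as the signed Stirling numbers of the first kind, StirlingS1 in Mathematica):
Code:
approx = Sum[(n - 1)^k/k! Sum[StirlingS1[k, j] z^j, {j, 0, k}], {k, 0, 12}];
{approx, n^z} /. {z -> 0.3, n -> 1.4}
(* both components come out as roughly 1.10621 *)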

So what do all of the \( [\nabla_{\otimes k}E(X)] \) look like? Well, a simple way to look at it is that each term in the tensor power series is a vector whose components are homogeneous polynomials of degree k. So, for example, the 2nd term will hold all the coefficients of \( z^2 \), \( zn \), and \( n^2 \), since these are all the monomials of degree 2. The 3rd term will hold all the coefficients of \( z^3 \), \( z^2n \), \( zn^2 \), and \( n^3 \), and so on.

Here is the Mathematica code for calculating the multiple-gradients:
Code:
GradientD[expr_List, vars_List, 0] := expr;
GradientD[expr_List, vars_List, 1] := GradientD[expr, vars];
GradientD[expr_List, vars_List, n_] :=
  GradientD[GradientD[expr, vars], vars, n - 1];
GradientD[expr_List, vars_List] :=
  Map[Function[{f}, Map[Function[{x}, D[f, x]], vars]], expr];
and here is the Mathematica code for the tensor power series of the exponential factorial:
Code:
{1, 2} +
  {{0, 0},
   {0, 1}}.{z, n-1} +
  {{{0, 1}, {1, 0}},
   {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/2! +
  {{{{0,  0}, {0, -1}},
    {{0, -1}, {-1, 0}}},
   {{{0,  0}, {0,  0}},
    {{0,  0}, {0,  0}}}}.{z, n-1}.{z, n-1}.{z, n-1}/3! +
  {{{{{0, 0}, {0, 2}},
     {{0, 2}, {2, 2}}},
    {{{0, 2}, {2, 2}},
     {{2, 2}, {2, 0}}}},
   {{{{0, 0}, {0, 0}},
     {{0, 0}, {0, 0}}},
    {{{0, 0}, {0, 0}},
     {{0, 0}, {0, 0}}}}}.{z, n-1}.{z, n-1}.{z, n-1}.{z, n-1}/4!
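
The coefficient tensors above come straight out of GradientD; for example, under the definitions just given:
Code:
GradientD[{n^z, n + 1}, {z, n}, 1] /. {z -> 0, n -> 1}
  = {{0, 0}, {0, 1}}
GradientD[{n^z, n + 1}, {z, n}, 2] /. {z -> 0, n -> 1}
  = {{{0, 1}, {1, 0}}, {{0, 0}, {0, 0}}}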

To show you exactly how Mathematica evaluates these expressions:
Code:
{{0, 0}, {0, 1}}.{z, n-1}
  = {0, n - 1}
and the next term:
Code:
{{{0, 1}, {1, 0}}, {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/2!
  = {{n - 1, z}, {0, 0}}.{z, n-1}/2!
  = {2 (n - 1) z, 0}/2!
  = {(n - 1) z, 0}
and the next term:
Code:
{{{{0,0},{0,-1}},{{0,-1},{-1,0}}},{{{0,0},{0,0}},{{0,0},{0,0}}}}.{z, n-1}.{z, n-1}.{z, n-1}/3!
  = {{{0, 1 - n}, {1 - n, -z}}, {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/3!
  = {{-(n - 1)^2, -2 (n - 1) z}, {0, 0}}.{z, n-1}/3!
  = {-3 (n - 1)^2 z, 0}/3!
  = {-(n - 1)^2 z/2, 0}

Just to wrap up, and take a step back: although this may seem a bit messy, if this is the only way to convert current (univariate) analytic iteration theory to apply to multi-dimensional dynamical systems, then I think it is worth it.

Besides... we have computers now :)

Andrew Robbins


RE: Tensor power series - andydude - 05/22/2008

I just realized I never explained how that function is related to the exponential factorial. The exponential factorial is defined as \( EF(x) = x^{EF(x-1)} \) where \( EF(1) = 1 \), which is a univariate function. To recover this function from the function above, you can first notice the pattern in its iterates:
\(
\begin{tabular}{rl}
E^{\circ 1}(c, 1) = E(c, 1) & = (1, 2) \\
E^{\circ 2}(c, 1) = E(1, 2) & = (2^1, 3) \\
E^{\circ 3}(c, 1) = E(2, 3) & = (3^{2^1}, 4) \\
E^{\circ 4}(c, 1) = E(9, 4) & = (4^{3^{2^1}}, 5)
\end{tabular}
\)
where c could theoretically be anything, since \( 1^c = 1 \). And since I haven't indicated whether we are using 0-based or 1-based vectors, I'm going to use another contraction to get the top value:
\(
EF(x) = \left[{1 \atop 0}\right] \cdot E^{\circ x}\left[{c \atop 1}\right]
\)
Now it should be obvious from this that \( EF(0) = c \), which brings up the question from my Conjectures post: what is EF(0)?

Andrew Robbins


RE: Tensor power series - bo198214 - 05/24/2008

Are there, by the way, any extensions (ideally with uniqueness criteria) of the exponential factorial to real/complex heights out there? I would swear that it can also be captured by the universal uniqueness criterion.


RE: Tensor power series - andydude - 06/04/2008

bo198214 Wrote:Are there, by the way, any extensions (ideally with uniqueness criteria) of the exponential factorial to real/complex heights out there? I would swear that it can also be captured by the universal uniqueness criterion.

I do not believe that such an extension has been looked at before; the only person I know of who has worked on it is Jonathan Sondow, who helped write MathWorld's exponential factorial page, which isn't really about an extension, but about one value.

Andrew Robbins