Tensor power series
I think a nice way to discuss the power series idea where x is a vector is within the context of tensor calculus, or multilinear algebra as it's called now. Roughly speaking, tensors are just multi-dimensional arrays that have additional properties. Here is a brief overview of my understanding of tensors.

\(x^i\) is a contravariant 1-tensor (or vector, or column vector); this means that it transforms contravariantly (whatever that means), and it has one dimension of indices. As a mixed tensor, this is also called a (1,0)-tensor or a \(\binom{1}{0}\)-tensor.

\(x_i\) is a covariant 1-tensor (or covector, or row vector), which means that it is like a function that takes a vector and gives a scalar (0-tensor) as output. As a mixed tensor, this is also called a (0,1)-tensor or a \(\binom{0}{1}\)-tensor.

\(A^j_i\) is a mixed-variance 2-tensor, or a vector-valued function (a matrix), meaning it is both covariant (i) and contravariant (j). This combines the two descriptions given above, because (like a matrix) it is a function that sends a vector (i.e. covariance) to a vector (i.e. contravariance). Although I have not seen anything to support this view of covariance/contravariance, it doesn't seem to contradict any description either. So one way to think of it is that "contravariant" means "gives a vector" and "covariant" means "takes a vector". This is called a 2-tensor, but to distinguish between the kinds of index, it can also be called a (1,1)-tensor.

\(\nabla_i = \partial/\partial x^i\) is a covariant 1-tensor (known as the gradient), but it's different in that it is also an operator, which takes a function and gives a covector. In order to replace the concept of a derivative (or the Jacobian matrix), we will not be using it as an operator directly, but we will be multiplying it with the tensor product, which means it will operate very much like the Jacobian when used on a (1,1)-tensor.

\(0^i\) is a contravariant 1-tensor, because it is like a vector that holds all zeros.

\(y^j = \sum_i A^j_i x^i\) is the application of a 2-tensor function to a 1-tensor, which is actually a form of tensor contraction, or matrix multiplication in this case (but this is only true for linear functions; non-linear functions cannot be written this way). Notice that I don't use the Einstein notation (also known as index notation), because I think it is confusing when you can't see the summation \(\sum_i\). For linear functions, this is the power series of that function. However, what we want to do is generalize this so that it works for non-linear functions as well.
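As a concrete check that the explicit summation is just matrix-vector multiplication, here is a small Python sketch (the matrix, vector, and function name are arbitrary sample choices of mine, not from the thread):

```python
# y^j = sum_i A[j][i] * x[i]: applying a (1,1)-tensor to a vector by
# writing the summation out explicitly, i.e. ordinary matrix-vector
# multiplication.
def apply_linear(A, x):
    return [sum(A[j][i] * x[i] for i in range(len(x))) for j in range(len(A))]

A = [[1, 2],
     [3, 4]]
x = [5, 6]
print(apply_linear(A, x))  # [17, 39]
```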
To illustrate this new derivative, we can denote the Jacobian by the tensor product \(\nabla \otimes f\), so that

is a 1-tensor (a vector), because the function is applied to the vector (and because the only remaining index is j); if the function were not applied to the vector, then it would be a matrix.

is a 2-tensor (a matrix) just like the Jacobian is. But then we notice that

is a 3-tensor (not a matrix), but it is just as useful as a derivative, because we can use it in a power series expansion, though not for linear functions. For linear functions it is zero, but for non-linear functions it can be non-zero (but we also have to write f differently, and not as a summation).
So we have to rewrite the expressions above as:

but what we want is not the derivatives in general, but the derivatives at zero, for the power series expansion, so we substitute X=0:

Now every time we do a tensor contraction of one of these with a vector, we effectively reduce the rank by 1, so the first-derivative tensor contracted with \(X\) once is a 1-tensor (or vector), and the second-derivative tensor contracted with \(X\) twice is also a 1-tensor (or vector), which means they are of the same dimensions, which means we can add them, which means we can use these to form a power series.
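The rank-counting argument can be sketched in Python, using nested lists for tensors (the function name and sample values are mine, not from the thread):

```python
# Contracting a tensor with a vector over its last index removes one index,
# so a rank-3 "second derivative" contracted with X twice leaves a plain
# vector that can be added to the other terms of the series.
def contract_last(t, x):
    if isinstance(t[0], list):
        return [contract_last(s, x) for s in t]
    return sum(a * b for a, b in zip(t, x))

D2 = [[[1, 0], [0, 1]],     # a rank-3 tensor (2 x 2 x 2)
      [[0, 2], [2, 0]]]
X = [1, 2]
v = contract_last(contract_last(D2, X), X)  # rank 3 -> 2 -> 1
print(v)  # [5, 8]: same shape as X, so terms like this can be summed
```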
Before we do that, though, there are still a few things I would like to clear up. First, notation: let's start using Einstein's summation convention (because we need it now), and let
which allows expressing the power series as:
and the Bell "matrix" as:
and the Vandermonde "vector" as:

Secondly, we need an example, like the Mandelbrot set function:

we can see that the value at zero is zero:

which means we have a fixed point. Also, the first derivative is:

and the second derivative is:

and from this it should be clear that the third derivative is zero. Now we can form a power series for this function as follows:

Another example we could try is the logistic map:

which has the power series expansion:

Now for the fun part. This kind of framework would be general enough to express the exponential factorial as an iterated function! This could be expressed as:

and one thing I am curious about is whether regular iteration methods will work on this new power series ring, and whether they could help define the exponential factorial over the real numbers, as opposed to the integers.

Gottfried also brought up a good point. The Bell matrix and Vandermonde vector should be defined first, and well understood, before other iterational things. But this would cause much confusion, because the Vandermonde vector (V) would not even be a tensor! This is because \(X\) would be a (1,0)-tensor, \(X \otimes X\) would be a (2,0)-tensor, and \(X \otimes X \otimes X\) would be a (3,0)-tensor, which means whatever container you use to hold these (V) would not have a single rank (tensors must have a rank); this would have all ranks... very confusing... I must think about this.

Andrew Robbins

PS. I think I forgot a factorial somewhere...
PPS. Update: I fixed the factorials.
At first, I thought you wouldn't need the factorials you usually need in normal power series, because all the coefficients in F' and F'' were less than the factorial, but then I realized that there were multiple coefficients in each tensor. So taking the logistic map as an example again:


as it should, and doing the same thing for the third term:

as it should.

Andrew Robbins
Hi Andrew -

this is a nice collection of work! However, the formalism is too dense for me; I need more examples. I tried to get a basic grip on tensors with some questions in the German newsgroup de.sci.mathematik, but unfortunately my basic questions were answered in a way that still didn't give me the basics. So I think I'll look at my "mathematical tables" to get a first understanding of the structure of the components in the tensors, before I try to learn the formalisms/abbreviations for the mathematical operations between them. (I haven't even gotten a definitive answer yet on how many components a 2-D tensor (the generalization of a matrix) has: n×n? Or, counting the additional upper and lower indices, (2n)×(2n)?)
There's obviously much to learn...

Anyway again thanks for the compilation and extraction of important things. I think it will be much helpful after I got the basics.

Gottfried Helms, Kassel
Usually, most tensors are used for general relativity and other metric-space related activities, so each index runs from 1 to 3 (or 1 to 4 for general relativity). But here, on the other hand, each index runs from 1 to 2, which is the number of variables we are using, or the dimension of the vector space that the big-vector-function is defined over. So for the 3-dimensional case, an (n, m)-tensor would have \(3^{n+m}\) components, and for the 2-dimensional case (which I used earlier), an (n, m)-tensor would have \(2^{n+m}\) components. However, this is still a special case, since tensor spaces are (in general) a Cartesian product (well, tensor product, technically) of vector spaces; so when all of the vector spaces have the same dimension, the above is true, but it is possible to form the tensor product of a 2-D and a 4-D space, though I can't think of any examples of this.
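To make the component count concrete, here is a tiny Python sketch, with nested lists standing in for tensors (the helper names are mine, not from the thread):

```python
# An (n, m)-tensor over a d-dimensional space is a nested array of depth
# n + m with d entries per level, so it has d**(n+m) components in total.
def zero_tensor(d, rank):
    if rank == 0:
        return 0
    return [zero_tensor(d, rank - 1) for _ in range(d)]

def num_components(t):
    return sum(num_components(s) for s in t) if isinstance(t, list) else 1

print(num_components(zero_tensor(3, 2)))  # 9: a (1,1)-tensor in 3 dimensions
print(num_components(zero_tensor(2, 3)))  # 8: a rank-3 tensor in 2 dimensions
```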

Relating matrix notation and tensor notation might also be helpful. In general, matrix multiplication is \(C = AB\) in matrix notation and \(C^i_k = A^i_j B^j_k\) in tensor notation.

Hmm, you said you needed more basics, so I will try and cover the basics. There are two major kinds of tensor multiplication. One is called tensor product (or "vector space" tensor product on MathWorld) and the other is called tensor contraction (same on MathWorld). In some respects, these are analogous to "column-row" matrix multiplication and "row-column" matrix multiplication respectively.

A "column-row" matrix multiplication (a special case of tensor product) can be visualized as:

and a "row-column" matrix multiplication (a special case of tensor contraction) can be visualized as:

So we can see that, in general, the tensor product of an (a, b)-tensor and a (c, d)-tensor will give an (a+c, b+d)-tensor, because none of the indices go away. We can also see that tensor contraction decreases the total rank of a single tensor by 2, so we can think of tensor contraction as the tensor product (putting everything in a single tensor, say A), with an extra step of summing over a repeated index of that tensor, like \(\sum_k A^{ik}_{jk}\) for example. Also, since tensor contraction always requires one subscript and one superscript, the tensor contraction of an (a, b)-tensor and a (c, d)-tensor will give an (a+c-1, b+d-1)-tensor, and the tensor contraction of a single (a, b)-tensor with itself will give an (a-1, b-1)-tensor. And just in case it isn't clear by now, an (a, b)-tensor has a "tensor rank" of (a + b), so some places will refer to tensor contraction as resulting in a tensor of rank (a + b - 2).
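The rank bookkeeping above can be sketched in Python with nested lists (function names and sample values are mine, not from the thread):

```python
# Tensor product: every component of s multiplies all of t, so the ranks add.
def tensor_product(s, t):
    if isinstance(s, list):
        return [tensor_product(si, t) for si in s]
    if isinstance(t, list):
        return [tensor_product(s, ti) for ti in t]
    return s * t

def rank(t):
    r = 0
    while isinstance(t, list):
        t, r = t[0], r + 1
    return r

u = [1, 2]                    # rank 1
M = [[3, 0], [0, 4]]          # rank 2
P = tensor_product(u, M)      # rank 3 = 1 + 2: no indices go away
# Contracting the first two indices of P sums over a repeated index,
# removing two indices at once: rank 3 -> rank 1.
C = [sum(P[i][i][k] for i in range(2)) for k in range(2)]
print(rank(P), rank(C))  # 3 1
```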

While I was thinking about this, I was modeling with the Mathematica Dot operator (see under Properties & Relations), which is a generalization of the dot product and matrix multiplication to tensors. Formally, there is no "single" tensor contraction, because the convention is that whenever you find a repeated index variable (for example: k) used as both a superscript and a subscript, there is a tensor contraction that sums both of those over all possible values of k (for example: 1, 2, 3 for space vectors). However, the Mathematica Dot operator is a special case of tensor contraction that assumes that the inner-most indices are being summed over.

So for complicated tensors (and ) this gives:

which is actually 4 tensor contractions (because it uses 4 repeated indices), but

is only 1 tensor contraction (which uses 1 repeated index), and this is what is meant by the generalized "dot" operator. The only problem with this kind of tensor contraction is that it can only be applied once, and it is really the first case that we need for tensor power series, not the second case. But because the first case is so hard to write, I invented the notation to express this. The reason why I chose this notation, is because it is similar to our iteration notation , which is supposed to be a sort of "power" where composition is the "multiplication". So noticing that it just seemed natural to use this notation. What do you think? Would be a better notation? I think it is hard to find a good notation, because the process of "wrapping" up the x's into a tensor requires (tensor product), but as soon as you try and do tensor contraction with , then it requires (Mathematica dot) with each x separately, I'm really not sure what the best way to write this is. Since tensor notation was designed for these difficulties, it might be best to stay with it.

I think one place where the notation gets really confusing is evaluating the multiple-gradient at zero (or whatever the expansion point is). Re-reading my earlier posts, I get the impression that F(0) could be interpreted as: apply F to the zero vector, then take the multiple-gradients; but what is intended is: apply the multiple-gradients, then evaluate this "derivative"-like thing at the zero vector.
So now for more examples; well, how about the last example I gave, the exponential factorial. The function I gave was defined as \(f(z, n) = (n^z,\ n+1)\), using tuple-notation as opposed to matrix-notation.

Whatever happens with this tensor power series stuff, it should produce the known power series for a function. So for the 2-variable case (actually, this only applies to one component), the power series should be the same as:

The first thing to notice is that we can't make a series about (0, 0), because \(0^0\) is indeterminate, and the next derivative involves \(\ln(0)\), which is infinite. So what we really need is a series about (0, 1), since \(f(0, 1) = (1, 2)\), which is finite; but unfortunately, this is not a fixed point, so we can't use regular iteration. That's OK, though, because we're just talking about power series. Using the standard form of the Taylor series, where each of the X's is a vector, the tensor power series will look like:

where the k-th term actually represents k tensor contractions.

The second thing to notice is that, using Taylor-Puiseux conversion (from my Abel function post), we can actually find the power series for the top part of the vector function about (0, 1), which is:

So what do all of these coefficient tensors look like? Well, a simple way to look at it is that each term in the tensor power series is a vector that contains all polynomials of degree k. So, for example, the 2nd term will be a vector that holds all coefficients of \(z^2\), \(z(n-1)\), and \((n-1)^2\), since these are all monomials of degree 2. The 3rd term will be a vector that holds all coefficients of \(z^3\), \(z^2(n-1)\), \(z(n-1)^2\), and \((n-1)^3\), and so on.
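The degree-k monomials in the two variables z and (n-1) can be listed mechanically; here is a small Python sketch (the function name is mine):

```python
from itertools import combinations_with_replacement

# All degree-k monomials in the variables z and (n-1); the k-th coefficient
# tensor of the series holds one coefficient for each of these (with the
# symmetric entries repeated).
def monomials(k, names=('z', '(n-1)')):
    return ['*'.join(c) for c in combinations_with_replacement(names, k)]

print(monomials(2))       # ['z*z', 'z*(n-1)', '(n-1)*(n-1)']
print(len(monomials(3)))  # 4
```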

Here is the Mathematica code for calculating the multiple-gradients:
GradientD[expr_List, vars_List, 0] := expr;
GradientD[expr_List, vars_List, 1] := GradientD[expr, vars];
GradientD[expr_List, vars_List, n_] :=
  GradientD[GradientD[expr, vars], vars, n - 1];
GradientD[expr_List, vars_List] :=
  Map[Function[{f}, Map[Function[{x}, D[f, x]], vars]], expr];
and here is the Mathematica code for the tensor power series of the exponential factorial:
{1, 2} +
  {{0, 0},
   {0, 1}}.{z, n-1} +
  {{{0, 1}, {1, 0}},
   {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/2! +
  {{{{0,  0}, {0, -1}},
    {{0, -1}, {-1, 0}}},
   {{{0,  0}, {0,  0}},
    {{0,  0}, {0,  0}}}}.{z, n-1}.{z, n-1}.{z, n-1}/3! +
  {{{{{0, 0}, {0, 2}},
     {{0, 2}, {2, 2}}},
    {{{0, 2}, {2, 2}},
     {{2, 2}, {2, 0}}}},
   {{{{0, 0}, {0, 0}},
     {{0, 0}, {0, 0}}},
    {{{0, 0}, {0, 0}},
     {{0, 0}, {0, 0}}}}}.{z, n-1}.{z, n-1}.{z, n-1}.{z, n-1}/4!

To show you exactly how Mathematica evaluates these expressions:
{{0, 0}, {0, 1}}.{z, n-1}
  = {0, n - 1}
and the next term:
{{{0, 1}, {1, 0}}, {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/2!
  = {{n - 1, z}, {0, 0}}.{z, n-1}/2!
  = {2 (n - 1) z, 0}/2!
  = {(n - 1) z, 0}
and the next term:
{{{{0,0},{0,-1}},{{0,-1},{-1,0}}},{{{0,0},{0,0}},{{0,0},{0,0}}}}.{z, n-1}.{z, n-1}.{z, n-1}/3!
  = {{{0, 1 - n}, {1 - n, -z}}, {{0, 0}, {0, 0}}}.{z, n-1}.{z, n-1}/3!
  = {{-(n - 1)^2, -2 (n - 1) z}, {0, 0}}.{z, n-1}/3!
  = {-3 (n - 1)^2 z, 0}/3!
  = {-(n - 1)^2 z/2, 0}
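The same evaluations can be checked numerically in Python; the dot below mirrors how Mathematica's Dot contracts the tensor's last index against the vector (the names are mine, and the sample values for z and n-1 are arbitrary):

```python
from fractions import Fraction
from math import factorial

# Contract a nested-list tensor with a vector over its last index, which is
# exactly what each repeated Dot does above.
def dot(t, x):
    if isinstance(t[0], list):
        return [dot(s, x) for s in t]
    return sum(a * b for a, b in zip(t, x))

def term(tensor, x, k):
    for _ in range(k):
        tensor = dot(tensor, x)
    return [c / Fraction(factorial(k)) for c in tensor]

zv, nm1 = Fraction(1, 4), Fraction(2)   # sample values for z and (n - 1)
x = [zv, nm1]
T2 = [[[0, 1], [1, 0]], [[0, 0], [0, 0]]]
print(term(T2, x, 2))   # equals {(n-1) z, 0} = {1/2, 0} at these values
```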

Just to wrap up, and take a step back: although this may seem a bit messy, if this is the only way to convert current (univariate) analytic iteration theory to apply to multi-dimensional dynamical systems, then I think it is worth it.

Besides... we have computers now. :)

Andrew Robbins
I just realized I never explained how that function is related to the exponential factorial. The exponential factorial is defined as \(EF(n) = n^{EF(n-1)}\) where \(EF(1) = 1\), which is a univariate function. To make this function from the function above, you can first notice the pattern in its iterates:

where c could theoretically be anything, since \(1^c = 1\). And since I haven't indicated whether we are using 0-based or 1-based vectors, I'm going to use another contraction to get the top value:
Now it should be obvious from this that the top component of the iterates gives \(EF\), which begs the question in my Conjectures post: what is EF(0)?
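To see the relationship numerically, here is a quick Python check, assuming my reading of the 2-vector function as f(z, n) = (n^z, n + 1):

```python
# Iterate the assumed 2-vector function f(z, n) = (n**z, n + 1) starting
# from (c, 1); the top component runs through the exponential factorials
# 1, 2, 9, 262144, ... no matter what c is, because 1**c == 1.
def f(v):
    z, n = v
    return (n ** z, n + 1)

v = (5, 1)   # c = 5: an arbitrary starting top value
tops = []
for _ in range(4):
    v = f(v)
    tops.append(v[0])
print(tops)  # [1, 2, 9, 262144]
```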

Andrew Robbins
Are there, by the way, any extensions (ideally with uniqueness criteria) of the exponential factorial to real/complex heights out there? I would swear that it can also be captured by the universal uniqueness criterion.
bo198214 Wrote:Are there, by the way, any extensions (ideally with uniqueness criteria) of the exponential factorial to real/complex heights out there? I would swear that it can also be captured by the universal uniqueness criterion.

I do not believe that such an extension has been looked at before; the only person I know of who has worked on it is Jonathan Sondow, who helped write MathWorld's exponential factorial page, which isn't really about an extension, but about one value.

Andrew Robbins
