05/13/2008, 07:58 AM

I think a nice way to discuss the power series idea where x is a vector is within the context of tensor calculus, or multilinear algebra as it's called now. Roughly speaking, tensors are just multi-dimensional arrays that have additional transformation properties. Here is a brief overview of my understanding of tensors.

$x^i$ is a contravariant 1-tensor (or vector, or column vector); this means that it transforms contravariantly (whatever that means), and it has 1 dimension of indices. As a mixed tensor, this is also called a (1,0)-tensor or a $\binom{1}{0}$-tensor.

$w_i$ is a covariant 1-tensor (or covector, or row vector), which means that it is like a function that takes a vector and gives a scalar (0-tensor) as output. As a mixed tensor, this is also called a (0,1)-tensor or a $\binom{0}{1}$-tensor.
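As a quick numerical sketch of that "takes a vector, gives a scalar" idea (using NumPy; the particular numbers here are made up for illustration):

```python
import numpy as np

# A covector (row vector) acts on a vector (column vector) to give a scalar.
w = np.array([[1.0, 2.0, 3.0]])      # covariant 1-tensor: shape (1, 3)
v = np.array([[4.0], [5.0], [6.0]])  # contravariant 1-tensor: shape (3, 1)

scalar = (w @ v).item()  # 1*4 + 2*5 + 3*6 = 32, a 0-tensor
print(scalar)
```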

$A_i^{\,j}$ is a mixed variance 2-tensor, or a vector valued function (a matrix), meaning it is both covariant (index $i$) and contravariant (index $j$). This combines both of the descriptions above, because (like a matrix) it is a function that takes a vector (i.e. covariance) and gives a vector (i.e. contravariance). Although I have not seen anything to support this view of covariance/contravariance, it doesn't seem to contradict any description either. So one way to think of it is that "contravariant" means "gives a vector" and "covariant" means "takes a vector". This is called a 2-tensor, but to distinguish between the two kinds of index, it can also be called a (1,1)-tensor.
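A small NumPy sketch of a (1,1)-tensor as a matrix that "takes a vector and gives a vector" (the matrix A and vector x below are arbitrary examples):

```python
import numpy as np

# A (1,1)-tensor A_i^j is just a matrix: it "takes a vector" (the covariant
# index i) and "gives a vector" (the contravariant index j).
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
x = np.array([1.0, 1.0])

y = A @ x  # the matrix sends one vector to another vector
print(y)   # [2. 4.]
```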

$\partial_i = \partial/\partial x^i$ is a covariant 1-tensor (known as the gradient), but it's different in that it is also an operator, which takes a scalar function (0-tensor) and gives a covector. In order to replace the concept of a derivative (or the Jacobian matrix) we will not be using it as an operator directly; instead we will combine it with the tensor product, which means it will operate very much like the Jacobian when used on a (1,1)-tensor.
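Here is a rough finite-difference sketch of the gradient as a "scalar function in, covector out" operator (the function f, the sample point, and the step size are all illustrative choices, not anything fixed above):

```python
import numpy as np

# The gradient takes a scalar function f and gives a covector field:
# at each point x, the components (df)_i form a covariant 1-tensor.
def grad(f, x, h=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)  # central difference in x^i
    return g

f = lambda x: x[0]**2 + 3*x[1]  # an example scalar (0-tensor valued) function
print(grad(f, [1.0, 2.0]))      # approximately [2., 3.]
```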

$0^i$ is a contravariant 1-tensor, because it is like a vector that holds all zeros.

$y^j = \sum_i A_i^{\,j} x^i$ is the application of a 2-tensor function to a 1-tensor, which in this case is actually a form of tensor contraction, or matrix multiplication (but this is only true for linear functions; non-linear functions cannot be written this way). Notice that I don't use Einstein notation (also known as index notation), because I think it is confusing when you can't see the summation sign $\sum$. For linear functions, this is the power series of that function. However, what we want to do is generalize this so that it works for non-linear functions as well.
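The contraction with the summation written out explicitly, checked against ordinary matrix multiplication and against Einstein-style notation via NumPy's einsum (arbitrary example arrays):

```python
import numpy as np

# The contraction y^j = sum_i A_i^j x^i with the sum spelled out.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

y = np.zeros(2)
for j in range(2):
    for i in range(2):
        y[j] += A[j, i] * x[i]  # the sum over i is the contraction

print(y)                                            # [17. 39.]
print(np.allclose(y, A @ x))                        # same as matrix multiply
print(np.allclose(y, np.einsum('ji,i->j', A, x)))   # same in index notation
```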
