Tensor power series | andydude (Long Time Fellow; Posts: 509, Threads: 44, Joined: Aug 2007) | 05/22/2008, 12:58 AM

Usually tensors are used for general relativity and other metric-space-related work, so each index runs from 1 to 3 (or 1 to 4 for general relativity). Here, on the other hand, each index runs from 1 to 2, which is the number of variables we are using, or equivalently the dimension of the vector space over which the big vector function is defined. So for the 3-dimensional case an $(n, m)$-tensor has $3^{n+m}$ components, and for the 2-dimensional case (which I used earlier) an $(n, m)$-tensor has $2^{n+m}$ components. However, this is still a special case, since a tensor space is in general a tensor product of vector spaces; when all of the factor spaces have the same dimension the count above holds, but it is also possible to form the tensor product of a 2-D and a 4-D space, although I can't think of any examples of this.

Relating matrix notation and tensor notation might also be helpful. In general, matrix multiplication is $C = AB$ in matrix notation and $C^i{}_j = A^i{}_k B^k{}_j$ (with the repeated index $k$ summed) in tensor notation (see Einstein summation on MathWorld).

Hmm, you said you needed more basics, so I will try to cover the basics. There are two major kinds of tensor multiplication. One is called the tensor product (the "vector space" tensor product on MathWorld) and the other is called tensor contraction (same on MathWorld). In some respects, these are analogous to "column-row" matrix multiplication and "row-column" matrix multiplication, respectively. A "column-row" multiplication (a special case of the tensor product) multiplies an $n \times 1$ column by a $1 \times n$ row and produces a full $n \times n$ matrix, with no indices summed; a "row-column" multiplication (a special case of tensor contraction) multiplies a $1 \times n$ row by an $n \times 1$ column and produces a single number, with one index pair summed away. So we can see that, in general, the tensor product of an $(a, b)$-tensor and a $(c, d)$-tensor gives an $(a+c, b+d)$-tensor, because none of the indices go away.
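The component counts and the two kinds of multiplication above can be checked numerically. Here is a small NumPy sketch of my own (not from the original post): it verifies that an $(n, m)$-tensor over a $d$-dimensional space has $d^{n+m}$ components, and that the outer ("column-row") product keeps all indices while the inner ("row-column") product sums a pair away.

```python
import numpy as np

# An (n, m)-tensor over a d-dimensional space has one index per slot,
# each running over d values, so it has d**(n+m) components in total.
d, n, m = 2, 1, 2                       # the 2-D case used in the thread
T = np.zeros((d,) * (n + m))            # shape (2, 2, 2)
assert T.size == d ** (n + m)           # 2**3 = 8 components

# "Column-row" multiplication = outer product: no indices are summed,
# so a rank-1 tensor times a rank-1 tensor gives a rank-2 tensor (a matrix).
col = np.array([1.0, 2.0])
row = np.array([3.0, 4.0])
outer = np.tensordot(col, row, axes=0)  # shape (2, 2)

# "Row-column" multiplication = inner product: one index pair is summed
# away, leaving a rank-0 tensor (a single number).
inner = np.tensordot(row, col, axes=1)  # 3*1 + 4*2 = 11
print(outer.shape, inner)
```

The `axes=0` versus `axes=1` argument to `np.tensordot` is exactly the distinction between "no indices go away" (tensor product) and "one index pair goes away" (contraction).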
We can also see that a tensor contraction decreases the total rank of a single tensor by 2, so we can think of the contraction of two tensors as their tensor product (putting everything into a single tensor, say $A$), followed by an extra step of summing one repeated index of that tensor, for example $B^i = A^{ij}{}_j$. Also, since a tensor contraction always requires one subscript and one superscript, the tensor contraction of an $(a, b)$-tensor and a $(c, d)$-tensor gives an $(a+c-1, b+d-1)$-tensor, and the tensor contraction of a single $(a, b)$-tensor with itself gives an $(a-1, b-1)$-tensor. And just in case it isn't clear by now, an $(a, b)$-tensor has a "tensor rank" of $(a + b)$, so some places will refer to tensor contraction as resulting in a tensor of rank $(a + b - 2)$.

While I was thinking about this, I was modeling with the Mathematica Dot operator (see under Properties & Relations), which is a generalization of the dot product and matrix multiplication to tensors. Formally, there is no "single" tensor contraction, because the convention is that whenever you find a repeated index variable (for example $k$) used as both a superscript and a subscript, there is a tensor contraction that sums over all possible values of $k$ (for example 1, 2, 3 for space vectors). The Mathematica Dot operator, however, is the special case of tensor contraction that sums over the innermost indices only: the last index of the left tensor against the first index of the right tensor. So for complicated tensors, writing out every repeated index can amount to several tensor contractions at once (one per repeated index), whereas the Dot of two tensors is always exactly 1 tensor contraction (using 1 repeated index), and this is what is meant by the generalized "dot" operator. The only problem with this kind of tensor contraction is that it can only be applied once, and it is really the many-contraction case that we need for tensor power series, not the single-contraction case. But because that case is so hard to write, I invented a notation to express it.
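The rank arithmetic and the "Dot is one contraction" point can both be illustrated with NumPy, where `np.tensordot` with `axes=1` behaves like Mathematica's Dot and `np.einsum` plays the role of full index notation. This is my own sketch with made-up random tensors, not an example from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 2, 2))   # a tensor of total rank 3
B = rng.random((2, 2))      # a tensor of total rank 2

# The generalized "dot" (like Mathematica's Dot) contracts only the
# innermost pair: the last index of A with the first index of B.
# Resulting rank: 3 + 2 - 2 = 3.
dot_AB = np.tensordot(A, B, axes=1)
assert dot_AB.shape == (2, 2, 2)

# The same single contraction in index notation: the repeated index k
# is summed, all other indices survive.
einsum_AB = np.einsum('ijk,kl->ijl', A, B)
assert np.allclose(dot_AB, einsum_AB)

# Contracting a single tensor with itself over one index pair drops its
# total rank by 2: here a rank-3 tensor becomes a rank-1 tensor.
traced = np.einsum('ijj->i', A)
assert traced.shape == (2,)
```

In full index notation a single `einsum` string with several repeated indices performs several contractions at once, which is exactly the many-contraction case a tensor power series needs and which a single Dot cannot express.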
The reason why I chose this notation is that it is similar to our iteration notation $f^{\circ n}$, which is a sort of "power" where composition plays the role of multiplication; noticing that, it just seemed natural to use this notation. What do you think? Would a different notation be better? I think it is hard to find a good notation, because the process of "wrapping up" the $x$'s into a tensor requires $\otimes$ (the tensor product), but as soon as you try to do tensor contraction with the coefficient tensor, it requires the Mathematica Dot with each $x$ separately. I'm really not sure what the best way to write this is. Since tensor notation was designed for these difficulties, it might be best to stay with it.

I think one place where the notation gets really confusing is evaluating the multiple gradient at zero (or whatever the expansion point is). Re-reading my earlier posts, I get the impression that $F(0)$ could be interpreted as: apply $F$ to the zero vector, then take the multiple gradients. But what is intended is the opposite order: apply the multiple gradients to $F$, then evaluate this "derivative"-like thing at the zero vector.
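The order of operations at the end (differentiate first, then evaluate at the expansion point) is what makes the series work: the evaluated gradients are constant tensors, each contracted with one copy of $x$. A minimal NumPy sketch under my own assumptions, with hypothetical coefficient tensors `J` and `H` standing in for the first and second multiple gradients of some $F$ evaluated at 0:

```python
import numpy as np

# Hypothetical constant coefficient tensors (my toy example): J plays
# the role of the gradient of F evaluated at 0, H the second gradient.
# The gradients are taken first, THEN evaluated at the expansion point,
# which is why they are plain constant tensors here.
J = np.array([[1.0, 2.0],
              [0.0, 1.0]])                 # (1,1)-tensor
H = np.zeros((2, 2, 2)); H[0, 0, 0] = 2.0  # (1,2)-tensor

def F(x):
    # Tensor power series truncated at degree 2:
    #   F_i(x) = J_ij x_j + (1/2) H_ijk x_j x_k
    return np.einsum('ij,j->i', J, x) + 0.5 * np.einsum('ijk,j,k->i', H, x, x)

# The degree-2 term written as repeated Mathematica-style Dots: contract
# H with one copy of x, then contract the result with x again.
x = np.array([0.5, -1.0])
term2 = 0.5 * np.tensordot(np.tensordot(H, x, axes=1), x, axes=1)
assert np.allclose(term2, 0.5 * np.einsum('ijk,j,k->i', H, x, x))
```

The two `tensordot` calls are exactly the "Dot with each $x$ separately" step described above, while the single `einsum` string is the equivalent full index-notation expression.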


