Equality between vectors and tensors of different rank and dimension

Discussion in 'Physics & Math' started by Pete, Feb 21, 2010.

  1. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    I'm putting together a small javascript program that includes a vector class (may be generalised to a tensor class, but probably not). It's not important to the functioning of the program, but just out of curiosity...
    Is the following equality true or false?

    \(\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}\)

    Similarly, what about comparing tensors:

    \(\begin{pmatrix}1 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}\)

    True or false?
     
  3. AlphaNumeric Fully ionized Registered Senior Member

    Messages:
    6,702
    You can only equate tensors if they have the same rank and their indices range over the same values. If not then they cannot be equal.
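Since the OP is building a JavaScript vector class, here is a minimal sketch of an equality check that follows this rule: vectors of different dimension compare as unequal, full stop. The class and method names are hypothetical, not from the OP's actual program.

```javascript
// Hypothetical minimal vector class illustrating the rule above:
// vectors of different dimension are never equal.
class Vec {
  constructor(components) {
    this.components = components; // e.g. [1, 2]
  }

  equals(other) {
    // Different dimension: not equal, regardless of components.
    if (this.components.length !== other.components.length) {
      return false;
    }
    return this.components.every((c, i) => c === other.components[i]);
  }
}
```

With this, `new Vec([1, 2]).equals(new Vec([1, 2, 0]))` is `false`, matching the answer to the first question in the thread.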

    You can construct isomorphisms between tensors of different rank, in that sometimes you can write the same mathematical structure in two different ways, in terms of different tensors, yet have them make the same predictions. To equate them would still be wrong, but I wouldn't be surprised if plenty of textbooks or papers are a bit fast and loose with 'equal' and 'equivalent'. Your examples are like that: you can embed a 2 dimensional vector into a 3 dimensional one by virtue of the fact that \(\mathbb{R}^{2} \subset \mathbb{R}^{3}\), and this generalises to matrix groups and more complicated vector spaces, but the vectors are still not equal.
     
  5. prometheus viva voce! Registered Senior Member

    Messages:
    2,045
    You can relate a tensor to another of different dimension but the same rank by using a pullback (or a pushforward, I guess). I almost always think of this in terms of a metric tensor, so I'll use that as an example: suppose I have a map \(X^\mu\) that embeds some lower dimensional submanifold into the bulk. The pullback of the bulk metric (the induced metric) is \(\gamma_{a b} = \partial_a X^\mu \partial_b X^\nu g_{\mu \nu}\). The a and b indices run over the dimensions of the submanifold and the Greek indices over the bulk manifold, so the map has as many components as the bulk has dimensions. Summation over repeated indices is implied.

    The long and the short of this is that you can definitely relate the vectors in your first example by a pullback.
     
  7. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Thanks Alpha, that's what I needed.
    Thank you too, Prometheus, although what you said went over my head. I guess I won't be modelling tensors any time soon.
     
  8. prometheus viva voce! Registered Senior Member

    Messages:
    2,045
    Let's try to make it a bit clearer with an example. Suppose you have a three dimensional vector that you want to reduce to a two dimensional one, as in your first example. If I just want to look at the x-y plane at some constant value of z, then the map I would choose is \(X^x = x \quad X^y = y \quad X^z = \mathrm{Constant}\). Since a vector has only one index, the pullback is \( v'_a = \left(\partial_a X^i\right) v_i\), and as we can choose the x and y coordinates to be shared between the 2 and 3 dimensional systems, life is simple: the only contributions to the x and y components of the 2-vector are the x and y components of the 3-vector respectively, since all other possible terms are killed by the derivative. It hopefully won't take too much thought to convince yourself that a vector \(v = \begin{pmatrix}v_x \\ v_y \\ v_z\end{pmatrix}\) will be pulled back to simply \(v' = \begin{pmatrix}v_x \\ v_y\end{pmatrix}\).

    It might make it easier to think about it in this way: the formula \( v'_a =\left( \partial_a X^i \right)v_i\) gives you the values of v' component by component. If you want to work out \(v'_x\), for example, you just work out \(\left(\partial_x X^i\right) v_i\) with i summed over. The derivative with respect to x makes all contributions other than that of \(v_x\) vanish.
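In JavaScript terms (for the OP's vector class), the pullback \(v'_a = (\partial_a X^i) v_i\) for this constant-z map is just a matrix-vector contraction. The Jacobian below is hand-coded as a hypothetical example; rows are the submanifold coordinates (x, y), columns the bulk coordinates (x, y, z).

```javascript
// Jacobian ∂_a X^i of the map X^x = x, X^y = y, X^z = constant.
// Since z is constant, its column is zero in every row.
const jacobian = [
  [1, 0, 0], // ∂_x X^x = 1, ∂_x X^y = 0, ∂_x X^z = 0
  [0, 1, 0], // ∂_y X^x = 0, ∂_y X^y = 1, ∂_y X^z = 0
];

// Contract the Jacobian with a bulk vector: sum over the repeated index i.
function pullback(jac, v) {
  return jac.map(row => row.reduce((sum, d, i) => sum + d * v[i], 0));
}
```

For example, `pullback(jacobian, [1, 2, 5])` gives `[1, 2]`: the z component is killed, just as the derivative kills it in the index formula.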
     
  9. AlphaNumeric Fully ionized Registered Senior Member

    Messages:
    6,702
    I think perhaps an example of Prom's stuff would help. In general relativity you can define a volume of a region of space by the metric, so say \(g_{\mu\nu}\) tells me about a 3 dimensional region. But suppose I'm not interested in the entire space, I just want to consider a sheet of paper, which is 2 dimensional, in that space. Obviously \(g_{\mu\nu}\) still contains that information but how do I 'pull' it out? By using the 'pull back' Prom describes. If you know where the sheet is in space you can pull \(g_{\mu\nu}\) down onto the sheet using Prom's method and you have reduced a 3 dimensional setup to a 2 dimensional one.

    This is a dynamical thing, as it involves derivatives, and you can construct equations of motion using it (which is why Prom and I know about it); your examples are a lot simpler. Since \(\mathbb{R}^{2} \subset \mathbb{R}^{3}\) you can define the map \(\varphi : (a,b) \to (a,b,0)\). This is a 'nice' map because it's a homomorphism: I can either add two vectors, (a,b)+(x,y) = (a+x,b+y), and then map the sum to (a+x,b+y,0), or I can map them to (a,b,0) and (x,y,0) and then add those to get (a+x,b+y,0), with the same result either way. It's not an isomorphism, as there are elements of \(\mathbb{R}^{3}\) with no preimage in \(\mathbb{R}^{2}\): what maps to (a,b,1)?
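The embedding map and the homomorphism property are easy to check directly in JavaScript. This sketch just restates the post's argument in code; the function names are my own.

```javascript
// The embedding φ : (a, b) → (a, b, 0) described above.
const phi = ([a, b]) => [a, b, 0];

// Componentwise vector addition (assumes equal lengths).
const add = (u, v) => u.map((c, i) => c + v[i]);

// Homomorphism check: map-then-add equals add-then-map.
const u = [1, 2], w = [3, 4];
const addThenMap = phi(add(u, w));       // [4, 6, 0]
const mapThenAdd = add(phi(u), phi(w));  // [4, 6, 0]
```

Both orders give `[4, 6, 0]`, but nothing in \(\mathbb{R}^{2}\) maps to a vector with a nonzero third component, which is why φ is not an isomorphism.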
     
