(SR) So, what's a Tensor, for Chrissake?

Discussion in 'Physics & Math' started by QuarkHead, Nov 24, 2007.

  1. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    This is intended as a rough guide to what a tensor is, rather than the usual descriptions of what it does. But a couple of disclaimers first. First, I am not a physicist, so I may not be able to give much insight into applications. Second, I am human; I make mistakes, and have possibly misunderstood some stuff - please correct me.

    I will assume we all know what's meant by a vector space. The particular properties of such spaces that I will need here, I will explain as we go. If you're not sure what a vector space is, you can think of arrows in the plane (not recommended), or you can accept that the set R of real numbers is a vector space provided only that addition and multiplication are defined.

    OK. Let V be a vector space over some scalar field, say F, and let v be an arbitrary vector in V. I define these vectors as

    \(v=\sum_i \alpha^i e_i\), where the \(\alpha^i \in F\), and where the set \(\{e_i\} \subset V\) is called a basis for V. (We shan't need this equality for now, but we shall when we come to do some notational housework, so keep it in mind.)

    Let's just accept as a fact that the cardinality of the set \(\{e_i\}\) defines the dimension of our space V, and for convenience we will say this is finite. Then we see straight away that the vector v depends only on the scalars \(\alpha^i\), where \(i = 1, 2, \ldots, n\).
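    If it helps to see this concretely, here is a minimal numerical sketch in Python (my own made-up example, assuming V = R^2 over F = R; the names e1, e2, v are mine): it recovers the coefficients \(\alpha^i\) by solving a linear system.

    import numpy as np

    # A basis for R^2: any two linearly independent vectors will do.
    e1 = np.array([1.0, 0.0])
    e2 = np.array([1.0, 1.0])
    E = np.column_stack([e1, e2])   # basis vectors as the columns of a matrix

    v = np.array([3.0, 5.0])        # an arbitrary vector

    # Solve E @ alpha = v for the coefficients alpha^i in v = sum_i alpha^i e_i.
    alpha = np.linalg.solve(E, v)
    print(alpha)                    # [-2.  5.], i.e. v = -2*e1 + 5*e2
    assert np.allclose(alpha[0]*e1 + alpha[1]*e2, v)

    On a different basis the same v gets different coefficients, which is exactly why the \(\alpha^i\) only pin down v once the basis is fixed.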

    I will say that, to any vector space V, I can associate linear maps of the form \(f :V \rightarrow W\) and \(\varphi : V \rightarrow F\). I'll explain linearity in a while, but note this:

    The map \(f\) above is technically known as an operator, or linear transformation (vector to vector) and the \(\varphi\) is called a linear functional (vector to scalar); I will use the generic term "linear map", if that's OK with you. You will see the virtue of this in due course.

    So consider the map \(\varphi : V \rightarrow F,\; \varphi(v) =\alpha\). As there is no obvious way to specify the image of all v under \(\varphi\), we must conclude there is a multiplicity of such maps, one for each v in V, in fact. We can gather these maps into a set, and easily show (Exercise!) that this set satisfies the axioms of a vector space (closed under map addition and scalar multiplication, etc.).

    Let's call this vector space the dual space to V, and denote it by V*, the space of all maps from V to F, or, if you prefer, \(V^*: V \to F; \; \varphi, \; \psi, .... \in V^*\).
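    A small numerical sketch of the closure claim (the Exercise!), assuming we represent a functional on R^n by a row vector paired with vectors via the dot product; this concrete representation is my addition, not part of the abstract definition.

    import numpy as np

    # Two linear functionals on R^2, represented as row vectors.
    phi = np.array([1.0, -2.0])
    psi = np.array([0.5, 3.0])
    v = np.array([4.0, 1.0])        # a test vector

    print(phi @ v)                  # phi(v) = 1*4 + (-2)*1 = 2.0, a scalar

    # Map addition and scalar multiplication give another row vector,
    # hence another linear functional: V* is closed under both.
    chi = 2.0 * phi + psi
    assert np.isclose(chi @ v, 2.0 * (phi @ v) + (psi @ v))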

    OK, that's the foreplay done, let's get stuck in (whoops, sorry!). But later for that.
     
  3. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    Quark---I changed this thread to an alpha thread. Thanks for starting this


     
  5. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    A few things:

    You should mention that n is the dimension of V. A good way to understand this is to imagine a table---if you are moving around the top of a table, you can go in two directions, up and down, or left and right. The number of independent directions that you can move in gives you the dimension of the space. So we live in three dimensions, and we can move in three directions---up/down, left/right, and back/forward.
     
  7. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    Also, can you clarify this statement:

    I think I know what ``image'' means, but I'm not sure...
     
  8. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Yeah, OK, you are right, but I didn't really want to go deep into that, as it's not really relevant here. But what is relevant, and maybe I should have emphasized, is that although there may be an infinite number of elements (vectors) in my space V, provided only that they can all be expressed as linear combinations of a finite set of basis vectors, we will agree that the space is finite-dimensional. It's subtle perhaps, but, as per your example, completely intuitive.

    EDIT: OK, I missed something out, as it didn't seem relevant at the time. The cardinality of the set \(\{e_i\}\) defines the dimension of a space iff the set is linearly independent, that is, iff no \(e_j \in \{e_i\}\) can be written as a linear combination of the set \(\{e_i\}_{-j}\) (that is, the set \(\{e_i\}\) with \(e_j\) excluded). There's a small numerical check of this after the next paragraph.
    OK, I know you do know. Let f: X --> Y be a function on sets. Then one says that X is the domain of f, and Y is the codomain. The image f(x) of x under f is an element of the codomain, here Y. But note that there may well be some y in Y that is not the image of any x in X under f, and there may well be some y in Y that is the image of several different x's in X. But I don't really think it's relevant here; not sure, see how we evolve.
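    Returning to the EDIT above: here is a quick numerical check of linear independence, a sketch assuming V = R^3 (the vectors are my example; the full-rank criterion is the standard one).

    import numpy as np

    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 0.0])
    e3 = np.array([1.0, 1.0, 0.0])   # e3 = e1 + e2, so NOT independent

    # The vectors are linearly independent iff the matrix with them as
    # columns has full rank.
    E = np.column_stack([e1, e2, e3])
    print(np.linalg.matrix_rank(E))  # 2, not 3: {e1, e2, e3} is no basis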
     
    Last edited: Nov 24, 2007
  9. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    you assume much




    Got it. In high school I think I remember Y being called the ``range'' of X. Is this right?

    Maybe an example, and you can show me where I'm wrong.

    Let f be sin(x), X is the reals and Y is the reals on [-1,1]. So sin(x) maps some real number to the interval [-1,1]. Right? Could I write

    \(\sin(x):\mathbb{R}\rightarrow \mathbb{R}\) ???

    Or do I have to be more specific about the codomain?

    (I will ask all of these stupid questions here, because the post-doc I work with always tells me I don't know enough math!)
     
  10. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Um... For some function f: X --> Y, Y is the codomain of f, that is the set that f(x) "lives in". The range of f is that subset of the codomain consisting of all the f(x) for actual x in X. The image of a particular x under f is the element f(x) of the range.
    See? I told you you knew it! Here the domain is \(\mathbb{R}\), the codomain is also \(\mathbb{R}\). But the range of sin(x) is [-1, 1], as you say, which is a subset of R. The image of x in \(\mathbb{R}\) under sine will be \(\sin x \in [-1,1] \subset \mathbb{R}\).
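    If you like, a tiny numerical illustration of the codomain/range distinction (just a finite sample of the domain, so the extremes are only approximate):

    import numpy as np

    xs = np.linspace(-10.0, 10.0, 100001)  # a sample of the domain R
    images = np.sin(xs)                    # images sin(x) in the codomain R
    # Every image lands in the range [-1, 1], a proper subset of R.
    print(images.min(), images.max())      # approx -1.0 and 1.0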

    Hey look, none of this really impacts on the thrust of this thread, so I suggest we leave it for now. But, I'm not sure I explained the dual space very well. I'm in the process of getting drunk, so let's leave it for now.
     
    Last edited: Nov 25, 2007
  11. Communist Hamster Cricetulus griseus leninus Valued Senior Member

    Messages:
    3,026
    Slightly off topic, but is University Level (BSc) anything like this? If I can't handle this kind of maths, should I give up and do chemistry instead?
     
  12. superluminal I am MalcomR Valued Senior Member

    Messages:
    10,876
    In BSc electrical engineering we generally venture into differential and integral calculus (with analytic geometry), series, differential equations, matrices and vector math. Tensors and more advanced math, I was told, were reserved for math majors and people on a PhD track in physics or other sciences.

    So I suppose it depends on what is required to be a competent chemist?
     
  13. przyk squishy Valued Senior Member

    Messages:
    3,203
    Well where I study, every maths and physics undergraduate sees the general definition of a vector space, basis, and dimension in first-year linear algebra. The dual space and tensors are second-year material. You're not going to be expected to know all this stuff right from day one - QuarkHead is rushing through stuff that I personally learned over a period of months.
     
  14. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    OK, if you think I am pushing on too fast, just say, anyone.

    Let me make a general comment first: I guess I inherited my taste for abstract formalism from my tutor, and, as I don't really use too much mathematics in my real life, I can afford that luxury.

    But, I may be able to justify this approach. Suppose I declare there is a closed binary operation \(+: R \times R \to R\) such that +(1, 2) = 1 + 2 = 3; you would think me a pretentious oaf, right? But this formulation gives us some crucial general information that may help us understand less prosaic things. Here, for example, we get a rough idea what a binary operation is, we see what it means for this operation to be closed, and we see that a binary operation in some sense requires two copies of R which, when "stuck together" by the product \(\times\), provide us with elements of the form \((a, b) \in R \times R\), elements in the domain of +. As it happens, we shall need precisely this idea in what follows, and we will most certainly not be talking 1 + 2 = 3! As I am short of time tonight, let me return to the issue of the dual space V* of V.

    In order to make what (hopefully) is to follow clearer, I'm going to play a sneaky trick, and change notation slightly. For which I apologize, as this is never kind.

    For each vector \( v \in V\), where V is defined over the field R, I define an element \(\varphi_v \in V^*\) with the property that \(\varphi_v(w) \in R\) for all v, w in V. \(\varphi_v\) is referred to as the dual to the vector v in V.

    These dual vectors I shall from now on refer to as covectors.

    Oh, you will sometimes see these called 1-forms; I should avoid that terminology if I were you, as some writers call 1-forms covectors, as we just did, while others call a 1-form a field of covectors. This is just one of a few areas in this subject where the terminology is highly muddled.

    I hope the above is a little less confusing. Soon, all will become quite clear, I think.
     
  15. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    OK, what have we got so far? We have a space \(V^*\) of linear maps \( V^*: V \to R, \;\; \varphi_v(w) = a \in R\) for all v, w in V. Linear maps are defined as follows:

    For some \(\alpha \in R\) and \(v, \; w \in V\), the map \(f\) is said to be linear if

    \(f(v+w) = f(v) + f(w)\) and \(f(\alpha v)= \alpha f(v)\)

    A map \(g: V \times V \to R \) is said to be bilinear if, for all \(\alpha, \; \beta \in R\) and \( u,\;v,\;w \in V\), and remembering that the elements in \(V \times V\) are of the form (u, v),

    \(g(\alpha u + \beta v,\; w) = \alpha g(u, w) + \beta g(v, w)\) and

    \( g(u, \;\alpha v + \beta w) = \alpha g(u, v) + \beta g(u, w)\).

    That is, linear in each argument taken independently of the other.
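    A quick numerical check of these two conditions, under the assumption (mine, for illustration) that V = R^3 and that g is built from a matrix A as g(u, w) = u^T A w; any such A gives a bilinear map.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))

    def g(u, w):
        # A bilinear map R^3 x R^3 -> R defined via the matrix A.
        return u @ A @ w

    u, v, w = rng.standard_normal((3, 3))  # three random vectors
    a, b = 2.0, -1.5
    # Linearity in each argument, the other held fixed.
    assert np.isclose(g(a*u + b*v, w), a*g(u, w) + b*g(v, w))
    assert np.isclose(g(w, a*u + b*v), a*g(w, u) + b*g(w, v))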

    So I defined V* as the space of linear maps V to R. I now define a bilinear map according to the definition above \(g: V \times V \to R\) with the property that

    \(g(u, v) = g(v, u)\) (only true for real spaces!)

    \(g(v,v) \ge 0\), and \(g(v,v) = 0 \Rightarrow v = 0\)

    I will call the bilinear form g(u, v) the inner product of u and v. And if I choose to think of this informally as the projection of u along v, I may say it defines the "angle" between u and v. This also allows me to think of the projection of v along itself, that is, g(v, v), as the square of the "length" of v. Its square root \(\sqrt{g(v,v)}\) is called the "norm" of v and written ||v||. These are of course measurements of a kind.

    Now not all spaces admit of an inner product, but where they do, they are called inner product spaces, and these are but one example of metric spaces. This will be of some slight interest soon, I hope.
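    A concrete instance, assuming the standard dot product on R^2 as our g (one inner product among many, so this is an illustration, not the general case):

    import numpy as np

    u = np.array([3.0, 4.0])
    v = np.array([1.0, 0.0])

    print(np.dot(u, v))                    # g(u, v) = 3.0
    print(np.sqrt(np.dot(u, u)))           # ||u|| = sqrt(g(u, u)) = 5.0
    assert np.isclose(np.linalg.norm(u), np.sqrt(np.dot(u, u)))
    assert np.isclose(np.dot(u, v), np.dot(v, u))  # symmetry (real spaces)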

    I guess I'll pause here for a while, but don't worry (if you're still awake) - we're in sight of our quarry!
     
  16. shalayka Cows are special too. Registered Senior Member

    Messages:
    201
    This is deadly! Thanks.
     
  17. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    How about defining tensors for modules over commutative rings with identity? One particular algebraic property of linear spaces is of no consequence in the definition of tensor.
     
  18. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Well, you are welcome to do that, once we have established the simplest possible definition of a tensor space. You may disagree with my approach, and it's probable you know more than I do, but I started down what I believe to be the most intuitive path. But I don't agree that this approach "is of no consequence", especially as an heuristic exercise.

    Let me also say this, for general consumption; a vector space, as we defined it, is an example of a module over a commutative ring, with identity. To thus extend the definition of a tensor and the space it lives in would merely be to generalize the concept we are close to defining for the simplest possible case. You are welcome to disagree now, or generalize later.
     
    Last edited: Nov 26, 2007
  19. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    I think that QuarkHead should finish his exposition in the manner which he intended, and others may expound/improve upon this subject when QuarkHead is finished. Until then, he is the teacher and we are the students.
     
  20. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Teacher? Me? Ha! Anyway, let's muddle on

    This is the crucial post, so let's take it slowly. Consider the vector \(\varphi_v \in V^*\). As a map it is a linear map. We know, by the vector space axioms, that the permitted binary operations on V*, i.e. vector addition and scalar multiplication, produce a new vector, which will also be a linear map, another element in \(V^*: V \to R\). Let's see that in grisly detail.

    Let \(\varphi_u = \sum_i \alpha_i\epsilon^i\) and \(\varphi_v = \sum_j\beta_j \epsilon^j\) be arbitrary elements in V*, where the \( \alpha_i,\; \beta_j\) are scalar, and the \(\epsilon^i,\;\epsilon^j\) are basis vectors and i, j run over the dimension of the space V*. Then since scalar addition takes the obvious form, and our indices are meaningless labels, we will have (provided we remain on the same basis) that

    \(\varphi_u + \varphi_v = \sum_k\gamma_k\epsilon^k = \varphi_w\), where \(\gamma_k = \alpha_k + \beta_k\).

    I want to come up with a new sort of space, i.e. a sort of de-luxe space of vector-like objects, so it seems that I will require a new binary operation on V* that, for example, will take two such linear maps and yield a bilinear map.

    So I define the binary operation \(\otimes: V^* \times V^* \to V^* \otimes V^*, \;\;\;\otimes(\varphi_u, \varphi_v) = \varphi_u \otimes \varphi_v \) with the property that, for all u, v, w, x in V

    \( V^* \otimes V^*: V \times V \to R,\;\;\;\varphi_u \otimes \varphi_v(w,x) = \varphi_u(w)\varphi_v(x)\) and with the following bilinear relations;

    \((\varphi_u + \varphi_v) \otimes \varphi_w = \varphi_u \otimes \varphi_w + \varphi_v \otimes \varphi_w\) and

    \(\varphi_u \otimes(\varphi_v + \varphi_w) = \varphi_u\otimes \varphi_v + \varphi_u\otimes \varphi_w\) and

    \(\alpha(\varphi_u\otimes \varphi_v) = (\alpha \varphi_u) \otimes \varphi_v = \varphi_u \otimes (\alpha \varphi_v)\)

    This operation is referred to as the tensor product of elements in V*, and I will therefore call the space \(V^* \otimes V^*\) of bilinear maps \( V^* \otimes V^*: V \times V \to R\) a tensor space whose elements of the form \(\varphi_u \otimes \varphi_v\) are tensors. Yay!

    Let's see: \(\varphi_u \otimes \varphi_v = \sum_i\sum_j\alpha_i\beta_j\epsilon^i\otimes \epsilon ^j = \sum_{i,j}\alpha_{ij}\epsilon^i\otimes \epsilon^j\).

    So that, whereas the covectors depend on the n (say) quantities indexed by the i (or j, or k), the tensor just described may depend on the \(n^2\) quantities indexed by the pairs (i, j). The exponent on the n here defines our tensor to be of rank 2.

    It easily follows that a tensor dependent on \(n^1 = n\) quantities is a rank 1 tensor, i.e. a vector, and that a tensor dependent on \(n^0 = 1\) quantities is a rank 0 tensor, i.e. a scalar. But note that the converse need not be true (I think?).
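    Here is a numerical sketch of all this, assuming covectors on R^3 are represented as row vectors (my convention, not forced by the definitions): np.outer builds exactly the \(n^2\) components \(\alpha_i\beta_j\), and the defining property of \(\otimes\) can be verified directly.

    import numpy as np

    phi_u = np.array([1.0, 2.0, 3.0])   # the coefficients alpha_i
    phi_v = np.array([4.0, 5.0, 6.0])   # the coefficients beta_j

    T = np.outer(phi_u, phi_v)          # T[i, j] = alpha_i * beta_j
    print(T.shape)                      # (3, 3): n^2 = 9 components, rank 2

    # The defining property (phi_u (x) phi_v)(w, x) = phi_u(w) * phi_v(x):
    w = np.array([1.0, 0.0, 2.0])
    x = np.array([0.0, 1.0, 1.0])
    assert np.isclose(w @ T @ x, (phi_u @ w) * (phi_v @ x))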

    So there's just one more bit of wizardry to perform, clean up our notation and then have some fun!

    PS: How do I stop my LaTeX graphics from jumping up and down like that? [itex] doesn't seem to work.
     
  21. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    Well I did not mean your approach is of no consequence. And I don't know more than you do :shrug:
     
  22. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Well, fair enough, but you don't convince me about your level of knowledge.

    But look, all, I just had a thought; I said above that elements of the form \(\varphi_u \otimes \varphi_v\) are tensors, and you may be thinking "this is not how tensors normally look. Someone's off their head, and I'm sure it's not me".

    Well, if you were to look closely at my whole post, you will see that I have already indicated how the usual notation is derived. If you can wait, I will explain shortly. Don't panic, I have very nearly half an idea what I'm talking about!
     
  23. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    I think that, before proceeding, I'd better fix up my notation. So look back at the expression \(\varphi_u \otimes \varphi_v = \sum_{i,j}\alpha_{ij}\epsilon^i\otimes \epsilon^j\).
    Note that, provided we agree to work on a fixed basis, vectors and tensors are fully specified by the scalar coefficients of the basis vectors. And since, by an arbitrary convention, the indices labelling these scalars are written as subscripts, I will write them in this form on my tensor, thus \(C_{ij}\).

    This merely tells me that here I am looking at a rank 2 tensor derived from the tensor product of covectors, and which may depend on \(n^2\) quantities. This guy is called a rank 2 tensor of type (0, 2).

    We find the higher rank tensors by forming multilinear maps in the obvious way.

    Now let's see how we find the type (2, 0). We may proceed in an entirely analogous fashion, provided that we are willing to recognize the following. The dual space V* is a perfectly respectable vector space, and as such is quite entitled to its own dual space - (V*)*, or V**, the space of all linear functionals V* to R.

    Now, although V and V* are isomorphic, they are not naturally so; I am free to choose any one of an infinite number of possible bases for V, and each choice will change the nature of the isomorphism. However, it is a famous result in vector space theory that there is a natural isomorphism between V and V** provided they are both finite dimensional. Under these circumstances I can sort of think of v in V also as an element in V**, and therefore as a linear functional on V*.

    So \(v(\varphi_w) \in R\) and \(V \otimes V: V^* \times V^* \to R\) and, analogously to our earlier work, the tensor \( v \otimes w\) is a rank 2 tensor. Remembering that, for vectors, we wrote the indices on scalar coefficients as superscripts, this becomes \( C^{jk}\), a type (2, 0) tensor.

    Note also that, from the above, \( V^* \otimes V: V \times V^* \to R\), and we can define the type (1, 1) tensor \(C^k_i\).
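    To see the three types side by side, here is a closing sketch on a fixed basis, assuming components are stored as plain arrays and covectors are row vectors (the names C02, C20, C11 are mine): each tensor pairs with the right mix of vectors and covectors to yield a real number.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    C02 = rng.standard_normal((n, n))  # C_ij:  type (0, 2), eats two vectors
    C20 = rng.standard_normal((n, n))  # C^jk:  type (2, 0), eats two covectors
    C11 = rng.standard_normal((n, n))  # C^k_i: type (1, 1), one of each

    v = rng.standard_normal(n)         # a vector
    w = rng.standard_normal(n)         # another vector
    phi = rng.standard_normal(n)       # a covector, in the row-vector picture
    psi = rng.standard_normal(n)       # another covector

    print(np.einsum('ij,i,j->', C02, v, w))      # C_ij v^i w^j
    print(np.einsum('jk,j,k->', C20, phi, psi))  # C^jk phi_j psi_k
    print(np.einsum('ki,k,i->', C11, phi, v))    # C^k_i phi_k v^i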

    And we're done.

    Whew!
     
