(SR) Manifolds

Discussion in 'Physics & Math' started by QuarkHead, Dec 5, 2007.

  1. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Not sure what you mean, temur. S is a set of points, and is already an element in T, so even if such a union made sense (on reflection, I'm sure it doesn't), then S union T = T. So what do you mean by "points and sets of points "mixed" together."? S is a set, a set, mind, of points, an element in T.
     
  3. temur man of no words Registered Senior Member

    Messages:
    1,330
    Suppose that \(S=\{x,y,z\}\) and that \(T\) is something like \(T=\{\emptyset,\{x,y,z\},\{x,y\},\ldots\}\). Now take the union of these two:

    \(S\cup T=\{x,y,z,\emptyset,\{x,y,z\},\{x,y\},\ldots\}\).

    It is not the same as

    \(\{S\}\cup T =T\), or

    \(S\cup\left(\bigcup_{A\in T} A\right)=\{x,y,z\}=S\).
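    If it helps to see this by machine, here is a quick Python sketch of the three unions, using frozensets for the sets-of-points (a finite stand-in with the ellipsis dropped; all names are mine, purely illustrative):

```python
# Finite stand-in for the example: S = {x, y, z}, T a (partial) topology on S.
# Points are strings; sets of points are frozensets so they can sit inside sets.
S = frozenset({"x", "y", "z"})
T = {frozenset(), frozenset({"x", "y", "z"}), frozenset({"x", "y"})}

# S ∪ T mixes points with sets of points:
mixed = set(S) | T                    # {"x","y","z", ∅, {x,y,z}, {x,y}}

# {S} ∪ T adds S as a single element; S is already in T, so nothing changes:
same_T = {S} | T

# S ∪ (∪_{A∈T} A) unions the *members* of T back into plain points:
flat = set(S) | set().union(*T)       # just {"x","y","z"} again
```

    The three results are pairwise different kinds of object, which is exactly the point being made above.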
     
  5. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Well, I still cannot convince myself that \(\{x,y,z, \emptyset, \{x,y,z\},\{x,y\},\ldots\}\) qualifies as a set. But whether or not it does, it's irrelevant here: a topology \(T\) on the set \(S\) is defined as a subset of the powerset of \(S\). That is to say, it is a set of sets, so the mixed union above is neither part of the topology nor of its complement.

    Anyway, I am pleased there is some discussion about this, and I shan't develop the theory of manifolds until all on board are cool with the basics.
     
  7. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    As an aside, that is not a set, since the ellipsis is undefined. The ZF axioms allow it to be a set if you are more 'precise'. Such a discussion has no place in physics, of course.
     
  8. temur man of no words Registered Senior Member

    Messages:
    1,330
    The ellipsis was a kind of meta-level symbol meaning that you can put anything you want there, before discussing whether the result is a set or not. For simplicity you can remove the ellipsis.
     
  9. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    OK, given a few moments of (relative) sobriety, let's continue.

    First recall that a manifold is a topological space \(M\) that is covered by a countable collection of neighbourhoods, each of these being an open set containing some point \(p \in M\) and homeomorphic to an open subset of \(\mathbb{R}^n\).

    Recall also that we require all functions on \(M\) to be continuous, that is of class \(C^0\).

    Let's now endow our manifold with a richer structure: I will say that, for some function \(f\) on \(M\), \(f\) is of class \(C^1\) if it is differentiable, and in general that a function is of class \(C^k\) if it is differentiable up to and including \(k\)-th order.

    We will now insist that our manifold, to be of any real interest, is of class \(C^{\infty}\), that is, continuous derivatives of all imaginable orders exist. Such a function is called a smooth function, and a manifold whose functions are all smooth is called a smooth manifold.
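    As a quick aside on the \(C^k\) hierarchy (a numerical sketch of my own, not needed for what follows): the function \(f(x) = x|x|\) is of class \(C^1\) but not \(C^2\), since \(f'(x) = 2|x|\) is continuous while \(f''\) jumps from \(-2\) to \(+2\) at the origin. Central differences in Python make the jump visible:

```python
# Numerical sketch: f(x) = x|x| is C^1 (f' = 2|x| is continuous)
# but not C^2 (f'' jumps from -2 to +2 across the origin).
def f(x):
    return x * abs(x)

def d1(x, h=1e-5):
    # central difference for the first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(x, h=1e-4):
    # central difference for the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

left, right = d2(-0.5), d2(0.5)   # second derivative on either side of 0
```

    Here `left` comes out near \(-2\) and `right` near \(+2\), so \(f''\) cannot be extended continuously through the origin.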

    So let \(M\) be a smooth manifold in this sense. I now want to define a vector space on \(M\), a particular sort of vector space called a tangent space.

    OK, so what is a tangent on a manifold? Take the 2-sphere as a 2-manifold (which it most certainly is). Provided I know what I mean by a "curve" in \(S^2\), say longitude, or latitude, I can define a tangent on this manifold at some point \(P \in M\) relative to some set of coordinates, right? But note this crucial point: these tangents "occupy" some other space, here, say, the Euclidean space \(E^3\), and the tangents will be relative to the coordinates on \(E^3\). So here, \(E^3\) will be called the embedding space of \(S^2\).
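    For the sphere this is easy to play with numerically. A minimal numpy sketch (the particular curve and names are my own choice): take a latitude curve on \(S^2 \subset E^3\), differentiate it, and check that the resulting tangent vector, which lives in the embedding space, is orthogonal to the radius:

```python
import numpy as np

# A latitude curve on the unit 2-sphere, sitting in the embedding space E^3:
#   gamma(t) = (cos t · cos θ0, sin t · cos θ0, sin θ0),  θ0 fixed.
theta0 = 0.7

def gamma(t):
    return np.array([np.cos(t) * np.cos(theta0),
                     np.sin(t) * np.cos(theta0),
                     np.sin(theta0)])

def tangent(t):
    # derivative of gamma: a vector in E^3, tangent to the sphere at gamma(t)
    return np.array([-np.sin(t) * np.cos(theta0),
                      np.cos(t) * np.cos(theta0),
                      0.0])

p = gamma(1.2)
v = tangent(1.2)
radial_dot = float(np.dot(p, v))   # ≈ 0: the tangent is orthogonal to the radius
```

    Note that both \(p\) and \(v\) are expressed in the coordinates of \(E^3\), not of the sphere itself, which is exactly the dependence on the embedding space complained about above.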

    This situation is, in general, very undesirable, and we need to find a way of doing something very similar without recourse to an embedding space.

    I'm sure you're bored of this for now, but this line of thinking leads very naturally to the Lie groups that Ben (and I) are so keen on. Later, when I sober up!
     
  10. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    So. Tangent spaces in the manifold \(M\), these being the vector space of all vectors tangent to our manifold at the points \(p \in M\).

    Well, there are several ways of doing this, and I'm going to try a hybrid approach, which may fall through the cracks, as it were, so bear with me.

    First notice that the classical notion of a vector as an abstract object with a sense of "length and direction" doesn't really apply in the theory of manifolds; ideas like length, distance, angle, speed etc. are not well-defined here.

    But note this. For some vector \(v \in E^3\) at the point \(p\in E^3\), where \(E^3\) has the usual Cartesian coordinates \(\{x,y,z\}\), I may write, for \(\alpha,\;\beta,\;\gamma\) as scalars,

    \( v = (\alpha \frac{\partial}{\partial x} + \beta \frac{\partial}{\partial y} + \gamma \frac{\partial}{\partial z})|_p\)

    where the vertical bar means "evaluated at". This is called the directional derivative on \(E^3\), and it fully defines the vector \(v\).

    This is elementary!

    By a simple generalization, we may define some vector \(v \in U \subset M\) as \(v = \sum_i \alpha^i \;\frac{\partial}{\partial x^i}|_p\), where the \(\{x^i\}\) are local coordinates on \(U\).

    And we know, from the theory of vector spaces on \(E^n\) that the sum of scalar multiples on the set of basis vectors defines a vector as:

    \(v = \sum_i \alpha^i\; e_i\), where the \(e_i\) are basis vectors

    so we may assume that the differential operators \(\frac{\partial}{\partial x^i}|_p\) are a basis for some vector space in \(U\). (Actually, this requires a proof, but it's not hard, I think)
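    If you want to see such an operator in action, here is a small sympy sketch (the particular function, coefficients and point are mine, purely illustrative): the vector \(v = 3\,\partial_x + 2\,\partial_y\) at \(p = (1,0)\), applied to a function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 * y + sp.sin(y)

# A tangent vector v = 3·∂/∂x + 2·∂/∂y at the point p = (1, 0),
# realised as an operator on functions:
def v(func):
    return (3 * sp.diff(func, x) + 2 * sp.diff(func, y)).subs({x: 1, y: 0})

# ∂f/∂x = 2xy vanishes at p, ∂f/∂y = x² + cos y equals 2 at p,
# so v(f) = 3·0 + 2·2 = 4.
vf = v(f)
```

    The operator eats a function and spits out a number, which is precisely the behaviour we will turn into a definition below.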

    Changing gear ever so slightly, I will define the space of all smooth (i.e. class \(C^{\infty}\)) real-valued functions on \(U\) as the space \(F^{\infty}\); that is, for all \(f \in F^{\infty}\), \(f: U \to \mathbb{R}\).

    Allow me also the liberty of defining, for ease of notation, \(\partial_i \equiv \frac{\partial}{\partial x^i}\), so that the usual rules for derivatives imply that, for all \(f,\;g \in F^{\infty}\) and \(\alpha,\;\beta\) as scalars, the following apply:

    \(\partial_i(\alpha f + \beta g)= \alpha \partial_i f + \beta \partial_i g \) (linearity),

    \(\partial_i(fg) = f(\partial_i g) + g(\partial_i f) \) (law of Leibniz, or of derivations)

    Now note that, when evaluated at a point, such an operator is a gadget that takes a function and returns a scalar. So I may now define the tangent space \(T_p(M)\) at the point \(p \in M\) as the set of all maps of this form, that is, for all \( X \in T_p(M), \; X: F^{\infty} \to \mathbb{R} \) that satisfy the above conditions; that is

    \(X(\alpha f +\beta g) = \alpha(Xf) + \beta(X g)\) and

    \(X(fg) = (Xf)g + f(Xg)\) when evaluated at the point \(p\).

    Hmm. So maybe you're wondering......yes, but this space is tangent to what exactly? Good point.
     
  11. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Well, to answer my last question, each vector \(X \in T_p(M)\) is tangent, at \(p\), to some curve \(\gamma\) that "passes through" \(p\). I had been going to cover this in some detail, but decided against. Suffice to remark, that such curves are found as maps from the real line \(R^1\) to \(U \subset M\).
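    The connection between curves and tangent vectors is easy to verify symbolically. In this sympy sketch (the curve and function are arbitrary choices of mine), the derivative of \(f\) along a curve \(\gamma\) at \(t=0\) agrees with \(\sum_i \dot\gamma^i(0)\,\partial_i f|_p\), as the chain rule demands:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)

# A curve through p = gamma(0) = (1, 0):
gx, gy = sp.cos(t), sp.sin(t)
f = x**2 + x * y

# Tangent vector along the curve: X f = d/dt f(gamma(t)) at t = 0 ...
along_curve = sp.diff(f.subs({x: gx, y: gy}), t).subs(t, 0)

# ... which the chain rule unpacks as  sum_i gamma'^i(0) · ∂_i f |_p :
chain_rule = (sp.diff(gx, t) * sp.diff(f, x)
              + sp.diff(gy, t) * sp.diff(f, y)).subs({t: 0, x: 1, y: 0})
```

    Both routes give the same number, which is why "derivative along a curve through \(p\)" and "differential operator at \(p\)" are two descriptions of the same tangent vector.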

    Anyway, lest it may appear that my definition of the tangent space was somewhat arbitrary, let me show that any abstract object \(X\) satisfying

    a) \( X(\alpha f + \beta g) =\alpha(Xf) + \beta (Xg)\) and
    b) \(X(fg)= (Xf)g + f(Xg)\)

    is indeed a tangent vector.

    Notice first that condition (a) says that \(X\) is linear, and so a vector of some sort. To show that \(X\) is a tangent, it will suffice to show that its "slope" on any constant function is zero.

    By letting \( f=g=1\) in (b), we find that \(X(1) = X(1 \cdot 1) = X(1) + X(1)\), which can only hold when \(X(1)=0\).

    Then, for any constant function \(c\), linearity (a) yields \(X(c) = cX(1) = 0\), so that \(X(c)\) is identically zero for all constant functions \(c\).

    This establishes the result.
     
  12. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    So, I reckon this is as far as I had originally planned to go with this thread. We could quite naturally (I think) go from here to the Lie groups and their algebras, but, as I seem to have lost my audience, maybe a new thread would be in order? Anybody any thoughts? Anyone reading this? Hello......?
     
  13. temur man of no words Registered Senior Member

    Messages:
    1,330
    Another way would be to go to the Riemannian manifolds, curvature, connection etc.
     
  14. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Sure, but I don't really understand the mathematics there. If you can help us out, that would be great!
     
  15. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Now lookee here. QuarkHead gets rather bored with talking to himself. He knows that he is, because there are a couple of serious errors in earlier presentations that were not detected by other members. Shame on them!

    This is also a shame, as, walking the dogs today, I came up with a neat way of connecting all I said so far with the real Lie algebras.

    Ah well, if nobody is interested in this stuff, which I do find a bit surprising, I shall leave it for now.
     
  16. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    Quarkhead---

    Apologies. I have been quite busy during the ``holidays'' so I have not been able to moderate this discussion closely (cf zephir's trolling).

    Either way, I hope to participate more fully in this discussion once things get settled down.
     
  17. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    So, I'm going to break my promise and finish this thread - partly because I just hate unfinished projects, but mainly because what remains is such fun, I just have to share it! However, I warn you that the ascent from now on is pretty steep, and will certainly require ropes,
    ice-picks and possibly even oxygen. But the view from the top is worth it, I promise you.

    Recall first we defined the space \(F^{\infty}\) of all smooth maps on our manifold \(M\). Obviously, a function that is smooth at \(p \in M\) need not be smooth at \(q \in M\), so let's write \(F_p^{\infty}\) to denote the space of all continuous, infinitely differentiable (i.e. smooth) functions at the generic point \(p \in M\). This space is a vector space.

    Recall next we had a vector space \(T_p(M)\) of operators, which we called tangent vectors, \(t: F_p^{\infty} \to \mathbb{R}\), where the operator \(t\) is a differential operator of the form \(\sum_i\; \alpha^i\partial_i\), and where, for the coordinate set \(\{x^i\}\) on \(U\), \( \partial_i \equiv \frac{\partial}{\partial x^i}\), and also where \(U\) is a coordinate neighbourhood of the point \(p\).

    Yes, it's dense enough, I guess, but we've done all this, just look back.

    So.

    A tangent vector field is defined to be a rule \(X\) that assigns to each point \(p \in M\) a single element of its associated tangent space \(T_p(M)\) (Yes, I'm being naughty with notation here - earlier I had \(X\) as a tangent vector, whereas here it is a vector field. Sorry about that). Evidently, by the foregoing, the field \(X\) is also a differential operator.

    Obviously, because of the way we constructed the tangent space \( T_p(M)\), there will be an infinity of such tangent fields. Let's call the set of all such fields on \(M\) \(\mathfrak{X}(M)\).

    It is relatively easy to see that \(\mathfrak{X}(M)\) is a vector space, and also easy to see that for each field \(X \in \mathfrak{X}(M)\), there is a derivation given by

    \(X(fg)=(Xf)g+f(Xg) \; \forall f,g \in F_p^{\infty}\).

    However, look at the composite operator \(XY\), where \(X \in \mathfrak{X}(M)\) and \( Y \in \mathfrak{X}(M) \); here I have that

    \(XY(fg) = X((Yf)g + f(Yg)) = (XYf)g+(Yf)(Xg)+(Xf)(Yg) +f(XYg)\).

    Therefore, because of the second and third terms above, \(XY\) is not a derivation, and thus cannot be an element of \(\mathfrak{X}(M)\).

    But note this.

    An operator (linear transformation) on a finite-dimensional vector space can, of course, be represented by a matrix, and we know from our school-level matrix/operator algebra that, for operators/matrices \(X,\;Y\), in general \(XY \ne YX\); rather we write \(XY-YX=[X,Y]\), where \([X,Y]\) is referred to as the commutator of \(X\) and \(Y\). In the present case, we shall refer to it as the Lie bracket.

    So doing the same in the above, I set \((XY-YX)(fg)=[X,Y](fg)\), and with a bit of simple arithmetic, find that

    \([X,Y](fg) = ([X,Y]f)g +f([X,Y]g)\) which is a derivation. I conclude then that the Lie bracket of a pair of vector fields is itself a vector field.
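    Here is a sympy sanity check of both claims (the two vector fields are arbitrary choices of mine): the bracket of \(X = y\,\partial_x\) and \(Y = x^2\,\partial_y\) obeys the Leibniz rule, while the plain composition \(XY\) does not:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Two vector fields on R^2, written as first-order differential operators:
#   X = y·∂x,   Y = x²·∂y
def X(h): return y * sp.diff(h, x)
def Y(h): return x**2 * sp.diff(h, y)

def bracket(h):
    # [X, Y]h = X(Yh) - Y(Xh)
    return sp.expand(X(Y(h)) - Y(X(h)))

f = x * y
g = sp.sin(x)

# Leibniz check: [X,Y](fg) = ([X,Y]f)·g + f·([X,Y]g)
lhs = bracket(f * g)
rhs = sp.expand(bracket(f) * g + f * bracket(g))
leibniz_holds = sp.simplify(lhs - rhs) == 0

# ... whereas the plain composition XY fails the same test,
# precisely by the cross terms (Yf)(Xg) + (Xf)(Yg):
lhs2 = X(Y(f * g))
rhs2 = sp.expand(X(Y(f)) * g + f * X(Y(g)))
composition_fails = sp.simplify(lhs2 - rhs2) != 0
```

    The offending cross terms cancel in the antisymmetrised combination \(XY - YX\), which is the whole reason the bracket lands back inside \(\mathfrak{X}(M)\).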

    Finally note that, from the above, and by simple arithmetic, we will have that

    \([X,[Y,Z]]f = X([Y,Z]f)-[Y,Z](Xf) = X(Y(Zf))-X(Z(Yf))-Y(Z(Xf))+Z(Y(Xf))\)

    and likewise for \([Y,[Z,X]]f\) and \([Z,[X,Y]]f\), where by simply adding these three equalities, we will find that all terms cancel identically in pairs, hence the Jacobi identity

    \([X,[Y,Z]] + [Y,[Z,X]]+ [Z,[X,Y]] = 0\)
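    The Jacobi identity can likewise be machine-checked on concrete fields (my own arbitrary choices below):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Three concrete vector fields on R^2, as operators on functions:
def X(h): return y * sp.diff(h, x)
def Y(h): return x * sp.diff(h, y)
def Z(h): return (x + y) * sp.diff(h, x)

def bracket(A, B):
    # the Lie bracket [A, B] as a new operator
    return lambda h: sp.expand(A(B(h)) - B(A(h)))

f = x**2 * y + sp.exp(x)

# [X,[Y,Z]]f + [Y,[Z,X]]f + [Z,[X,Y]]f should vanish identically:
jacobi = sp.simplify(bracket(X, bracket(Y, Z))(f)
                     + bracket(Y, bracket(Z, X))(f)
                     + bracket(Z, bracket(X, Y))(f))
```

    The sum collapses to zero for any test function, just as the pairwise cancellation argument above predicts.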

    We also easily find that \( [X,Y] = -[Y,X]\) (skew symmetry) and bilinearity, which latter I hope I don't need to remind you of. These three conditions are sufficient to define a Lie algebra; that is, whenever there is a vector space \(V\) over \(\mathbb{R}\) and a map \(V \times V \to V\) which assigns to each pair \(X,\;Y \in V\) a unique element \([X,Y] \in V\) satisfying skew symmetry, bilinearity and the Jacobi identity, it will be called a real Lie algebra.

    I therefore leave you with the really sweet result:

    the set \(\mathfrak{X}(M)\) of vector fields on \(M\) is a Lie algebra over \(\mathbb{R}\)
     
    Last edited: Jan 4, 2008
  18. temur man of no words Registered Senior Member

    Messages:
    1,330
    I can write the basics, shall I open a new thread or continue here?
     
  19. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Well, since I introduced the Lie algebra, I thought it might be kinda fun to go on and talk about the Lie groups, following that thought-train, as it were. Dunno, maybe it's not relevant in this thread.

    So yeah, maybe it's I who should start a new thread on that.

    Don't get me wrong - I can write down the affine connection in terms of the generalized del operator, I can explain parallel transport, the Riemann metric and all that, and I have texts that I can copy out of. But if I don't feel completely at home with the math, which I don't, then I am just being a fraud.

    So yes, do continue with Riemannian geometry in this thread, I shall look forward to that, and maybe try to think of a user-friendly way of starting up a new thread on Lie groups (if anyone wants, that is..........?)
     
  20. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    OK. I showed, perhaps not very convincingly, that the space \(\mathfrak{X}(M)\) of all vector fields on \(M\) is a real Lie algebra. A natural question is this:

    Is there a connection between a Lie algebra and a Lie group? The answer, in one direction is easy: to each Lie group \(G\), I can associate a Lie algebra \(\mathfrak{G}\). Going in the other direction, that is, to each Lie algebra, can I associate a Lie group? The answer is, yes, sort of, but requires a lot more work. Hmm, the algebra should be lower case fraktur "g", but this doesn't seem to work here.
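    The easy direction can at least be illustrated in the simplest case: the matrix exponential carries the skew matrix \(\theta J \in \mathfrak{so}(2)\) to a rotation in the Lie group \(SO(2)\). A numpy sketch of my own, with the exponential done as a truncated power series:

```python
import numpy as np

# One direction of the algebra -> group passage, in the simplest case:
# exp sends θ·J in so(2) (skew-symmetric matrices) to a rotation in SO(2).
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
theta = 0.8

def expm(A, terms=30):
    # truncated power series for the matrix exponential: sum A^k / k!
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

R = expm(theta * J)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
```

    The series reproduces the rotation matrix, and \(R\) is orthogonal with determinant 1, i.e. a genuine element of \(SO(2)\).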

    So now I am in a dilemma. Is this best done here, or in a new thread? Or not at all? What say you all?

    I hesitate to start a new "tutorial", as I feel I run the risk of seeming arrogant, self-serving, showing off, or any other unflattering epithet you care to come up with. Please guide me, someone!
     
  21. temur man of no words Registered Senior Member

    Messages:
    1,330
    You can continue here. If you leave this thread to me alone it would be too quiet.
     
  22. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    OK, boss. Umm, let's see. Let me start my justification for including the Lie groups here with a

    Definition: A Lie group is a differentiable manifold with the algebraic structure of a group, whose group operations are smooth.

    Yes, quite, not very illuminating, is it? So what follows, initially, is to guide our intuition only, and should not be taken too seriously (we'll get pretty abstract soon enough!).

    So, Lie groups describe natural symmetries. Suppose, for example, I hand you an object, and you want to discover if it is symmetrical. You have two options, right? You can rotate it around in all available dimensions, making sure you don't blink (smooth operation), or you can walk around it in all available directions, again completely smoothly. And both these operations must be reversible (invertible) right?

    So the act of rotating the object, or, equivalently, walking around it, are referred to as linear transformations, for which I shall give a definition shortly. But first note that "shape symmetry" may not be all that I am interested in, so there is a class of such transformations, each of which gives you information about a different sort of continuous symmetry.

    I guess I can turn this around, sort of; there is a class of smooth linear transformations, each of which preserves some symmetry properties of some abstract object. Each set in this class has the algebraic structure of a group, which in due course, I'll explain.

    So, enough intuition, let's get dirty.

    Let \( V\) be a vector space over the field \(\mathbb{F}\). I define the gadget \(A:V \to V\) to be a linear transformation, aka linear operator, on \(V\) iff, for all generic \(v,\;w \in V\) and any \(\alpha,\;\beta \in \mathbb{F}\),

    \(A(\alpha v + \beta w) = \alpha Av + \beta Aw\).
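    This defining identity is trivial to spot-check numerically (a numpy sketch with random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear transformation A : R^3 -> R^3, acting by matrix multiplication:
A = rng.standard_normal((3, 3))
v, w = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 2.0, -0.5

# The two sides of A(αv + βw) = α·Av + β·Aw:
lhs = A @ (alpha * v + beta * w)
rhs = alpha * (A @ v) + beta * (A @ w)
```

    Matrix multiplication satisfies the identity automatically, which foreshadows the fact that every such transformation has a matrix representation.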

    (Note that nothing in this definition forces \(A\) to be smooth, that is, of class \(C^{\infty}\); shortly I shall insist on this).

    Hmm, enough for now; what's next? Oh yes, groups I think
     
  23. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Hmm, I seem to have forced myself into a situation where I must come clean, and confess that I am talking exclusively about the matrix Lie groups (I rather suspect, though, that any Lie group will turn out to be isomorphic to some matrix group, but I haven't checked).

    First off, let's see how some arbitrary linear transformation \(A:V\to V\) becomes a matrix. Suppose that, for scalars \(\alpha^1, \; \alpha^2\), the vector \(v = \alpha^1 e_1 + \alpha^2 e_2\), where \(e_1,\;e_2\) are basis vectors and the superscripts are just labels, not powers.

    Then I write the transformed vector \(Av = v' = \alpha'^1 e_1 + \alpha'^2 e_2\), where, by linearity, the new components are

    \(\alpha'^1 = A^1_1 \alpha^1 + A^1_2 \alpha^2\)
    \(\alpha'^2 = A^2_1 \alpha^1 + A^2_2 \alpha^2\).

    Then, provided I stay on the same basis, I can write \(A\) in its general form as the matrix

    \(\begin{pmatrix}A^1_1 & A^1_2\\ \\ A^2_1 & A^2_2 \end{pmatrix}\).
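    A useful way to remember this: column \(j\) of the matrix is just the image \(Ae_j\) expressed on the same basis. A quick numpy check (the particular entries are mine, arbitrary):

```python
import numpy as np

# The entries A^i_j are read off from the images of the basis vectors:
# column j of the matrix is A(e_j), expressed on the same basis.
A = np.array([[2.0, 1.0],
              [3.0, 5.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

col1, col2 = A @ e1, A @ e2   # the two columns of A
```

    So a linear transformation is pinned down completely by what it does to a basis, which is why the matrix picture loses nothing.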

    Note that the entries in this matrix are just numbers, nothing more or less. OK, so from now on I shall use the terms matrix and transformation interchangeably, if that's OK with you.

    Let's denote the set of all invertible linear transformations \(V \to V\), for some \( V\) over the field \(\mathbb{F}\), by \( GL(V,\mathbb{F})\), where GL means general linear. But we know, by a basic theorem of vector space theory, that for any vector space \(V\) over \(\mathbb{F}\) of dimension \(n\), there is a natural isomorphism \(V \cong \mathbb{F}^n\), where \(\mathbb{F}^n\) is the \(n\)-fold Cartesian product of the field \( \mathbb{F}\). Accordingly I can write the set of all such invertible transformations as \(GL(n,\mathbb{F})\).

    Remembering that we require our transformations to be invertible, which implies that their matrix representations are also invertible, which in turn requires that these matrices have non-vanishing determinant, I will say that \(GL(n,\mathbb{F})\) is the set of all \( n \times n\) matrices with entries from the field and \(\det \ne 0\).

    So, invertibility means, of course, that for all \(M \in GL(n,\mathbb{F})\) I will have \(MM^{-1} = M^{-1}M = I\) where \(I\) is the identity matrix. So long as I further insist that, for all \(M,\;N \in GL(n,\mathbb{F})\) that \(MN \in GL(n,\mathbb{F})\), I will say that \(GL(n,\mathbb{F})\) has the algebraic structure of a group - the group of all linear transformations, aka, the group of all \(n \times n\) matrices with non-vanishing determinant.
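    These group properties are easy to check numerically for random elements of \(GL(3,\mathbb{R})\) (a numpy sketch of my own):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_gl(n):
    # a random n×n Gaussian matrix is invertible with probability 1;
    # retry in the (measure-zero) degenerate case
    while True:
        M = rng.standard_normal((n, n))
        if abs(np.linalg.det(M)) > 1e-10:
            return M

M, N = random_gl(3), random_gl(3)
I = np.eye(3)

# closure: det(MN) = det M · det N ≠ 0, so MN is again in GL(3, R)
closure = abs(np.linalg.det(M @ N)) > 1e-12

# inverses: M·M⁻¹ = I
inverses = np.allclose(M @ np.linalg.inv(M), I)
```

    Together with associativity of matrix multiplication and the identity matrix \(I\), this is exactly the group structure claimed above.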

    OK, now I need to convince you this is also a manifold, right?
     
    Last edited: Jan 10, 2008
