Yang–Mills and Mass Gap

Discussion in 'Physics & Math' started by Thales, Nov 29, 2017.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Back to determining the Lie algebra of the group U(1), via its isomorphism with SO(2) as a set of 2 x 2 matrices.

    So I can write: \( M(\theta) = I\cos\theta + iX\sin\theta \), where the matrices I and X are as set out in post 119.

    Now I expand \( \cos\theta,\; \sin\theta \), and I have:

    \( M(\theta) = I(1 - \frac {\theta^2} {2!} + \frac {\theta^4} {4!} - \; ... ) + iX(\theta - \frac {\theta^3} {3!} + \frac {\theta^5} {5!} - \; ... ) \)

    \( = I + i \theta X + \frac {(i\theta X)^2} {2!} + \frac {(i\theta X)^3} {3!} + ... \),

    since \( X^{2n} = I \), and \( X^{2n + 1} = X \), for n = 0, 1, 2, ...; and of course powers of i alternate: even powers are ±1, odd powers are ±i.

    But this is the Taylor expansion of \( e^{i\theta X} \)! Hence, X is called the generator of the Lie group.
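
    Here's a quick numerical check of this (my own sketch, not from the thread, assuming Python with numpy and scipy available):

    import numpy as np
    from scipy.linalg import expm

    # M'(0) for the rotation matrix (derived in the next post); X = -i M'(0),
    # so i*theta*X = theta*M'(0).
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    X = -1j * J

    theta = 0.7
    M = expm(1j * theta * X)   # e^{i theta X}
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(M, R))   # True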

    Not quite there yet . . .
     
    Last edited: Feb 23, 2018
  3. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Now I want to show that the matrix \( \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \) is the first derivative of \( M(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \), when evaluated at \( \theta = 0 \).

    So, \( \frac {d} {d\theta} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{pmatrix} \).

    And \( \begin{pmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{pmatrix}_{\theta = 0} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = M'(0) \).

    Hence, \( X = -i M'(0) \).

    The other step involves a Taylor expansion of \( M(\theta) \) around \( \theta = 0 \), or a Maclaurin series.

    Which is: \( M(\theta) = I + \theta M'(0) + \frac {\theta^2 M''(0)} {2!} + \frac {\theta^3 M'''(0)} {3!} +\; ... \)

    Then we can see that, if \( \theta \) is really small, the series is approximately: \( I + \theta M'(0) = I + i\theta X \) (higher order terms are negligibly small).

    . . . and that's more or less it. As \( \theta \) gets smaller, approaching an infinitesimal value, the approximation gets closer and closer.
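
    A quick numerical check of both steps (my own sketch, assuming numpy):

    import numpy as np

    def M(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    # Central-difference estimate of M'(0): approximately [[0, -1], [1, 0]]
    h = 1e-6
    print(np.round((M(h) - M(-h)) / (2 * h), 6))

    # The first-order approximation I + theta*M'(0) improves as theta shrinks
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    for theta in (0.5, 0.05, 0.005):
        err = np.abs(M(theta) - (np.eye(2) + theta * J)).max()
        print(theta, err)   # error goes like theta**2 / 2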
    As to an adjoint representation, I think that's only relevant when there is more than one parameter for the Lie group (or alternatively, the representation is its own adjoint?).
     
    Last edited: Feb 24, 2018
  5. NotEinstein Valued Senior Member

    Messages:
    1,986
    Hansda, can I interpret your silence on this matter as an admission of your error?
     
  7. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I think hansda is somewhat confused about how frequency and the period are related in oscillatory motion.

    In particular, what difference it makes to the relation when you choose some scale, such as minutes or seconds (the answer is, no difference at all).
    A frequency (or period) of "one" is physically meaningless.
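
    A sketch of what I mean (my own example, plain Python):

    # The relation f*T = 1 holds whatever time unit is chosen:
    # rescaling the unit changes f and T reciprocally.
    T_seconds = 0.5                  # period in seconds
    f_per_second = 1 / T_seconds     # frequency: 2.0 per second

    T_minutes = T_seconds / 60       # the same period, in minutes
    f_per_minute = 1 / T_minutes     # frequency: 120 per minute

    print(f_per_second * T_seconds)  # 1.0
    print(f_per_minute * T_minutes)  # 1.0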
     
  8. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    More Wiki stuff:
    --https://en.wikipedia.org/wiki/Exponential_function

    --https://en.wikipedia.org/wiki/Matrix_exponential

    It's interesting that also \( e^{0_n} = I_n \), where \( 0_n \) is the n × n 0-matrix. This implies that \( X^{0} = e^{0_n} \) for any n × n matrix X, since a matrix raised to the power zero is the identity.
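
    A quick check (my own sketch, assuming numpy and scipy):

    import numpy as np
    from scipy.linalg import expm

    n = 3
    Z = np.zeros((n, n))                    # the n x n 0-matrix, 0_n
    print(np.allclose(expm(Z), np.eye(n)))  # True: e^{0_n} = I_n

    # And X^0 = I_n for any n x n matrix X
    X = np.arange(float(n * n)).reshape(n, n)
    print(np.allclose(np.linalg.matrix_power(X, 0), np.eye(n)))  # True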

    Perhaps rpenner's posts about matrix exponentials shed some light on this?
     
    Last edited: Feb 26, 2018
  9. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    I really don't understand why you are here, arfa. In post #116 I told you that the exponential map recovers a Lie group from its algebra. Now you are repeating this fact by quoting the Wiki! Why ignore the help being offered here?

    Furthermore, it is quite untrue that \( e^0 \) implies a \( 0 \times 0 \) matrix. Rather, it implies a matrix with zero trace, which implies it is an element of the algebra associated to a group with unit determinant, via \( e^0 = 1 \), where \( \det A = e^{\operatorname{tr} X} \) for any \( A \in G \) and some \( X \in \mathfrak{g} \).
     
  10. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    What's a 0 x 0 matrix?
    Any number raised to an exponent of 0 is equal to 1, and e is a number, as far as I understand it. But if a number is defined as a 1 x 1 matrix, then an n x n matrix with an exponent of 0 is the n x n identity matrix. That all seems quite logical.

    What changes when the 1 x 1 matrix is e, and the exponent is an n x n 0-matrix?
     
  12. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    There is no such thing as an n x n 0-matrix.
     
    Last edited: Feb 26, 2018
  13. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    --http://mathworld.wolfram.com/ZeroMatrix.html
    ??

    I assume it's OK that m = n in m x n, so that the zero matrix happens to be a square matrix.

    That's also what McKenzie has to say about it. And I have seen all this before, back in 2003 or so, except the matrix exponential wasn't explained in terms of a generator (or an element of a Lie algebra). Back then it was about a way to generate rotations of fermion spin, and the algebraic element was a Pauli matrix. It seemed then to be about generating an approximation, by restricting the angle to be small enough.
    I'm just posting, let's call it, "supporting evidence", in the interests of clarity and exposition . . . (given that your or my expositions aren't necessarily all that enlightening to other people).
     
    Last edited: Feb 26, 2018
  14. hansda Valued Senior Member

    Messages:
    2,424
    The relation is simple. It is \( fT=1\) or \(f=\frac{1}{T} \) .

    A frequency of "one" relates to maximum time period, for any given interval of time in seconds or minutes or whatever units of time are chosen; over which the frequency is to be measured.
     
  15. origin Heading towards oblivion Valued Senior Member

    Messages:
    11,888
    Hansda apparently wants to leave no doubt that he is confused, so he wrote:
    Frequency is the reciprocal of the period. That is all there is to it.
     
  16. NotEinstein Valued Senior Member

    Messages:
    1,986
    So you stick to your claim (without providing any proof for it, even after having repeatedly been asked for it) that something that is rotating around an axis must do so with at least one full rotation per picosecond. That means that either the Earth isn't rotating, or a full sunrise-sundown-sunrise day lasts at most a picosecond... As that seems quite fringy to me, I guess it's time this entire discussion is split off of this thread and moved to pseudo-science, where it clearly belongs?
     
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Ok, this thread isn't going anywhere fast (or it seems, at any speed less than that).

    I'll hand over to yet another link: a book written by none other than Howard Georgi, he of the Georgi–Glashow SU(5) "Grand Unified Theory".

    Reading through just the first chapter, though, the book seems to go pretty fast. Irreducible representations appear on page 5, for instance. I did a one-semester course on abstract algebra (what they let you study after two years of linear algebra), and we didn't really get into continuous groups much, or representation theory either.

    So, just a heads up: this online book is not for beginners. But if you want some insights into Yang–Mills and so on, then this book is the territory.
     
  18. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    And you still don't understand why, do you?

    Let me explain again. This is a discussion forum. That means that people come here to discuss ideas, mostly in science, and try to contribute their own knowledge. It is not an internet link exchange, still less is it a place to paraphrase poorly understood random articles from the 'net.

    Yet more proof that you just don't get it.

    You effectively killed this thread for this reason.
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I think not only is this thread dead, the entire forum pretty much is. There isn't much science being discussed, and we have people being condescending at best.

    For example:
    Say what? I'm supposed to believe sciforums has a poster who's the ultimate arbiter, so I don't have to look anywhere else? What exactly gives this poster the idea I'm ignoring them, anyway? I really don't get it.

    "Poorly understood random articles", my ass.
     
  20. arfa brane call me arf Valued Senior Member

    Messages:
    7,832

    The set of n x n matrices is a subset of the set of all m x n matrices. I can easily define an additive group of n x n matrices over the integers, say: \( (\{M_{ij}| \delta_{ij} = 1, M_{ab} \in \mathbb {Z}\},\; +) \). The identity of this group is the n x n zero matrix (0-matrix). There is such a thing.

    And \( \det M = e^{\operatorname{tr} X} \) when X is the additive identity, i.e. \( e^X = I \); this is trivially true because all the terms in the Taylor expansion except the first are the zero matrix.
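
    A numerical sanity check (my own sketch, assuming numpy and scipy):

    import numpy as np
    from scipy.linalg import expm

    Z = np.zeros((3, 3))                    # the additive identity
    print(np.allclose(expm(Z), np.eye(3)))  # True: e^0 = I, so det = e^0 = 1

    # The identity det(e^X) = e^{tr X} holds for any X, not just X = 0
    X = np.array([[0.2, -1.0], [1.0, 0.3]])
    print(np.isclose(np.linalg.det(expm(X)), np.exp(np.trace(X))))  # True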
     
    Last edited: Mar 3, 2018
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Oops, that should be the set: \(\{M_{ij}| \delta_{ii} = n, M_{ab} \in \mathbb {Z}\}\), where n is the dimension of any M. That means I interpret \( \delta_{ii} \) as a sum over equal indices.

    Another way I could do it is to restrict i, j, and a, b to belong to the same index set.
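
    In that convention \( \delta_{ii} \) is just the trace of the identity (a quick check, my own sketch with numpy):

    import numpy as np

    n = 4
    # Einstein-summed Kronecker delta: delta_ii = trace(I_n) = n
    print(np.einsum('ii->', np.eye(n)))   # 4.0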
     
  22. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So, any matrix can be decomposed into a weighted sum of basis matrices, and we can easily see that, for a 2 x 2 matrix,

    \( \begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \)

    We can also then write: \( \begin{pmatrix} a & b \\ c & d \end{pmatrix} = a|1\rangle + b|2\rangle + c|3\rangle + d|4\rangle \), where \( |k\rangle \) is a basis vector. The coefficients might be integers, real or complex numbers, just about anything.

    We could have that a 2 x 2 matrix is a rotation matrix,

    \( R = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix} = \cos\phi (|1\rangle + |4\rangle) + \sin\phi(|3\rangle - |2\rangle) \).

    Now I can, with a change of notation, have a slightly different looking map (another homomorphism from the "vector" space to itself), viz:

    \( R = \cos\phi (|1\rangle + |2\rangle) + \sin\phi(|\bar{1}\rangle - |\bar{2}\rangle) \).

    Looks more, erm, symmetric but it's the same representation.

    Let's take an even simpler subset of this space, by restricting \( \phi \) to be an element of \( \{\pi/2,\; \pi,\; 3\pi/2\,(=-\pi/2),\; 2\pi\} \): the rotational symmetries of a square.
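
    A quick check of the decomposition (my own sketch, assuming numpy):

    import numpy as np

    # The basis matrices |1>, |2>, |3>, |4> from above
    E1 = np.array([[1., 0.], [0., 0.]])
    E2 = np.array([[0., 1.], [0., 0.]])
    E3 = np.array([[0., 0.], [1., 0.]])
    E4 = np.array([[0., 0.], [0., 1.]])

    # The four rotational symmetries of the square
    for phi in (np.pi/2, np.pi, 3*np.pi/2, 2*np.pi):
        R = np.cos(phi) * (E1 + E4) + np.sin(phi) * (E3 - E2)
        print(np.round(R, 12))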
     
    Last edited: Mar 4, 2018
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Now, just for the hell of it, show that you can find a map from a space of unit vectors to a second space with the same 'structure', in the sense that in each of the spaces there is a way, up to a scaling factor, to join together n points with n unit vectors. Let's make the first space \( \mathbb Z^2 \) embedded in \( \mathbb E^2 \). This means there is a way to find a pair of unit vectors which are orthogonal, which means their inner product is 0 (the 1 x 1 zero matrix).

    In the second space, the angle between orthogonal directions is halved, i.e. there is a map from \( \phi \) to \( \phi/2 \). A continuous map.

    It's a matter of notation as to how this map can be represented, but we know what it looks like: it's a rotation matrix. The second \( \mathbb Z^2 \) maps the circle to half of itself, so there's enough "room" for two copies in a full circle; in other words, a function that wraps the circle twice (if a single wrapping is defined as "once around", i.e. all the points within the span of a rotating unit vector). In \( \mathbb Z^2 \), a line rotated away from, say, (x, y) = (1, 0) in a positive sense will intersect another point in the square lattice, one which is "as far away from the identity" as you want it to be; well before the angle of rotation reaches a few degrees, there will be some point in the row of points corresponding to y = 1 that the projected line intersects.
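
    A small sketch of the double wrapping (my own Python example, assuming numpy):

    import numpy as np

    def R(a):   # rotation by angle a
        return np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a),  np.cos(a)]])

    # Under the map phi -> phi/2, phi must run through two full turns
    # (0 to 4*pi) before the rotated unit vector returns to its start.
    v = np.array([1.0, 0.0])
    for phi in (0.0, 2*np.pi, 4*np.pi):
        print(phi/np.pi, np.round(R(phi/2) @ v, 12))
    # phi = 2*pi sends v to -v; only phi = 4*pi brings it back to v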
     
