Eigenfunctions

Discussion in 'Physics & Math' started by Samidha, Feb 12, 2017.

  1. Samidha Registered Member

    Messages:
    5
    What is meant by eigenfunctions and eigenvalues? How can one relate them to something practical to get further into quantum mechanics?
     
  3. rpenner Fully Wired Staff Member

    Messages:
    4,833
    When you have a linear operator, A, which operates on a function, |ψ>, you get a new function |Aψ>.
    It would be nice if you could manipulate |Aψ> with some sort of shorthand.
    Let us assume that we have an indexed basis set of orthogonal functions |f_i> and their corresponding duals <F_i|, so that <F_i|f_i> = 1 for every index i, and <F_i|f_j> = 0 for distinct indices i ≠ j. There are many ways one can construct such an orthogonal set.
    Then if A can be written as A = ∑ |f_i> a_i <F_i| where a_i is just a number, a_i is the ith eigenvalue of A and the set of |f_i> form the (left) eigenfunctions.

    Then A |f_j> = ∑_i |f_i> a_i <F_i|f_j>; the i = j term gives |f_j> a_j <F_j|f_j> = a_j |f_j>, and every i ≠ j term vanishes, so A |f_j> = a_j |f_j>.
    When |ψ> is an eigenfunction, the linear operation A acts just like multiplying by a number.

    In quantum mechanics, if A is the energy operator, then since energy is conserved, the energy eigenfunctions are the states which correspond to a system with known energy. One of these has the lowest energy and is called the ground state.

    This is in analogy with vectors.
    Say
    \(A = \begin{pmatrix} 192 & 120 & 0 & 240 & -120 \\ -780 & -378 & -180 & -1140 & 900 \\ -180 & -30 & -108 & -300 & 300 \\ 10 & -35 & -30 & -118 & 50 \\ -460 & -220 & -240 & -920 & 772 \end{pmatrix}\)
    Then
    \( \textrm{det} ( A - I x) = -x^5 + 360 x^4 + 38160 x^3 - 2747520 x^2 - 134369280 x + 1934917632 = -(x - 12) (x + 48) (x - 72) (x + 108) (x - 432)\) with roots corresponding to the eigenvalues.
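    The eigenvalues above can be checked numerically. Here is a short numpy sketch (numpy is my choice of tool here, not something from the post) that feeds the same matrix to `numpy.linalg.eigvals`, which finds the roots of det(A − xI) for us:

    ```python
    import numpy as np

    # The 5x5 matrix A from the post above.
    A = np.array([
        [ 192,  120,    0,   240, -120],
        [-780, -378, -180, -1140,  900],
        [-180,  -30, -108,  -300,  300],
        [  10,  -35,  -30,  -118,   50],
        [-460, -220, -240,  -920,  772],
    ], dtype=float)

    # eigvals computes the roots of the characteristic polynomial det(A - xI).
    eigenvalues = np.sort(np.linalg.eigvals(A).real)
    print(eigenvalues)  # approximately [-108, -48, 12, 72, 432]
    ```

    As a sanity check, the trace of A is 192 − 378 − 108 − 118 + 772 = 360, which matches the sum of the five eigenvalues.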

    \(A = \begin{pmatrix} f_1 & f_2 & f_3 & f_4 & f_ 5 \end{pmatrix} \begin{pmatrix} a_1 & & & & \\ & a_2 & & & \\ & & a_3 & & \\ & & & a_4 & \\ & & & & a_5 \end{pmatrix} \begin{pmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \\ F_ 5 \end{pmatrix} \\ \quad = \begin{pmatrix} -2 & 0 & 0 & 2 & 0\\ 2 & -2 & -2 & -2 & 2 \\ -2 & 1 & -2 & 0 & 1 \\ 2 & 2 & 1 & 1 & 0 \\ 1 & 2 & 0 & 2 & 2 \end{pmatrix} \begin{pmatrix} -108 & 0 & 0 & 0 & 0 \\ 0 & -48 & 0 & 0 & 0 \\ 0 & 0 & 12 & 0 & 0 \\ 0 & 0 & 0 & 72 & 0 \\ 0 & 0 & 0 & 0 & 432 \end{pmatrix} \begin{pmatrix} 1/3 & 1/3 & 0 & 2/3 & -1/3 \\ -1/3 & -5/24 & 1/4 & 1/12 & 1/12 \\ -5/6 & -7/12 & -1/2 & -7/6 & 5/6 \\ 5/6 & 1/3 & 0 & 2/3 & -1/3 \\ -2/3 & -7/24 & -1/4 & -13/12 & 11/12 \end{pmatrix} \)

    So this seems complicated:
    \(A \begin{pmatrix} -26 \\ -8 \\ -9 \\ 60 \\ 47 \end{pmatrix} = \begin{pmatrix} 2808 \\ -1176 \\ 1992 \\ -4440 \\ -3036 \end{pmatrix}\)
    But \(A ( 13 f_1 + 17 f_2 ) = -1404 f_1 - 816 f_2\) seems simpler.

    Also, any power of A can be computed from this diagonal form by raising the diagonal to the appropriate power, which is much less complicated than multiplying matrices repeatedly.
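    The power trick can be demonstrated directly with the decomposition above. This numpy sketch rebuilds A = f D F (computing F as the inverse of f rather than typing in the fractions) and compares A³ done the slow way against cubing only the diagonal:

    ```python
    import numpy as np

    # The matrix A and its eigendecomposition A = f D F from the post above.
    A = np.array([
        [ 192,  120,    0,   240, -120],
        [-780, -378, -180, -1140,  900],
        [-180,  -30, -108,  -300,  300],
        [  10,  -35,  -30,  -118,   50],
        [-460, -220, -240,  -920,  772],
    ], dtype=float)
    f = np.array([
        [-2,  0,  0,  2, 0],
        [ 2, -2, -2, -2, 2],
        [-2,  1, -2,  0, 1],
        [ 2,  2,  1,  1, 0],
        [ 1,  2,  0,  2, 2],
    ], dtype=float)
    D = np.diag([-108.0, -48.0, 12.0, 72.0, 432.0])
    F = np.linalg.inv(f)  # the dual basis, F = f^{-1}

    print(np.allclose(f @ D @ F, A))  # True: the decomposition reproduces A

    # A^3 two ways: repeated multiplication vs. cubing the diagonal.
    A_cubed_slow = A @ A @ A
    A_cubed_fast = f @ D**3 @ F  # D**3 cubes each entry; off-diagonal zeros stay zero
    print(np.allclose(A_cubed_slow, A_cubed_fast))  # True
    ```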

    While this makes matrix multiplication easier, the choice of an eigenbasis for analyzing functions makes everything much easier, as a function can be viewed as the analogue of an infinite-dimensional vector.

    In quantum mechanics, observables are represented by self-adjoint (Hermitian) operators, which can be diagonalized by a unitary change of basis; this means F_1 is the complex conjugate of the transpose of f_1.
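    To illustrate that last point, here is a small numpy sketch with a made-up 2×2 Hermitian matrix H (standing in for an observable): its eigenvalues come out real, and the dual basis F is just the conjugate transpose of the eigenvector matrix f.

    ```python
    import numpy as np

    # A small Hermitian matrix: H equals its own conjugate transpose.
    H = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, 3.0]])

    eigenvalues, f = np.linalg.eigh(H)  # eigh is numpy's routine for Hermitian matrices

    # Eigenvalues of a Hermitian matrix are real.
    print(eigenvalues.dtype)  # a real (float) dtype, no imaginary part

    # The dual basis F is simply the conjugate transpose of f ...
    F = f.conj().T
    print(np.allclose(F @ f, np.eye(2)))  # True: the eigenvectors are orthonormal

    # ... and f, the eigenvalues, and F rebuild H.
    print(np.allclose(f @ np.diag(eigenvalues) @ F, H))  # True
    ```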
     
  5. James R Just this guy, you know? Staff Member

    Messages:
    30,353
    At a slightly simpler level than what rpenner wrote above, let's just talk about matrices for a moment.

    The eigenvectors \(x\) associated with a square matrix \(A\) satisfy the equation:

    \(Ax=\lambda x\)

    where \(\lambda\) is a number called the eigenvalue. Different eigenvectors can have different corresponding eigenvalues.

    Knowing the eigenvectors and eigenvalues of a given matrix allows us to simplify various things, as explained by rpenner, above. For example, we can use them to deal with a simpler, diagonal form of the matrix, which is often useful.
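    The defining equation \(Ax = \lambda x\) is easy to check numerically. A minimal numpy sketch, with a made-up 2×2 symmetric matrix:

    ```python
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # eig returns the eigenvalues and the eigenvectors (as columns).
    eigenvalues, eigenvectors = np.linalg.eig(A)

    # Verify A x = lambda x for each eigenpair.
    for lam, x in zip(eigenvalues, eigenvectors.T):
        print(np.allclose(A @ x, lam * x))  # True for every pair
    ```

    For this particular matrix the eigenvalues are 1 and 3, with eigenvectors along (1, −1) and (1, 1).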
     
  7. Samidha Registered Member

    Messages:
    5
    Can you explain why we use eigenvectors and eigenvalues in quantum mechanics? What is their physical significance?
     
  8. Samidha Registered Member

    Messages:
    5
    What does a zero eigenvalue mean?
     
  9. James R Just this guy, you know? Staff Member

    Messages:
    30,353
    If a matrix \(A\) has a zero eigenvalue, then the determinant of the matrix is zero. This means, among other things, that the matrix has no inverse.

    To see this, note that if
    \(Ax=\lambda x\)
    then
    \((A- \lambda I)x=0\)
    Now, if the square matrix \(A-\lambda I\) has an inverse then the only vector that satisfies the equation is \(x=0\) (the zero vector). But in general we're interested in non-zero eigenvectors. Therefore we require:
    \(\text{det}(A-\lambda I) = 0\)
    And if \(\lambda=0\) we have \(\text{det}(A)=0\) and therefore \(A\) has no inverse.
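    Here is a quick numpy illustration of that argument, using a made-up 2×2 matrix whose second row is twice its first, so its rows are linearly dependent:

    ```python
    import numpy as np

    # The second row is 2x the first, so det(A) = 0 and 0 is an eigenvalue.
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])

    print(np.isclose(np.linalg.det(A), 0.0))   # True
    print(np.sort(np.linalg.eigvals(A)))       # approximately [0, 5]

    # A singular matrix has no inverse, so inversion fails.
    try:
        np.linalg.inv(A)
    except np.linalg.LinAlgError:
        print("A has no inverse")
    ```

    The non-zero eigenvector for \(\lambda = 0\) here is along (2, −1): A maps it to the zero vector, which is exactly what a zero eigenvalue means geometrically.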
     
    Last edited: Feb 13, 2017
