A little gem of linear algebra

Discussion in 'Physics & Math' started by rpenner, Aug 25, 2015.

  1. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    Let \(M(m)\) be an \(m\times m\) lower triangular matrix where \(\left[ M(m) \right]_{i,j} = \left[ i \geq j \right] { i \choose {j -1} } \). Let \(V(m)\) be an \(m \times 1\) column vector of powers of \(k\), from \(k^0\) to \(k^{m-1}\).

    Code:
    V[m_] := Module[ {i,j}, Table[k^(i-1), {i, 1, m}, {j, 1, 1}]]
    M[m_] := Module[ {i,j}, Table[ Boole[ i >= j ] * Binomial[i, j - 1] , {i, 1, m}, {j, 1, m}]] 
    Then we have part of Pascal's triangle in this matrix. We can consider \(M(m)\) as the upper-left \(m\times m\) corner of an extended matrix, \(M\).

    \( M = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & \dots \\ 1 & 2 & 0 & 0 & 0 & 0 & \dots \\ 1 & 3 & 3 & 0 & 0 & 0 & \dots \\ 1 & 4 & 6 & 4 & 0 & 0 & \dots \\ 1 & 5 & 10 & 10 & 5 & 0 & \dots \\ 1 & 6 & 15 & 20 & 15 & 6 & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}\)
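
    As a quick check (a minimal sketch, assuming the definitions of M and V above), the code reproduces this corner:

    Code:
    M[6] // MatrixForm   (* displays the 6x6 upper-left corner of M shown above *)
    V[3]                 (* {{1}, {k}, {k^2}} *)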

    Because this is a triangular matrix, the determinant is simply the product of the diagonal entries; the trace, as for any matrix, is their sum.
    \(\textrm{det} \, M(m) = \prod_{i=1}^{m} M_{i,i} = \prod_{i=1}^{m} { i \choose {i -1} } = m!, \\ \textrm{tr} \, M(m) = \sum_{i=1}^{m} M_{i,i} = \sum_{i=1}^{m} { i \choose {i -1} } = \frac{m^2 + m}{2}\).

    In fact, because it is a triangular matrix, its eigenvalues are just its diagonal entries, which here are exactly the integers 1 through m.
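
    All three facts are easy to confirm for small m (a quick sketch, assuming the definitions above):

    Code:
    Det[M[6]] == 6!                      (* True: det M(m) = m! *)
    Tr[M[6]] == (6^2 + 6)/2              (* True: tr M(m) = (m^2 + m)/2 *)
    Sort[Eigenvalues[M[6]]] == Range[6]  (* True: eigenvalues are 1..m *)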

    And \(M(m) V(m) \) is a column vector of polynomials in k. From the binomial theorem, it is easy to see that \(\left[ M(m) V(m) \right]_i = \sum_{j=1}^{i} { i \choose {j-1} } k^{j-1} = (k + 1)^i - k^i\).
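
    This identity can be confirmed symbolically (a sketch, assuming the definitions above):

    Code:
    Simplify[M[6].V[6] - Table[{(k + 1)^i - k^i}, {i, 1, 6}]]  (* a column of zeros *)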

    And because matrix multiplication is linear, it follows that
    \( n^i = \sum_{k=0}^{n-1} \left( (k + 1)^i - k^i \right) = \sum_{k=0}^{n-1} \left[ M(m) V(m) \right]_i = \left[ M(m) \sum_{k=0}^{n-1} V(m) \right]_i \)
    or
    \( n^i = \sum_{k=0}^{n-1} \sum_{j=1}^{i} { i \choose {j -1} } k^{j-1} = \sum_{j=1}^{i} { i \choose {j -1} } \sum_{k=0}^{n-1} k^{j-1} \)

    Now \(f(j) = \sum_{k=0}^{n-1} k^{j-1}\) has its own history: these are the power sums whose closed forms are the subject of Faulhaber's formula.

    \( \begin{array}{c|c} j & f(j) \\ \hline \\ 1 & n \\ 2 & \frac{n^2}{2} - \frac{n}{2} \\ 3 & \frac{n^3}{3} - \frac{n^2}{2} + \frac{n}{6} \\ 4 & \frac{n^4}{4} - \frac{n^3}{2} + \frac{n^2}{4} \\ 5 & \frac{n^5}{5} - \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30} \\ 6 & \frac{n^6}{6} - \frac{n^5}{2} + \frac{5 n^4}{12} - \frac{n^2}{12} \end{array} \)
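
    Mathematica will produce these closed forms directly, which makes the table easy to check (a sketch; Sum handles the symbolic upper limit):

    Code:
    Table[Expand[Sum[k^(j - 1), {k, 0, n - 1}]], {j, 1, 6}]
    (* reproduces the table above, e.g. n^2/4 - n^3/2 + n^4/4 for j = 4 *)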

    Learning how to prove these formulas has long been a staple of the early education of mathematicians, but they are usually presented as goals to verify, with no trace of how to find them. Here, though, we have a method from linear algebra that lets us read them off.

    First we have the suggestion that the inverse of the matrix \(M(m)\) is involved.

    \( \begin{array}{c|c} n^i & \sum_{j=1}^{i} { i \choose {j -1} } f(j) \\ \hline \\ n^1 & f(1) \\ n^2 & f(1) + 2 f(2) \\ n^3 & f(1) + 3 f(2) + 3 f(3) \\ n^4 & f(1) + 4 f(2) + 6 f(3) + 4 f(4) \\ n^5 & f(1) + 5 f(2) + 10 f(3) + 10 f(4) + 5 f(5) \\ n^6 & f(1) + 6 f(2) + 15 f(3) + 20 f(4) + 15 f(5) + 6 f(6) \end{array} \)

    Just as \(M(m)\) can be read as the upper-left corner of an extended matrix, the same is true of its inverse.

    \( M^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & \dots \\ - \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & \dots \\ \frac{1}{6} & - \frac{1}{2} & \frac{1}{3} & 0 & 0 & 0 & \dots \\ 0 & \frac{1}{4} & - \frac{1}{2} & \frac{1}{4} & 0 & 0 & \dots \\ - \frac{1}{30} & 0 & \frac{1}{3} & - \frac{1}{2} & \frac{1}{5} & 0 & \dots \\ 0 & - \frac{1}{12} & 0 & \frac{5}{12} & - \frac{1}{2} & \frac{1}{6} & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}\)
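
    Indeed, since the column of powers \(n^i\) is \(M(m)\) applied to the column of the \(f(j)\), applying the inverse reads the formulas off mechanically (a minimal sketch, assuming the definition of M above):

    Code:
    Inverse[M[6]].Table[{n^i}, {i, 1, 6}] // Expand
    (* the column of f(1), ..., f(6), matching the table above *)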
     
    Last edited: Aug 25, 2015
  5. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    Code:
    S[m_] := Module[{i, j}, Table[(-1)^(1+i) StirlingS2[i, j] , {i, 1, m}, {j, 1, m}]]
    J[m_] := Module[{i, j}, Table[Boole[i == j] * i , {i, 1, m}, {j, 1, m}]]
    InvJ[m_] := Module[{i, j}, Table[Boole[i == j] / i , {i, 1, m}, {j, 1, m}]]
    InvS[m_] := Module[{i,j}, Table[  (-1)^(1+i) * Abs[StirlingS1[i, j]] , {i, 1, m}, {j, 1, m}]]
    
    If I have this right, the columns of S(m) are the eigenvectors of M(m), J(m) is the diagonal matrix of the corresponding eigenvalues 1, ..., m, and M(m)^n = S(m) J(m)^n InvS(m).
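
    These claims can be checked directly for small m (a quick sketch, assuming the definitions above):

    Code:
    InvS[6].S[6] == IdentityMatrix[6]      (* True: InvS really is the inverse of S *)
    M[6] == S[6].J[6].InvS[6]              (* True: the claimed diagonalization *)
    M[6].M[6] == S[6].J[6].J[6].InvS[6]    (* True: hence M(m)^n = S J^n InvS *)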

    http://mathworld.wolfram.com/StirlingNumberoftheFirstKind.html
    http://mathworld.wolfram.com/StirlingNumberoftheSecondKind.html
     
    Last edited: Aug 25, 2015
  6. Google AdSense Guest Advertisement



    to hide all adverts.
  7. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    \( \left[ V(m) \right]_n = k^{n-1} \\ \left[ InvS(m) V(m) \right]_n = (-1)^{1+n} \frac{(k+n-1)!}{k!} \\ \left[ J(m) InvS(m) V(m) \right]_n = (-1)^{1+n} n \frac{(k+n-1)!}{k!} \\ \left[ M(m) V(m) \right]_n = \left[ S(m) J(m) InvS(m) V(m) \right]_n = (k+1)^n - k^n \)

    Well, that's a comfort: I factored it correctly.
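
    That factorization can be double-checked symbolically; here Pochhammer[k + 1, n - 1] stands in for the rising factorial \((k+1)(k+2)\cdots(k+n-1) = \frac{(k+n-1)!}{k!}\) (a sketch, assuming the definitions above):

    Code:
    Table[Simplify[(InvS[6].V[6])[[n, 1]] - (-1)^(1 + n) Pochhammer[k + 1, n - 1]], {n, 1, 6}]
    (* {0, 0, 0, 0, 0, 0} *)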

    Let \( \left[ W(m,c) \right]_n = \left[ S(m) \right]_{n,c} = (-1)^{1+n} \textrm{StirlingS2}[n, c] \)
    Then
    \( \left[ InvS(m) W(m,c) \right]_n = \sum_{j=c}^{m} (-1)^{n+j} \left| \textrm{StirlingS1}[n, j] \right| \textrm{StirlingS2}[j, c] = \delta_{n,c} = [ n == c ] \)
    which again follows from InvS and S being matrix inverses. So
    \( \left[ J(m) InvS(m) W(m,c) \right]_n = c \delta_{n,c} \\ \left[ M(m) W(m,c) \right]_n = \left[ S(m) J(m) InvS(m) W(m,c) \right]_n = c \left[ S(m) \right]_{n,c} = c \left[ W(m,c) \right]_n \).

    So the columns of S(m) are the eigenvectors of M(m). Yay.
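
    And the eigenvector relation itself checks out for small m (a sketch, assuming the definitions above):

    Code:
    Table[M[6].S[6][[All, c]] == c S[6][[All, c]], {c, 1, 6}]
    (* {True, True, True, True, True, True} *)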
     
