MATH: Markov chains & Regular Markov chains

Discussion in 'Physics & Math' started by kingwinner, Jun 3, 2006.

  1. kingwinner Registered Senior Member

    Messages:
    796
    1) For each of the following transition matrices, will the Markov chain be regular? Why?

    1a)
    [1 0
    0 1]
    2x2 matrix

    1b)
    [0.1 0.6 0.3
    0.33 0.3 0.37
    0.5 0 0.5]
    3x3 matrix

    2) Gina noticed that the performance of her baseball team seemed to depend on the outcome of their previous game. When her team won, there was a 70% chance that they would win the next game. If they lost, however, there was only a 40% chance that they would win their next game.
    2a) Following a loss, what is the probability that Gina's team will win two games later? Use Markov chains to find out. [the answer provided is 0.52, but no matter what I try, I can't get the answer]

    Can someone please explain the steps to solve these problems? I am struggling with these 2 questions. Thank you!
     
  2. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    Hi kingwinner,

    You bring up quite an interesting range of topics here!

    1) This is a simple question of definition, so what is the definition of a regular Markov chain? You should find that one Markov chain is pretty clearly regular and the other is obviously not regular.

    2) You need to write out the transition matrix M. Do you know how to do this? Once you have the transition matrix, the probabilities of interest will be given by M^2 since you want to look two games ahead. Which element of M^2 gives you the probability you want?
     
  3. kingwinner Registered Senior Member

    Messages:
    796
    Hi, Physics Monkey,

    1) Both transition matrices contain "0" entries, so I guess both are NOT regular Markov chains. Why is one of them regular?

    2) S^(2) = S^(0) P^2, where S^(0) is the initial probability vector, S^(2) is the probability vector after two steps, and P is the transition matrix.
    My problem is that I don't know what the initial probability vector, S^(0), is...
     
  4. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    kingwinner,

    1) A Markov chain is regular if all the entries of P^n are nonzero for some n. In other words, if you can multiply the transition matrix by itself enough times that the elements are all non-zero, then the Markov chain is regular. Clearly matrix a) is not regular since you'll always have zeros. But what about matrix b)? You are right that one element is zero, but does that zero survive? Have you tried to see what P^2 is?
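
    For example, here's a quick numerical check (a minimal sketch in Python, assuming NumPy; the matrix is the one from question 1b):

        import numpy as np

        # Transition matrix from question 1b (rows sum to 1).
        P = np.array([[0.10, 0.60, 0.30],
                      [0.33, 0.30, 0.37],
                      [0.50, 0.00, 0.50]])

        P2 = P @ P  # matrix product, i.e. P^2
        print(P2)
        # All nine entries of P^2 are positive: the zero in P does not survive.
        print(bool(np.all(P2 > 0)))  # True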

    2) You're asked to find the probability of a win two games down the road provided you lost this game. If you definitely lost this game, what is your initial state?

    P.S. Why are you using right stochastic matrices? Don't you find that odd?
     
  5. kingwinner Registered Senior Member

    Messages:
    796
    1) "Regular Markov chains always achieve a steady state. A Markov chain is regular if the transition matrix P or some power of P has no zero entries." This is the definition from my textbook.

    So we indeed have to test every power of P and see whether P^2, P^3, etc. have zero entries or not. How do we know when to stop testing? If P contains "0" entries, and P^2, P^3, P^4 contain "0" entries, who says that P^5 will not be the one with no "0" entries? We can never conclude that a Markov chain is non-regular without testing powers of P until infinity.......

    2) S(n) = S(0) P^n; this is the formula given in my textbook for Markov chains.

    The transition matrix P=
    [0.7 0.3
    0.4 0.6]

    The thing is that I don't know what the initial probability vector S(0) is....please help me...
     
  6. draqon Banned Banned

    Messages:
    35,006
    how bad is it... if I have no clue... of what you guys are talking about?
     
  7. przyk squishy Valued Senior Member

    Messages:
    3,203
    Yeah. What in the blazes is a Markov chain?
     
  8. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    kingwinner,

    1) The definition does seem to imply that you might have to go on testing forever. However, you will often find that either zeros vanish quickly or they are obviously not going to go away (as in matrix a). With a little playing around you will find that it is quite hard to keep the zeros from disappearing (again, except in very special looking matrices like a).

    2) You are starting from a state of certainty. You know that you DEFINITELY lost the game you just played, so what is your initial state?
     
  9. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    draqon and przyk,

    It's nothing very complicated. A Markov process is a random process with "no memory," and that just means that the probabilities at each step depend only on the probabilities at the previous step and nothing else. Take kingwinner's example: you can either win or lose each game you play, and the probabilities depend only on how you did in the previous game. Suppose you start the season off with a win, then your initial state is just the vector (1,0). Your goal is to answer the question, "What is the probability of a win (loss) after n games?"

    To answer this question you have to construct what is called a transition matrix which tells you how to go from one step to the next. The first element is the probability to win if you just won, the second element is the probability to win if you just lost, etc. Using P(W->W) = .7, P(W->L) = .3, P(L->W) = .4, P(L->L) = .6 you can find:
    P = [ .7 .4
    .3 .6 ]
    where P is the transition matrix.

    Now you can have fun with it. For example, if we start the season with a win, then our state after one iteration is (.7,.3) (just matrix multiplication). If P is the transition matrix, you have the general formula S(n) = P^n S(0), where S is your state vector. A Markov chain is basically just this infinite chain of probabilities. Of course, there are all kinds of extensions and so forth one can play with.
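
    To make that concrete, here is a minimal sketch in Python (assuming NumPy; it uses the column-stochastic convention from this post):

        import numpy as np

        # Column-stochastic transition matrix: first column holds the
        # probabilities after a win, second column after a loss.
        P = np.array([[0.7, 0.4],   # P(W->W), P(L->W)
                      [0.3, 0.6]])  # P(W->L), P(L->L)

        S = np.array([1.0, 0.0])    # initial state: started the season with a win
        for n in range(1, 4):
            S = P @ S               # S(n) = P S(n-1)
            print(n, S)             # after game 1: [0.7 0.3], and so on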
     
  10. kingwinner Registered Senior Member

    Messages:
    796
    1) So basically to test it, just go up a few powers and see if the zeros disappear.

    2) Would the initial probability vector be [0.5 0.5], equal chance of winning and losing??
     
    Last edited: Jun 4, 2006
  11. przyk squishy Valued Senior Member

    Messages:
    3,203
    Hi,

    Looks interesting. I'd never seen matrices used in probability theory before. Thanx. I think I more or less get it now.
     
  12. przyk squishy Valued Senior Member

    Messages:
    3,203
    1) That seems to be what PM is suggesting.
    2) No. You're given that the team lost the last game.
     
  13. kingwinner Registered Senior Member

    Messages:
    796
    2) S(0) = [0.4 0.6]

    P=
    [0.7 0.3
    0.4 0.6]

    Two games later is actually S(1) because S(0) corresponds to the next game (following the loss).

    So S(1)=S(0)P=[0.52 0.48]
    Therefore, following a loss, the probability that Gina's team will win two games later is 0.52 or 52%

    Please point it out if my steps are wrong. Is S(1)=S(0)P the right thing to do?
     
  14. quadraphonics Bloodthirsty Barbarian Valued Senior Member

    Messages:
    9,391
    You can determine the regularity of a Markov chain by computing the eigenvalues of the transition matrix. There is always an eigenvalue exactly equal to 1, and no eigenvalue has modulus greater than 1. If 1 is a simple (non-repeated) eigenvalue and every other eigenvalue has modulus strictly less than 1, the Markov chain is regular. If the eigenvalue 1 is repeated, the chain is reducible; if some other eigenvalue has modulus 1 (such as -1), the chain is periodic. In either case it is not regular.
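
    A quick numerical check (a minimal Python/NumPy sketch, applied to the two matrices from question 1):

        import numpy as np

        A = np.array([[1.0, 0.0],
                      [0.0, 1.0]])          # matrix 1a
        B = np.array([[0.10, 0.60, 0.30],
                      [0.33, 0.30, 0.37],
                      [0.50, 0.00, 0.50]])  # matrix 1b

        print(np.linalg.eigvals(A))          # [1. 1.]: eigenvalue 1 repeated -> not regular
        print(np.abs(np.linalg.eigvals(B)))  # one eigenvalue of modulus 1, rest < 1 -> regular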
     
  15. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    kingwinner,

    The initial state should be S(0) = [0 1] because you definitely lost that game. It's as simple as that. In this case, the win probability for two games from now is given by the first element of S(0) P^2. What you have done is equivalent, as you can check: you're simply saying S(1) = S(0) P = [.4 .6], which is fine, and then you calculate S(2) = S(1) P = S(0) P^2 = [.52 .48]. I think you might want to practice this argument a little since you had some trouble with it. Hope this helps.
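
    If you want to check it numerically, here's a minimal Python sketch (assuming NumPy; row-stochastic convention, as in your textbook):

        import numpy as np

        P = np.array([[0.7, 0.3],    # row 1: probabilities after a win
                      [0.4, 0.6]])   # row 2: probabilities after a loss
        S0 = np.array([0.0, 1.0])    # initial state: definitely lost the last game

        S2 = S0 @ np.linalg.matrix_power(P, 2)
        print(S2)  # [0.52 0.48] -> a 52% chance of a win two games later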
     
  16. Physics Monkey Snow Monkey and Physicist Registered Senior Member

    Messages:
    869
    Oh yeah, quadraphonics is certainly correct. The only thing I would say is that it is often easier to just watch the zeros vanish by multiplying a few times. Finding the eigenvalues of a large matrix can be time consuming, and so other simpler ways of telling are often convenient. Once you know that the unique steady state exists, it becomes easy to find it.
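
    For instance, once you know the unique steady state exists, you can read it off from the eigenvector for eigenvalue 1 (a minimal Python/NumPy sketch, using Gina's win/loss matrix from question 2):

        import numpy as np

        P = np.array([[0.7, 0.3],
                      [0.4, 0.6]])

        # Solve pi P = pi with sum(pi) = 1: take the eigenvector of P
        # transposed belonging to the eigenvalue closest to 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        pi = pi / pi.sum()
        print(pi)  # approximately [0.5714 0.4286], i.e. (4/7, 3/7)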
     
