In article <firstname.lastname@example.org>, RichD <email@example.com> wrote:
> As I understand it, a Markov process is a probabilistic
> sequence of state transitions. It's completely specified
> by the state transition matrix.
>
> However, I sometimes see references to the
> equilibrium condition, whether it's been reached,
> etc. I don't get this, can anyone elaborate? ....
Example (using row vector notation). Transition matrix A =
(1/3 2/3)
(1/2 1/2).
Suppose the system has state vector (x y), i.e. there is probability x that it is in the first state, and y for the second. Then the next step of the process takes it to (x y)A. If you choose some initial probabilities x and y, and iterate through quite a few steps, you will find the state vector getting closer and closer to (3/7 4/7). If the system were actually in that state, then the next step would take it to (3/7 4/7)A, which is again equal to (3/7 4/7). (Work it out!)
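If you want to see the convergence without doing the arithmetic by hand, here is a small Python sketch of the iteration just described (the matrix A and starting vector are the ones from the example above; the choice of 30 steps is arbitrary):

```python
# Iterate (x y) -> (x y)A and watch the state vector approach (3/7 4/7).

A = [[1/3, 2/3],
     [1/2, 1/2]]

def step(v, A):
    """One step of the process: row vector v times matrix A."""
    return [sum(v[i] * A[i][j] for i in range(len(v)))
            for j in range(len(A[0]))]

v = [1.0, 0.0]            # start certain the system is in the first state
for _ in range(30):
    v = step(v, A)

print(v)                  # very close to [3/7, 4/7] = [0.42857..., 0.57142...]
```

Any other starting probabilities (x y) with x + y = 1 give the same limit; only the rate of approach differs.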
(3/7 4/7) describes the equilibrium state, and this particular process converges steadily towards that state. (Some others don't.) You may be able to see that the equilibrium state vector is an eigenvector of A corresponding to the eigenvalue 1.
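The fixed-point claim (3/7 4/7)A = (3/7 4/7) can be checked exactly with rational arithmetic, e.g. using Python's fractions module:

```python
# Verify that (3/7 4/7) is a left eigenvector of A with eigenvalue 1,
# i.e. that the equilibrium vector is unchanged by one step of the process.
from fractions import Fraction as F

A = [[F(1, 3), F(2, 3)],
     [F(1, 2), F(1, 2)]]
pi = [F(3, 7), F(4, 7)]

next_pi = [sum(pi[i] * A[i][j] for i in range(2)) for j in range(2)]
print(next_pi == pi)      # True: (3/7 4/7)A = (3/7 4/7)
```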