About Markov matrices

Hi,

In W4's lab, the lecturer says "Predicting probabilities in X_m when m is large, you can actually look for an eigenvector corresponding to the eigenvalue 1, because then you will get PX = X". I don't quite understand it. Does it mean PX_inf should equal X_inf? Why is this the case?

Hi @JerryLee!

@esanina, can you give a hand here? Thanks!

@JerryLee thank you for the question. Have a look at the equation PX_{m-1} = X_{m}. A Markov matrix P always has an eigenvalue 1, so if X is an eigenvector corresponding to the eigenvalue 1, then PX = X. Now compare the two equations:
PX_{m-1} = X_{m}
PX = X
You are interested in the browser's page probabilities after infinitely many navigation steps. For large m, the vector X_{m-1} no longer changes much under the transformation P, so X_m ≈ X_{m-1}. That is exactly the equation PX = X, which is the equation for an eigenvector corresponding to the eigenvalue 1.
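
If it helps to see this concretely, here is a minimal numpy sketch. The 3x3 transition matrix P below is made up for illustration (it is not the matrix from the lab): repeatedly applying P to any starting probability vector gives a vector that stops changing, and that limiting vector matches the eigenvector of P for the eigenvalue 1.

```python
import numpy as np

# Made-up 3-page transition matrix (each column sums to 1).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Start from any probability vector X_0 and iterate X_m = P X_{m-1}.
X = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    X = P @ X

print(X)      # limiting probabilities
print(P @ X)  # essentially the same vector, i.e. PX = X

# The same vector comes from the eigenvector for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
k = np.argmin(np.abs(eigvals - 1))
v = np.real(eigvecs[:, k])
print(v / v.sum())  # normalized so the entries sum to 1
```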

Hi Esanina,

Why won't X_m change much under P when m is large?

@esanina

Hello @JerryLee,

I don't mentor this course, so I have no access to the lab, but with the info in this thread I can share a few points as a direction for you to figure out the rest yourself.

Consider a positive Markov matrix P:

  1. Any vector x may be written as a linear combination of the eigenvectors of P (assuming P has a full set of eigenvectors).

  2. The eigenvalues of P satisfy |lambda| <= 1, and lambda = 1 is the only eigenvalue with magnitude 1 (Perron-Frobenius).

  3. Applying P to x shrinks every eigenvector component of x except the one with eigenvalue 1. (Why shrink? Think about the magnitudes of the eigenvalues.)

  4. Repeatedly applying P keeps shrinking those components.

  5. Applying P infinitely many times zeros out every component except the one with eigenvalue 1.

Please try to follow these ideas and derive the equation in question.
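
If you want to check the idea numerically, here is a rough sketch using a hypothetical positive Markov matrix (again, not the lab's matrix). It decomposes x into eigenvector components and shows that applying P m times scales each component by lambda^m, so only the eigenvalue-1 component survives.

```python
import numpy as np

# Hypothetical positive Markov matrix (columns sum to 1).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

eigvals, V = np.linalg.eig(P)
print(np.abs(eigvals))  # one eigenvalue is 1, the rest have magnitude < 1

# Point 1: solve V c = x for the eigenvector coefficients c of x.
x = np.array([1.0, 0.0, 0.0])
c = np.linalg.solve(V, x)

# Points 3-5: P^m x = V (lambda^m * c), so every component with
# |lambda| < 1 shrinks toward zero and only the eigenvalue-1
# component survives.
m = 50
x_m = np.real(V @ (eigvals**m * c))
print(x_m)
print(np.linalg.matrix_power(P, m) @ x)  # same result, computed directly
```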

Good luck, and cheers,
Raymond