In W4's lab, the lecturer says "Predicting probabilities in X_{m} when m is large, you can actually look for an eigenvector corresponding to the eigenvalue 1, because then you will get PX = X." I don't quite understand it. Does it mean P X_{inf} should equal X_{inf}? Why is this the case?
@JerryLee thank you for the question. Have a look at the equation PX_{m-1} = X_{m}. The matrix P is guaranteed to have an eigenvalue 1 (each column of P sums to 1, so 1 is an eigenvalue of P^T and hence of P), so if X is an eigenvector corresponding to the eigenvalue 1, then PX = X. Now have a look at both of those equations:
PX_{m-1} = X_{m}
PX = X
You are interested in the probabilities of the browser after infinitely many steps of navigation, so for large m you are looking for a vector X_{m-1} that barely changes under the transformation P, i.e. X_{m} is approximately X_{m-1}. That corresponds to the equation PX = X, which is exactly the equation for an eigenvector corresponding to the eigenvalue 1.
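Here is a minimal numerical sketch of that idea; the 3-page transition matrix P below is made up for illustration, not taken from the lab. It finds the eigenvector of P for the eigenvalue 1 and checks that PX = X:

```python
import numpy as np

# Made-up column-stochastic (Markov) matrix for 3 pages:
# column j holds the probabilities of moving from page j to each page.
P = np.array([
    [0.1, 0.4, 0.5],
    [0.6, 0.2, 0.3],
    [0.3, 0.4, 0.2],
])

# Find the eigenvector corresponding to the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
k = np.argmin(np.abs(eigvals - 1.0))  # index of the eigenvalue closest to 1
X = np.real(eigvecs[:, k])
X = X / X.sum()                       # normalize so the entries sum to 1

print(X)      # long-run probabilities of being on each page
print(P @ X)  # equals X up to floating-point error: PX = X
```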
I don't mentor this course, so I have no access to the lab, but with the info in this thread I can share a few points as a direction for you to figure out the rest yourself.
Consider a positive Markov matrix P:
Any vector x may be written as a linear combination of the eigenvectors of P (assuming P is diagonalizable).
The eigenvalues lambda of P satisfy |lambda| <= 1, and every eigenvalue other than 1 satisfies |lambda| < 1.
Applying P to x shrinks all of x's eigenvector components except the component with eigenvalue 1. (Why shrink? Think about the eigenvalues.)
Repeatedly applying P keeps shrinking those components.
Applying P infinitely many times zeros out all components except the one with eigenvalue 1.
Please try to follow these ideas and derive the equation in question; there is a small numerical sketch below if you want to check your reasoning.
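As a sketch of those steps (reusing the same made-up positive Markov matrix P from the earlier example, purely for illustration), repeatedly applying P to an arbitrary starting probability vector shrinks every eigenvector component with |lambda| < 1, so the iterates converge to the eigenvalue-1 eigenvector:

```python
import numpy as np

# Made-up positive (all entries > 0) column-stochastic matrix.
P = np.array([
    [0.1, 0.4, 0.5],
    [0.6, 0.2, 0.3],
    [0.3, 0.4, 0.2],
])

# Start from an arbitrary probability vector x.
x = np.array([1.0, 0.0, 0.0])

# Each application of P multiplies the component along an eigenvector
# by its eigenvalue, so components with |lambda| < 1 decay geometrically
# and only the eigenvalue-1 component survives.
for m in range(50):
    x = P @ x

print(x)          # the steady-state probabilities
print(P @ x - x)  # approximately zero: the limit satisfies PX = X
```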