Why does a P with equal probabilities for all pages fail to reproduce the eigenvector?

Course 1, Week 4

This programming assignment is an oversimplified simulation of the PageRank algorithm. It uses matrix multiplication and the properties of a Markov matrix: if you have the vector of current probabilities and the one from the previous step, you can predict the future probability of being redirected to each page. The matrix P is the only unknown in the equation that relates the eigenvalue of the Markov matrix P (i.e. 1), X_m, and X_{m-1}.

When I give all pages an equal probability of being linked to in P, the Exercise 4 check fails: the multiplication result differs from the eigenvector. Despite that, the auto-grader still assigned me the same score as when I assigned different, random ranks to the pages. Why is the multiplication result different from the eigenvector just because all pages have equal probability according to P? When I swap the two versions of P, the exercise fails on the equal probabilities but passes on the random ones.

It would be helpful if you provide some concrete examples.

These are the two versions of the array of probabilities P that I swap with each other:

P = np.array([
    [0, .25, .25, .25, .25],
    [.25, 0, .25, .25, .25],
    [.25, .25, 0, .25, .25],
    [.25, .25, .25, 0, .25],
    [.25, .25, .25, .25, 0],
])

# P = np.array([    
#     [0, .20, .25, .30, .1],
#     [.1, 0, .25, .3, .5],
#     [.25, .25, 0, .3, .15],
#     [.30, .25, .20, 0, .25],
#     [.35, .30, .3, .10, 0],
# ])

This is the array of pages:
X0 = np.array([[0], [0], [0], [1], [0]])

This is the explanation of the principle:
A square matrix is called a Markov matrix if all entries are nonnegative and the sum of the elements in each column is equal to 1. Markov matrices have a handy property: they always have an eigenvalue equal to 1.
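As a quick sanity check (a minimal sketch, not part of the assignment code), both properties can be verified with NumPy for the equal-probability version of P:

```python
import numpy as np

# Equal-probability version of P from the question
P = np.array([
    [0, .25, .25, .25, .25],
    [.25, 0, .25, .25, .25],
    [.25, .25, 0, .25, .25],
    [.25, .25, .25, 0, .25],
    [.25, .25, .25, .25, 0],
])

# Every column sums to 1, so P is a Markov matrix
print(P.sum(axis=0))                 # [1. 1. 1. 1. 1.]

# A Markov matrix always has an eigenvalue equal to 1
eigvals = np.linalg.eigvals(P)
print(np.isclose(eigvals, 1).any())  # True
```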

Matrix P was defined in such a way that it is actually a Markov matrix, and you can see that it has an eigenvalue 1. The equation X_m = PX_{m-1} can be rewritten as PX_{m-1} = 1 × X_m. To predict the probabilities in X_m when m is large, you can look for an eigenvector corresponding to the eigenvalue 1, because then you get PX = X.
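One way to make this concrete (a sketch, assuming `np.linalg.eig` is used to get the eigenpairs): NumPy does not guarantee any particular ordering of the eigenvalues, so the eigenvector should be selected by searching for the eigenvalue closest to 1 rather than by a hardcoded column index:

```python
import numpy as np

P = np.array([
    [0, .25, .25, .25, .25],
    [.25, 0, .25, .25, .25],
    [.25, .25, 0, .25, .25],
    [.25, .25, .25, 0, .25],
    [.25, .25, .25, .25, 0],
])

eigenvalues, eigenvectors = np.linalg.eig(P)

# Pick the column whose eigenvalue is (numerically) 1,
# instead of assuming it sits in a fixed position
idx = np.argmin(np.abs(eigenvalues - 1))
X = eigenvectors[:, idx]

# For the eigenvalue-1 eigenvector, PX equals X
print(np.allclose(P @ X, X))  # True
```

With this selection, the PX = X check holds for both versions of P, regardless of where `eig` happens to place the eigenvalue 1.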

When I give the matrix P different random probabilities whose columns each sum to one, the test in Exercise 4 passes.

Original eigenvector corresponding to the eigenvalue 1:
[-0.38756469+0.j -0.5087927 +0.j -0.42633941+0.j -0.44387926+0.j
 -0.4605752 +0.j]
Result of multiplication:[-0.38756469+0.j -0.5087927 +0.j -0.42633941+0.j -0.44387926+0.j
 -0.4605752 +0.j]
Check that PX=X element by element:[ True  True  True  True  True]

When I use the matrix P with 0.25 probability for all pages, I get this failed output:

Original eigenvector corresponding to the eigenvalue 1:
[-0.89442719  0.2236068   0.2236068   0.2236068   0.2236068 ]
Result of multiplication:[ 0.2236068 -0.0559017 -0.0559017 -0.0559017 -0.0559017]
Check that PX=X element by element:[False False False False False]
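For what it's worth, inspecting the eigenvalues of the equal-probability P directly suggests the printed vector belongs to the eigenvalue -0.25 rather than 1: multiplying it by P scales it by -0.25 instead of leaving it unchanged. A quick sketch to reproduce this:

```python
import numpy as np

P = np.array([
    [0, .25, .25, .25, .25],
    [.25, 0, .25, .25, .25],
    [.25, .25, 0, .25, .25],
    [.25, .25, .25, 0, .25],
    [.25, .25, .25, .25, 0],
])

eigenvalues = np.linalg.eigvals(P)
# One eigenvalue is 1; the other four are -0.25
print(np.round(eigenvalues, 4))

# The vector from the failing output is scaled by -0.25, not kept fixed
X = np.array([-0.89442719, 0.2236068, 0.2236068, 0.2236068, 0.2236068])
print(np.allclose(P @ X, -0.25 * X))  # True
```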


Thanks for posting the example.

I’m not a subject matter expert on this topic, so I’d rather wait and hope that someone with deeper knowledge replies.