# C1_W3_Assignment compute_pca()

Description:
I have a few questions regarding the graded assignment. Why do we need to sort eigen_vals, since the sorted values are never used? The indices from np.argsort(eigen_vals) are applied directly to the eigen_vecs matrix, which is then used to transform the input matrix X. Why do we need to sort the eigenvalues as well?
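To make the question concrete, here is a minimal NumPy sketch (toy matrix, not the assignment's data) showing that the index array from np.argsort(eigen_vals) can reorder the eigenvector columns directly, without ever storing a sorted copy of the eigenvalues:

```python
import numpy as np

# Toy diagonal matrix whose eigen-decomposition is easy to inspect.
A = np.array([[5.0, 0.0],
              [0.0, 2.0]])

# np.linalg.eig returns eigenvalues in no guaranteed order.
eigen_vals, eigen_vecs = np.linalg.eig(A)

# argsort gives the ascending ordering as an index array; applying it
# to the *columns* of eigen_vecs reorders the eigenvectors to match,
# so eigen_vals itself never needs to be sorted for the projection.
idx = np.argsort(eigen_vals)
sorted_vecs = eigen_vecs[:, idx]
```

The index array `idx` is the only thing the rest of the computation needs.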

The point of the PCA algorithm is that you want to reduce the dimensionality by removing the dimensions that are the least meaningful. Think about what the eigenvalues and eigenvectors mean: the eigenvectors form a basis for the transformation, and each eigenvector e_i is a direction along which the transformation acts by pure scaling, i.e. A e_i = \lambda_i * e_i. That gives you the information you need: the larger the magnitude of the eigenvalue, the more meaningful that dimension is in the transformation. So you want to remove the dimensions starting with the smallest eigenvalues. I've never watched the lectures in NLP C3, but what I'm saying here is what I learned from Prof Ng when he discussed PCA in the original Stanford Machine Learning course. I would hope they mention that in the lectures here.
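Putting the steps above together, here is a hedged sketch of what a compute_pca-style function might look like (illustrative only; the graded assignment's exact signature and helper names may differ, and the argsort mirrors the assignment's approach even though np.linalg.eigh already returns eigenvalues in ascending order):

```python
import numpy as np

def compute_pca(X, n_components=2):
    """Project X onto the n_components eigenvector directions of the
    covariance matrix with the largest eigenvalues (sketch, not the
    assignment's reference implementation)."""
    # Center each feature at zero mean.
    X_demeaned = X - X.mean(axis=0)

    # Covariance matrix of the features (rowvar=False: columns are features).
    cov = np.cov(X_demeaned, rowvar=False)

    # eigh is appropriate for symmetric matrices and returns real eigenvalues.
    eigen_vals, eigen_vecs = np.linalg.eigh(cov)

    # Sort indices by eigenvalue, descending: most variance first.
    idx = np.argsort(eigen_vals)[::-1]
    eigen_vecs_sorted = eigen_vecs[:, idx]

    # Keep the top n_components directions and project the data onto them.
    components = eigen_vecs_sorted[:, :n_components]
    return X_demeaned @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X_reduced = compute_pca(X, n_components=2)
print(X_reduced.shape)  # (50, 2)
```

Note that only the index array from argsort is ever used on the eigenvectors; the reordered eigenvalues themselves don't appear in the projection.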

Update: sorry, I probably missed the point of your question on the first pass. You're right: the argsort indices accomplish everything you actually need, so it's not clear why the assignment would also require the eigenvalues themselves to be sorted as a separate step.