What is the result of multiplying these two matrices?

Hi everyone,
I don't understand this section. What is the result of np.dot(eigenvector[:,:k], eigenvector[:,:k].T)?

Hi @13695789309

It is a projection matrix that maps the data back into the original space after its dimensions have been reduced using the first k principal components. This reconstructs the data while keeping the most significant features (and discarding noise and less important details).
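
To make that concrete, here is a minimal sketch with toy data (not the course notebook's code; the variable names and data are mine) showing what that product looks like:

import numpy as np

# Toy centered data: 100 samples, 5 features (just for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)

# Eigenvectors of the covariance matrix, columns sorted by descending eigenvalue
eigenvalues, eigenvector = np.linalg.eigh(np.cov(X, rowvar=False))
eigenvector = eigenvector[:, np.argsort(eigenvalues)[::-1]]

k = 2
# The product of the first k eigenvectors with their transpose is a
# projection matrix onto the subspace spanned by those k components.
P = np.dot(eigenvector[:, :k], eigenvector[:, :k].T)  # shape (5, 5), rank k

print(np.linalg.matrix_rank(P))   # k, not full rank, so P is not the identity
print(np.allclose(P, P @ P))      # True: P is idempotent, i.e. a projection

# Multiplying the (centered) data by P keeps only the part of each sample
# that lies in the top-k subspace: the dimensionality-reduced reconstruction.
X_reconstructed = X.dot(P)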

I'll share some images with different values of k, as I have tested this before on the Google Scraped Image Dataset:


Thanks for your reply!

Since np.dot(eigenvector[:,:k], eigenvector[:,:k].T) is not an identity matrix, I want to know whether there are any math principles that guarantee the reconstructed matrix is similar to the original matrix and produces a similar image.

Hi @13695789309,

The mathematics behind this is the SVD: the eigenvectors represent the directions of maximum variance in the data (the image). When you project the data onto these k eigenvectors (with k < n) and then back into the original space, the result approximates the original image. This is because the first k eigenvectors capture the largest variations in the data, so by using them you keep the most important information and the reconstructed matrix stays close to the original one.
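
If you want a rough numerical illustration (a toy matrix below, not the assignment's data), the truncated SVD shows why keeping only the top k components still reconstructs the matrix well:

import numpy as np

# Toy "image": a nearly rank-3 matrix plus a little noise (illustration only)
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 80)) + 0.01 * rng.normal(size=(100, 80))

# SVD: A = U @ diag(s) @ Vt, with singular values s in descending order
U, s, Vt = np.linalg.svd(A, full_matrices=False)

for k in (1, 2, 3, 10):
    # Rank-k approximation: keep only the k largest singular values
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    rel_error = np.linalg.norm(A - A_k) / np.linalg.norm(A)
    print(f"k={k:2d}  relative reconstruction error = {rel_error:.4f}")

The formal guarantee behind this is the Eckart-Young theorem: the truncated SVD is the best rank-k approximation of a matrix, and the reconstruction error is exactly the size of the discarded singular values, which are the smallest ones.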

Hope it helps! Feel free to ask if you need further assistance.


Alireza has explained what is going on here, but it’s also worth noting that what you show above is not the operation being done here. Where do you see that?

We have used SVD to do PCA (Principal Component Analysis), as Alireza described. Now that we have the resulting eigenvectors in descending order of their expressiveness, we can experiment with reducing the dimensions to a certain number of principal components in the transformed space and see how accurate the reconstructed image is. The operation that projects back into the original image space to create the reconstructed image is:

X_{reduced} \cdot eigenvec_{reduced}^T

Here’s the function given to us in the notebook to do that operation:

def reconstruct_image(Xred, eigenvecs):
    X_reconstructed = Xred.dot(eigenvecs[:,:Xred.shape[1]].T)

    return X_reconstructed
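
For context, here is roughly how that function gets used. The PCA steps and variable names below are my own sketch (assuming the reconstruct_image function above is defined), not the notebook's exact code:

import numpy as np

# X: centered data matrix, one flattened image per row (toy data here)
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 64))
X = X - X.mean(axis=0)

# Eigenvectors of the covariance matrix, columns sorted by descending eigenvalue
eigenvalues, eigenvecs = np.linalg.eigh(np.cov(X, rowvar=False))
eigenvecs = eigenvecs[:, np.argsort(eigenvalues)[::-1]]

# Project onto the first k principal components, then map back with the
# reconstruct_image function defined above
k = 10
Xred = X.dot(eigenvecs[:, :k])              # X_reduced, shape (50, k)
X_rec = reconstruct_image(Xred, eigenvecs)  # back in the original 64-dim space
print(X_rec.shape)                          # (50, 64)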

PCA is pretty deep waters from a mathematical standpoint, so it might be worth reading through all the explanations in the assignment again carefully. In fact, someone recently made a joke about this on the forum.

@13695789309 @paulinpaloalto

Strang is pretty big in this part of the math world, but I've been going through his 'kitty' text:

https://math.mit.edu/~gs/everyone/

It's a bit easier to read, and at the end it brings you through PCA.


In addition to Anthony’s link (the Strang Linear Algebra book is one of the most widely respected out there), you can also find a number of lectures on PCA on YouTube. Prof Ng covered this in his original Stanford Machine Learning class and here’s his first lecture on that.

Stanford has put a lot of their ML related graduate CS classes out for public view as well. Here’s the lecture from CS 229 that introduces PCA among other things. CS 229 is one of the courses that Prof Andrew taught at Stanford for many years, but it’s been handed off to other professors now.


I have some basic knowledge of linear algebra, but I am not an expert.
I spent a day studying SVD, and now I understand what you explained.
Thank you so much! :grin:

Now I know my code wasn't written in a standard way. Thanks for your correction and help!

Thank you for your reply. Could you please let me know if there is an ebook I can purchase?

That’s good to hear and you’re welcome! Happy to help :raised_hands:

For this particular book, I am not sure that there is, and admittedly Strang's books are a little bit pricey, but he is a major figure in this field.

If you are looking for something cheaper, you might try this book as well, though I have less experience with it:
