What is the result of multiplying two matrices?

Alireza has explained what is going on here, but it’s also worth noting that the expression you show above is not the operation being performed here. Where did you see that?

We have used SVD to do PCA (Principal Component Analysis), as Alireza described. Now that we have the resulting eigenvectors sorted in descending order of how much variance they explain, we can experiment with reducing the data to a given number of principal components in the transformed space and see how accurate the reconstructed image is. The operation that projects back into the original image space to produce the reconstructed image is:

X_{\text{reduced}} \cdot \text{eigenvecs}_{\text{reduced}}^T

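To see where X_{\text{reduced}} comes from, here is the whole round trip written out, with V_k standing for the matrix whose columns are the first k eigenvectors (the name V_k is mine, not the notebook’s):

X_{\text{reduced}} = X \cdot V_k, \qquad X_{\text{reconstructed}} = X_{\text{reduced}} \cdot V_k^T = X \cdot V_k V_k^T

Because the columns of V_k are orthonormal, V_k V_k^T acts as a projection onto the span of the first k components, so the reconstruction only loses the variance carried by the components you dropped.
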
Here’s the function given to us in the notebook to do that operation:

def reconstruct_image(Xred, eigenvecs):
    # Project the reduced representation back into the original pixel space,
    # keeping only the first Xred.shape[1] eigenvectors before transposing.
    X_reconstructed = Xred.dot(eigenvecs[:,:Xred.shape[1]].T)

    return X_reconstructed
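
In case it’s useful, here is a rough, self-contained sketch of how that function fits into the full pipeline. The toy data and names like num_components are mine, and I’m assuming X is a matrix with one flattened image per row, as in the assignment; treat it as an illustration rather than the notebook’s exact code.

import numpy as np

def reconstruct_image(Xred, eigenvecs):
    # Same operation as the notebook's reconstruct_image above.
    return Xred.dot(eigenvecs[:, :Xred.shape[1]].T)

# Toy stand-in for the data: 100 "images" of 64 pixels each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))

# Center the data; PCA describes variation around the mean.
X_mean = X.mean(axis=0)
X_centered = X - X_mean

# SVD of the centered data: the columns of Vt.T are the eigenvectors of the
# covariance matrix, already sorted by decreasing singular value.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
eigenvecs = Vt.T                                         # shape (64, 64)

# Keep only the first num_components principal components...
num_components = 10
Xred = X_centered.dot(eigenvecs[:, :num_components])     # shape (100, 10)

# ...then project back to pixel space and undo the centering.
X_reconstructed = reconstruct_image(Xred, eigenvecs) + X_mean
print(X_reconstructed.shape)                             # (100, 64)

The fewer components you keep, the blurrier the reconstructed images become, which is exactly the effect the assignment asks you to explore.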

PCA gets into pretty deep waters from a mathematical standpoint, so it might be worth reading through all the explanations in the assignment again carefully. In fact, someone recently made a joke about this on the forum.