# C1W4_Assignment Exercise 5

Hi, it seems like this should be fairly straightforward: multiply the eigenvectors by the original matrix and divide by the norm. However, no matter how many variations I attempt, it does not go through. See my code below. I have tried many variations of the shapes.

{moderator edit - solution code removed}

What does np.linalg(V) give you? That is a module, not a function. If you want the norm, that would be np.linalg.norm(V), right?

But I don’t see any reference to norms of anything in implementing the perform_PCA function. Where do you see that in the instructions? For the purposes of perform_PCA, you can assume that the input data is already centered.

I have already centered my data; I just need to transform the centered data with PCA. The instructions literally say "Perform dimensionality reduction with PCA," which is not helpful, since it does not explain how. I do think we may be able to omit the use of the norm in general. Is there somewhere I can see a reference for this being done?

{moderator edit - solution code removed}

Is the variable V on the right track? And Xred? What does Xred stand for?

Here is the first paragraph of the instructions for section 2.4:

Now that you have the first 55 eigenvalue-eigenvector pairs, you can transform your data to reduce the dimensions. Remember that your data originally consisted of 4096 variables. Suppose you want to reduce that to just 2 dimensions; then all you need to do to perform the reduction with PCA is take the dot product between your centered data and the matrix V = [v1 v2], whose columns are the first 2 eigenvectors, or principal components, associated with the 2 largest eigenvalues.

It seems pretty clear in what it is telling you to do:

Select the first k columns of the eigenvector matrix. Then simply perform this dot product:

Xred = X · eigenvecRed

The name Xred probably means “X reduced”.
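As a sketch of what that dot product looks like in NumPy (the function name perform_PCA comes from the assignment, but this signature and the assumption that eigenvecs holds one eigenvector per column, already sorted by descending eigenvalue, are my guesses, not the official solution):

```python
import numpy as np

def perform_PCA(X, eigenvecs, k):
    """Project centered data onto the first k principal components.

    X         -- centered data, shape (n_samples, n_features)
    eigenvecs -- eigenvector matrix, one eigenvector per COLUMN,
                 assumed sorted by descending eigenvalue
    k         -- number of dimensions to keep
    """
    V = eigenvecs[:, :k]   # first k columns: shape (n_features, k)
    Xred = X @ V           # (n_samples, n_features) @ (n_features, k)
    return Xred            # reduced data: shape (n_samples, k)
```

No norm is involved here, which matches the point above: the projection is just a matrix product with the selected eigenvectors.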

{moderator edit - solution code removed}

This is what I am doing. Am I missing something? I'm doing my best here.

The first error I am getting is a shape error, so I tried transposing things around. The error is below:

ValueError: shapes (55,4096) and (2,55) not aligned: 4096 (dim 1) != 2 (dim 0)

Also note that in both sets of logic you showed, you are reducing the eigenvectors along the rows dimension, which is not what was intended. You do a transpose, but that happens after the reduction by the indexing, right?

That sounds promising; I'm trying to reduce by column instead of row now. [,:2] doesn't work, so I'm looking for the correct syntax. Thanks for your help on this so far.

What you are missing is the point I made about indexing. eigenvecs is a matrix, and you are indexing its first dimension, which is the rows dimension. That gives you the first k rows of the matrix, but you want the first k columns, right?

Yes, columns, correct.

OK, so how do you say that in Python?

eigenvecs[:,:2]

Right! Give that a try but use k, of course, since we’re writing general code here.


Got it!!! Wow, thanks!
