Dear arvyzukai (mentor),

Hello again!

I have questions about the assignment for Week 3.

In the definition of the function for PCA, I’m really confused about the dimensions of some of the matrices.

For example, the matrix X has dimensions 3x10, so I think the number of observations is 3 and the number of features is 10. However, the dimension of its covariance matrix is 3x3.

If the number of features is 10, the dimension of the covariance matrix should be 10x10.

Do I have to transpose the input matrix X to a 10x3 matrix?
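To check my understanding, I ran a tiny experiment with random data (this is just my own sanity check, not the assignment code):

```python
import numpy as np

X = np.random.rand(3, 10)  # 3 rows, 10 columns

# By default, np.cov treats each ROW as a variable, so I get a 3x3 matrix.
print(np.cov(X).shape)  # (3, 3)

# With rowvar=False, each COLUMN is a variable, and I get 10x10.
print(np.cov(X, rowvar=False).shape)  # (10, 10)
```

So maybe the columns of X are supposed to be the variables here? That is the part I am unsure about.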

I also cannot understand the dimensions of the returned eigenvector matrix.

What is the dimension of the eigenvector matrix “eigen_vecs”?

If the number of principal components is 2, I would expect it to be 3x2; that is, the number of rows is the number of observations (i.e., 3) and the number of columns corresponds to the number of principal components (i.e., 2).

In fact, the problem in my code stems from inconsistent dimensions when computing the dimension-reduced matrix “X_reduced.” This calculation is quite confusing to me:

I can’t see which matrix must be transposed before taking the dot product of the two matrices.
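To make the inconsistency concrete, I traced the shapes through my current code with a random 3x10 input (again, just my own test outside the assignment):

```python
import numpy as np

X = np.random.rand(3, 10)                # 3 observations, 10 features (as I understand it)
X_demeaned = X - np.mean(X, axis=0)      # (3, 10)

covariance_matrix = np.cov(X_demeaned)   # (3, 3) -- rows treated as variables
_, eigen_vecs = np.linalg.eigh(covariance_matrix)  # (3, 3)
eigen_vecs_subset = eigen_vecs[:, 0:2]   # (3, 2)

# (10, 3) dot (3, 2) gives (10, 2); transposing gives (2, 10)
X_reduced = np.dot(X_demeaned.T, eigen_vecs_subset).T
print(X_reduced.shape)  # (2, 10), but I expected (3, 2)
```

So the shapes all line up mechanically, but the result is (2, 10) instead of the (3, 2) I expected, which makes me think one of my earlier steps is wrong.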

The following is my Python code. Could you give me some advice on how to fix it?

Best wishes,

Setsuro Matsuda

------------------------------ My Python code ------------------------------

```python
import numpy as np


def compute_pca(X, n_components=2):
    """
    Input:
        X: of dimension (m,n) where each row corresponds to a word vector
        n_components: Number of components you want to keep.
    Output:
        X_reduced: data transformed in 2 dims/columns + regenerated original data
    pass in: data as 2D NumPy array
    """
    ### START CODE HERE ###
    # mean center the data
    X_demeaned = X - np.mean(X, axis=0)
    # calculate the covariance matrix
    covariance_matrix = np.cov(X_demeaned)
    # calculate eigenvectors & eigenvalues of the covariance matrix
    eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix)
    # sort eigenvalues in increasing order (get the indices from the sort)
    idx_sorted = np.argsort(eigen_vals)
    # reverse the order so that it's from highest to lowest
    idx_sorted_decreasing = idx_sorted[::-1]
    # sort the eigenvalues by idx_sorted_decreasing
    eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]
    # sort eigenvectors using the idx_sorted_decreasing indices
    eigen_vecs_sorted = eigen_vecs[idx_sorted_decreasing, :]
    # select the first n eigenvectors (n is the desired dimension
    # of the rescaled data array, or dims_rescaled_data)
    eigen_vecs_subset = eigen_vecs_sorted[:, 0:n_components]
    # transform the data by multiplying the transpose of the eigenvectors
    # with the transpose of the de-meaned data,
    # then take the transpose of that product
    X_reduced = np.dot(X_demeaned.T, eigen_vecs_subset).T
    ### END CODE HERE ###
    return X_reduced
```