I can't find any solution on UNQ_C2 GRADED FUNCTION: gradientDescent

Hi, guys!

I started this NLP course at the end of August 2022.
I’ve taught myself Machine Learning, but I’m not familiar with Deep Learning.
Besides, I usually use R for data mining, so I don’t have much knowledge of Python yet.

While I was trying to solve the assignment for Week 1, I came across the Python error below.

ValueError: operands could not be broadcast together with shapes (10,3) (3,1)

I think I made some mistakes in coding the matrix and vector multiplications.
In particular, my code for the dot product may have some errors.
I’ll post my code and error on this topic. Could you please give me some advice?

Sorry, everyone.

I solved the above issue by myself.

I had unnecessarily replaced two elementwise multiplications (i.e., *) and one addition (i.e., +)
with the matrix multiplication operator (i.e., @).
I should have checked my code more carefully.
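
For anyone who runs into the same error, here is a minimal sketch of the difference (the shapes are chosen to match the error message above; the variable names are illustrative, not the assignment's):

import numpy as np

x = np.random.rand(10, 3)     # e.g. a feature matrix of 10 examples
theta = np.random.rand(3, 1)  # a column vector of 3 parameters

z = x @ theta                 # matrix product: (10,3) @ (3,1) -> (10,1)
# x * theta                   # elementwise product: raises
#                             # "ValueError: operands could not be broadcast
#                             #  together with shapes (10,3) (3,1)"

Elementwise * only works when the shapes are equal or broadcastable, while @ follows the usual linear-algebra rule that the inner dimensions must match.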

Hi @Setsuro_Matsuda

Please remove your code because it’s against the rules.

Thank You

Dear Arvyzukai, the mentor:

Thank you for your notification.

I tried to remove my topic, but permission was required to do so.

On the day I posted it, the “Edit” button worked.

But now, I cannot find any way to remove topics on the forum site.

Could you please delete my topic with the Python code for me?

Sorry for bothering you.

Yours sincerely,

Setsuro Matsuda

No problem. I removed your code snippets.

Cheers

Dear arvyzukai, the mentor:

Thank you for your quick response.

I’ll send my assignment in the form of HTML so as not to show my code to anyone else.

I would appreciate it if you could give me some advice or a clue to solve the issue posted in the forum.

Best regards,

Setsuro Matsuda

(Attachment C1_W2_Assignment.html is missing)

Dear arvyzukai, the mentor:

Thank you for your quick response.

I’ll send my assignment in the form of JPEG so as not to show my code to anyone else.

I would appreciate it if you could give me some advice or a clue to solve the issue posted in the forum.

Best regards,

Setsuro Matsuda

{moderator edit - solution code removed}

Dear Arvyzukai:

Thank you for your useful advice.

Last night, I carefully reviewed the instructions for the assignment and got some clues to solve my issue.

Those clues are the same as what you advised in the email.

I should have thought through the solution by myself first.

Sorry for bothering you, but thanks a lot!

Best wishes,

Setsuro

(Quoted from arvyzukai’s reply of October 4:)

Hi Setsuro

A better way (yours is ok, but this one is more efficient) to get the vocab is:

{moderator edit - solution code removed}

What is wrong is:

{moderator edit - solution code removed}

No problem, and don’t forget to enjoy learning! 🙂 👍

Dear arvyzukai, the mentor:

Hello, again!

I have questions about the assignment for Week 3.

In the function definition for PCA, I’m really confused about the dimensions of some matrices.

For example, the matrix X has dimensions 3x10, so I think the number of observations is 3 and the number of features is 10. However, the dimension of its covariance matrix is 3x3. If the number of features is 10, the dimension of the covariance matrix should be 10x10. Do I have to transpose the input matrix X into a 10x3 matrix?

I also cannot understand the dimensions of the returned eigenvector matrix.

What is the dimension of the eigenvector matrix “eigen_vecs”? If the number of principal components is 2, it should be 3x2. I mean, the number of rows is the number of observations (i.e., 3) and the number of columns corresponds to the number of principal components (i.e., 2).

In fact, the problem in my code stems from inconsistent dimensions when computing the dimension-reduced matrix “X_reduced.” This calculation is quite confusing to me. I can’t see which matrix must be transposed to compute the dot product of the two matrices.

The following is my Python code. Could you give me some advice on how to fix it?

Best wishes,

Setsuro Matsuda

------------------------------ My Python code ------------------------------

import numpy as np

def compute_pca(X, n_components=2):
    """
    Input:
        X: of dimension (m,n) where each row corresponds to a word vector
        n_components: Number of components you want to keep.
    Output:
        X_reduced: data transformed in 2 dims/columns + regenerated original data
    pass in: data as 2D NumPy array
    """
    ### START CODE HERE ###
    # mean-center the data
    X_demeaned = X - np.mean(X, axis=0)

    # calculate the covariance matrix
    covariance_matrix = np.cov(X_demeaned)

    # calculate eigenvectors & eigenvalues of the covariance matrix
    eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix)

    # sort eigenvalues in increasing order (get the indices from the sort)
    idx_sorted = np.argsort(eigen_vals)

    # reverse the order so that it's from highest to lowest
    idx_sorted_decreasing = idx_sorted[::-1]

    # sort the eigenvalues by idx_sorted_decreasing
    eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]

    # sort eigenvectors using the idx_sorted_decreasing indices
    eigen_vecs_sorted = eigen_vecs[idx_sorted_decreasing, :]

    # select the first n eigenvectors (n is the desired dimension
    # of the rescaled data array, or dims_rescaled_data)
    eigen_vecs_subset = eigen_vecs_sorted[:, 0:n_components]

    # transform the data by multiplying the transpose of the eigenvectors
    # with the transpose of the de-meaned data, then take the transpose
    # of that product
    X_reduced = np.dot(X_demeaned.T, eigen_vecs_subset).T
    ### END CODE HERE ###

    return X_reduced

Hi @Setsuro_Matsuda

First, it is better to start a new forum discussion with the appropriate week number instead of continuing a discussion from an irrelevant week.

Now, to answer your questions:

For example, the matrix X has dimensions 3x10, so I think the number of observations is 3
and the number of features is 10. However, the dimension of its covariance matrix is 3x3.
If the number of features is 10, the dimension of the covariance matrix should be 10x10.
Do I have to transpose the input matrix X into a 10x3 matrix?

The shape of covariance_matrix is 10x10. As suggested in the “Hints”, you need to pass rowvar=False (see the documentation):

If `rowvar` is True (default), then each row represents a
variable, with observations in the columns. Otherwise, the relationship
is transposed: each column represents a variable, while the rows
contain observations.
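
A quick illustration of the two behaviours (X here is just random data with the shapes discussed above):

import numpy as np

X = np.random.rand(3, 10)              # 3 observations, 10 features
print(np.cov(X).shape)                 # (3, 3)   -- rows treated as variables
print(np.cov(X, rowvar=False).shape)   # (10, 10) -- columns treated as variables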

What is the dimension of the eigen vector matrix “eigen_vecs”?

The shape of eigen_vecs is 10x10. When you sort this matrix by eigen_vals in decreasing order and take the subset of n_components (in our case, 2), you get 10x2 (this is eigen_vecs_subset). These two vectors are the most “informative” eigenvectors.
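
As a sketch, the sort-and-subset step can look like this (np.linalg.eigh stores eigenvectors as columns, so the sort is over columns; the 10x10 identity matrix is just a stand-in for the covariance matrix):

import numpy as np

vals, vecs = np.linalg.eigh(np.eye(10))  # stand-in 10x10 symmetric matrix
order = np.argsort(vals)[::-1]           # indices from largest to smallest eigenvalue
vecs_sorted = vecs[:, order]             # eigh returns eigenvectors as columns
subset = vecs_sorted[:, :2]              # keep the top two, shape (10, 2)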

Then you transform this eigen_vecs_subset (shape 10x2) with X_demeaned (shape 3x10) to get your principal components X_reduced (shape 3x2). The instructions in the exercise suggest doing multiple transposes, np.dot(eigen_vecs_subset.T, X_demeaned.T).T, to get the desired result.

I am not sure why these transposes are suggested, because np.dot(X_demeaned, eigen_vecs_subset) gives the same result.
Note: np.dot(A.T, B.T).T is the same as np.dot(B, A).
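
A small sanity check of that identity, with random matrices of the relevant shapes:

import numpy as np

X_demeaned = np.random.rand(3, 10)
eigen_vecs_subset = np.random.rand(10, 2)

a = np.dot(eigen_vecs_subset.T, X_demeaned.T).T  # the suggested chain of transposes
b = np.dot(X_demeaned, eigen_vecs_subset)        # the direct product
print(a.shape, np.allclose(a, b))                # (3, 2) True, since (B^T A^T)^T = A B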

Cheers

Dear Arvyzukai:

Your advice is always helpful to me. Thanks to you, I was able to finish submitting the assignment for Week 3.

Actually, I didn’t know there were some hints on the assignment page.

I misunderstood the shape (i.e., dimension) of the returned eigenvector matrix.

As you mentioned, the last transformation for X_reduced seems strange.

In terms of linear algebra, we don’t have to transpose the two matrices eigen_vecs_subset and X_demeaned.

If we do the matrix multiplication “X_demeaned @ eigen_vecs_subset” in this order, we can simply multiply X_demeaned by eigen_vecs_subset without transposing either.

I’m sorry that I asked you via my previous email. At first, I tried to post my question on the community forum, but the Coursera system didn’t show any page in my browser.

I would like to know where I can ask questions, I mean, how to access the forum for each week.

The only way I know is to click the link to the community in Week 1.

Thanks to mentor support like yours, I can keep studying NLP without dropping out.

I will never give up on finishing this specialization and will obtain the certificate.

Regards,

Setsuro