[Week 2] Assignment 1: Operations on Word Vectors - Debiasing - Exercise 5.2

My assumption is that I am getting the same result as you did.

If that is not the case, then the error is in a different part of the code. But, as you are aware, all the other values in Martin’s post are correct, so those should be a good reference for your debugging. :slight_smile:

1 Like

Exercise 3 - neutralize

After implementing the function, I am getting -3.8581292784004314e-17 after neutralizing, instead of the expected -4.442232511624783e-17.

neutralize(word, g, word_to_vec_map)

cosine similarity between receptionist and g, before neutralizing:  0.3307794175059374
cosine similarity between receptionist and g, after neutralizing:  -3.8581292784004314e-17

I have tried using the np.power, np.sqrt, and np.sum functions, and np.linalg.norm as well.
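For reference, here is essentially what I am computing (a minimal sketch of the projection formula from the notebook instructions; the real exercise also handles the word-to-vector lookup, so treat this as an illustration rather than the graded code):

```python
import numpy as np

def neutralize_sketch(e, g):
    """Remove the component of word vector e that lies along bias axis g."""
    # Projection of e onto g: (e . g / ||g||^2) * g
    e_biascomponent = (np.dot(e, g) / np.linalg.norm(g) ** 2) * g
    # What remains is orthogonal to g, so its cosine similarity with g
    # is zero up to floating-point rounding (hence tiny values like -3.86e-17)
    return e - e_biascomponent
```

Note that any value on the order of 1e-17 is zero to machine precision, so the exact residue depends on the order of the floating-point operations (e.g. np.linalg.norm versus np.sqrt(np.sum(np.square(...)))).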

1 Like

I am getting the same result with this function: -3.8581292784004314e-17.

1 Like

Also, in the last optional debias function, I keep getting a different output.

1 Like

In my case, I had an issue in the calculations for e_w1B_p and e_w2B_p: I was using e_w1B instead of e_w1 when computing e_w1B_p. The same applied to calculating e_w2B_p.
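In other words, a sketch of step 5 only (variable names are my assumptions about the notebook's intermediates, so double-check against your own code):

```python
import numpy as np

def corrected_projection(e_w, e_wB, mu_B, mu_orth):
    """Step-5 sketch: re-scale the bias component of one equalized word."""
    scale = np.sqrt(np.abs(1 - np.linalg.norm(mu_orth) ** 2))
    # Note: the denominator is built from e_w (the full word vector),
    # NOT from its bias component e_wB
    return scale * (e_wB - mu_B) / np.linalg.norm((e_w - mu_orth) - mu_B)
```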

Hope it helps!

3 Likes

I have implemented this equation as written and gotten the expected outputs.
However, after reading the reference paper (the debiasing algorithm is from Bolukbasi et al., 2016), I must conclude that this equation has been copied incorrectly. The equation in the paper appears near the bottom of page 6, and it is very clear that the denominator should be the norm of the numerator. In the assignment this is not the case, because
e_(w1B) is not equal to e_(w1) - mu_orth.

Furthermore, the paper states clearly on page 3, in the subsection on word embeddings, that each word vector was normalized to unit length. This is vital for the square root in the equation to be appropriate. Since this normalization is not done in the assignment, the results would seem to be of questionable use.
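For reference, here is my reading of the two versions, reconstructed from the paper and the notebook (so please verify against both); the paper writes $\nu = \mu - \mu_B$ for what the assignment calls $\mu_{orth}$:

$$e_{w1} := \mu_{orth} + \sqrt{1 - \lVert \mu_{orth} \rVert^{2}}\;\frac{e_{w1B} - \mu_B}{\lVert e_{w1B} - \mu_B \rVert} \qquad \text{(paper, p. 6)}$$

$$e_{w1B}^{corrected} = \sqrt{\left|\,1 - \lVert \mu_{orth} \rVert^{2}\right|}\;\frac{e_{w1B} - \mu_B}{\lVert (e_{w1} - \mu_{orth}) - \mu_B \rVert}, \quad e_1 = e_{w1B}^{corrected} + \mu_{orth} \qquad \text{(assignment)}$$

In the paper the denominator is exactly the norm of the numerator; in the assignment it is not, because $e_{w1B} \neq e_{w1} - \mu_{orth}$ in general.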

1 Like

Hello, I am working on this assignment right now, and apart from the problem everyone mentioned, I have run into something strange. No matter whether I use norm_2 or norm_22 in the denominators of step 5, the results are -0.23871136142883745 and +0.23871136142883745. But there should be a difference between using norm_2 and norm_22, shouldn’t there?

1 Like

I don’t understand your reply. Could you please click my name and message your notebook as an attachment, along with the questions you have?

1 Like

There is a mistake in your implementation.
In step 5, please fix the term inside the square root.

That said, you can print the L2-norms of mu_B and e_w1 and see that the relevant norms are very close to 1, which is why there is so little difference between using the norms and the norms raised to the power of 2.
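As a quick illustration of that last point (the numbers here are made up; only the near-unit norm matters):

```python
import numpy as np

# A stand-in vector whose L2 norm is close to 1, like the
# intermediates in this step
v = np.array([0.57, -0.42, 0.55, 0.31, -0.29])

norm_2 = np.linalg.norm(v)   # L2 norm
norm_22 = norm_2 ** 2        # squared L2 norm

# When the norm is near 1, the norm and its square are nearly equal,
# so dividing by either one changes the result only slightly
print(norm_2, norm_22)
```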

1 Like

Oh, thanks a lot! I found my mistake and fixed it, and now I see the correct result. Also, thank you for the explanation about the small difference between the norms and the norms raised to the power of 2.
If I have other questions, can I ask them here, or should I post them in the community?

1 Like

You’re welcome.

Please create a new topic if applicable. Don’t post code in public; it’s okay to share a stack trace, though. Here’s the community user guide to get started.

2 Likes

I also have an issue with this exercise. It looks like the notebook has gone through numerous updates, and the currently expected result is:

[screenshot of expected output]

while what I got, by following the equations very closely, is:

[screenshot of my output]

I’m pretty sure I have everything set exactly as in the equations above. Can anyone confirm either that I got the wrong result, or that the expected output is once again configured incorrectly?

1 Like

I added some print statements to my logic in that function and here is my output:

cosine similarities before equalizing:
cosine_similarity(word_to_vec_map["man"], gender) =  -0.1171109576533683
cosine_similarity(word_to_vec_map["woman"], gender) =  0.35666618846270376

norm(mu_orth) 0.9710905652537206
norm(e1) 1.0
norm(e2) 1.0
norm(e1_uncorrected) 1.3939214059353529
norm(e2_uncorrected) 1.3939214059353529
factor 0.23871136142883792
cosine similarities after equalizing:
cosine_similarity(e1, gender) =  -0.23871136142883795
cosine_similarity(e2, gender) =  0.23871136142883792

So you can see that my results are the same as the expected values. Just looking at this as a scientist, I think the evidence suggests that your code is not correct. :nerd_face:

One approach to debugging would be to print some of the intermediate results that I show, and perhaps that will give some clue as to where you are going off the rails …
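If it helps, here is a hypothetical helper along those lines (the argument names are assumptions about what your intermediates are called; call it from inside your equalize() just before the return):

```python
import numpy as np

def report_intermediates(mu_orth, e1, e2):
    """Print the same diagnostics as in the output above for comparison."""
    print("norm(mu_orth)", np.linalg.norm(mu_orth))
    print("norm(e1)", np.linalg.norm(e1))
    print("norm(e2)", np.linalg.norm(e2))
    # "factor" is the scaling term from step 5: sqrt(|1 - ||mu_orth||^2|)
    print("factor", np.sqrt(np.abs(1 - np.linalg.norm(mu_orth) ** 2)))
```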

2 Likes

@paulinpaloalto thanks for the feedback, it helped me pin down the issue.

In the computations for e_w1B and e_w2B, I misread · as * (so I had actually used element-wise multiplication instead of the dot product).

After updating, I got expected results :+1:
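To make the distinction concrete, here is a toy example (g and e are made-up stand-ins for the bias axis and a word vector):

```python
import numpy as np

g = np.array([0.1, -0.3, 0.2])   # hypothetical bias axis
e = np.array([0.4, 0.1, -0.2])   # hypothetical word vector

# Correct: the dot product e . g is a scalar, so e_B is a multiple of g
e_B = (np.dot(e, g) / np.linalg.norm(g) ** 2) * g

# Bug: element-wise e * g is a vector, so this is not a projection at all
e_B_wrong = (e * g / np.linalg.norm(g) ** 2) * g

print(e_B, e_B_wrong)
```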

2 Likes

Nice work! It’s great that you were able to solve the issue just based on those additional intermediate values.

1 Like