Exercise 4 - w4_unittest.test_back_prop(back_prop) is wrong

The 2nd back_prop test is wrong. The parameter we get is batch_size = 2 when x.shape = (5, 4). Please fix this with the correct expected results and batch_size = 4.

Since the expected results are broken, even if you override it with batch_size = x.shape[1], we’ll still get wrong values.

@Mubsi @paulinpaloalto FYKI.

Hi @balaji.ambresh,

Firstly, I have been told you are a newly added mentor to our courses, so welcome! Looking forward to working with you.

You should have been shared some Git issue documents, and yesterday you were given access to the repos. I’d like you to file an issue for this in the NLP repo. Similarly, in the future, please file all the issues you find in the Git repos.


@Mubsi thank you for giving me access to MLEP and DLS. Looking forward to working with you as well. I’m yet to complete the NLP specialization, so I don’t have access to the NLP repo.
Could you please give me access to the other repos I’ve mentioned in our conversation?
Thank you.

Hi @balaji.ambresh,

We haven’t been able to reproduce this on our end. Could you upload your notebook here, and let me know what exactly we can do to reproduce this?


Hello @Mubsi

Lab id for C4W4 - tzpclcai.

Can you help me figure out what mistake I am making in the week 4 backpropagation?
Thank you :slightly_smiling_face:


Your mistake is in grad_b1 and grad_b2 of Ex 4. The sum logic in those statements is incorrect.
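For reference, here is a minimal sketch of the usual bias-gradient pattern (sum the per-example columns, keep the column shape, divide by the batch size). Variable names follow the lab, but the shapes and values below are toy data I made up, not the actual test case:

```python
import numpy as np

# Toy shapes standing in for the lab's vocab size, hidden size, and batch size.
V, N, batch_size = 5, 3, 4
rng = np.random.default_rng(0)
err = rng.random((V, batch_size))   # stand-in for (yhat - y)
l1 = rng.random((N, batch_size))    # stand-in for the ReLU'd signal W2.T @ (yhat - y)

# Sum over the batch dimension (axis=1), keep a (rows, 1) column, then average.
grad_b2 = np.sum(err, axis=1, keepdims=True) / batch_size   # shape (V, 1) == (5, 1)
grad_b1 = np.sum(l1, axis=1, keepdims=True) / batch_size    # shape (N, 1) == (3, 1)
```

The keepdims=True is what keeps the gradients the same (rows, 1) shape as the bias vectors they update.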


Can anyone tell me if this is sorted out? I’m still getting an error regarding the dimensions.

@Mubsi @sid94

Have you seen this thread?

Yes. But that thread is concerned with a different issue. What we have here is: x has dimensions (5, 4), but the batch_size is 2. Anyway, I found a workaround: instead of using the batch_size variable, I used x.shape[1]. But clearly it’s a bug that needs to be resolved.
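In case it helps anyone else, the workaround looks like this (a sketch with the failing test’s shapes, not the actual test code):

```python
import numpy as np

# The failing test passes x of shape (5, 4) but batch_size = 2.
x = np.zeros((5, 4))
batch_size = 2

# Workaround: derive the batch size from x itself, since each column is one example.
batch_size = x.shape[1]
print(batch_size)  # 4
```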

They both point to the same issue.
While we wait for Mubsi to respond, please find someone who is an NLP mentor and ping them to reply to this thread. I’ll do the same as well.

Hi @sid94

I am not sure where you got your dimensions from. If we are talking about C2W4 Exercise 04, then batch_size is 4 and the shape of tmp_x is (5778, 4):

Expected output
get a batch of data
tmp_x.shape (5778, 4)
tmp_y.shape (5778, 4)

Initialize weights and biases
tmp_W1.shape (50, 5778)
tmp_W2.shape (5778, 50)
tmp_b1.shape (50, 1)
tmp_b2.shape (5778, 1)

Forward prop to get z and h
tmp_z.shape: (5778, 4)
tmp_h.shape: (50, 4)

Get yhat by calling softmax
tmp_yhat.shape: (5778, 4)

call back_prop
tmp_grad_W1.shape (50, 5778)
tmp_grad_W2.shape (5778, 50)
tmp_grad_b1.shape (50, 1)
tmp_grad_b2.shape (5778, 1)

Or are you referring to a different part of the Assignment?


I’m unsure if the tests have been changed since I took the course. But, could you please check the tests based on this thread and this other one?

Hi @balaji.ambresh

The w4_unittest tests haven’t been changed for the last 10 months and they work correctly - I’ve just rerun the Assignment and all the tests passed.


This is with respect to w4_unittest.test_back_prop(back_prop).

Please add this print statement inside the back_prop function:
print(f'batch_size = {batch_size} and x.shape = {x.shape}')

This will be the output upon running the test:

batch_size = 4 and x.shape = (10, 4)
batch_size = 2 and x.shape = (5, 4)
 All tests passed

The 2nd test case has "batch_size": 2. See the corresponding x:

                "x": np.array(
                        [0.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0],

Why is batch size set to 2 for this?


One of the test cases in the function test_back_prop has batch_size set to 2 (another case has batch_size set to 4). That is why you get the printouts that you get, if that is what you are asking.

Batch size should have the same value as x.shape[1].
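To illustrate why (a toy sketch, not the actual test data): the gradients are averages over the columns of x, so dividing by a batch size that doesn’t match x.shape[1] rescales every gradient.

```python
import numpy as np

x = np.ones((5, 4))     # 4 examples, one per column
err = np.ones((5, 4))   # stand-in for (yhat - y)

# Each row-sum over 4 columns is 4.0; only dividing by 4 gives the true average.
grad_wrong = np.sum(err, axis=1, keepdims=True) / 2            # test's batch_size = 2
grad_right = np.sum(err, axis=1, keepdims=True) / x.shape[1]   # true batch size = 4

print(grad_wrong[0, 0], grad_right[0, 0])  # 2.0 1.0
```

So with batch_size = 2 the test can only pass if its "expected" gradients were generated with the same wrong divisor.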

Theoretically you are correct, but my point was that, regarding the unit tests failing, they passed for me. My guess is that the unit tests have some logic that cannot be fixed quickly, and that is why it wasn’t done already.

P.S. I don’t see the issue raised in the github repo. You should raise it.

@arvyzukai Got it. Thank you for confirming the bug. Could you please file a ticket on GitHub to fix this? I’ve completed the courses but am not officially a mentor for NLP.

I get the same output. Can you please be a little more specific about which part is incorrect? Thanks in advance.