Week 3 Programming Assignment Exercise 6 only 10/20 points

I keep getting only 10/20 points for exercise 6. I’ve examined the code and tried many variations. It’s all super simple and straightforward, so I can’t imagine what I might have done wrong. The output of every exercise matches the “Expected Output” in the notebook. However, my exercise 5 output has one fewer decimal digit at the very end compared to the expected output:
My output:

cost = 0.693147770382682

Expected output:

cost = 0.6931477703826823

I get the following unit test failure in the grader output:
Failed test case: “default_check”. Wrong array W1…
Expected:
[[ 0.01794023 0.0043047 ]
[ 0.00104143 -0.01872273]],
but got:
[[ 0.01792225 0.00432483]
[ 0.00101594 -0.01869346]].

Failed test case: “default_check”. Wrong array b1…
Expected:
[[-1.84034762e-06]
[-2.54245039e-06]],
but got:
[[-1.22689841e-06]
[-1.69496693e-06]].

Failed test case: “default_check”. Wrong array W2…
Expected:
[[-0.0015988 -0.00260943]],
but got:
[[-0.0019905 -0.00292215]].

Failed test case: “default_check”. Wrong array b2…
Expected:
[[0.00283435]],
but got:
[[0.00188957]].

Failed test case: “extra_check”. Wrong array W1…
Expected:
[[-0.00081229 -0.00628698]
[-0.00044829 -0.00476073]
[ 0.00899339 -0.00154505]],
but got:
[[-0.00081985 -0.0062785 ]
[-0.00044323 -0.00476645]
[ 0.00899339 -0.00154506]].

Failed test case: “extra_check”. Wrong array b1…
Expected:
[[0.01769593]
[0.00483811]
[0.01769627]],
but got:
[[0.0176961 ]
[0.00483799]
[0.01769627]].

Failed test case: “extra_check”. Wrong array W2…
Expected:
[[-0.01309373 0.00889263 0.0093344 ]],
but got:
[[-0.01311619 0.00886943 0.00708614]].

Failed test case: “extra_check”. Wrong array b2…
Expected:
[[0.01178302]],
but got:
[[0.01173092]].

Passing the tests in the notebook does not prove your code is correct.

The passing tolerances on this assignment are particularly tight, because the grader expects your code to be implemented in a specific way. Whether this is a good idea remains to be determined.

Note that where your code fails the tests, the errors are in the 4th or 5th decimal place. I think the grader is checking for about 6 digits of accuracy.
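I don’t have visibility into the grader’s internals, but the comparison presumably looks something like np.allclose() with a tight tolerance. A rough sketch (the tolerance value is a guess, and the arrays are copied from your grader output):

import numpy as np

# Hypothetical sketch of the kind of tolerance check a grader might run;
# the atol value here is a guess, not the grader's actual setting.
expected = np.array([[0.01794023, 0.0043047],
                     [0.00104143, -0.01872273]])
got = np.array([[0.01792225, 0.00432483],
                [0.00101594, -0.01869346]])

print(np.allclose(expected, got, atol=1e-6))  # False: the values differ in the 5th decimal place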

Thank you @TMosh. I’m confused because the code is so straightforward. Almost all of the code is just given. Is it possible the seed is different between my notebook and the grader? My seed was set with:

np.random.seed(3)

Maybe it got changed somehow? Exercise 6 is just so incredibly basic, how could I have done something different from what was expected? Exercise 5 was very basic too, but my output was missing one decimal digit. Do I need to convert values to double?

No, the seeds have not been changed. There must be something wrong with your code if you are getting different answers.

Just to make sure we are on the same page here, are you saying that you pass all the tests if you run the unit tests in the notebook? Or are you getting failures in the unit tests? If so, then there is no point in submitting to the grader.

The results from Exercise 5 compute_cost() are not used in Exercise 6 update_parameters(), so don’t look to that as an explanation.

The code you add to update_parameters() is computationally simple, but you must be absolutely certain that all of the data you use comes from the “parameters” and “grads” dictionaries, and that you do not modify the “learning_rate” argument.
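As a rough sketch of the pattern for a single parameter (made-up values, using the notebook’s dictionary keys; the same idea repeats for b1, W2, and b2 — this is not the reference solution):

import numpy as np

# Rough sketch of the update pattern for one parameter, with made-up values.
parameters = {"W1": np.array([[0.01, -0.02]])}
grads = {"dW1": np.array([[0.001, 0.002]])}
learning_rate = 1.2

W1 = parameters["W1"]              # the value comes from "parameters"
dW1 = grads["dW1"]                 # the gradient comes from "grads"
W1 = W1 - learning_rate * dW1      # learning_rate is read, never reassigned
print(W1)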

All the unit tests in the notebook pass, and all of my outputs match the expected outputs shown in the notebook. But when I submit, my grade is 90% because I got 10/20 on exercise 6. The grader output shows the expected output of the unit tests I failed (I appended it to my original post above).

I can’t imagine what’s wrong with my code unless I was supposed to convert from float to double (or vice-versa?). The code is so simple.

Also, if I made a mistake somewhere in update_parameters, wouldn’t that be reflected in the next code block which compares my output to the expected output?

Regarding exercise 5’s compute_cost(), I just don’t know how the grader performs these unit tests. It may be computing costs and updating the parameters based on them.

I’m 100% certain my W1, b1, W2, and b2 are coming from the parameters dictionary, and the corresponding gradients (dW1, db1, dW2, db2) are coming from the grads dictionary. I use learning_rate when updating the parameters from their gradients, but I never modify the learning rate.

I’d love to post my code here, but I think that’s against the rules.



Thank you @TMosh! I’ve replied.

@TMosh figured it out: I was updating the values with -= instead of x = x - .... I didn’t know that made a difference.
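In case it helps anyone else: as I understand it, an in-place update like W1 -= learning_rate * dW1 modifies the array stored inside the parameters dictionary, because W1 is just another reference to that same array, while W1 = W1 - learning_rate * dW1 binds W1 to a brand-new array and leaves the dictionary entry alone. A tiny standalone demo of the difference (made-up numbers, not assignment code):

import numpy as np

params = {"W1": np.array([[1.0, 2.0]])}
lr = 0.1
grad = np.array([[0.5, 0.5]])

# In-place update: W1 -= lr * grad also changes params["W1"],
# because W1 and params["W1"] are the same array object.
W1 = params["W1"]
W1 -= lr * grad
print(params["W1"])   # [[0.95 1.95]]  -- the caller's dictionary was modified

# Out-of-place update: binds W1 to a new array and leaves
# params["W1"] untouched.
params = {"W1": np.array([[1.0, 2.0]])}
W1 = params["W1"]
W1 = W1 - lr * grad
print(params["W1"])   # [[1. 2.]]  -- original values preserved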
