Failed Tests on final C2_W3 Assignment, but received 100 points

Same here. The failing unit test case is still there, but I’ve got a 100% grade.

Please guide me on how to create a new issue.

Same here, though I passed with 80 points. I am facing another issue in Exercise 7, where the test reports a broadcasting problem, but I am unable to understand it; if it really were a broadcasting issue, the code would not even run, yet the code runs and the cost is calculated for all 3000 iterations. The output of the test execution says that the shape of b1 should be (3, 1), which is wrong. My output has the correct shape of (2, 1), but somehow the test case fails it. The same goes for W1 and W2: the expected output shape matches my output, yet the test cases still fail. Strangely, there is no issue with the b2 shape.

Same problem for me.

I’m still getting the same errors even though I first opened the notebook on May 27, i.e. several months AFTER the fix of March 6:

Unit test:
Expected:
-0.03577862954823146
Got:
-0.03577862954823145

Grading test:
Expected:
[[-0.05350702 -0.10250995]
[-0.07303197 -0.1295294 ]
[-0.07487082 -0.11145172]],
but got:
[[-0.88792757 -2.15678679]
[-0.90083658 -2.29294507]
[-0.90485966 -2.2175348 ]].

If I made the exact same mistake as @AntonB then I would like to know what it was.
Otherwise, if this is what it looks like, namely a difference due to rounding, then I’d like everybody’s grade to be corrected and the tests updated so they are not broken by such rounding errors.

Thanks!

I’ve dug deeper and can see a few oddities in the tests:

On one hand, some tests are set up to test exact values, e.g.

"expected": {
    "Z1_array": {
        "shape": (2, 2000), 
        "Z1": [
            {"i": 0, "j": 0, "Z1_i_j": -0.022822666781157443,},
            {"i": 1, "j": 1999, "Z1_i_j": -0.03577862954823145,},
            {"i": 0, "j": 100, "Z1_i_j": -0.05872690929597483,},
        ],},
}

Note the breaking value of -0.03577862954823145 hard-coded there. This test over-tests the code, and it breaks due to differences in the last digits.
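For illustration, here is a minimal sketch (assuming NumPy, and reusing the two values from the unit test above) of how a tolerance-based comparison behaves where an exact comparison breaks:

import numpy as np

# The hard-coded expectation vs. a result that differs only in the last digit
expected = -0.03577862954823145
computed = -0.03577862954823146

print(expected == computed)                        # False: exact equality breaks on the last digit
print(np.isclose(computed, expected, rtol=1e-9))   # True: tolerance-based check passes

# For whole arrays, assert_allclose raises only if the difference exceeds the tolerance
np.testing.assert_allclose(np.array([computed]), np.array([expected]), rtol=1e-9)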

On the other hand, some tests explicitly ignore values, e.g.

"expected": {
    "W1": np.zeros((n_h, n_x)), # no check of the actual values in the unit tests
    "b1": np.zeros((n_h, 1)), # no check of the actual values in the unit tests
    "W2": np.zeros((n_y, n_h)), # no check of the actual values in the unit tests
    "b2": np.zeros((n_y, 1)), # no check of the actual values in the unit tests
}

and the only thing this particular test checks is the shape of the parameters:
assert result["W1"].shape == test_case["expected"]["W1"].shape
This looks like under-testing of the solution, and this is exactly where things seem to go wrong upon submission – this is the test for nn_model.
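A hypothetical middle ground between the two extremes would check both the shape and, approximately, the values. The helper name and the numbers below are made up purely for illustration:

import numpy as np

def check_parameter(result, expected, name, rtol=1e-5):
    # Verify the shape first, then compare values with a tolerance
    got, want = result[name], expected[name]
    assert got.shape == want.shape, f"{name}: got shape {got.shape}, expected {want.shape}"
    np.testing.assert_allclose(got, want, rtol=rtol,
                               err_msg=f"{name}: values differ beyond tolerance")

# Usage with made-up numbers, just to show the idea
result = {"W1": np.array([[0.01788628, 0.0043651], [0.00096497, -0.01863493]])}
expected = {"W1": np.array([[0.01788628, 0.0043651], [0.00096497, -0.01863493]])}
check_parameter(result, expected, "W1")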

Without having seen the code of the final (or grading) tests, I can only speculate that the errors we are experiencing are due to these small differences in the last digits of certain results in an exercise (nn_model) that is tested much less strictly before submission than after.

An inconsistent randomisation process may complicate things further:
I found that if, immediately after initialisation, I print(parameters) inside nn_model, the figures differ from those output from initialize_parameters in Exercise 3.
If, however, I hard-code the parameters in nn_model, i.e.

parameters = {
    'W1': np.array([[ 0.01788628,  0.0043651 ], [0.00096497, -0.01863493]]),
    'b1': np.array([[0.], [0.]]),
    'W2': np.array([[-0.00277388, -0.00354759]]),
    'b2': np.array([[0.]])
}

then the cost outputs from the first couple of iterations match the Expected Output shown below nn_model exactly: 0.693148, 0.693147, 0.693147, etc. By iteration 2995, however, the figures already show some divergence.
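One way to make the randomisation consistent would be to seed the generator around the initialisation. The sketch below assumes the initialisation uses np.random.randn with a 0.01 scaling, as the Exercise 3 output suggests; the seed value and function signature are my assumptions, not the notebook’s actual code:

import numpy as np

def initialize_parameters(n_x, n_h, n_y, seed=3):
    # Seeding here makes every call return identical parameters;
    # the seed value 3 is an assumption, not necessarily what the notebook uses.
    np.random.seed(seed)
    return {
        "W1": np.random.randn(n_h, n_x) * 0.01,
        "b1": np.zeros((n_h, 1)),
        "W2": np.random.randn(n_y, n_h) * 0.01,
        "b2": np.zeros((n_y, 1)),
    }

# Called twice, this now yields the same values, so nn_model would start
# from the same point every run, matching initialize_parameters in Exercise 3.
p1 = initialize_parameters(2, 2, 1)
p2 = initialize_parameters(2, 2, 1)
assert all(np.array_equal(p1[k], p2[k]) for k in p1)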

I’d like to ask DeepLearning.AI to fix the notebook in the following way:

  • by adding more meaningful test cases to the unit tests of initialize_parameters and nn_model;
  • by making sure that, wherever value comparisons are performed, they allow for slight differences;
  • by making sure that randomisation works consistently.

Following the publication of an update, I’d like to ask for another three opportunities for each of us to resubmit the assignment and improve our overall scores and certificates.

Thank you very much.

Hey @Askaleto, @shivanandmn, @annoyingCode, @Andrea7, @GMSR, @daniilkorotin, @ABCarter, @Writobrata, @Igor_Kuts, @Paul_Hume,

First of all, apologies for the inconvenience caused to all of you due to this issue in the test cases. The test cases corresponding to Exercise 4 have been modified, and those of you who were facing an issue in Exercise 4 (in particular, one failing unit test) shouldn’t face it any more. A huge thanks to all of you for your patience.

A special thanks to Janos for providing detailed feedback, as well as possible solutions to fix the issues. Although the team has been able to incorporate some of the changes that you suggested, the other changes were not possible to incorporate, simply because of the existing structure of the test cases.

Nonetheless, a huge thanks to you once again; if possible, the team will incorporate these changes sometime in the future.

Cheers,
Elemento

Tagging the remaining learners here: @Vaso_Sapouna, @rabbia_Hassan, @kewal, @Eric_Polin and @Janos

P.S. - A single reply only allows tagging 10 learners.

Cheers,
Elemento