Week 4, First programming assignment does not grade my work

Hello,

I have completed the first programming assignment for Week 4. In the notebook everything seems fine, and all the tests pass successfully. However, the grader shows 0/100 for all my submissions. I have attached a picture of the grader output and the result of validating the notebook for my implementation.

Cheers,
Mohamad


That looks like the message from the Validate button. You need to click the blue Submit Assignment button in the top-right.

I did push the Submit button. The problem is that while all the tests pass in the notebook, the grader still shows me 0/100. I have attached another picture of my submissions.

What’s the message displayed when you click the Show Grader Output link?

Here is the full message shown under the Show Grader Output link:

[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W4A1/Building_your_Deep_Neural_Network_Step_by_Step.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
Tests failed on 10 cell(s)! These tests could be hidden. Please check your submission.

The following cell failed:

parameters = initialize_parameters(3,2,1)

print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))

initialize_parameters_test(initialize_parameters)

The error was:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-3-3984a613433b> in <module>
      6 print("b2 = " + str(parameters["b2"]))
      7 
----> 8 initialize_parameters_test(initialize_parameters)

~/work/submitted/courseraLearner/W4A1/public_tests.py in initialize_parameters_test...
     30     ]
     31 
---> 32     multiple_test(test_cases, target)
     33 
     34 

~/work/submitted/courseraLearner/W4A1/test_utils.py in multiple_test(test_cases, ta...
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your ...
    143 

AssertionError: Not all tests were passed for initialize_parameters. Check your equ...

==========================================================================================
The following cell failed:

t_dZ, t_linear_cache = linear_backward_test_case()
t_dA_prev, t_dW, t_db = linear_backward(t_dZ, t_linear_cache)

print("dA_prev: " + str(t_dA_prev))
print("dW: " + str(t_dW))
print("db: " + str(t_db))

linear_backward_test(linear_backward)

The error was:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-16-3427f65cb756> in <module>
      6 print("db: " + str(t_db))
      7 
----> 8 linear_backward_test(linear_backward)

~/work/submitted/courseraLearner/W4A1/public_tests.py in linear_backward_test(targe...
    308     ]
    309 
--> 310     multiple_test(test_cases, target)
    311 
    312 def linear_activation_backward_test(target):

~/work/submitted/courseraLearner/W4A1/test_utils.py in multiple_test(test_cases, ta...
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your ...
    143 

AssertionError: Not all tests were passed for linear_backward. Check your equations...

==========================================================================================
The following cell failed:

t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()

t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache...
print("With sigmoid: dA_prev = " + str(t_dA_prev))
print("With sigmoid: dW = " + str(t_dW))
print("With sigmoid: db = " + str(t_db))

t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache...
print("With relu: dA_prev = " + str(t_dA_prev))
print("With relu: dW = " + str(t_dW))
print("With relu: db = " + str(t_db))

linear_activation_backward_test(linear_activation_backward)

The error was:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-18-1dd7958789b5> in <module>
     11 print("With relu: db = " + str(t_db))
     12 
---> 13 linear_activation_backward_test(linear_activation_backward)

~/work/submitted/courseraLearner/W4A1/public_tests.py in linear_activation_backward...
    378     ]
    379 
--> 380     multiple_test(test_cases, target)
    381 
    382 def L_model_backward_test(target):

~/work/submitted/courseraLearner/W4A1/test_utils.py in multiple_test(test_cases, ta...
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your ...
    143 

AssertionError: Not all tests were passed for linear_activation_backward. Check you...

==========================================================================================
The following cell failed:

t_AL, t_Y_assess, t_caches = L_model_backward_test_case()
grads = L_model_backward(t_AL, t_Y_assess, t_caches)

print("dA0 = " + str(grads['dA0']))
print("dA1 = " + str(grads['dA1']))
print("dW1 = " + str(grads['dW1']))
print("dW2 = " + str(grads['dW2']))
print("db1 = " + str(grads['db1']))
print("db2 = " + str(grads['db2']))

L_model_backward_test(L_model_backward)

The error was:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-20-3ace16762626> in <module>
      9 print("db2 = " + str(grads['db2']))
     10 
---> 11 L_model_backward_test(L_model_backward)

~/work/submitted/courseraLearner/W4A1/public_tests.py in L_model_backward_test(targ...
    442     ]
    443 
--> 444     multiple_test(test_cases, target)
    445 
    446 def update_parameters_test(target):

~/work/submitted/courseraLearner/W4A1/test_utils.py in multiple_test(test_cases, ta...
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your ...
    143 

AssertionError: Not all tests were passed for L_model_backward. Check your equation...

While it seems that all the tests failed on the grader, validating my implementation in the notebook passes all the tests (it is worth mentioning that running the cells one by one also executes all the tests successfully in the notebook).

Please show us the actual output you get when you run the test in the notebook for the first initialize routine. It should look like this:

Here is attached the actual output I get when I run the test in the notebook for the first initialize routine.

Your b values are the wrong shape: they are 1D arrays. Compare them to what my output shows. See the difference? Your syntax for np.zeros is incorrect. It is a bug in the tests that they pass anyway; I will file that bug in the morning. Apparently the grader’s test cases catch your bug, but the tests in the notebook do not.
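To illustrate the shape issue, here is a minimal sketch (not the assignment’s actual code): np.zeros(2) builds a 1-D array, while np.zeros((2, 1)) builds the 2-D column vector that the assignment expects for the bias terms.

```python
import numpy as np

# np.zeros(2) produces a 1-D array, the shape the grader rejects.
b_wrong = np.zeros(2)        # shape (2,)
# np.zeros((2, 1)) produces a 2-D column vector, the expected shape.
b_right = np.zeros((2, 1))   # shape (2, 1)

print(b_wrong.shape)  # (2,)
print(b_right.shape)  # (2, 1)
```

The values are identical (all zeros), which is why a test that only compares values can pass while a shape-sensitive test fails.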

Thank you so much for your help. That solved my problem.

Cheers,
Mohamad


Interesting! I tried making the same mistake that you did, so that I could understand the behavior and file a bug about it. But what I found is that in my case the test cases in the notebook do fail in exactly the way that you’d hope:

So I believe this says that you have an out-of-date version of the notebook and the test cases. Unfortunately, the test cases are in a separate file, so the procedure to update to the latest version is a little complicated. You need to click “File → Open” and then delete all the .py files, in particular public_tests.py and test_utils.py, rename your notebook, and then click “Help → Lab Help → Get Latest Version” as described on the FAQ Thread. Once the refresh completes, you’ll need to copy/paste your completed code from your saved copy of the notebook into the new one.

Thank you so much for the support. There was another problem with the linear_backward test case, but by comparing my results with the expected results, I was able to pass all the tests in both the notebook and the grader.

Hi @paulinpaloalto ,
I have what seems to be a related issue with the same assignment (C1, W4, PA1).
Firstly, the first two tests (for functions initialize_parameters and initialize_parameters_deep) fail for no apparent reason. All the other tests pass.
Secondly, when I submit the assignment I get 0/100 and the grader says “Tests failed on 10 cell(s)!” even though only two of the tests fail and it is only showing me those two in the grader output.
I updated the notebook and test cases as you have suggested above, but to no avail.
Do you have any advice as to how I should proceed?
Thank you

Interesting. We are still in the learning phase with the new graders. My suggestion for debugging this would be to show us exactly your output for the two test cases that fail in the notebook, as I did in an earlier post on this thread. My guess is that “no apparent reason” just means you’re not interpreting the output correctly, but we’ll soon see!

My point in the last comment was that, e.g.

b1 = [0. 0.]

is not the same thing as

b1 = [[0.], [0.]]

Those brackets there are not just for show :nerd_face: …

The problem in my case is with W1 and W2. I am attaching here the output from initialize_parameters_test (apologies it’s quite zoomed in to fit in one screenshot!). The error message is the same for initialize_parameters_deep_test but if needed I can attach that as well.

Note that all your values for W1 and W2 are positive. My guess is that you used the wrong PRNG function. Please have a more careful look at the instructions and compare them to what your code does. np.random.rand is not the same as np.random.randn. That little “n” on the end makes a big difference. :nerd_face:
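A minimal illustration of the difference (not the assignment code): np.random.rand samples uniformly from [0, 1), so every entry is non-negative, while np.random.randn samples from the standard normal distribution, so negative entries appear as well.

```python
import numpy as np

# np.random.rand: uniform on [0, 1) -- every entry is non-negative.
uniform_sample = np.random.rand(2, 3)

# np.random.randn: standard normal -- negative values occur about half the time.
normal_sample = np.random.randn(1000)

print((uniform_sample >= 0).all())  # True: uniform samples are never negative
print((normal_sample < 0).any())    # True: normal samples include negatives
```

This is why all-positive W1 and W2 values are a telltale sign that rand was used where randn was intended.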

Also note that there is an “apparent reason” for the test to fail, right? The values you got differ from the “expected values” shown in the notebook. :scream_cat:


Thank you. Your guess was indeed correct, so that fixed the issue. It was the reason for the failure (i.e. not getting the expected result) that was not apparent to me, but I am glad you were able to identify it. The other point that confused me was the grader output indicating that all functions failed; perhaps that is because of dependencies between the tests for the subsequent functions and the two functions that failed(?). In any case, thanks again for the help.