Week 3 Programming Exercise 75/100 All Tests Pass

I have completed the Week 3 Programming Exercise with all graded cells reporting “All Tests Passed.”

But every time I submit to the grader, I get 75/100, with the output saying 2 cells failed.

It does not report exactly which cells fail, only that the failing tests could be hidden. Is there any way to get feedback on what I missed?

Hi @maortiz, I remember that particular frustration well. Sometimes a test will pass even though the cell’s output differs from the expected output. If that has happened anywhere, that’s where I would begin my investigation. If you have no such occurrences, I too would be very interested in something more definitive. Any takers?

1 Like

I went back, re-ran all cells, and verified that the output from my cells matched the expected output that is usually shown in the following cell.

Everything checked out and I submitted again, but I still got 75/100.

The grader says only this without further explanation of which 2 cells it believes have failed:

[ValidateApp | INFO] Validating '/home/jovyan/work/submitted/courseraLearner/W3A1/Planar_data_classification_with_one_hidden_layer.ipynb'
[ValidateApp | INFO] Executing notebook with kernel: python3
Tests failed on 2 cell(s)! These tests could be hidden. Please check your submission.

When you run the cell following your nn_model() function, how do the costs behave as the iterations progress?

It appears to be decreasing

Cost after iteration 0: 0.693048
Cost after iteration 1000: 0.288083
Cost after iteration 2000: 0.254385
Cost after iteration 3000: 0.233864
Cost after iteration 4000: 0.226792
Cost after iteration 5000: 0.222644
Cost after iteration 6000: 0.219731
Cost after iteration 7000: 0.217504
Cost after iteration 8000: 0.219430
Cost after iteration 9000: 0.218551

Hey, I seem to have the same problem: 75/100 and all tests pass. My costs behave exactly the same as maortiz’s, but here are a few observations that might help:

  • The first time I ran my notebook, the cost was drastically lower after iteration 1000, more like 0.0001. After I restarted my kernel, I got the same costs as maortiz, and I can’t reproduce the low costs anymore.
  • There is no difference in accuracy for different numbers of hidden units for me:
    Accuracy for 1 hidden units: 90.5 %
    Accuracy for 2 hidden units: 90.5 %
    Accuracy for 3 hidden units: 90.5 %
    Accuracy for 4 hidden units: 90.5 %
    Accuracy for 5 hidden units: 90.5 %
    Accuracy for 20 hidden units: 90.5 %
    Accuracy for 50 hidden units: 90.5 %

Hope we can resolve this; at least 70+% means a pass, but I would like to see that nice 100/100 :slight_smile:

Yes, I noticed that as well. I didn’t mention it since it was in the optional section. All plots in step 6 look the same and have the same accuracy even though the interpretation seems to indicate they should be different.

Yes, but not in the way one would expect. It would typically (but not necessarily) be a more gradual decline. Here, we have one big leap down after the initial iteration, and then it’s basically flat. Also, note that the initial cost is equal to the expected output for the cost function test case. These things combined suggest to me that your nn_model() function is not receiving the proper inputs via the helper functions that you previously coded.

Here’s the drill: study the “signatures” of all of the functions that go into your nn_model() function. A function’s signature looks like the following:

my_function(arg1, arg2, ..., kwarg1 = value1, kwarg2 = value2, ...)

Note that the argument values can be any number of Python object types: int, float, string, list, np.array, Boolean, etc.

Please do the following review of your code:

  1. Make sure that you are using the arguments in your required code block. Do not “hard-code” a number or anything else. If you do, it may pass a test, but it will not generalize to other cases. If you are not using the arguments from the signature, your code is wrong. Start with layer_sizes(X, Y) and work your way down.

  2. Note the signature of the nn_model() function:

    nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False)

    Naturally, the same applies here. Note, however, that you are filling out this function with calls to functions that you have already completed, so those calls must be consistent with the arguments passed to nn_model() (see the sketch below the list).
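
To make that concrete, here is a rough sketch of the pattern, assuming the usual helper names from this notebook (layer_sizes(), initialize_parameters(), and so on); treat it as an illustration of the idea, not as your exact code:

    # Hard-coded sizes happen to match the public test case,
    # but fail when the grader calls the function with different data:
    def layer_sizes(X, Y):
        n_x = 2
        n_h = 4
        n_y = 1
        return (n_x, n_h, n_y)

    # Using the arguments from the signature generalizes to any input:
    def layer_sizes(X, Y):
        n_x = X.shape[0]   # size of the input layer, taken from X itself
        n_h = 4            # this helper fixes the hidden layer size
        n_y = Y.shape[0]   # size of the output layer, taken from Y itself
        return (n_x, n_h, n_y)

    # The same idea inside nn_model(): use the n_h that was passed in,
    # not a literal number and not the n_h returned by layer_sizes(),
    # otherwise every hidden layer size you try trains the identical network.
    def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
        n_x = layer_sizes(X, Y)[0]
        n_y = layer_sizes(X, Y)[2]
        parameters = initialize_parameters(n_x, n_h, n_y)
        ...  # forward propagation, cost, backward propagation, updates as usual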

Let me know how your code review works out!

6 Likes

Well, that certainly explains my issue with the optional step 6 as well, doesn’t it?

Thank you very much @kenb :+1: :+1:

100/100

Same for me, my error was having hard-coded the values in the initialize_parameters() function instead of using the parameters n_x, n_h, and n_y. Thanks @kenb !
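
For anyone else who lands here with the same symptom, a minimal sketch of what that fix looks like, assuming the usual (n_h, n_x) / (n_y, n_h) shape convention from this notebook (an illustration, not the reference solution):

    import numpy as np

    def initialize_parameters(n_x, n_h, n_y):
        # Shapes come from the arguments, never from hard-coded numbers
        # like 2, 4, 1 -- those only happen to match the visible test case.
        W1 = np.random.randn(n_h, n_x) * 0.01
        b1 = np.zeros((n_h, 1))
        W2 = np.random.randn(n_y, n_h) * 0.01
        b2 = np.zeros((n_y, 1))
        return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}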

1 Like

Excellent! Onwards and upwards! :+1:

1 Like