Week 3: nn_model() fails

Hi there,
I am working on the “Planar Data Classification” exercise and I am getting a results mismatch when running the nn_model() code.

REMOVED MY CODE PER REQUEST

Each of these functions individually passed. I even compared the numeric values of expected and actual and they were the same.

However, nn_model() fails because after the 0th iteration the costs no longer match the expected values.

My cost values:
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000359
Cost after iteration 2000: 0.000179
Cost after iteration 3000: 0.000120
Cost after iteration 4000: 0.000090
Cost after iteration 5000: 0.000072
Cost after iteration 6000: 0.000060
Cost after iteration 7000: 0.000051
Cost after iteration 8000: 0.000045
Cost after iteration 9000: 0.000040

Any ideas on how I can debug this?

Have you tried running the grader to see what it says about your earlier functions? The test cases in the notebook are not very good at catching hard-wiring of certain dimensions. Of course it will fail your nn_model code, but the hope is you’ll learn something about where else there may be problems.

I think your code looks correct as far as I can see. Of course the initialize call is outside the for loop.
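In case it helps, here is a rough, self-contained toy sketch of the “initialize once, then loop” structure I am describing. It is deliberately generic (made-up names, toy data, everything inlined rather than calling the assignment’s helper functions), so it is not the assignment solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_toy_net(X, Y, n_h=4, num_iterations=10000, lr=1.2):
    """Generic two-layer (tanh -> sigmoid) net on toy data; not the assignment code."""
    rng = np.random.default_rng(2)
    n_x, m = X.shape
    n_y = Y.shape[0]

    # Parameter initialization happens ONCE, before the training loop
    W1 = rng.standard_normal((n_h, n_x)) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = rng.standard_normal((n_y, n_h)) * 0.01
    b2 = np.zeros((n_y, 1))

    for i in range(num_iterations):
        # Forward propagation
        Z1 = W1 @ X + b1
        A1 = np.tanh(Z1)
        Z2 = W2 @ A1 + b2
        A2 = sigmoid(Z2)

        # Cross-entropy cost
        cost = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

        # Back propagation
        dZ2 = A2 - Y
        dW2 = (dZ2 @ A1.T) / m
        db2 = np.sum(dZ2, axis=1, keepdims=True) / m
        dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)   # tanh derivative uses A1, not Z1
        dW1 = (dZ1 @ X.T) / m
        db1 = np.sum(dZ1, axis=1, keepdims=True) / m

        # Gradient descent update
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

        if i % 1000 == 0:
            print(f"Cost after iteration {i}: {cost:.6f}")

    return W1, b1, W2, b2

# Toy XOR data: 2 features, 4 examples
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
Y = np.array([[0., 1., 1., 0.]])
train_toy_net(X, Y)
```

The names and toy data are just for illustration; the point is that initialization lives above the loop, while forward propagation, cost, back propagation, and the parameter update repeat inside it.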

Once we get this figured out, please do us a favor and remove the source code from your original post. We don’t want to leave the solutions sitting out there on the forum.

Thanks. I will run the grader and see what it reveals.

I will remove the source code from my original post right away. Apologies!!

It does help to see the code for debugging purposes, but we just don’t want to leave it sitting there very long. Thanks for taking care of that!

I am curious what you mean by “running the grader”. How would I do this?

I figured out the issue. It was an error in my back propagation code when computing dZ1.

I had hoped the cell checker would be more robust in catching this. My question about how to run the autograder still stands, though.

I just meant submitting your code to the grader the usual way to see what the grader says. Just click “Submit Assignment” and then view the feedback as usual.

If you had a problem in which the test case in the notebook did not catch your error in the dZ1 logic, that would be worth knowing more about. The most common error on that step is misreading the formula and using dW2 instead of W2, but the test case in the notebook does catch that error. Can you say more about what your mistake was? If we can get the test cases strengthened, that would be a good thing.

Update: I tried making the dW2 mistake and that can’t have been your mistake: in that case the gradients just get very small really quickly and the costs don’t change any more after the 1000th iteration. So it would be interesting if you could describe your mistake. Let’s try describing it in words first and see if I can understand from that. :nerd_face: Thanks!

Update 2: I think I found it. If you use Z1 instead of A1 in the formula for g’(Z1), I get the cost values you show, and in fact the test cases in the notebook do not catch it. Eeeek! I will report that as a bug.
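To make the difference concrete, here is a tiny self-contained snippet (made-up shapes and random values, not the assignment code) comparing the correct tanh derivative term with the buggy one:

```python
import numpy as np

# Hypothetical shapes: 4 hidden units (tanh), 1 output unit, 3 examples
rng = np.random.default_rng(0)
W2  = rng.standard_normal((1, 4))
Z1  = rng.standard_normal((4, 3))
A1  = np.tanh(Z1)
dZ2 = rng.standard_normal((1, 3))

# Correct: for tanh, g'(Z1) = 1 - tanh(Z1)**2 = 1 - A1**2
dZ1_correct = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))

# The "Update 2" bug: using Z1 instead of A1 inside g'(Z1)
dZ1_buggy = np.dot(W2.T, dZ2) * (1 - np.power(Z1, 2))

print(np.allclose(dZ1_correct, dZ1_buggy))  # False: the two versions disagree
```

Because both versions produce arrays of the same shape, nothing crashes; the gradients are just quietly wrong, which is why it only shows up later as a cost mismatch in nn_model.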

I made the mistake described in “Update 2” when I was implementing dZ1. I think the cost at iteration 0 matches but the later ones fail.

Thanks for confirming! I will file a bug about this.

Update: Just to close the loop on this, if I submit to the grader with the Update 2 mistake described above, the results are the same as with the test cases in the notebook: it reports the failure only against nn_model and not where the actual bug is in back_propagation. So submitting to the grader gives you no additional information about where to look for the error.

Update on this one: it turns out they upgraded the test cases on this exercise on May 25, and the new version of the tests catches the bug here. Now you get a failure on the “unit test” for back propagation with that bug, and the grader reports errors against both back propagation and nn_model.