W2 Programming Assignment Test Error

I passed all the previous tests, but when I reached the step of combining all the functions into `model`, `model_test(model)` failed. I looked into the test script and found that my costs, w, and b values were all correct, but the values of Y_prediction_test and Y_prediction_train were wrong. However, in the script the Y values seem to be hard-coded, while the X values are generated RANDOMLY. I don't understand how randomly generated X values can be used in the test: random inputs shouldn't map to the same fixed Y values when fed through the model. Is my understanding wrong? Could anyone please explain it to me? Thank you.

The key thing to realize is that everywhere here when they use random number generation functions as part of the testing, they always set the seed to a known value first. That means that the results are actually reproducible. Here are the first two lines of the model_test function:

def model_test(target):
    np.random.seed(0)

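Because the seed is reset to a known value before the data is generated, every run (and every student's notebook) produces exactly the same "random" X values, so the expected Y values can be fixed in advance. A minimal sketch of that idea:

```python
import numpy as np

# Re-seeding before each draw makes the "random" data deterministic,
# so two independent runs produce identical arrays.
np.random.seed(0)
a = np.random.randn(2, 3)

np.random.seed(0)
b = np.random.randn(2, 3)

print(np.array_equal(a, b))  # prints True: the two draws are identical
```

This is why the test can compare your predictions against hard-coded expected outputs: the inputs only look random.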
If the tests fail, that means your code is incorrect. Now you need to figure out why. These courses have been in use for more than 5 years at this point, so it’s not very likely that you have discovered some new bug in the test mechanisms. The probability is not zero, but it’s fairly low. :nerd_face:

The other general point is that even if all of your previous functions are correct in isolation, you can still call them incorrectly from model, so that's the first place to look for the mistake.
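A toy illustration of that failure mode (the function names here are made up, not the assignment's): a correct helper produces wrong results because its caller passes the wrong argument.

```python
def scale(x, factor):
    """Correct helper: multiplies x by factor."""
    return x * factor

def pipeline(x, factor):
    # Bug lives in the caller, not the helper: `factor` is ignored
    # and 2 is hard-coded, so results are wrong whenever factor != 2.
    return scale(x, 2)

print(pipeline(10, 3))  # prints 20, but 30 was expected
```

A typical version of this bug in `model` is hard-coding a hyperparameter (learning rate, number of iterations) instead of forwarding the parameter that was passed in.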


Ah got it, that’s a clever way of writing tests. Thank you. I’ll look into my code to find the bug.

Let us know if you need further help. Maybe I should give a little more detail about what I mean about the reliability of the tests in the notebook:

  1. They can give “false positives”, meaning that they sometimes pass incorrect code. It’s hard to write tests that catch all possible mistakes.
  2. They never give “false negatives”, meaning that if the tests fail, then something is wrong with your code.

Your description, where the w and b values are correct but the predict outputs fail, reminds me that there are some possible bugs in predict that are not caught by the test cases for predict. So if you can't see any problems in your model code, it's worth a careful look at predict as well.
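One hypothetical example of a predict bug that a single unit test can miss: hard-coding the number of examples instead of reading it from the input's shape. The test passes as long as its data happens to match the hard-coded size. (This sketch uses my own variable names, not necessarily the notebook's.)

```python
import numpy as np

def predict_buggy(w, b, X):
    m = 3                                # bug: works only when X has 3 columns
    Y_prediction = np.zeros((1, m))
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))  # sigmoid activations
    for i in range(m):
        Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0
    return Y_prediction

w = np.array([[1.0], [-1.0]])
b = 0.0
X3 = np.random.randn(2, 3)  # matches the hard-coded size: appears to work
X5 = np.random.randn(2, 5)  # a different size exposes the bug

print(predict_buggy(w, b, X3).shape)  # (1, 3) - looks fine
print(predict_buggy(w, b, X5).shape)  # (1, 3) - should have been (1, 5)
```

If the test for predict happens to use 3 examples but model_test uses a different number, predict passes its own test and then fails inside model, which matches the symptom described above.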