Course 1 Week 2: AssertionError: Wrong values for d['Y_prediction_test']. [[1. 0. 0.]] != [[1. 1. 0.]]

Hello, does anyone have a similar problem in the logistic regression assignment?

AssertionError: Wrong values for d['Y_prediction_test']. [[1. 0. 0.]] != [[1. 1. 0.]]

I have double-checked my code hundreds of times, and all the previous functions (predict, optimize, params['w'], etc.) pass their tests.

Did anyone else encounter the same error?


Interesting. Well, if you’re sure that all your previous functions passed their tests, then the bug must be in your model code: a correct subroutine can still give bad results if you pass it incorrect parameters. I added some print statements to my model code to show both predictions.
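Something like this, assuming the variable names from the assignment template (the prints go inside model(), just before it returns):

print("type(X_train)", type(X_train))
print("X_train.shape", X_train.shape)
print("X_test.shape", X_test.shape)
print("Y_test.shape", Y_test.shape)
print("num_iterations", num_iterations, "learning_rate", learning_rate)
print("Y_prediction_train", Y_prediction_train)
print("Y_prediction_test ", Y_prediction_test)

Here’s what I get on that test case: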

type(X_train) <class 'numpy.ndarray'>
X_train.shape (4, 7)
X_test.shape (4, 3)
Y_test.shape (3,)
num_iterations 50 learning_rate 0.01
Y_prediction_train [[1. 1. 0. 1. 0. 0. 1.]]
Y_prediction_test  [[1. 1. 0.]]
All tests passed!

What value do you get for Y_prediction_train? Is it the same as I show there?

I took a look at the test code (you can too, by clicking “File → Open” and then opening public_tests.py), and it turns out that it does check the correctness of your w and b values before it checks your predictions. But for some reason it checks the test predictions first and the train predictions second, so the test failure alone doesn’t tell us whether your predictions are wrong in both cases. Knowing that would be a useful clue, at least we hope so … :nerd_face:
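For reference, the checks have roughly this shape (a paraphrase of the structure, not the literal contents of public_tests.py; the expected prediction values are the ones from my passing run above):

import numpy as np

expected_test = np.array([[1., 1., 0.]])
expected_train = np.array([[1., 1., 0., 1., 0., 0., 1.]])

def model_test_sketch(d):
    # ... w and b are checked first (omitted here) ...
    # ... then the test predictions are checked before the train ones:
    assert np.allclose(d['Y_prediction_test'], expected_test), \
        f"Wrong values for d['Y_prediction_test']. {d['Y_prediction_test']} != {expected_test}"
    assert np.allclose(d['Y_prediction_train'], expected_train), \
        f"Wrong values for d['Y_prediction_train']. {d['Y_prediction_train']} != {expected_train}"
    print("All tests passed!")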

Hello, thank you for your reply.

I tried to print those values and here is what I get:

<class 'numpy.ndarray'>
(4, 7)
(4, 3)
(3,)
50
0.01
[[1. 0. 0. 1. 0. 0. 1.]]
[[1. 0. 0.]]

So it seems like the arguments to the optimize function are correct. The output of optimize is also okay, because d and w pass their tests.

Therefore I would assume that predict has a bug somewhere. However, I cannot find one, even though the predict function passed all the tests in the previous block of code.

Very interesting. The clue there is that it is only index 1 of both prediction values that is wrong. Have a look at your logic in the predict function and look for a place where you used a hard-coded 1 where you meant an i or the like. I’ll bet it’s in the “else” clause of your “if” statement: you always assign the 0 to index 1, regardless of what the i value is. The other 0 answers don’t come out wrong, because the template code initializes the predictions to zero.
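In other words, something like this (a sketch, assuming the usual sigmoid threshold at 0.5 and the variable names from the template):

import numpy as np

def predict_sketch(w, b, X):
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))  # sigmoid activations, shape (1, m)
    Y_prediction = np.zeros((1, X.shape[1]))     # template initializes to zeros
    for i in range(A.shape[1]):
        if A[0, i] > 0.5:
            Y_prediction[0, i] = 1
        else:
            Y_prediction[0, 1] = 0   # <-- the bug: hard-coded 1 always clobbers index 1
            # Y_prediction[0, i] = 0 #     correct: use the loop index i
    return Y_prediction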

If that logic still passes the predict test case, it must be that the expected answer for index 1 just happens to be 0 there. This is a good example of why it is hard to write test cases that cover all possible errors, and why just passing the tests in the notebook does not mean your code is completely correct.
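To make that concrete, here’s a tiny self-contained demo with made-up activation values (hypothetical data, not the actual test case), where the bug flag switches between the hard-coded index and the correct one:

import numpy as np

def threshold(A, bug):
    Y = np.zeros((1, A.shape[1]))
    for i in range(A.shape[1]):
        if A[0, i] > 0.5:
            Y[0, i] = 1
        else:
            Y[0, 1 if bug else i] = 0
    return Y

A_lucky = np.array([[0.9, 0.3, 0.2]])    # true answer at index 1 is 0
A_unlucky = np.array([[0.9, 0.7, 0.2]])  # true answer at index 1 is 1

print(threshold(A_lucky, bug=True), threshold(A_lucky, bug=False))
# [[1. 0. 0.]] [[1. 0. 0.]]  -- the bug is invisible here
print(threshold(A_unlucky, bug=True), threshold(A_unlucky, bug=False))
# [[1. 0. 0.]] [[1. 1. 0.]]  -- only caught when index 1 should be 1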


Actually, I checked the test cell for predict in that assignment and there are two tests: one visible in the notebook and one in the public_tests.py file. The correct output looks like this:

X.shape = (2, 3)
A.shape = (1, 3)
predictions = [[1. 1. 0.]]
X.shape = (3, 3)
A.shape = (1, 3)
All tests passed!

It is true that the test in public_tests.py (the one that generates the “All tests passed!” message) has 0 as the index 1 output, but the test visible in the notebook has 1 at index 1. Unfortunately, they don’t show what the “Expected Output” is supposed to be, so you have no way to notice your mistake from what they give you. Sigh …

Yes, that was exactly the case… I must have accidentally hardcoded 1 instead of using i. :man_facepalming:

Thank you for your help!