Understanding debugging info - C1 - W3 - Ex4

I can’t figure out what the error messages are trying to tell me:

If you get stuck, you can check out the hints presented after the cell below to help you with the implementation.
In [31]:

# UNQ_C4
# GRADED FUNCTION: predict

[mentor edit: code removed]

Once you have completed the function predict, let’s run the code below to report the training accuracy of your classifier by computing the percentage of examples it got correct.
In [32]:

# Test your predict code
np.random.seed(1)
tmp_w = np.random.randn(2)
tmp_b = 0.3
tmp_X = np.random.randn(4, 2) - 0.5

tmp_p = predict(tmp_X, tmp_w, tmp_b)
print(f'Output of predict: shape {tmp_p.shape}, value {tmp_p}')

# UNIT TESTS
predict_test(predict)
Output of predict: shape (4,), value [1. 1. 1. 1.]

AssertionError Traceback (most recent call last)
<ipython-input> in <module>
      9
     10 # UNIT TESTS
---> 11 predict_test(predict)

~/work/public_tests.py in predict_test(target)
     78     expected_2 = [0., 0., 0., 1., 1., 0.]
     79     assert result.shape == (len(X),), f"Wrong length. Expected : {(len(X),)} got: {result.shape}"
---> 80     assert np.allclose(result, expected_2), f"Wrong output: Expected : {expected_2} got: {result}"
     81
     82     print('\033[92mAll tests passed!')

AssertionError: Wrong output: Expected : [0.0, 0.0, 0.0, 1.0, 1.0, 0.0] got: [0. 0. 0. 0. 1. 0.]

Expected output:
Output of predict: shape (4,), value [0. 1. 1. 1.]

Please do not post your code on the forum. That is not allowed by the course community standard.

Your indentation for the prediction is not correct. It is outside the for-loop.
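
Here is a generic illustration of what that does (a made-up sketch, deliberately not the assignment’s code). A per-example assignment that is dedented out of the loop runs only once, with the final value of i, leaving every other entry at zero:

import numpy as np

scores = np.array([0.2, 0.8, 0.9, 0.6])

p_wrong = np.zeros(4)
for i in range(4):
    pass                           # ... per-example work ...
p_wrong[i] = scores[i] >= 0.5      # dedented: only the last entry is set

p_right = np.zeros(4)
for i in range(4):
    p_right[i] = scores[i] >= 0.5  # inside the loop: every entry is set

print(p_wrong)  # [0. 0. 0. 1.]
print(p_right)  # [0. 1. 1. 1.]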

Thanks.

Sorry about the posting. How do others get their code onto the forum (I assumed it was OK because I saw other students’ “bad” code)?

Richard

Only post your error messages or asserts. Sometimes that includes a line or two of code. We try to clean up everything beyond that.

Unindented the prediction.

Still get
Output of predict: shape (4,), value [0. 0. 0. 1.]

AssertionError Traceback (most recent call last)
<ipython-input> in <module>
      9
     10 # UNIT TESTS
---> 11 predict_test(predict)

~/work/public_tests.py in predict_test(target)
     69         raise ValueError("Did you apply the sigmoid before applying the threshold?")
     70     assert result.shape == (len(X),), f"Wrong length. Expected : {(len(X),)} got: {result.shape}"
---> 71     assert np.allclose(result, expected_1), f"Wrong output: Expected : {expected_1} got: {result}"
     72
     73     b = -1.7

AssertionError: Wrong output: Expected : [1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0] got: [0. 0. 0. 0. 0. 0. 0. 1.]

Expected output:
Output of predict: shape (4,), value [0. 1. 1. 1.]


You have another indentation error. See if you can find it.

@Richard_Rasiej

Your code is good in the sense that it runs without crashing, but you have a problem with the implementation itself, as your results do not match the correct ones in the test cases.

Go back and take a deeper look at your steps, compare them with the mathematical equations provided in the notebook and the lecture videos, and check whether you applied your sigmoid function in the right place.
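
As a reminder, the decision rule for one example looks like this (a generic logistic-regression sketch with made-up numbers, not the assignment’s exact code): the sigmoid is applied to z first, and the 0.5 threshold is applied to the sigmoid’s output.

import numpy as np

x_i = np.array([1.5, -0.5])          # made-up example
w = np.array([0.3, 0.7])
b = 0.1

z_i = np.dot(x_i, w) + b             # linear part
f_wb_i = 1 / (1 + np.exp(-z_i))      # sigmoid BEFORE the threshold
prediction = 1.0 if f_wb_i >= 0.5 else 0.0
print(z_i, f_wb_i, prediction)       # 0.2, ~0.55, 1.0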

Still looking.

Is there a way, within the Jupyter notebooks we are using, to go into the code editor and enable the option that shows tabs and whitespace?

@Richard_Rasiej

I believe there might be an add-on that can do that, but in this case you can’t use it.
If you are still getting an IndentationError, what you can do is copy each cell and check its code in any text editor; I suggest using VS Code:

  • check that your indentation uses all spaces or all tabs, not a mix;
  • check that all indentation levels are equal, or follow the standard (4 spaces).

A quick programmatic check is sketched after this list.
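
This snippet assumes you have saved a cell’s code to a file first (the name cell.py is just a placeholder):

# Flag any line whose leading whitespace contains a tab, so mixed
# tabs/spaces stand out; repr() makes the invisible characters visible.
with open('cell.py') as f:
    for num, line in enumerate(f, start=1):
        indent = line[:len(line) - len(line.lstrip())]
        if '\t' in indent:
            print(f'line {num} indents with a tab: {line!r}')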

Thanks. I will check it out. Also, if I have PyCharm or IDLE, can I easily import packages such as numpy, matplotlib.pyplot, utils, copy, and math (as shown in the first piece of executable code in the exercise)?

@Richard_Rasiej
Of course you can. You can even download the entire notebook as a Python file: go to
File → Download as → Python (.py).

But remember to also download the entire workspace, since helpers such as utils are placed outside the notebook. You can do so via
Lab Files → Download all files.
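
If you then run the downloaded .py file locally, the imports at the top of the notebook would look like this (numpy and matplotlib come from pip, e.g. pip install numpy matplotlib; utils is the course’s own helper module, so the downloaded utils.py must sit in the same folder):

import copy
import math

import numpy as np
import matplotlib.pyplot as plt

from utils import *  # raises ModuleNotFoundError if utils.py is not alongside this script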

I am sorry I am being so dense here (which is especially irritating given how quickly I seemed to be progressing up to this point).

My assertion errors are coming after I execute the test of the predict code. That code did not require me to add anything. Hence my error should be in the code I inserted into the definition of the predict function (I infer this from the fact that up to this point - the first three exercises - all tests are successfully passed and all output values match the expected values).

BUT, even when I click on all the hints for structuring the implementation, it appears I have done the coding correctly. Tom mentioned that I still had an indentation error that I had not fixed, but I can’t find it. I can’t find anything in all the optional labs I went through to give me a sense of what to look for.

I understand the math fine. Is it the syntax of loops that is messing me up? What is it about indentation that I don’t understand? I’m not really sure I understand what you mean by the implementation being wrong if the code is good.

No problem at all; I am here to help.

Look, as I said, your return value does not match the one the function should return, so the test throws an AssertionError when it compares your results with the correct ones.

If you did everything right in the math and the code and still get an error, my advice (something I did before when I got stuck like this) is just to restart the notebook and use a fresh copy from the beginning, as there might be something else messing up your code.

I’ve tried going to the beginning and restarting and am starting to get confused by something new.

After Exercise 2, we are supposed to test our implementation of the compute_cost function with two different initializations. If the first one has initial_w = np.zeros(n) and initial_b = 0., shouldn’t our expected cost be 0.5 = 1/(1+e^0), since z_wb = 0, rather than the 0.693 given in the notebook? (I got 0.014, but that’s a different issue.)
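
(A quick check of where the 0.693 comes from, assuming the standard cross-entropy cost: sigmoid(0) = 0.5 is the model’s output f_wb, while the cost is its negative log, and -log(0.5) = log 2 ≈ 0.693 regardless of the labels.)

import numpy as np

# With w = 0 and b = 0, every example has f_wb = sigmoid(0) = 0.5, so each
# example contributes -log(0.5) to the cost, whatever its label y is.
y = np.array([0., 1., 1., 0., 1.])   # made-up labels
f_wb = np.full_like(y, 0.5)          # sigmoid(0) for every example
cost = -np.mean(y * np.log(f_wb) + (1 - y) * np.log(1 - f_wb))
print(cost)  # 0.6931471805599453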

Hi Richard,

I always find this way to be effective in discovering what goes wrong in my code, because it allows me to compare the code’s progress with my expectation. Please spend 15 minutes to read that post and try it in your work.

Raymond

OK. I haven’t found the problem yet, but I have started using your idea of printing out intermediate variables to check the actual math.

In running gradient descent and then the 10,000 iterations to get the expected cost of 0.30, I obtain w = [0.07125349 0.06482881] and b = -8.188614567810179, which produces a decision boundary line of y = -1.099x + 126.311. Neither of the two plotted lines (the one already in the narrative and the one produced by the helper function) has a y-intercept greater than 100. Does this mean I already have a problem in my implementation, even before running the predict function, even though my code has “passed all the tests”?
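
(For reference, that line follows from setting w[0]·x1 + w[1]·x2 + b = 0 on the boundary; a minimal sketch of the arithmetic, using the values quoted above:)

import numpy as np

w = np.array([0.07125349, 0.06482881])
b = -8.188614567810179

# On the boundary, w[0]*x1 + w[1]*x2 + b = 0, so
# x2 = -(w[0]/w[1])*x1 - b/w[1].
slope = -w[0] / w[1]       # ≈ -1.099
intercept = -b / w[1]      # ≈ 126.31
print(slope, intercept)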

Hello @Richard_Rasiej,

Let’s make things simpler and just focus on the predict function itself. This is the function this post is about. Right?

The predict function requires X, w, and b as its inputs. If you follow my suggestion, you would artificially make, for example, an X of 3 samples with 2 features each, a w of 2 values (one for each feature), and a scalar b. Then you would only need to calculate the expected prediction result by hand for this simple set of inputs.

This is simpler because there are no 10,000 iterations, no cost, and no training of any model. Just the predict function. OK?
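
Something like this, for example (made-up values; a sketch assuming the usual sigmoid-then-threshold rule), printing every intermediate so each step can be checked against a hand calculation:

import numpy as np

X = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -2.0]])  # 3 samples, 2 features
w = np.array([0.5, -0.5])                              # one weight per feature
b = 0.1                                                # scalar bias

for i in range(X.shape[0]):
    z_i = np.dot(X[i], w) + b
    f_i = 1 / (1 + np.exp(-z_i))
    print(f"i={i}: z={z_i:+.3f}, sigmoid={f_i:.3f}, p={float(f_i >= 0.5)}")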

Raymond

Thanks. All problems fixed. Lab submitted. 100%. On to the next course!

Great! Glad to hear that! Happy learning!