Course 1 Week 2 Assignment Exercise 6

[Images deleted by Mentor]

I passed exercise 5, but the function I wrote for exercise 5 throws an error when it is run in exercise 6. Please help.

Hey @dani0047,

I have deleted your code, as posting it is against our community guidelines. Your error means you’re trying to perform a broadcasting operation on arrays with incompatible shapes.

So make sure that the arrays have compatible shapes.
Kindly post only your error message, or a screenshot that does not include the code used for grading.
Regards,
Jamal

Hey @Jamal022, thanks for replying. I will take note of what should be posted here. My question is: if my code is able to pass exercise 5, why does an error point to my exercise 5 function when it is run in exercise 6? Does this mean that the dataset provided in the question has a different shape from what it is supposed to be?

Please post a screen capture image that shows your error message for Exercise 6.


Sorry, I missed attaching the image in my previous post. I have already passed the exercise 5 test, and it shows the error above for exercise 6.

A perfectly correct function can still throw errors if you pass it bad parameters, right? Note that the result of np.dot(w.T, X) should be a 1 x m row vector and b should be a scalar. But in your case the first operand turns out to be 2 x 3 and b turns out to be a 2 x 2 matrix. So how did that happen? The point is that propagate does not modify w, X or b: they are just passed in as wrong values. The bug is most likely in your “update parameters” logic, which is a key part of the optimize function. Put some prints in optimize to see what is going on.
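
For example, here is a minimal, self-contained sketch (with made-up values matching the shapes described in this thread, not the graded code) of the kind of shape checks that make the problem visible:

```python
import numpy as np

def debug_shape(name, value):
    # Print an array's shape, or its type if it is a plain scalar.
    shape = getattr(value, "shape", None)
    print(f"{name}: shape={shape}, type={type(value).__name__}")

# Hypothetical values with the dimensions discussed above
w = np.zeros((2, 1))          # weights: (n_x, 1) with n_x = 2
b = 0.0                       # bias: should stay a scalar float
X = np.random.randn(2, 3)     # data: (n_x, m) with m = 3 examples

debug_shape("w", w)                              # shape=(2, 1)
debug_shape("b", b)                              # shape=None -> still a scalar, good
debug_shape("np.dot(w.T, X)", np.dot(w.T, X))    # shape=(1, 3), a 1 x m row vector
```

A couple of prints like these inside the loop in optimize, right before and right after the parameter update, will show you exactly on which iteration b stops being a scalar.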

[Screenshot of the error message]

This error message popped up in exercise 5.

In exercise 6, prior to calling the function propagate(), I checked that the shapes of W and X are (2, 1) and (2, 3), and that b is a float. np.dot(W.T, X) + b should therefore give an output of shape (1, 3), so I am not really sure what went wrong.

The correct formula for A is sigmoid(w^T · X + b). Your propagate code already passed the test cases, so there’s no need to change it. That was my previous point: the bugs are in optimize. Check the type and shape of w and b before and after the “update parameters” logic. My guess is that the fault is not thrown on the first iteration of the loop in optimize but on the second or third.
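
For reference, a minimal sketch of that formula with the shapes mentioned above (the values are made up for illustration; this is not the graded code):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.array([[0.1], [0.2]])           # weights, shape (2, 1)
b = 0.5                                # bias, a scalar
X = np.array([[1.0, -2.0, 3.0],
              [0.5,  0.0, -1.0]])      # data, shape (2, 3): m = 3 examples

A = sigmoid(np.dot(w.T, X) + b)        # shape (1, 3): one activation per example
print(A.shape)                         # (1, 3)
```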

If I had to guess, I’ll bet that you are using dw instead of db to update b. A simple “copy/paste” error.

Thanks everyone. I made a very simple mistake by writing b = w - learning_rate * db; no wonder b ended up in matrix form. Sorry for taking up your time.
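
In case it helps anyone else, here is a tiny illustration (made-up numbers) of why that wrong line silently turns b into an array, while the intended line keeps it a scalar:

```python
import numpy as np

w = np.full((2, 1), 0.5)      # weights, shape (2, 1)
b = 0.0                       # bias, a scalar
db = 0.1                      # gradient of the cost w.r.t. b (also a scalar)
learning_rate = 0.01

b_buggy = w - learning_rate * db      # the copy/paste mistake: uses w instead of b
print(b_buggy.shape)                  # (2, 1) -> b has become an array

b_correct = b - learning_rate * db    # the intended update
print(type(b_correct).__name__)       # float -> b stays a scalar
```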

Glad to hear that you found the bug! Programming is a game of details: a single character wrong can ruin everything. :scream_cat:

And don’t feel bad about making that kind of mistake: we all do it. The key point is that debugging is part of the job: no non-trivial code you ever write is going to be perfect the first time you write it. The key skill is learning how to analyze the error and then work your way backwards to figure out where the problem is. In this example, the key thing to interpret from the error is that b is not a scalar. Then the question is “how did that happen?”