Week 2 exercise 6

I am having issues with the optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False) function. I am getting the error below.
num_iterations is 101, and I am calling the propagate function as below:
grads, cost = propagate(w, b, X, Y)

     63     assert type(costs) == list, "Wrong type for costs. It must be a list"
     64     assert len(costs) == 2, f"Wrong length for costs. {len(costs)} != 2"
---> 65     assert np.allclose(costs, expected_cost), f"Wrong values for costs. {costs} != {expected_cost}"

I think you mean Course 1, not Course 2, right?

The bug is probably not in propagate, if that function already passes its unit tests. Your call to propagate also looks correct. So the other place to look would be your “update parameters” logic, where you apply the gradients and the learning rate to update w and b.
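As a reminder, the plain gradient descent update with learning rate $\alpha$ is

$$w := w - \alpha \frac{\partial J}{\partial w}, \qquad b := b - \alpha \frac{\partial J}{\partial b}$$

and those two partial derivatives are exactly the dw and db values that propagate returns in its grads dictionary.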

Can you show your actual cost values versus the expected ones? The failing assertion should have printed both.

Yes, it is Course 1. The full exception for the optimize function is below.

AssertionError                            Traceback (most recent call last)
<ipython-input-16-3483159b4470> in <module>
      7 print("Costs = " + str(costs))
      8 
----> 9 optimize_test(optimize)

~/work/release/W2A2/public_tests.py in optimize_test(target)
     63     assert type(costs) == list, "Wrong type for costs. It must be a list"
     64     assert len(costs) == 2, f"Wrong length for costs. {len(costs)} != 2"
---> 65     assert np.allclose(costs, expected_cost), f"Wrong values for costs. {costs} != {expected_cost}"
     66 
     67     assert type(grads['dw']) == np.ndarray, f"Wrong type for grads['dw']. {type(grads['dw'])} != np.ndarray"

AssertionError: Wrong values for costs. [array(5.80154532), array(0.69318437)] != [5.80154532, 0.31057104]

Thanks for the details. You can see that your first cost value (after 0 iterations) is correct, but the value after 101 iterations is wrong: it is larger than it should be (0.69318437 instead of 0.31057104), so the cost is not decreasing the way it should. Did you check your logic for updating w and b? E.g. are you maybe adding instead of subtracting the update terms?
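To see why the sign of the update matters, here is a tiny made-up 1-D example (not the assignment's cost function): minimizing J(w) = w**2, whose gradient is dw = 2*w. Subtracting the scaled gradient drives the cost down; adding it makes the cost grow.

    # Toy illustration only: J(w) = w**2, gradient dw = 2*w
    learning_rate = 0.1
    w_good, w_bad = 5.0, 5.0
    for _ in range(20):
        w_good -= learning_rate * (2 * w_good)  # subtract the gradient: cost shrinks
        w_bad  += learning_rate * (2 * w_bad)   # add the gradient: cost grows
    print(w_good**2)  # ≈ 0.0033
    print(w_bad**2)   # ≈ 36744 (diverging)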

Here is my logic for updating w and b:

    w = np.subtract(w, learning_rate*w)
    b = np.subtract(b, learning_rate*b)

You don’t really need to use np.subtract for this, but that’s not the problem. You are not using the gradients dw and db to do the update. Please compare the code you wrote to the formulas for what is supposed to happen.
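For reference, here is a minimal sketch of what the update step is expected to look like, assuming (as the test output above suggests) that propagate returns its gradients in a dictionary with keys 'dw' and 'db'. The values below are placeholders; in the notebook, w, b, and grads come from your own code.

    import numpy as np

    # Placeholder values for illustration; in the notebook these come from
    # initialization and from: grads, cost = propagate(w, b, X, Y)
    w = np.zeros((2, 1))
    b = 0.0
    learning_rate = 0.009
    grads = {"dw": np.array([[0.25], [-0.10]]), "db": 0.05}

    w = w - learning_rate * grads["dw"]  # subtract the scaled gradient w.r.t. w
    b = b - learning_rate * grads["db"]  # subtract the scaled gradient w.r.t. b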