Wrong values for dw

There is an error in my code. It finds wrong values for dw. Does anyone know what the problem is?

AssertionError                            Traceback (most recent call last)
in
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    117     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    118     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 119     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    120
    121     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[0.]
 [0.]] != [[ 0.00194946]
 [-0.0005046 ]
 [ 0.00083111]
 [ 0.00143207]]


My guess is that you forgot to fill in the code to retrieve w and b from the dictionary that was returned by the call to optimize. So you get the value of w that is the result of the call to initialize_with_zeros. Take another careful look at how the model code is supposed to work and how the w value is handled.

Also note that the error is talking about the w value, not dw. The w value returned by your model function is all zeros. So how could that happen?
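To illustrate the pattern, here is a minimal runnable sketch, assuming the assignment's convention that optimize returns a params dictionary with keys "w" and "b". The optimize body below is a hypothetical stand-in, not the course's implementation; the point is the retrieval step in model.

```python
import numpy as np

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    # Hypothetical stand-in for the course's optimize: nudge w and b so
    # they are visibly different from their all-zeros initialization.
    w = w + 0.01
    b = b + 0.01
    params = {"w": w, "b": b}
    grads = {"dw": np.zeros_like(w), "db": 0.0}
    costs = []
    return params, grads, costs

def model(X_train, Y_train, num_iterations=2000, learning_rate=0.5):
    # Equivalent of initialize_with_zeros: w starts as an all-zeros column.
    w = np.zeros((X_train.shape[0], 1))
    b = 0.0
    params, grads, costs = optimize(w, b, X_train, Y_train,
                                    num_iterations=num_iterations,
                                    learning_rate=learning_rate)
    # The crucial step: retrieve the *updated* parameters from the returned
    # dictionary. If these two lines are missing, w keeps the all-zeros
    # value from initialization, which matches the symptom in the traceback.
    w = params["w"]
    b = params["b"]
    return {"w": w, "b": b}

X = np.random.randn(4, 7)
Y = np.random.randint(0, 2, (1, 7))
d = model(X, Y)
print(d["w"].ravel())  # no longer all zeros
```

If the two retrieval lines are deleted, this sketch reproduces the failure mode: model returns the zero-initialized w instead of the optimized one.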


Hi @samuelcesarino: I had the same issue as you. Make sure that when you call the optimize function inside the model function, you pass along the num_iterations and learning_rate values that model itself receives as parameters:

    # Gradient descent
    params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate, print_cost=True)
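A minimal illustration of why the forwarding matters (the two-argument optimize here is a hypothetical stand-in): if model calls optimize without passing its own num_iterations and learning_rate, Python silently uses optimize's defaults, so the settings the test passes to model never reach the optimizer.

```python
def optimize(num_iterations=100, learning_rate=0.009):
    # Stand-in: simply report which settings it actually received.
    return num_iterations, learning_rate

def model_buggy(num_iterations=2000, learning_rate=0.5):
    # Bug: arguments not forwarded, so optimize falls back to its defaults.
    return optimize()

def model_fixed(num_iterations=2000, learning_rate=0.5):
    # Fix: forward the caller's settings explicitly by keyword.
    return optimize(num_iterations=num_iterations, learning_rate=learning_rate)

print(model_buggy(num_iterations=50, learning_rate=0.01))  # (100, 0.009)
print(model_fixed(num_iterations=50, learning_rate=0.01))  # (50, 0.01)
```

The buggy version runs without any error, which is why this mistake only shows up later as wrong values in the grader's assertions.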