Wrong values for d['w']

When I try to test the model definition, I get the following output:

----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    117     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    118     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 119     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    120 
    121     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.28154433]
 [-0.11519574]
 [ 0.13142694]
 [ 0.20526551]] != [[ 0.00194946]
 [-0.0005046 ]
 [ 0.00083111]
 [ 0.00143207]]

I have passed all the tests from the previous steps, so I guess my function definitions are correct. This is the logic I have followed:

  1. Initialize w and b with a dimension of X_train.shape[0]
  2. Get params, grads and costs using the optimize function I defined, with arguments w, b, X_train and Y_train
  3. Retrieve w and b from the params dictionary
  4. Call the predict function with the updated params w and b, both on X_test and X_train
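The steps above can be sketched as a minimal, self-contained version of the pipeline. The function names and default hyperparameters follow the assignment's conventions, but the bodies below are simplified stand-ins I wrote for illustration, not the graded implementations:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    # Simplified gradient descent for logistic regression.
    m = X.shape[1]
    for _ in range(num_iterations):
        A = sigmoid(w.T @ X + b)   # activations, shape (1, m)
        dw = X @ (A - Y).T / m     # gradient w.r.t. w
        db = np.sum(A - Y) / m     # gradient w.r.t. b
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return {"w": w, "b": b}

def predict(w, b, X):
    return (sigmoid(w.T @ X + b) > 0.5).astype(float)

def model(X_train, Y_train, X_test, Y_test,
          num_iterations=2000, learning_rate=0.5, print_cost=False):
    # Step 1: initialize w and b with dimension X_train.shape[0]
    w = np.zeros((X_train.shape[0], 1))
    b = 0.0
    # Steps 2-3: forward the hyperparameters that model() received,
    # rather than letting optimize() fall back to its own defaults
    params = optimize(w, b, X_train, Y_train,
                      num_iterations, learning_rate, print_cost)
    w, b = params["w"], params["b"]
    # Step 4: predict on both splits with the updated parameters
    return {"w": w, "b": b,
            "Y_prediction_test": predict(w, b, X_test),
            "Y_prediction_train": predict(w, b, X_train)}
```

The key line for the error in this thread is the `optimize(...)` call inside `model`: it must receive the `num_iterations` and `learning_rate` parameters, not rely on defaults.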

I think I have performed all the steps properly; however, the w values calculated by gradient descent are not as expected. As I said, I passed all the function tests up to this point.

I faced the exact same error.
It turned out that I was not passing the number of iterations, the learning rate and the print_cost=False parameters while calling the optimize function.

You can check if this is also the case for you.


Thank you very much, I had the exact same problem. I thought the default values of the optimize function were the same as the ones in the model definition. Once I passed those parameters when calling the function, all the tests passed.


I have the same error. I tried using the same values of num_iterations, learning_rate and print_cost as in the optimize function, but nothing changed; I still get the error. Is that the way to do it, or am I doing something else wrong?

Hi @Chinwe, it is not about using the same values but about passing along the parameters that the function receives. For example, the model function receives num_iterations as a parameter, so you have to use num_iterations when calling optimize: not a hardcoded value, but the parameter itself.

In a simpler example let’s assume you have two functions A and B:

def B(my_parameter):
    print(my_parameter)

def A(my_parameter="Hello world"):
    B(my_parameter)

In function A you pass the parameter to the function B, not a specific value, whatever the function A receives is passed on to B.
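To make the failure mode concrete, here is a hypothetical side-by-side of a forwarding version of A versus a hardcoding one (the names are mine, just for illustration):

```python
def B(my_parameter):
    return my_parameter

def A_correct(my_parameter="Hello world"):
    return B(my_parameter)    # forwards whatever the caller passed in

def A_buggy(my_parameter="Hello world"):
    return B("Hello world")   # hardcoded: silently ignores the caller's value

print(A_correct("Goodbye"))   # Goodbye
print(A_buggy("Goodbye"))     # Hello world  <- the bug in this thread
```

This is exactly what happens when model calls optimize without forwarding num_iterations and learning_rate: the caller's values are silently ignored in favor of the defaults.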


Thank you, it works fine now.

I had the same problem. for those with less programming knowledge like me:

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    ...
    ...
    params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
    ...
    ...
    return d

In the model definition, learning_rate defaults to 0.5 and the expected answer is based on it. But if you copy and paste your code (like me :slight_smile: ) from the optimize call in Exercise 6, you force a different learning_rate, and you will get different results.

From Exercise 6:
optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
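As a rough illustration of why the mismatch is so large: the same gradient scaled by the two learning rates (0.5 versus 0.009) already differs by a factor of more than fifty after a single update step. The gradient values below are made up for illustration:

```python
import numpy as np

# Hypothetical gradient from one iteration (made-up values).
grad = np.array([[0.39], [-0.16], [0.18], [0.28]])
w0 = np.zeros((4, 1))

w_default = w0 - 0.5 * grad    # learning_rate=0.5, the model() default
w_copied = w0 - 0.009 * grad   # learning_rate=0.009, copied from Exercise 6

# The two updates differ elementwise by a factor of 0.5 / 0.009 ≈ 55.6,
# which is why the computed w values look so different from the expected ones.
print(w_default.ravel())
print(w_copied.ravel())
```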

Thanks, your post helped me.