Week 2 Assignment 2

The propagate function returns m values for the cost instead of one. How can I make it return a single value?
Please check the following code.

def propagate(w, b, X, Y):
    
    # moderator edit: code removed
    
    return grads, cost

Please don’t post your code on the forum. That’s not allowed by the Code of Conduct.
I have edited your post to remove the code.
If a mentor needs to see your code, we’ll contact you privately with instructions.

Hint:
Your cost calculation is using the element-wise multiplication operator ‘*’. That doesn’t include computing the sum over the examples.

Try adding np.sum().
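
To make the shape concrete, here is a minimal sketch of that idea (not the assignment solution; A, Y, and m follow the notebook’s naming): the element-wise products have shape (1, m), and np.sum collapses them into a single scalar.

import numpy as np

def cost_sketch(A, Y):
    # A: sigmoid activations, shape (1, m); Y: labels, shape (1, m)
    m = Y.shape[1]
    # Y * np.log(A) is element-wise with shape (1, m);
    # np.sum reduces the per-example losses to one scalar cost.
    cost = -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    return cost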


Thanks a lot, and I am really sorry for not reading the Code of Conduct carefully.
Yes, I tried the np.sum solution and it works for the cost function, but I get an assertion error when running the function on different values.

Please post a screen capture image that shows the error messages you get when you try different values.

I have solved it.
Thanks.

In the last task, I used the functions built earlier in the notebook, but I got the attached error.
Should I reshape certain variables?
Thanks a lot in advance.

That means there is a bug in your code. It looks like your w value ended up being 4 x 4 instead of 4 x 1. That probably means your “update parameters” logic is incorrect.
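
For reference, the update step should look something like this sketch (assuming dw and db are the gradients returned by propagate); a 4 x 4 w usually means a matrix or outer product crept into this line:

# Gradient-descent update: dw has the same shape as w, so w stays (4, 1).
w = w - learning_rate * dw
b = b - learning_rate * db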

Why do you have two sets of variables wt and bt versus w and b there? You should only have one set. And your values are obviously different, because the previous predict call with wt did not throw that error. Maybe the w is a global value that has nothing to do with what is actually happening there.


I initialized wt, bt for the test set and w, b for the training set, since I have to calculate both Y_prediction_test and Y_prediction_train.

I don’t think there is any reason to reshape ‘w’.

Building test values into the function is not a good idea. It’s better to create new test cases and pass different data to the function, as in the sketch below.
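
For example, a standalone test might look like this hypothetical sketch, with all the data created outside the function and passed in:

import numpy as np

# Hypothetical test data, passed in rather than hard-coded inside propagate.
w = np.array([[1.0], [2.0]])
b = 1.5
X = np.array([[1.0, -2.0], [3.0, 0.5]])
Y = np.array([[1, 0]])

grads, cost = propagate(w, b, X, Y)
print(cost)  # should be a single scalar, not an array of m values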

I don’t understand. There is one set of parameters that are the result of training with the training set. Then you use those parameters to make predictions on any data, including the test data.

You don’t need to do this.
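
Sketching it, with w and b being the single set of trained parameters returned by optimize:

# One trained parameter set, used for predictions on both data sets.
Y_prediction_train = predict(w, b, X_train)
Y_prediction_test = predict(w, b, X_test)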

Got it, thanks.

Hello, I am getting this error:

AssertionError                            Traceback (most recent call last)
in
      1 from public_tests import *
      2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    131     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    132     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 133     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    134
    135     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.14449502]
 [-0.1429235 ]
 [-0.19867517]
 [ 0.21265053]] != [[ 0.08639757]
 [-0.08231268]
 [-0.11798927]
 [ 0.12866053]]

The thing is, all my previous functions passed their test cases, so I’m not entirely sure what to do.
I noticed above that you say not to use * to calculate the cost. I thought it was needed, since we compute the element-wise loss first and then use np.sum to get the cost. Maybe the issue is there?

No, it’s fine to use * and np.sum to compute the cost; the cost is not what is wrong here. Even if your previous functions are correct, they can still give wrong answers if you pass them bad parameters. The most common mistake that causes wrong values here is not passing all the parameters when you call optimize from model. The bug is in your model code: you probably did not call optimize correctly. You must pass all the parameters, including the number of iterations and the learning rate, rather than hard-coding their values. If you omit them, you are silently using the default values declared in the definition of the optimize function.
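
As a sketch (names follow the notebook’s optimize signature; treat this as an illustration, not your exact code), the call inside model should forward every hyperparameter it receives:

# Inside model: pass the hyperparameters through instead of relying on defaults.
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=num_iterations,
                                learning_rate=learning_rate,
                                print_cost=print_cost)
w = params["w"]
b = params["b"]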

I see, thank you, that fixed it!