Logistic_Regression_with_a_Neural_Network_mindset, week #2, exercise #5 question

Hi!
I’m wondering if there is someone with whom I could share my code for the week #2 assignment. I’m hung up on the exercise #5 propagate problem. I’ve solved “db”, but my “cost” value is slightly off, and so is “dw”. I have gone through the forum looking for advice and have spent 3+ hours trying to figure it out, but I’m stuck. Any help would be appreciated!

I am receiving the following error:

AssertionError Traceback (most recent call last)
in
9
     10 assert type(grads["dw"]) == np.ndarray
---> 11 assert grads["dw"].shape == (2, 1)
     12 assert type(grads["db"]) == np.float64
13

AssertionError:

And these are the results I’m getting for the above items:

db = -0.1250040450043965
dw = [[-6.78089927e-05 2.51693779e-01 -9.10653588e-04]
[-2.03426978e-04 -6.29234448e-02 -2.91409148e-03]]
cost = -0.37501213501318953

I don’t know if anyone can help me without my posting code, and since I want to respect the honor code, is there anyone who could look at my code privately and tell me how I might solve the problem?

Thank you!

The most common issue here is incorrect indentation. That’s how Python defines blocks of code.

So check that everything you want to execute in a block has the same indentation.

Also be very careful if the notebook has any “Hint” code. If you copy-and-paste the hint code, you will almost certainly get indentation errors that cause your code to behave incorrectly. An easy way to detect this is if any of the keywords or variable names in your code appear in a red font.
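As a tiny illustration of the point about indentation (a generic sketch, not the assignment code): moving a line in or out of a block changes when it runs.

```python
# Lines with the same indentation belong to the same block.
costs = []
for i in range(3):
    cost = i * 2
    costs.append(cost)  # indented: inside the loop, runs every iteration

total = sum(costs)      # dedented: outside the loop, runs once afterwards
print(costs, total)     # [0, 2, 4] 6
```

If `costs.append(cost)` were accidentally dedented, it would run only once, after the loop, and the results would silently be wrong rather than raising a syntax error.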


Thank you for the quick reply. All the code is indented correctly. (I could demonstrate that but I’m not allowed to post the code).

I’m not sure why I’m getting the assertion error.

It’s because the size of the dw vector is not correct.

Your dw appears to be a (2 x 3) matrix, rather than the (2, 1) column vector the assertion expects.

Thank you. Yes, I’m not sure why, though, or how to fix it. Any suggestions?

Lots of possible reasons.

For example, if you add a (2 x 1) vector to a (1 x 2) vector, you’ll get a (2 x 2) result. This is because the code will automatically apply broadcasting.

Similarly, if you multiply a (2 x 1) vector by a (1 x 2) vector, you’ll get a (2 x 2) result. That’s simply how matrix products work.

So I recommend you add some print() statements to display the shape of your variables, before and after any math processes.
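To make those shape rules concrete, here is a small NumPy sketch (illustrative values, not the assignment’s data) showing both effects, plus the kind of shape-printing suggested above:

```python
import numpy as np

a = np.array([[1.0], [2.0]])   # shape (2, 1), a column vector
b = np.array([[3.0, 4.0]])     # shape (1, 2), a row vector

print((a + b).shape)   # (2, 2) -- broadcasting expands both operands
print((a @ b).shape)   # (2, 2) -- outer product of the two vectors
print((b @ a).shape)   # (1, 1) -- inner product: order matters

# The debugging habit: print shapes before and after each math step.
print("a:", a.shape, "b:", b.shape)
```

Printing `.shape` at each step usually pinpoints exactly where a (2, 1) result silently became (2, 2).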


Thank you. That helped. I got it figured out. Thanks again!


Hello All,

Please, I need a favour. I am stuck on question 5 of the week 2 assignment. All my other code runs successfully except this part. I have tried everything to resolve it, but this is the latest error I am getting. I will appreciate any help:

AssertionError Traceback (most recent call last)
in
1 from public_tests import *
2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
130
    131     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
--> 132     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
    133     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    134

AssertionError: Wrong shape for d['w']. (12288, 1) != (4, 1)

You must be referencing global variables instead of the parameters to the function. The real data has 12288 features, but the test cases use 4 features for ease of checking.

Check what you pass to initialize_with_zeros as the dimension argument. You must be referencing some variable that is not defined in the local scope of the model function, e.g. dim or train_set_x.
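Here is a stripped-down sketch of that bug (illustrative only, not the assignment’s actual code; this `initialize_with_zeros` is a minimal stand-in):

```python
import numpy as np

def initialize_with_zeros(dim):
    # Minimal stand-in: a zero weight column vector and a zero bias.
    w = np.zeros((dim, 1))
    b = 0.0
    return w, b

train_set_x = np.random.randn(12288, 50)   # the "real" data: 12288 features

def model_buggy(X_train, Y_train):
    # Bug: references the global train_set_x, so the test's
    # 4-feature X_train is silently ignored.
    w, b = initialize_with_zeros(train_set_x.shape[0])
    return w

def model_fixed(X_train, Y_train):
    # Fix: derive the dimension from the parameter that was passed in.
    w, b = initialize_with_zeros(X_train.shape[0])
    return w

X_test_case = np.random.randn(4, 7)            # test case with only 4 features
print(model_buggy(X_test_case, None).shape)    # (12288, 1) -- wrong shape
print(model_fixed(X_test_case, None).shape)    # (4, 1)     -- what the test expects
```

The buggy version happens to “work” on the real dataset, which is why this mistake only surfaces when the grader runs the function on smaller test data.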

Thanks for the tip. I have made the corrections, but now I get a different error:
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    131     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    132     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 133     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    134
    135     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.14449502]
[-0.1429235 ]
[-0.19867517]
[ 0.21265053]] != [[ 0.08639757]
[-0.08231268]
[-0.11798927]
[ 0.12866053]]


Ok, the next common set of mistakes is in the arguments you pass to optimize. Are you sure that you are not “hard-coding” any of them? There should be no literal values (e.g. num_iterations=100) in the argument list, because that would mean you are ignoring the values that are passed in to model at the top level.

The point is that defining a function is completely different from calling a function: you can’t just copy/paste the definition of the function as the invocation of the function.
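A sketch of the difference (the optimize signature here is hypothetical, just for illustration; the assignment’s real one may differ):

```python
# Hypothetical stand-in for the assignment's optimize function.
def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009):
    return f"ran {num_iterations} iters at lr={learning_rate}"

def model(X_train, Y_train, num_iterations=2000, learning_rate=0.5):
    # Wrong: hard-coding literal values here would ignore whatever the
    # caller passed to model, e.g.
    #   optimize(None, None, X_train, Y_train, num_iterations=100,
    #            learning_rate=0.009)
    # Right: forward the parameters that model itself received.
    return optimize(None, None, X_train, Y_train,
                    num_iterations=num_iterations,
                    learning_rate=learning_rate)

print(model(None, None, num_iterations=50, learning_rate=0.1))
# ran 50 iters at lr=0.1
```

Note that `num_iterations=num_iterations` is fine: the left side names the parameter of optimize, the right side is the value model received. What breaks the test is a hard-coded literal on the right.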

Thanks for this. My mistake. We learn every day. Thanks a zillion!


Great! If you are new to python, it would be worth googling “python named parameters” or “python keyword parameters” and reading up on the behavior of those.
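For anyone new to the idea, a tiny example of keyword (named) parameters (the function and names are made up for illustration):

```python
def greet(name, greeting="Hello", punctuation="!"):
    # greeting and punctuation have default values, so they are optional.
    return f"{greeting}, {name}{punctuation}"

# Positional call: arguments are matched by position.
print(greet("Ada"))                                  # Hello, Ada!

# Keyword call: arguments are matched by name, so order doesn't matter.
print(greet("Ada", punctuation="?", greeting="Hi"))  # Hi, Ada?
```

This is exactly the mechanism model and optimize rely on: defaults apply unless the caller overrides them by name.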
