Logistic_Regression_with_a_Neural_Network_mindset error

Hi there, I'm doing Exercise 8. After running `model_test(model)`, I got this: ---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
in
1 from public_tests import *
2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
126
127 assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"
--> 128 assert len(d['costs']) == 1, f"Wrong length for d['costs']. {len(d['costs'])} != 1"
129 assert np.allclose(d['costs'], expected_output['costs']), f"Wrong values for d['costs']. {d['costs']} != {expected_output['costs']}"
130

AssertionError: Wrong length for d['costs']. 20 != 1

Wonder what’s going on here? Thanks so much!

1 Like

That type of error means that you are "hard-coding" the values of some of the parameters that you pass on the call from model to optimize. The number of iterations is supposed to be a relatively small number that yields only one cost value (one is recorded per 100 iterations), but your code produced a list of 20 cost values, meaning you must have run 2000 iterations even though the test asked for a smaller number.
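To illustrate the difference, here is a minimal sketch with hypothetical, simplified signatures (the real course functions take more arguments) contrasting hard-coded values with passing the caller's values through:

```python
def optimize(num_iterations=100, learning_rate=0.009):
    # Record one cost value every 100 iterations, as the course code does.
    costs = [i for i in range(num_iterations) if i % 100 == 0]
    return costs

def model_hardcoded(num_iterations=2000, learning_rate=0.5):
    # Bug: ignores the arguments the test passed in and always runs 2000.
    return optimize(num_iterations=2000, learning_rate=0.5)

def model_correct(num_iterations=2000, learning_rate=0.5):
    # Correct: forwards whatever values the caller asked for.
    return optimize(num_iterations=num_iterations, learning_rate=learning_rate)

# The test calls model with a small iteration count:
print(len(model_hardcoded(num_iterations=50)))  # 20 costs -> assertion fails
print(len(model_correct(num_iterations=50)))    # 1 cost   -> passes
```

With 50 iterations only one cost is recorded (at iteration 0), so the hard-coded version produces exactly the `20 != 1` mismatch in the traceback above.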

1 Like

I added some print statements in my model function and optimize function to see the parameter values I’m getting at each level. Here’s what I see when I run the “model_test” cell:

model with num_iterations 50 learning_rate 0.01
before optimize w.shape (4, 1)
optimize with num_iterations 50 learning_rate 0.01
in model before predict call w.shape (4, 1)
predictions = [[1. 1. 0. 1. 0. 0. 1.]]
predictions = [[1. 1. 0.]]
All tests passed!
1 Like

I passed `num_iterations=2000, learning_rate=0.5` to the optimize function and got the error above, but if I delete them, I get a different error: ---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
in
1 from public_tests import *
2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
131 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
132 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 133 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
134
135 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.14449502]
[-0.1429235 ]
[-0.19867517]
[ 0.21265053]] != [[ 0.08639757]
[-0.08231268]
[-0.11798927]
[ 0.12866053]]

I'm not sure where I "hard-coded" anything. Would you please give more guidance? Really appreciate it!

1 Like

What is the learning rate that is requested by that test case?

Note that you can't just delete those parameters from the call either, because that's another version of the "hard-coding" bug: it means optimize runs with the default values declared in its definition, not the values that were passed to model.

The thing you need to understand is how “keyword” parameters work in python. If you’re new to python, try googling “python keyword parameters” and doing a bit of reading.
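As a quick generic illustration (not the course code), here is how keyword parameters and their declared defaults behave:

```python
def optimize(num_iterations=100, learning_rate=0.009):
    # Keyword parameters with declared defaults.
    return num_iterations, learning_rate

def model(num_iterations=2000, learning_rate=0.5):
    # Bug: calling optimize() with no arguments silently falls back to
    # optimize's own defaults (100, 0.009), ignoring what model received.
    return optimize()

print(model(num_iterations=50, learning_rate=0.01))  # (100, 0.009) -- defaults!
```

The fix is to forward the values explicitly, e.g. `optimize(num_iterations=num_iterations, learning_rate=learning_rate)`, so the caller's choices reach the inner function.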

1 Like

Problem solved! thanks so much for your help!

4 Likes