Week 2: Logistic Regression with a Neural Network Mindset

I get this error message upon running model_test(model):

AssertionError Traceback (most recent call last)
in
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
117 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
118 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
---> 119 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
120
121 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[0.]
[0.]
[0.]
[0.]] != [[ 0.00194946]
[-0.0005046 ]
[ 0.00083111]
[ 0.00143207]]

The code I have under model is as follows:

w, b = initialize_with_zeros(X_train.shape[0])

parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=100, learning_rate=0.009, print_cost=True)

Y_prediction_test = predict(w, b, X_test)

Y_prediction_train = predict(w, b, X_train)

Hi @shubhayan, it seems that the value of dw in your case is all 0.0. Maybe there is something wrong in the forward/backward prop code in the propagate function?

This is what I have for forward:

A = sigmoid(np.dot(w.T,X)+b)

cost = (-1./m)*(np.dot(Y,np.log(A).T)+np.dot(1-Y,np.log(1-A).T))

and Backward prop:

dw = (1./m)*np.dot(X,(A-Y).T)

db = (1./m)*np.sum(A-Y)

I wonder if I transposed wrong.
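(For reference, here is a minimal self-contained sketch of propagate along these lines; the grads-dict return convention is an assumption based on how optimize consumes it later in the thread:)

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]
    # Forward pass: activations and cross-entropy cost
    A = sigmoid(np.dot(w.T, X) + b)
    cost = (-1. / m) * (np.dot(Y, np.log(A).T) + np.dot(1 - Y, np.log(1 - A).T))
    # Backward pass: gradients of the cost with respect to w and b
    dw = (1. / m) * np.dot(X, (A - Y).T)
    db = (1. / m) * np.sum(A - Y)
    grads = {"dw": dw, "db": db}
    return grads, np.squeeze(cost)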

@shubhayan I would check the optimize function, in particular the update part of the function. Initially w is all 0; if that’s not changing, it may be because the update rule is not being applied properly.

Maybe you could print w before and after updating it in the optimize function so you can check whether it is being updated or not.

@shubhayan It really looks like a problem inside the optimize function. I passed the assignment and your propagate code above looks identical to mine. I also initially made a mistake while updating my parameters inside optimize, so you need to be careful there.

Secondly, if you have posted the actual code from your model function above, optimize should be called with the actual arguments, like this: optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate), and not with hardcoded numbers. That could also be an issue.

In fact, I think this is the issue, because the definition of the model_test function is not visible in the notebook. To see it, go to the hub by clicking File > Open and inspect the public_tests.py file. At around line 111, the test calls the model function (a.k.a. target) with specific values for the num_iterations and learning_rate parameters. So if you hardcode those parameters inside your model function, your test will fail. I think that is what is happening here.
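(In code terms, the difference looks roughly like this; a sketch using the parameter names from the model signature, with print_cost forwarded as well on the assumption that it is also a model parameter:)

# Hardcoded: the test's num_iterations=50, learning_rate=1e-4 are silently ignored
# parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=100, learning_rate=0.009, print_cost=True)

# Forwarded: whatever the caller (or the test) passes into model reaches optimize
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate, print_cost=print_cost)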

This really mirrors my thoughts. I knew I was doing something wrong by hardcoding numbers into optimize(), when the number of iterations and the learning rate are clearly parameters of model().

This is the update bit:

    w = w - learning_rate*dw
    
    b = b - learning_rate*db
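(For context, a minimal sketch of where that update sits inside optimize, assuming propagate returns a grads dictionary and a scalar cost as sketched earlier; the signature defaults simply mirror the values used in this thread:)

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)   # forward + backward pass
        dw = grads["dw"]
        db = grads["db"]
        # Gradient-descent update: this is the step that must change w and b
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
            if print_cost:
                print("Cost after iteration %i: %f" % (i, cost))
    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}
    return params, grads, costs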

Also, this is from the optimize test, once I’ve added print(w):

[[0.9910139 ]
[1.97844435]]
[[0.98202891]
[1.95689227]]
[[0.97304514]
[1.93534406]]
[[0.96406267]
[1.91380001]]
[[0.95508162]
[1.89226047]]
[[0.94610209]
[1.87072579]]
[[0.9371242 ]
[1.84919636]]
[[0.92814809]
[1.8276726 ]]
[[0.9191739 ]
[1.80615495]]
[[0.91020178]
[1.7846439 ]]
[[0.90123189]
[1.76313998]]
[[0.89226441]
[1.74164374]]
[[0.88329953]
[1.72015578]]
[[0.87433745]
[1.69867677]]
[[0.8653784]
[1.6772074]]
[[0.8564226 ]
[1.65574841]]
[[0.84747032]
[1.63430063]]
[[0.83852182]
[1.61286492]]
[[0.8295774]
[1.5914422]]
[[0.82063737]
[1.57003349]]
[[0.81170206]
[1.54863985]]
[[0.80277185]
[1.52726243]]
[[0.79384712]
[1.50590247]]
[[0.78492827]
[1.48456127]]
[[0.77601575]
[1.46324025]]
[[0.76711004]
[1.44194091]]
[[0.75821163]
[1.42066486]]
[[0.74932107]
[1.3994138 ]]
[[0.74043893]
[1.37818956]]
[[0.73156581]
[1.35699407]]
[[0.72270237]
[1.3358294 ]]
[[0.71384929]
[1.31469773]]
[[0.7050073 ]
[1.29360136]]
[[0.69617716]
[1.27254275]]
[[0.6873597 ]
[1.25152448]]
[[0.67855576]
[1.23054927]]
[[0.66976626]
[1.20962 ]]
[[0.66099213]
[1.18873966]]
[[0.65223437]
[1.1679114 ]]
[[0.64349402]
[1.14713854]]
[[0.63477217]
[1.12642449]]
[[0.62606993]
[1.10577283]]
[[0.61738849]
[1.08518728]]
[[0.60872904]
[1.06467165]]
[[0.60009284]
[1.0442299 ]]
[[0.59148117]
[1.02386609]]
[[0.58289535]
[1.00358438]]
[[0.57433671]
[0.98338899]]
[[0.56580663]
[0.96328425]]
[[0.55730648]
[0.9432745 ]]
[[0.54883766]
[0.92336413]]
[[0.54040156]
[0.90355755]]
[[0.5319996 ]
[0.88385913]]
[[0.52363314]
[0.86427324]]
[[0.51530358]
[0.84480417]]
[[0.50701225]
[0.82545613]]
[[0.49876049]
[0.80623323]]
[[0.49054957]
[0.78713946]]
[[0.48238072]
[0.76817862]]
[[0.47425514]
[0.74935436]]
[[0.46617394]
[0.73067013]]
[[0.45813819]
[0.71212917]]
[[0.45014889]
[0.69373445]]
[[0.44220694]
[0.67548873]]
[[0.4343132 ]
[0.65739449]]
[[0.42646842]
[0.63945395]]
[[0.41867328]
[0.62166903]]
[[0.41092838]
[0.60404141]]
[[0.40323423]
[0.58657245]]
[[0.39559128]
[0.56926327]]
[[0.38799988]
[0.55211472]]
[[0.38046031]
[0.53512737]]
[[0.3729728 ]
[0.51830155]]
[[0.36553748]
[0.50163738]]
[[0.35815445]
[0.48513475]]
[[0.35082374]
[0.46879334]]
[[0.34354533]
[0.45261267]]
[[0.33631917]
[0.43659209]]
[[0.32914516]
[0.42073081]]
[[0.3220232 ]
[0.40502793]]
[[0.31495312]
[0.38948245]]
[[0.30793478]
[0.37409328]]
[[0.30096802]
[0.3588593 ]]
[[0.29405266]
[0.34377933]]
[[0.28718855]
[0.32885219]]
[[0.28037553]
[0.31407667]]
[[0.27361347]
[0.29945161]]
[[0.26690224]
[0.28497586]]
[[0.26024177]
[0.27064833]]
[[0.253632 ]
[0.25646797]]
[[0.24707288]
[0.24243382]]
[[0.24056443]
[0.22854499]]
[[0.23410671]
[0.21480068]]
[[0.22769979]
[0.20120019]]
[[0.22134382]
[0.18774292]]
[[0.21503897]
[0.17442838]]
[[0.20878546]
[0.16125618]]
[[0.20258356]
[0.14822607]]
[[0.19643359]
[0.13533789]]
[[0.19033591]
[0.12259159]]
w = [[0.19033591]
[0.12259159]]
b = 1.9253598300845747
dw = [[0.67752042]
[1.41625495]]
db = 0.21919450454067652
Costs = [array(5.80154532)]
[[0.9001544 ]
[1.76049276]]
[[0.80051919]
[1.52165666]]
[[0.70137228]
[1.28437737]]
[[0.60332088]
[1.05058774]]
[[0.50755253]
[0.82405932]]
[[0.41590982]
[0.61061081]]
[[0.33023135]
[0.41593066]]
[[0.25129604]
[0.24207694]]
[[0.1789759]
[0.0877943]]
[[ 0.11336029]
[-0.04805906]]
[[ 0.0552938 ]
[-0.16475556]]
[[ 0.0058926 ]
[-0.26067602]]
[[-0.03446091]
[-0.33555634]]
[[-0.06659309]
[-0.39157872]]
[[-0.09207514]
[-0.43242267]]
[[-0.11254053]
[-0.46182832]]
[[-0.12933962]
[-0.48287181]]
[[-0.1434854 ]
[-0.49786076]]
[[-0.15571009]
[-0.50846591]]
[[-0.16653761]
[-0.51588505]]
[[-0.17634326]
[-0.5209779 ]]
[[-0.18539736]
[-0.5243646 ]]
[[-0.19389579]
[-0.5264952 ]]
[[-0.20198137]
[-0.52769811]]
[[-0.20975874]
[-0.5282142 ]]
[[-0.21730488]
[-0.52822079]]
[[-0.22467667]
[-0.52784887]]
[[-0.23191616]
[-0.52719545]]
[[-0.23905459]
[-0.52633248]]
[[-0.24611515]
[-0.52531348]]
[[-0.2531151 ]
[-0.52417825]]
[[-0.26006731]
[-0.52295644]]
[[-0.26698135]
[-0.52167016]]
[[-0.27386439]
[-0.5203359 ]]
[[-0.28072177]
[-0.51896598]]
[[-0.28755748]
[-0.5175696 ]]
[[-0.2943745 ]
[-0.51615363]]
[[-0.30117506]
[-0.51472322]]
[[-0.30796085]
[-0.5132822 ]]
[[-0.31473312]
[-0.51183346]]
[[-0.3214928 ]
[-0.51037915]]
[[-0.32824061]
[-0.50892089]]
[[-0.33497707]
[-0.50745989]]
[[-0.3417026 ]
[-0.50599706]]
[[-0.34841748]
[-0.50453309]]
[[-0.35512197]
[-0.5030685 ]]
[[-0.36181622]
[-0.50160366]]
[[-0.36850038]
[-0.50013888]]
[[-0.37517456]
[-0.49867437]]
[[-0.38183883]
[-0.49721032]]
[[-0.38849327]
[-0.49574684]]
[[-0.39513791]
[-0.49428404]]
[[-0.40177282]
[-0.492822 ]]
[[-0.40839801]
[-0.49136077]]
[[-0.41501353]
[-0.48990041]]
[[-0.42161939]
[-0.48844095]]
[[-0.42821563]
[-0.48698242]]
[[-0.43480226]
[-0.48552485]]
[[-0.44137931]
[-0.48406827]]
[[-0.44794678]
[-0.48261268]]
[[-0.4545047]
[-0.4811581]]
[[-0.46105309]
[-0.47970456]]
[[-0.46759196]
[-0.47825205]]
[[-0.47412132]
[-0.4768006 ]]
[[-0.4806412 ]
[-0.47535021]]
[[-0.48715161]
[-0.4739009 ]]
[[-0.49365256]
[-0.47245266]]
[[-0.50014407]
[-0.47100552]]
[[-0.50662615]
[-0.46955948]]
[[-0.51309882]
[-0.46811455]]
[[-0.5195621 ]
[-0.46667074]]
[[-0.526016 ]
[-0.46522805]]
[[-0.53246053]
[-0.46378649]]
[[-0.53889571]
[-0.46234607]]
[[-0.54532155]
[-0.4609068 ]]
[[-0.55173808]
[-0.45946868]]
[[-0.5581453 ]
[-0.45803172]]
[[-0.56454324]
[-0.45659593]]
[[-0.5709319 ]
[-0.45516132]]
[[-0.5773113 ]
[-0.45372788]]
[[-0.58368146]
[-0.45229563]]
[[-0.5900424 ]
[-0.45086458]]
[[-0.59639412]
[-0.44943472]]
[[-0.60273664]
[-0.44800607]]
[[-0.60906999]
[-0.44657863]]
[[-0.61539417]
[-0.4451524 ]]
[[-0.6217092]
[-0.4437274]]
[[-0.6280151 ]
[-0.44230362]]
[[-0.63431189]
[-0.44088108]]
[[-0.64059957]
[-0.43945978]]
[[-0.64687816]
[-0.43803972]]
[[-0.65314768]
[-0.43662091]]
[[-0.65940815]
[-0.43520335]]
[[-0.66565958]
[-0.43378705]]
[[-0.67190199]
[-0.43237202]]
[[-0.67813539]
[-0.43095825]]
[[-0.6843598 ]
[-0.42954575]]
[[-0.69057523]
[-0.42813454]]
[[-0.6967817]
[-0.4267246]]
[[-0.70297923]
[-0.42531595]]
[[-0.70916784]
[-0.42390859]]


@albertovilla @crisrise @chandan1986.sarkar

UPDATE: I thought I’d roll with the punches and curiously ended up running logistic_regression_model = model(), and it seems to run [even though model_test(model) is failing]…

Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 65.55023923444976 %
test accuracy: 34.0 %

Any suggestions before I submit this? I have just 2 hours to my deadline :frowning:

@shubhayan what’s the current error log now? Did you remove the hardcoded numbers when calling the optimize function, as suggested by @chandan1986.sarkar? The function should be called with the input parameters of the model function, not with any specific values.

It is exactly the same:

Cost after iteration 0: 0.693147

AssertionError Traceback (most recent call last)
in
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
117 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
118 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
---> 119 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
120
121 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[0.]
[0.]
[0.]
[0.]] != [[ 0.00194946]
[-0.0005046 ]
[ 0.00083111]
[ 0.00143207]]

This is what my updated code under model looks like now, after @chandan1986.sarkar’s suggestions…

w, b = initialize_with_zeros(X_train.shape[0])

parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate, print_cost=True)

Y_prediction_test = predict(w, b, X_test)

Y_prediction_train = predict(w, b, X_train)

Does the d['w'] value match your printouts of w? I mean, does the output value [[0.] [0.] [0.] [0.]] appear in the printout when you execute your code with the model_test function?

I tried adding print(str(d['w'])) before return d at the end of model().

model_test(model) now returns a zero column vector:

Cost after iteration 0: 0.693147
[[0.]
[0.]
[0.]
[0.]]

AssertionError Traceback (most recent call last)
in
----> 1 model_test(model)

This perhaps means something is wrong in the update stage of optimize(), though it still does not explain why optimize_test gave no errors.

After calling the optimize function in model there is a statement to update w and b; could you try printing w before and after that statement? If w is all zeros, it looks like the variable is not being updated, since it was initialized with zeros.
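(A minimal sketch of the check being suggested; the two retrieval lines here are hypothetical, standing in for whatever update statement the model function actually contains:)

print(w)              # before the update: all zeros from initialize_with_zeros
w = parameters['w']   # the update statement in question
b = parameters['b']
print(w)              # after the update: should now be non-zero if optimize worked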


What a genius I am! :neutral_face:

Literally forgot to add these two lines of code:

w = parameters['w']

b = parameters['b']
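(For anyone landing here later, a minimal sketch of how those two lines fit into the model body; the signature defaults and the exact set of keys in d are assumptions based on the notebook template, since the thread only shows d['w'], d['b'] and d['costs'] being tested:)

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w, b = initialize_with_zeros(X_train.shape[0])
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate, print_cost=print_cost)
    # Read the trained values back; without these two lines,
    # predict() runs with the initial all-zero w and b
    w = parameters['w']
    b = parameters['b']
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w, "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d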

Thanks a lot! :sweat_smile: It’s fixed now!

Cost after iteration 0: 0.693147
[[ 0.00194946]
[-0.0005046 ]
[ 0.00083111]
[ 0.00143207]]
All tests passed!


I’m glad it is solved now :slight_smile:


Hi @albertovilla

Regarding this same issue, model_test(model) fails with the following error at my end:


TypeError Traceback (most recent call last)
in
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
109 y_test = np.array([1, 0, 1])
110
---> 111 d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=1e-4)
112
113 assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"

TypeError: model() missing 1 required positional argument: 'print_cost'

I am unable to understand why this error appears and what it means.
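(A likely cause, rather than a bug in the test: the test calls target(X, Y, x_test, y_test, num_iterations=50, learning_rate=1e-4) without a print_cost argument, so the call only works if print_cost has a default value in the model signature. A sketch of the assumed fix:)

# This signature makes print_cost mandatory, so the test call above fails:
# def model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost):

# Restoring a default value lets the test omit it:
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    ...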

However, when I ignore this error and proceed to run the model with the function call below, I get an accuracy of 70%:

logistic_regression_model = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True)

Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %

Could you please help?

@albertovilla

On further analysis, it appears there is an issue in the test file "public_tests.py".

It is missing "print_cost=True" as the last parameter in the call below:
d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=1e-4)

After modifying this line in the test file to:
d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=1e-4, print_cost=True)

I get the results below on executing "model_test(model)":
Cost after iteration 0: 0.693147
train accuracy: 66.66666666666667 %
test accuracy: 66.66666666666667 %
All tests passed!

A subsequent call to logistic_regression_model = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True) gives me the same results as in the post above.

Overall, I think the issue is resolved, but on submission the assignment still shows as failed.
Please check and let me know if something needs to be done at my end. Thanks!

I think you should refrain from changing anything in the test file to begin with :sweat_smile:

Also, are you reading w and b back from the "parameters" dictionary after calling optimize in the model(…) function?

I would also prefer to refrain from changing the test file. But in this case, the issue seems to come from a function call that omits a parameter the function expects. The model function is just a consolidation of calls to all the previous functions, which have run correctly.

w and b are updated from the dictionary.

Did you not get this error?