Problems with wk2 assignment Logistic_Regression_with_a_Neural_Network_mindset

My first post, so hopefully I’m not doing anything wrong.
I’m having an issue with the Week 2 assignment that is holding me up, and I’m wondering if there is a mentor who might be able to advise.
First, costs.append(cost) results in costs containing [array(0.15900538)], which is not correct. I worked around it with costs.append(cost.item()). I’m not sure I should be doing that, but it seems the only way forward.
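For anyone hitting the same thing: the 0-d array comes from how the vectorized cost is computed, and either .item() or float() gets you a plain scalar. A minimal sketch (A and Y here are made-up stand-ins, not the assignment’s data):

```python
import numpy as np

# Hypothetical stand-ins for the assignment's activations A and labels Y.
Y = np.array([[1, 0, 1]])
A = np.array([[0.8, 0.3, 0.6]])
m = Y.shape[1]

# The vectorized cross-entropy cost via np.dot produces a (1, 1) array...
cost = -(1 / m) * (np.dot(Y, np.log(A).T) + np.dot(1 - Y, np.log(1 - A).T))
print(cost.shape)  # (1, 1)

# ...and np.squeeze on it leaves a 0-d ndarray, which is why the costs
# list ends up holding array(0.159...) instead of a float.
cost = np.squeeze(cost)
print(type(cost), cost.shape)  # <class 'numpy.ndarray'> ()

# Either cost.item() or float(cost) extracts the plain Python scalar.
costs = []
costs.append(float(cost))
print(type(costs[0]))  # <class 'float'>
```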
Then I end up with this error:

```
Wrong values for costs. [5.801545319394553, nan] != [5.80154532, 0.31057104]
```

The cost seems to steadily increase with each iteration, heading off to infinity and then NaN.
But the propagation function passes the test, so I’m not sure what went wrong here…
My lab ID is zmjtgudrtnsu.

The mentors do not have the superpower to directly examine your notebooks. If the cost is rising, one thing to check is your “update parameters” logic in the optimize function. E.g. are you sure you are subtracting as opposed to adding the gradient values? If your propagate function passes the tests, then the gradient values are likely correct, but you still have to use them appropriately in order for things to work.
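To make the sign point concrete, here is a hedged sketch of the update step (the variable names follow the assignment’s conventions, but the values are made up):

```python
import numpy as np

# Made-up parameters and gradients; names follow the assignment's conventions.
w = np.array([[1.0], [2.0]])
b = 2.0
dw = np.array([[0.5], [-0.25]])
db = 0.1
learning_rate = 0.009

# Gradient DESCENT subtracts the gradient. Flipping these minus signs to
# plus walks uphill on the cost surface, which is exactly the
# cost -> inf -> NaN symptom described above.
w = w - learning_rate * dw
b = b - learning_rate * db
print(w.ravel(), b)
```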

Also in terms of how to get advice, it helps to just show us your output. Not the code please, but the actual output you get when you run whichever test it is that is failing. Either take a screenshot or “copy/paste” the output and use the {} formatting tool so that it doesn’t get interpreted as “markdown”.

Thanks, @paulinpaloalto

I figured out the costs-rising bit. That’s resolved: I’m now subtracting :slight_smile:
Now it’s just figuring out why the optimize results don’t match.
Here are my logs for the results of optimize_test(optimize):

```
w=[[1.] [2.]]
X=[[ 1.   2.  -1. ] [ 3.   4.  -3.2]]
Y=[[1 0 1]]
b=2.0 dw=[[0.99845601] [2.39507239]] db=0.001455578136784208
w=[[ 0.00154399] ...] b=1.9985444218632158 dw=[[0.31256033] [0.53760634]] db=0.08667524310057347
w=[... [ 0.3652905 ]] b=4.720219620240163 dw=[[ 0.02299226] [-0.00572609]] db=-0.014683473586839565
w=[... [ 0.37101659]] b=4.734903093827003 dw=[[ 0.02282139] [-0.00568395]] db=-0.01457075291000504
w=[[-4.1712351 ] [ 0.37670054]] b=4.749473846737009 dw=[[ 0.02265284] [-0.00564237]] db=-0.014459645535353513
```
```
AssertionError                            Traceback (most recent call last)
<ipython-input-135-3483159b4470> in <module>
      7 print("Costs = " + str(costs))
----> 9 optimize_test(optimize)

~/work/release/W2A2/ in optimize_test(target)
     73     assert type(costs) == list, "Wrong type for costs. It must be a list"
     74     assert len(costs) == 2, f"Wrong length for costs. {len(costs)} != 2"
---> 75     assert np.allclose(costs, expected_cost), f"Wrong values for costs. {costs} != {expected_cost}"
     77     assert type(grads['dw']) == np.ndarray, f"Wrong type for grads['dw']. {type(grads['dw'])} != np.ndarray"

AssertionError: Wrong values for costs. [5.801545319394553, 0.09466104077808124] != [5.80154532, 0.31057104]
```

It seems the 1st value differs only by rounding, but the 2nd is quite far off.
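For reference, the test uses np.allclose, whose default tolerances are rtol=1e-05 and atol=1e-08, so the first cost really does count as a match and the failure is driven entirely by the second one. A quick check with the values from the assert message:

```python
import numpy as np

# Values copied from the failing assert message above.
actual = [5.801545319394553, 0.09466104077808124]
expected = [5.80154532, 0.31057104]

# The first entry matches within np.allclose's defaults (rtol=1e-05,
# atol=1e-08); only the second entry is genuinely wrong.
print(np.allclose(actual[0], expected[0]))  # True
print(np.allclose(actual, expected))        # False
```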

ugh. sorry for the bold…

Are you sure that you didn’t override any of the learning rate values?

```
b=2.0 dw=[[0.99845601] [2.39507239]] db=0.001455578136784208
w=[[-4.1712351 ] [ 0.37670054]] b=4.749473846737009 dw=[[ 0.02265284] [-0.00564237]] db=-0.014459645535353513
```

So no, this didn’t change.
But you are right: looking at the code, I don’t see learning_rate being used in any formula... digging into where I missed this now...
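Skipping the learning rate in the formula is the same as using a learning rate of 1.0, and that actually explains the stray w value in the log above. A small check (using the first-iteration gradient 0.99845601 from the logs; the hidden test uses learning_rate = 0.1):

```python
# First-iteration values taken from the logs above: w starts at 1.0,
# dw = 0.99845601, and the hidden test uses learning_rate = 0.1.
w, dw, learning_rate = 1.0, 0.99845601, 0.1

# Leaving the learning rate out is an implicit learning rate of 1.0:
w_buggy = w - dw                  # ~0.00154399, the stray value in the log
w_fixed = w - learning_rate * dw  # ~0.9001544, what the test expects
print(w_buggy, w_fixed)
```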

Got it! All tests passed!

Thanks @paulinpaloalto!
Anytime you’re up in the PNW (Vancouver or Seattle), let me know. I owe you a pint!

That’s great to hear that you found the solution with just a very few sketchy hints from me.

FWIW I instrumented my code in a similar way and here is the output I got for the first few iterations:

```
optimize with num_iterations 100 learning_rate 0.009
Before iteration 0
w [[1.]
b 1.5
dw [[ 0.25071532]
db -0.1250040450043965
Before iteration 1
w [[0.99774356]
b 1.5011250364050395
dw [[ 0.24978485]
db -0.12453807201660329
Before iteration 2
w [[0.9954955 ]
b 1.502245879053189
dw [[ 0.24885918]
db -0.12407450104862802
Before iteration 3
w [[0.99325577]
b 1.5033625495626266
dw [[ 0.24793829]
db -0.12361332224003391
```
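In case it helps anyone reproduce this kind of trace, here is a hypothetical sketch of how the loop could be instrumented. propagate_stub is a stand-in I wrote for illustration, not the graded propagate, and optimize_instrumented is likewise not the assignment’s code:

```python
import numpy as np

def propagate_stub(w, b, X, Y):
    # Stand-in for the assignment's propagate(): forward pass, cost,
    # and gradients for logistic regression.
    m = X.shape[1]
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))  # sigmoid activation
    cost = -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = (1 / m) * np.dot(X, (A - Y).T)
    db = (1 / m) * np.sum(A - Y)
    return {"dw": dw, "db": db}, cost

def optimize_instrumented(w, b, X, Y, num_iterations, learning_rate):
    # Print w, b, dw, db before each update, mirroring the traces above.
    for i in range(num_iterations):
        grads, cost = propagate_stub(w, b, X, Y)
        print(f"Before iteration {i}")
        print("w", w, "\nb", b, "\ndw", grads["dw"], "\ndb", grads["db"])
        w = w - learning_rate * grads["dw"]  # note the learning rate here
        b = b - learning_rate * grads["db"]
    return w, b
```

Running it on the test case shown earlier in the thread (w=[[1.] [2.]], b=2.0, X=[[1, 2, -1] [3, 4, -3.2]], Y=[[1 0 1]], learning_rate=0.1) reproduces the dw=[[0.99845601] [2.39507239]] gradient from iteration 0.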

BTW it looks like you modified the one test case. Your initial b value is different than what I’m seeing.

Oh, I gave the values for the 2nd test, optimize_test(optimize), since that was the one comparing against expected results and throwing the asserts.

I’m heading out for a bit, then I’ll attempt to complete the rest of this assignment tonight. Wish me luck!

Thanks for the sketchy hints - they were IMMENSELY helpful :smile:


Ah, good point. Here is my output for the second hidden optimize_test:

```
optimize with num_iterations 101 learning_rate 0.1
Before iteration 0
w [[1.]
b 2.0
dw [[0.99845601]
db 0.001455578136784208
Before iteration 1
w [[0.9001544 ]
b 1.9998544421863216
dw [[0.99635211]
db 0.0034418659290942713
Before iteration 2
w [[0.80051919]
b 1.9995102555934121
dw [[0.99146907]
db 0.008047111114924732
Before iteration 3
w [[0.70137228]
b 1.9987055444819197
dw [[0.98051403]
db 0.018341316891314092
Before iteration 4
w [[0.60332088]
b 1.9968714127927882
dw [[0.95768351]
db 0.03959817811041706
```

I’m also UTC -7, so we should be on a similar schedule. Good luck on the rest of it and let us know how it goes!



Assignment submitted. The rest was pretty straightforward; you just have to be super diligent to get each function correct. Thanks so much for the hints. They pointed me in the right direction.
Have a great evening.


That’s great that you got through the rest of the assignment pretty quickly. That’s the first real assignment in the course. Lots more interesting material ahead. Onward! :nerd_face: