compute_gradient_test
Compute and display cost and gradient with non-zero w
test_w = np.array([0.2, -0.5])
test_b = -24
dj_db, dj_dw = compute_gradient(X_train, y_train, test_w, test_b)
print('dj_db at test_w:', dj_db)
print('dj_dw at test_w:', dj_dw.tolist())
My output is:
dj_db at test_w: -0.6
dj_dw at test_w: [-44.83135361795273, -44.37384124957207]
But the expected outcome is:
dj_db at initial w (zeros): -0.5999999999991071
dj_dw at initial w (zeros): [-44.8313536178737957, -44.37384124953978]
The error is: AssertionError: Wrong value for dj_db. Expected: 0.28936094 got: 0.4225095475509334.
TMosh
Greetings.
It appears that your code doesn’t work correctly. Please review your code and verify that it implements the equation for dj_db correctly.
A very common problem is using the wrong indentation within any for-loops. Python is very particular about indentation.
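For reference, here is a minimal sketch of what a loop-based logistic-regression gradient typically looks like, with the accumulation steps correctly indented inside the loops. This is my own illustration under the assumption that `compute_gradient` implements the standard logistic cost gradient, not your actual assignment code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient(X, y, w, b):
    """Gradient of the logistic cost over m examples.
    Note that every accumulation line sits INSIDE the loop
    over examples; de-indenting any of them is the classic bug."""
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        err = sigmoid(np.dot(X[i], w) + b) - y[i]  # inside the outer loop
        dj_db += err                               # inside the outer loop
        for j in range(n):
            dj_dw[j] += err * X[i, j]              # inside the inner loop
    return dj_db / m, dj_dw / m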
Thanks for the input. But my values from "Compute and display gradient with w initialized to zeros" match the expected values given.
TMosh
That does not mean your code is correct.
It only means your code works correctly when w and b are zero.
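To see how a bug can hide at w = b = 0, here is a hypothetical broken implementation. The function name, the deliberately missing `+ b`, and the test data are all my own illustration, not your actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient_buggy(X, y, w, b):
    """Hypothetical buggy gradient: b is never added to the
    linear term, so results are correct only when b == 0."""
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        err = sigmoid(np.dot(X[i], w)) - y[i]  # BUG: should be np.dot(X[i], w) + b
        dj_db += err
        for j in range(n):
            dj_dw[j] += err * X[i, j]
    return dj_db / m, dj_dw / m
```

With w and b zero the missing term contributes nothing, so the buggy function reproduces the expected "initial w (zeros)" values; only the non-zero test (b = -24 here) exposes it. That is exactly why passing the zeros check proves nothing.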
Thanks. Any hints that might help?
TMosh
Already provided - check your code carefully against the equation you’re implementing.