Running the test, I’m getting the following assertion error:
dj_db: 0.07138288792343662
First few elements of regularized dj_dw:
[-0.010034426775559609, 0.01047627452123708, 0.055745505756197467, 0.003977852112093805]
AssertionError Traceback (most recent call last)
in
11
12 # UNIT TESTS
----> 13 compute_gradient_reg_test(compute_gradient_reg)
~/work/public_tests.py in compute_gradient_reg_test(target)
123
124 assert np.isclose(dj_db, expected1[0]), f"Wrong dj_db. Expected: {expected1[0]} got: {dj_db}"
--> 125 assert np.allclose(dj_dw, expected1[1]), f"Wrong dj_dw. Expected: {expected1[1]} got: {dj_dw}"
126
127
AssertionError: Wrong dj_dw. Expected: [ 0.19530838 -0.00632206 0.19687367 0.15741161 0.02791437] got: [0.17210345 0.00241732 0.20441898 0.17273973 0.02791437]
My code includes the following lines (as per the hint given for this exercise):
# Loop over the elements of w
for j in range(n):
    dj_dw_j_reg = (lambda_ / m) * w[j]
    # Add the regularization term to the corresponding element of dj_dw
    dj_dw[j] = dj_dw[j] + dj_dw_j_reg
### END CODE HERE ###
return dj_db, dj_dw
It seems that dj_dw is not updated for each iteration of j. What am I missing? Any suggestions would be much appreciated.
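As a sanity check, I tried the same loop in isolation with small made-up values (not the course data, and not the actual unregularized gradient computed earlier in the function):

```python
import numpy as np

# Hypothetical values, just to exercise the loop mechanics
m = 5
lambda_ = 0.7
w = np.array([0.5, -0.2, 0.3, 0.1])
dj_dw = np.array([0.1, 0.02, -0.05, 0.04])  # stand-in for the unregularized gradient
n = w.shape[0]

# Same loop as in the snippet above
for j in range(n):
    dj_dw_j_reg = (lambda_ / m) * w[j]
    # Add the regularization term to the corresponding element of dj_dw
    dj_dw[j] = dj_dw[j] + dj_dw_j_reg

# Each element should now equal the original value plus (lambda_/m) * w[j]
expected = np.array([0.1, 0.02, -0.05, 0.04]) + (lambda_ / m) * w
print(np.allclose(dj_dw, expected))  # True
```

In isolation the loop does update every element of dj_dw in place, so I'm wondering whether the mismatch actually comes from how dj_dw is computed before the regularization loop runs.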