Compute_gradient

I am struggling to get my results right. I followed the hints to write the code. It runs okay with the initial_w and initial_b values, but when I run it with the non-zero test values it gives the following errors:

dj_db at test w and b: -0.5999999999991071
dj_dw at test w and b: [-44.831353617873795, -44.37384124953978]

AssertionError Traceback (most recent call last)
in
8
9 # UNIT TESTS
----> 10 compute_gradient_test(compute_gradient)

~/work/public_tests.py in compute_gradient_test(target)
51 dj_db, dj_dw = target(X, y, test_w, test_b)
52
---> 53 assert np.isclose(dj_db, 0.28936094), f"Wrong value for dj_db. Expected: {0.28936094} got: {dj_db}"
54 assert dj_dw.shape == test_w.shape, f"Wrong shape for dj_dw. Expected: {test_w.shape} got: {dj_dw.shape}"
55 assert np.allclose(dj_dw, [-0.11999166, 0.41498775, -0.71968405]), f"Wrong values for dj_dw. Got: {dj_dw}"

AssertionError: Wrong value for dj_db. Expected: 0.28936094 got: 0.2608532027100252

Furthermore, when I run the subsequent gradient descent cells with the initial inputs of w, b, and alpha, the cost at every iteration is nan, like this:

Iteration 0: Cost nan
Iteration 1000: Cost nan
Iteration 2000: Cost nan
Iteration 3000: Cost nan
Iteration 4000: Cost nan
Iteration 5000: Cost nan
Iteration 6000: Cost nan
Iteration 7000: Cost nan
Iteration 8000: Cost nan
Iteration 9000: Cost nan
Iteration 9999: Cost nan

Can I have some guidance on where I am making a mistake?

There are errors in your compute_gradient function.

NaN means "not a number". The nan costs in gradient descent are a downstream symptom, so work on the gradients first.
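For reference, here is a minimal sketch of the standard logistic-regression gradient, assuming X has shape (m, n), y has shape (m,), and a sigmoid activation; the assignment's own template, helper names, and function signature may differ:

import numpy as np

def sigmoid(z):
    # logistic function
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient(X, y, w, b):
    # dj_db = (1/m) * sum_i (f_wb_i - y_i)
    # dj_dw = (1/m) * sum_i (f_wb_i - y_i) * x_i
    # where f_wb_i = sigmoid(w . x_i + b)
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        f_wb_i = sigmoid(np.dot(X[i], w) + b)  # prediction, not the raw linear term
        err_i = f_wb_i - y[i]                  # per-example error
        dj_db += err_i
        dj_dw += err_i * X[i]                  # accumulate per-feature contributions
    return dj_db / m, dj_dw / m                # divide by m once, after the loop

Two things worth checking against this sketch: that the sigmoid is applied to the linear term before subtracting y[i], and that the division by m happens exactly once. The large magnitude of your dj_dw values suggests one of those steps may be off, and once the gradients are wrong the cost in gradient descent can quickly diverge to nan.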