I have been trying to debug this code for two days. Can someone please help me with this?

Hello @J_Jeslin! Welcome!

Code debugging can be challenging, but it is also part of every programmer's life, and I think the key is to establish our own way of debugging code!

I suggest you modify your function by adding print statements that show the content of each variable used, and then run the modified function with some simple inputs. Please check out this post for a detailed example of how to do this. These steps help you verify that the program is progressing as you expect.
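For example, an instrumented function might look like this (just a sketch with made-up names and values, not the assignment's code):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A hypothetical function instrumented with print statements: print each
# intermediate variable, then run it on a tiny input you can check by hand.
def compute_z_debug(X, w, b):
    m, n = X.shape
    for i in range(m):
        z_i = np.dot(X[i], w) + b
        print(f"sample {i}: X[i] = {X[i]}, z = {z_i}, sigmoid(z) = {sigmoid(z_i)}")

X = np.array([[1., 2.], [3., 4.]])
w = np.array([0.5, -0.5])
compute_z_debug(X, w, b=0.)
```

Compare each printed value against what you computed by hand; the first mismatch tells you which line to look at.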

Good luck @J_Jeslin!

Raymond

Hi,

I have kind of the same issue here, though with a slightly different assertion error.

I have tried debugging the code, as suggested by @rmwkwok. To do so I created a very simple example so that I could do the calculations by hand, where:

```
X_test = np.array([[10, 8], [4, 6]])
y_test = np.array([1, 0])
w_test = X_test.shape[1]*[1]
b_test = 0.
```

I did the computations by hand and then ran my code on this example. I got the same result, so it does not seem to be a coding problem; perhaps it is a conceptual problem?!

Here are the results I got for the intermediary steps for the function compute_cost, which is passing the tests:

```
z: [18. 10.]
sigmoid f_X: [0.99999998 0.9999546 ]
y[0]: 1
f_X[0]: 0.9999999847700205
loss 0: 1.5229979615740706e-08
y[1]: 0
f_X[1]: 0.9999546021312976
loss 1: 10.000045398900186
total_cost: 5.0000227070650825
```
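For what it's worth, a quick vectorized sketch of the standard logistic cost (not the graded function itself) reproduces these numbers:

```python
import numpy as np

# Same inputs as in my hand-worked example above
X_test = np.array([[10., 8.], [4., 6.]])
y_test = np.array([1., 0.])
w_test = np.ones(X_test.shape[1])
b_test = 0.

z = X_test @ w_test + b_test                      # [18. 10.]
f = 1 / (1 + np.exp(-z))                          # sigmoid
loss = -y_test * np.log(f) - (1 - y_test) * np.log(1 - f)
total_cost = loss.mean()
print(z, f, loss, total_cost)                     # total_cost ≈ 5.0000227
```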

and here are the results for the function compute_gradient, which is NOT passing:

```
z_wb[0]: 18.0
sigmoid f_wb: 0.9999999847700205
dj_db: -1.5229979499764568e-08
dj_dw[0]: -1.5229979499764568e-07
dj_dw[1]: -1.2183983599811654e-07
z_wb[1]: 10.0
sigmoid f_wb: 0.9999546021312976
dj_db: 0.9999545869013181
dj_dw[0]: 3.9998184085251904
dj_dw[1]: 5.999727612787786
dj_db, dj_dw: (0.49997729345065905, array([1.9999092 , 2.99986381]))
```
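As a cross-check, the fully vectorized gradient formulas, dj_db = mean(f - y) and dj_dw = X.T @ (f - y) / m, reproduce the final pair printed above (again just a sketch, not the graded function):

```python
import numpy as np

X_test = np.array([[10., 8.], [4., 6.]])
y_test = np.array([1., 0.])
w_test = np.ones(2)
b_test = 0.
m = X_test.shape[0]

f = 1 / (1 + np.exp(-(X_test @ w_test + b_test)))  # sigmoid of z
err = f - y_test                                   # prediction error per sample
dj_db = err.mean()              # ≈ 0.4999773
dj_dw = X_test.T @ err / m      # ≈ [1.9999092, 2.9998638]
print(dj_db, dj_dw)
```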

One remark: I used the dot product to compute z in this function, just as I did in the compute_cost function. I think this should be okay, right?!

I am also very confused by the given code structure, because if I can in fact use the dot product in a vectorized form, why would there be a for loop over j? This is the part of the code structure that confuses me, so as you can see I commented it out:

```
#for j in range(n):
#    z_wb += None
```
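For what it's worth, the commented-out loop and the dot product compute the same z; here is a sketch with one example row:

```python
import numpy as np

X_i = np.array([10., 8.])   # one training example (made-up values)
w = np.array([1., 1.])
b = 0.
n = X_i.shape[0]

# Loop version, in the style of the provided skeleton
z_wb = b
for j in range(n):
    z_wb += X_i[j] * w[j]

# Vectorized version
z_vec = np.dot(X_i, w) + b

print(z_wb, z_vec)   # both 18.0
```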

Could anyone give me a helping hand? Pretty stuck here…

Hello @Thais,

Only `compute_gradient` does not pass, right? Can you use this set of input parameters on `compute_gradient`, and verify that each printed result is as expected?

```
X_test = np.array([[0.1, 0.3], [0.2, -0.3]])
y_test = np.array([1, 0])
w_test = np.array([1., -1.])
b_test = 0.2
```

Using dot product and vectorization is fine. You do not have to use the loop approach. The loop approach is the basic approach for those who are not familiar with vectorization.

Raymond

PS: I am suggesting a different set of inputs because your sample values are a bit too extreme for logistic regression.
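With these inputs, the first intermediates are small enough to check by hand. For example (just a sketch of the arithmetic, not the graded function):

```python
import numpy as np

X_test = np.array([[0.1, 0.3], [0.2, -0.3]])
w_test = np.array([1., -1.])
b_test = 0.2

z = X_test @ w_test + b_test      # [0.  0.7]
f = 1 / (1 + np.exp(-z))          # [0.5  ~0.668188]
print(z, f)
```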

Hi! Thank you so much for the quick reply! Yes, compute_cost passed; I only had a problem with compute_gradient. I decided to debug again with your example, and I just figured out what the problem was. Basically I was missing this line of code:

```
dj_dw[j] += dj_dw_ij
```

So I just forgot to accumulate the computations for dj_dw. Sorry for the confusion, and thanks a lot for the help!
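For anyone who finds this thread with the same bug, here is a sketch of the loop-based gradient with the accumulation step included (variable names mirror the assignment skeleton; the formulas are the standard logistic-regression gradients):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def compute_gradient(X, y, w, b):
    m, n = X.shape
    dj_db = 0.
    dj_dw = np.zeros(n)
    for i in range(m):
        f_wb = sigmoid(np.dot(X[i], w) + b)
        err_i = f_wb - y[i]
        dj_db += err_i
        for j in range(n):
            dj_dw_ij = err_i * X[i, j]
            dj_dw[j] += dj_dw_ij    # the accumulation step I was missing
    return dj_db / m, dj_dw / m
```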