Stuck on compute_gradient function

I have been working on this function for hours but haven't been able to get dw right. I have tried different methods (two for loops, and also the dot function), but I can't figure out where I am going wrong.

My output is

dj_db at test w and b: -0.5999999999991071
dj_dw at test w and b: [-44.6025974337068, -44.6025974337068]

Did you try looking at the hints given below the grader cell? They really help you find the correct code.

Refer to this: C1_W3_Logistic_Regression UNQ_C5 UNQ_C6 ERROR PLEASE HELP - #3 by Deepti_Prasad

If you are still having trouble, let me know.


Yes, I did check the hint code below and tried different variations. By the way, I am talking about the function compute_gradient, not compute_gradient_reg.
# UNQ_C3
# GRADED FUNCTION: compute_gradient

Could I send you my notebook privately?

Yes please share the notebook via personal DM

Hello Gerardo,

Your code for C2 is incorrect/incomplete. I am sharing a screenshot of the hints mentioned just below the grader cell so you can correct your code.


GRADED FUNCTION: compute_cost

def compute_cost(X, y, w, b, *argv):

If you are still running into errors, let me know.

Remember that y is the label in a labeled example. Since this is logistic regression, every value of y must be either 0 or 1. In your case, you have not initialised loss_sum = 0 before the loop. See the hint images and you will understand.
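To illustrate the point about initialising loss_sum before the loop, here is a minimal sketch of a loop-based logistic cost. This is my own sketch, not the official solution: the helper name `sigmoid` and the exact structure are assumptions, so check it against the hints in your notebook.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def compute_cost(X, y, w, b, *argv):
    # X: (m, n) features, y: (m,) labels in {0, 1}, w: (n,) weights, b: scalar
    m, n = X.shape
    loss_sum = 0.0                      # must be initialised before the loop
    for i in range(m):
        z_wb = 0.0                      # reset the linear term for each example
        for j in range(n):
            z_wb += X[i, j] * w[j]      # accumulate w_j * x_ij
        z_wb += b                       # add the bias once
        f_wb = sigmoid(z_wb)            # prediction for example i
        loss_sum += -y[i] * np.log(f_wb) - (1 - y[i]) * np.log(1 - f_wb)
    return loss_sum / m
```

Without the `loss_sum = 0.0` line, the `+=` in the loop would raise a NameError (or silently reuse a stale value from another cell).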

Also, for C3, your line f_wb = sigmoid(z_wb) is incorrect as written, because you have only initialised z_wb to 0 in the first line of the code.

Did you read this before the C3 cell?

So here, for f_wb, apply the sigmoid function after looping over both parameters across the number of examples, and while applying the sigmoid function remember this hint:
As you are doing this, remember that the variables X_train and y_train are not scalar values but matrices of shape (m, n) and (m, 1) respectively, where n is the number of features and m is the number of training examples.

  1. This code is incorrect. Use the hint section to make this code one line.
    for j in range(n):
        dj_dw_ij = (f_wb - y[i]) * X[i][j]
        dj_dw += dj_dw_ij

  2. At the end, divide dj_db and dj_dw by the total number of examples, m:
    dj_dw = dj_dw / m
    dj_db = dj_db / m
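Putting those two points together, the loop can be sketched like this. This is only a sketch of the idea, not the graded solution: the `sigmoid` helper and the return order are assumptions, so check them against your notebook.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient(X, y, w, b, *argv):
    # X: (m, n) features, y: (m,) labels, w: (n,) weights, b: scalar bias
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        f_wb = sigmoid(np.dot(X[i], w) + b)   # prediction for example i
        err = f_wb - y[i]
        dj_db += err
        dj_dw += err * X[i]   # one vectorized line replaces the inner j-loop
    # divide by the total number of examples (point 2 above)
    return dj_db / m, dj_dw / m
```

Note that `dj_dw += err * X[i]` updates all n components at once; a scalar accumulator here would give identical (wrong) values in every component of dj_dw.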


Hi! Thanks for your response :slight_smile:
I don’t understand why f_wb = sigmoid(z_wb) is wrong. In my code, I iterate over all the features with the nested for loop and sum all the values, so by the time I do f_wb = sigmoid(z_wb), z_wb already includes both features of the training example.
Actually, I am using the hints in "Hint to calculate f_wb" and "More hints to calculate f_wb".

Because the calculation of z_wb for the cost computation differs:
z_wb_ij = w[j]*X[i][j]

where as for gradient
z_wb += X[i, j] * w[j]

That's why you should notice the + sign together with =: it means z_wb accumulates this term and carries the updated value into the next iteration, and then z_wb += b adds the bias as well.

So you need to apply the sigmoid only after the full sum is built: z_wb accumulates X[i, j] * w[j] over the range of j, and then the bias b is added once on top. (I explained the whole code for f_wb :rofl:)
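A tiny runnable demonstration of that accumulation order, using made-up numbers (not values from the assignment), might look like this:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values only, not from the assignment
X = np.array([[0.5, 1.5], [1.0, 1.0]])
w = np.array([0.2, -0.4])
b = 0.1
m, n = X.shape

preds = []
for i in range(m):
    z_wb = 0.0                  # start fresh for each example
    for j in range(n):
        z_wb += X[i, j] * w[j]  # += accumulates each feature's contribution
    z_wb += b                   # the bias is added once, after the j-loop
    f_wb = sigmoid(z_wb)        # sigmoid is applied to the completed sum
    preds.append(f_wb)
```

Applying sigmoid inside the j-loop, before the sum over the features is complete, is the mistake this ordering guards against.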

Hope you got this!!

Did you make the corrections for points 3 and 4?


Omg, I finally got it working! Haha. There was also an error in how I assigned the partial derivative of J with respect to b. It works now.
Thank you so much for your assistance and patience :laughing:

Good that you found an extra error and corrected it yourself; happy to help.

Keep Learning!!!