Gradient Descent double loop, is it correct?

Hello everyone,

In Lab 06, Gradient Descent for Logistic Regression, is this double loop correct? I believe we should loop over j first, and then run an inner loop over i, because we need to compute the derivative of the cost function J, which is a sum of errors, before concluding dJ_dw.

[image: the lab's double-loop gradient descent code]
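For reference, this is the standard logistic regression gradient that such code computes (standard course notation, not transcribed from the screenshot; $f_{\mathbf{w},b}$ is the sigmoid prediction):

$$\frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}\left(\mathbf{x}^{(i)}\right) - y^{(i)}\right)x_j^{(i)}$$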

The double loop is correct. The derivative of the J function is a sum of weighted errors (not just errors): each example's error is weighted by the corresponding feature value X[i, j], and that weighting is what the inner loop over j (the index of the features) does. The errors themselves are the target y[i] minus the prediction, which is in turn the sigmoid of a weighted sum of the features X[i, j] by the weights w[j]. So the outer loop over i computes each example's error once, the inner loop over j distributes that error into every component dJ_dw[j], and the full sum over i accumulates across the outer-loop iterations. There is no need to loop over j first.
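A minimal sketch of that loop order, assuming NumPy and a sigmoid helper (my own reconstruction, not the lab's exact code; the function name is an assumption):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient_logistic(X, y, w, b):
    """Gradient of the logistic cost J; X is (m, n), y is (m,), w is (n,)."""
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):                      # outer loop: the m training examples
        f_i = sigmoid(np.dot(X[i], w) + b)  # prediction for example i
        err_i = f_i - y[i]                  # its error, computed exactly once
        for j in range(n):                  # inner loop: the n features
            dj_dw[j] += err_i * X[i, j]     # the sum over i accumulates here
        dj_db += err_i
    return dj_dw / m, dj_db / m             # dJ/dw_j = (1/m) * sum_i err_i * X[i, j]
```

Swapping the loops (outer j, inner i) would produce the same sums, but each example's prediction and error would then be recomputed n times instead of once, unless you cached them.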
