In this lab, I am having trouble understanding how the gradient descent function for logistic and linear regression connects to computing the cost for logistic and linear regression. In previous labs, the connection was that the cost function was computed within the gradient descent function. However, in this lab it is not, and I do not know why.
Hello @Nathan_Angell,
Can you share the name of the “previous lab”? I want to have a look and see how it connected them, and then compare it to the regularization lab. Thanks.
It may be true that’s what the code did, but it doesn’t mean that the gradient descent process is directly connected to computing the cost.
That’s exactly backwards.
- You start with the cost equation.
- Then you use calculus to obtain the equation for the gradients.
- When you have the gradients, you can iteratively use them to perform gradient descent.
Since these two cost functions (linear and logistic) are both known to be convex, the gradients will lead you “downhill” to the minimum cost.
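To illustrate those steps with the linear regression case (this follows the course notation as I recall it, so treat it as a sketch rather than the lab's exact formulas), the cost is

$$J(\mathbf{w},b) = \frac{1}{2m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)^2$$

and applying calculus to it gives the gradient equations

$$\frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)x_j^{(i)}, \qquad \frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right)$$

For logistic regression the cost changes to the log loss, but after the calculus the gradient expressions turn out to have the same form, with $f_{\mathbf{w},b}(\mathbf{x})$ being the sigmoid of $\mathbf{w}\cdot\mathbf{x} + b$. Either way, once you have the gradient formulas, the cost value itself never appears in the update step.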
The only reason for computing the cost during gradient descent is to make a nice graph of the progress of minimizing the cost. This is handy for debugging your code in case it isn’t working.
But there is no absolute need to compute the cost during gradient descent.
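To make that concrete, here is a minimal sketch (not the lab's exact code; the function names and the `track_cost` flag are just my own illustration) of a logistic regression gradient descent loop where the cost is only computed if you want to plot it afterwards:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def compute_gradient_logistic(X, y, w, b):
    # Gradient equations obtained (via calculus) from the logistic cost function.
    m = X.shape[0]
    err = sigmoid(X @ w + b) - y          # prediction error for each example
    dj_dw = (X.T @ err) / m
    dj_db = np.sum(err) / m
    return dj_dw, dj_db

def compute_cost_logistic(X, y, w, b):
    # Only needed for monitoring/plotting; the update step never uses it.
    m = X.shape[0]
    f = sigmoid(X @ w + b)
    return -np.sum(y * np.log(f) + (1 - y) * np.log(1 - f)) / m

def gradient_descent(X, y, w, b, alpha, num_iters, track_cost=False):
    cost_history = []
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient_logistic(X, y, w, b)
        w = w - alpha * dj_dw             # the updates use only the gradients
        b = b - alpha * dj_db
        if track_cost:                    # optional: record cost for a progress graph
            cost_history.append(compute_cost_logistic(X, y, w, b))
    return w, b, cost_history
```

Notice that the weight and bias updates depend only on the gradients; the cost history is an extra, purely for checking that the cost is actually decreasing.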
Here is the optional lab: Machine-Learning-Specialization-Coursera/C1_W3_Lab09_Regularization_Soln.ipynb at main · greyhatguy007/Machine-Learning-Specialization-Coursera · GitHub
Yeah, that makes sense. I understand what you mean now.