Why isn't the actual loss function for logistic regression put in place of the cost function when implementing gradient descent? Shouldn't the cost function containing the log be partially differentiated?
Yes, J is the logistic cost, and yes, it needs to be differentiated. Those two points matter more than how the slide was laid out.
But if the logistic cost is differentiated, we don't get the expression in the uploaded slide, right? How is that correct?
I remember the differentiation results were shown in at least one of the slides. Would you like to check again?
This one? I still don't see the logistic cost function being plugged in for differentiation. If the cost function for logistic regression isn't there, then this gradient descent is no different from the gradient descent algorithm for linear regression. I still don't get it…
Yes, they happen to look the same. You may derive it yourself, or you may read this post.
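For anyone who wants to derive it without leaving the thread, here is a sketch of the chain-rule calculation, using the course's notation where f is the sigmoid of the linear model (a derivation outline, not from the slides themselves):

```latex
\text{With } f^{(i)} = \sigma(z^{(i)}),\quad z^{(i)} = \mathbf{w}\cdot\mathbf{x}^{(i)} + b,\quad \sigma'(z) = \sigma(z)\,(1-\sigma(z)):
\begin{align}
J(\mathbf{w},b) &= -\frac{1}{m}\sum_{i=1}^{m}\Big[\,y^{(i)}\log f^{(i)} + (1-y^{(i)})\log\big(1-f^{(i)}\big)\Big] \\
\frac{\partial J}{\partial f^{(i)}} &= -\frac{1}{m}\left[\frac{y^{(i)}}{f^{(i)}} - \frac{1-y^{(i)}}{1-f^{(i)}}\right]
 = \frac{1}{m}\cdot\frac{f^{(i)} - y^{(i)}}{f^{(i)}\big(1-f^{(i)}\big)} \\
\frac{\partial f^{(i)}}{\partial z^{(i)}} &= f^{(i)}\big(1-f^{(i)}\big), \qquad
\frac{\partial z^{(i)}}{\partial w_j} = x_j^{(i)} \\
\frac{\partial J}{\partial w_j} &= \frac{1}{m}\sum_{i=1}^{m}\big(f^{(i)} - y^{(i)}\big)\,x_j^{(i)}
\end{align}
```

The f(1−f) factor from the sigmoid's derivative exactly cancels the denominator coming from the log terms, which is why the final expression collapses to the same (f − y)·x form as linear regression.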
They look the same if you grant that the f_wb equations are different: one includes the sigmoid, the other does not.
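You can also convince yourself numerically. Here is a minimal sketch (my own code, not from the course materials) that computes the analytic logistic-regression gradient using the (f − y)·x form and checks it against a finite-difference approximation of the log-loss cost:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(X, y, w, b):
    # logistic (cross-entropy) cost J(w, b)
    f = sigmoid(X @ w + b)
    return -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))

def gradients(X, y, w, b):
    # analytic gradient: same algebraic form as linear regression,
    # except f is the sigmoid of the linear model
    m = X.shape[0]
    f = sigmoid(X @ w + b)
    dj_dw = (X.T @ (f - y)) / m
    dj_db = np.mean(f - y)
    return dj_dw, dj_db

# compare against a central finite-difference approximation
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (rng.random(20) > 0.5).astype(float)
w, b = rng.normal(size=3), 0.1

dj_dw, dj_db = gradients(X, y, w, b)
eps = 1e-5
for j in range(3):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[j] += eps
    w_minus[j] -= eps
    numeric = (cost(X, y, w_plus, b) - cost(X, y, w_minus, b)) / (2 * eps)
    assert abs(numeric - dj_dw[j]) < 1e-6
numeric_b = (cost(X, y, w, b + eps) - cost(X, y, w, b - eps)) / (2 * eps)
assert abs(numeric_b - dj_db) < 1e-6
```

If the (f − y)·x formula were not the true derivative of the log-loss cost, these assertions would fail.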
Yeah, I was also amazed when I saw that it's the same for both, except for the value of f(x).
Nice work Arisha @Arisha_Prasain!