Inconsistency in Logistic Regression Cost Function

In /notebooks/C1_W3_Lab06_Gradient_Descent_Soln.ipynb, in the fourth (Markdown) cell, the formula for the loss function differs between the markdown and the screenshot: the screenshot shows the $\frac{1}{2}$ inside the parentheses, whereas the markdown version does not have it.

Hello @pritamdodeja

Please ignore the screenshot on the right.

The markdown shows the correct formula. The formula shown is for $\frac{\partial J(w,b)}{\partial w_j}$ and $\frac{\partial J(w,b)}{\partial b}$, and so it should not have the $\frac{1}{2}$.
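For reference, those gradient expressions (written here in the standard course notation; the lab's markdown may format them slightly differently) are:

$$\frac{\partial J(\vec{w},b)}{\partial w_j} = \frac{1}{m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right) x_j^{(i)}$$

$$\frac{\partial J(\vec{w},b)}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right)$$

Neither expression contains a $\frac{1}{2}$.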

Hello Shanup,
Could you please elaborate on what happens to this $\frac{1}{2}$? Andrew mentions it at one point in the video, but later we do not see it in the formulas. I wanted to understand how we get rid of this $\frac{1}{2}$. Please help me understand.

Hello @Devi5

The $\frac{1}{2}$ appears in the Cost function for Linear Regression with the Squared Error Loss. We introduce it as a mathematical convenience so that it cancels out when we take the derivative. We do not introduce the $\frac{1}{2}$ in the Cost function for Logistic Regression, because we do not use the Squared Error Loss there.
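For comparison, the Cost function for Logistic Regression uses the log loss (shown here in its standard form), which has no $\frac{1}{2}$ anywhere:

$$J(\vec{w},b) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(f_{\vec{w},b}(\vec{x}^{(i)})\right) + \left(1 - y^{(i)}\right) \log\left(1 - f_{\vec{w},b}(\vec{x}^{(i)})\right) \right]$$

where $f_{\vec{w},b}(\vec{x}) = \frac{1}{1 + e^{-(\vec{w} \cdot \vec{x} + b)}}$ is the sigmoid applied to the linear model.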

Going back to the case of Linear Regression with Squared Error Loss, we define the Cost function as follows:

$$J(\vec{w},b) = \frac{1}{2m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right)^{2}$$

When we take the gradient, the power rule brings down a factor of $2$ from the square, which cancels the $\frac{1}{2}$, and we get:

$$\frac{\partial J}{\partial w_j} = \frac{1}{m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right) x_j^{(i)}$$

$$\frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right)$$
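To make the cancellation explicit, here is the single chain-rule step for $w_j$ (using $f_{\vec{w},b}(\vec{x}) = \vec{w} \cdot \vec{x} + b$ for linear regression):

$$\frac{\partial J}{\partial w_j} = \frac{1}{2m} \sum_{i=1}^{m} 2 \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right) \frac{\partial}{\partial w_j}\left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right) = \frac{1}{m} \sum_{i=1}^{m} \left(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\right) x_j^{(i)}$$

since $\frac{\partial}{\partial w_j}\left(\vec{w} \cdot \vec{x}^{(i)} + b - y^{(i)}\right) = x_j^{(i)}$. The $2$ from the power rule cancels the $\frac{1}{2}$.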

To summarize, we introduce the $\frac{1}{2}$ when defining the cost function for Linear Regression with Squared Error Loss, and it vanishes when we take the gradient.
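If you want to check this numerically, here is a minimal sketch (my own example with made-up data, not part of the lab): it compares the analytic gradient above, which has no $\frac{1}{2}$, against a finite-difference estimate computed from the $\frac{1}{2m}$ cost. The two agree, so no stray factor of $2$ is left over.

```python
import numpy as np

def cost(w, b, X, y):
    # Squared-error cost: J(w,b) = 1/(2m) * sum((X @ w + b - y)^2)
    m = X.shape[0]
    err = X @ w + b - y
    return (err @ err) / (2 * m)

def analytic_grad(w, b, X, y):
    # Gradient with the 1/2 already cancelled:
    # dJ/dw_j = 1/m * sum(err * x_j),  dJ/db = 1/m * sum(err)
    m = X.shape[0]
    err = X @ w + b - y
    return X.T @ err / m, err.sum() / m

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # made-up features
y = rng.normal(size=50)        # made-up targets
w = rng.normal(size=3)
b = 0.5

dw, db = analytic_grad(w, b, X, y)

# Finite-difference estimate of dJ/dw_0 taken directly from the cost
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0] += eps
w_minus[0] -= eps
num_dw0 = (cost(w_plus, b, X, y) - cost(w_minus, b, X, y)) / (2 * eps)

print(np.allclose(dw[0], num_dw0))  # True -> the 1/2 really does cancel
```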

Thank you Shanup. This helps!!