Logistic Regression's decision boundary (Week 2)

Hello learners and mentors,

I would like to confirm my understanding of logistic regression's decision boundary. Is it a straight line? In other words, is the hypothesis that decides whether the output is 0 or 1 a linear function? Can I say so?

The logistic regression model we have implemented has one layer and one unit, so it is a perceptron. And a perceptron can only learn linear problems, no matter what the activation function is (sigmoid, which is non-linear, in this case).

It's quite confusing. If we consider our model as solving a regression problem that outputs a continuous value in the range [0, 1], then it can learn a non-linear function. But when we apply it to binary classification, its output turns linear (the decision boundary).

I hope someone can clear this up for me. Thank you.

Keeping it simple, suppose that your dataset has two features: x_1 and x_2. In a binary classification task using logistic regression, you are predicting a probability \hat{y}. If \hat{y}\geq 0.5, then we predict that the example belongs to one of the two possible classes (e.g. an image of a cat). If \hat{y} < 0.5, then we predict that it belongs to the other class (e.g. not-cat).
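
For concreteness, here is a minimal sketch of that prediction rule (assuming NumPy; the weights `w`, bias `b`, and the example `x` below are made-up values, not anything from the course notebooks):

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned parameters for a 2-feature model
w = np.array([1.5, -2.0])  # weights for x_1 and x_2
b = 0.25                   # bias

x = np.array([0.8, 0.3])           # one example (x_1, x_2)
y_hat = sigmoid(np.dot(w, x) + b)  # predicted probability

prediction = 1 if y_hat >= 0.5 else 0  # threshold at 0.5
print(y_hat, prediction)               # ~0.70 -> class 1
```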

Note that logistic regression, in general, can be written as follows:

\log\left(\frac{\hat{y}}{1-\hat{y}} \right) = \mathrm{w}^T x + \mathrm{b}.
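
If it helps, here is one way that algebra goes (the standard derivation, nothing specific to the course notebooks), starting from the sigmoid definition \hat{y} = \frac{1}{1 + e^{-(\mathrm{w}^T x + \mathrm{b})}}:

e^{-(\mathrm{w}^T x + \mathrm{b})} = \frac{1-\hat{y}}{\hat{y}} \quad\Longrightarrow\quad \log\left(\frac{\hat{y}}{1-\hat{y}} \right) = \mathrm{w}^T x + \mathrm{b}.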

Or, as in the present 2-feature example,

\log\left(\frac{\hat{y}}{1-\hat{y}} \right) = \mathrm{w}_1 x_1 + \mathrm{w}_2 x_2 + \mathrm{b}.

But don't trust me on this; work through the algebra and verify! :smiley: Then observe that for \hat{y} equal to any constant (between zero and one), the equation is clearly linear in x_1 and x_2. For \hat{y} = 0.5, the left-hand side is \log(1) = 0, so it becomes

0 = \mathrm{w}_1 x_1 + \mathrm{w}_2 x_2 + \mathrm{b}.

Plotted in x_1 \mbox{-} x_2 space, it is visualized as the decision boundary separating the examples predicted to be of one class from those of the other.
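
If you want to see this for yourself, here is a quick sketch that draws the line (assuming NumPy and Matplotlib; w_1, w_2, and b are again made-up values, with w_2 non-zero so we can solve for x_2):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fitted parameters (w2 must be non-zero to solve for x2)
w1, w2, b = 1.5, -2.0, 0.25

# Rearranging w1*x1 + w2*x2 + b = 0 gives x2 = -(w1*x1 + b) / w2
x1 = np.linspace(-3, 3, 100)
x2 = -(w1 * x1 + b) / w2

plt.plot(x1, x2, label=r"decision boundary ($\hat{y} = 0.5$)")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.legend()
plt.show()
```

Whatever values the weights take, the result is always a straight line, which is exactly why a single sigmoid unit can only separate the two classes linearly.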

Hope this helps!