So I read that classification can be solved with a linear model, and in most cases a non-linear one (sigmoid).
If the problem can be solved by a linear model and there are very small differences between the classes, we can use SVMs, right?
Also, sigmoid is good for fitting datasets like this.
Not exactly. The model for classification isn’t linear, because it includes a non-linear activation function.
Sigmoid is a poor choice for a general linear model, because it will only fit specific sets of data.
The function in the image is not a linear regression, because the line of best fit is curved.
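To make the "linear part plus non-linear activation" point concrete, here is a minimal sketch of a logistic-regression-style prediction: the model computes a linear combination z = w·x + b and then passes it through the sigmoid. The weight values are made up for illustration.

```python
import math

def sigmoid(z):
    # squashes any real-valued z into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    # linear part: z = w*x + b, then the non-linear sigmoid on top;
    # the sigmoid is what makes the overall model non-linear
    z = w * x + b
    return sigmoid(z)

# hypothetical weight and bias, chosen only for illustration
p = predict_proba(2.0, w=1.5, b=-1.0)  # z = 2.0, so p = sigmoid(2.0)
```

Without the sigmoid, `predict_proba` would just be linear regression; with it, the output can be read as a class probability.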
We can actually split the two classes with a horizontal line, but finding the offset of that line, y = b, is what the sigmoid is used for, right?
The standard approach is to split at >= 0.5.
There is almost never a good reason to use any other value, because 0.5 is midway between the two limiting values (0 and 1) of the sigmoid function.
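One useful consequence of that choice: since sigmoid(0) = 0.5 exactly, thresholding the probability at 0.5 is the same as checking the sign of the linear part z. A small sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def classify(z, threshold=0.5):
    # predict class 1 when the sigmoid output reaches the threshold
    return 1 if sigmoid(z) >= threshold else 0

# sigmoid(0) == 0.5, so thresholding the probability at 0.5
# is equivalent to checking whether z >= 0
for z in [-3.0, -0.1, 0.0, 0.1, 3.0]:
    assert classify(z) == (1 if z >= 0 else 0)
```

This is why the decision boundary of a sigmoid classifier is still the *linear* set of points where z = 0, even though the model itself is non-linear.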
I found this text in the C2_W3 assignment
In one of the labs this week, you trained a neural network with a single perceptron, performing forward and backward propagation. That simple structure was enough to solve a “linear” classification problem - finding a straight line in a plane that would serve as a decision boundary to separate two classes.
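The single-perceptron setup that quote describes can be sketched in a few lines of plain Python: a forward pass (sigmoid of a linear combination), a backward pass (log-loss gradient), and a gradient-descent update. The toy dataset, learning rate, and iteration count below are my own assumptions for illustration, not the assignment's actual values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy linearly separable data (assumed for illustration):
# class 1 roughly where x1 + x2 > 1
X = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8), (0.1, 0.2), (1.2, 0.5)]
y = [0, 0, 1, 1, 0, 1]

w = [0.0, 0.0]
b = 0.0
lr = 0.5  # hypothetical learning rate

for _ in range(2000):
    dw = [0.0, 0.0]
    db = 0.0
    for (x1, x2), t in zip(X, y):
        a = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward propagation
        err = a - t                              # dL/dz for log loss
        dw[0] += err * x1                        # backward propagation
        dw[1] += err * x2
        db += err
    m = len(X)
    w[0] -= lr * dw[0] / m
    w[1] -= lr * dw[1] / m
    b -= lr * db / m

# the decision boundary is the straight line w[0]*x1 + w[1]*x2 + b = 0
preds = [1 if sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5 else 0 for x1, x2 in X]
```

After training, `preds` matches `y` on this separable toy set, and the learned boundary is exactly the kind of straight line the quoted text refers to.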
I know that a linear regressor is also sometimes called a classifier.