C1_W2_Logistic Regression Video_Doubt about Linear Regression

Hi, Darsh Gandhi.

While doing linear regression, we usually find parameters a1, a2, …, an and a bias b such that they define a hyperplane in n-dimensional space: a1 x1 + a2 x2 + a3 x3 + … + an xn + b = 0. This fits the n-dimensional data. Of course, not all data are linearly separable, so we use neural networks, which can create much more complex decision boundaries. The given figure gives an idea of how linear and logistic regression work:
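
As a minimal sketch of how those parameters a1…an and b can be found (assuming NumPy and a made-up toy dataset; the course itself derives them with gradient descent, but ordinary least squares gives the same idea in a few lines):

```python
import numpy as np

# Toy dataset: 100 samples, 3 features (hypothetical numbers for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=100)

# Append a column of ones so the bias b is learned alongside a1..an
X_aug = np.hstack([X, np.ones((100, 1))])

# Least-squares solution: minimizes ||X_aug @ params - y||^2
params, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
a, b = params[:-1], params[-1]
print("coefficients a:", a, "bias b:", b)
```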

In the case of logistic regression, it tries to find a hyperplane in the input space (one of the best explanations, provided by Paul sir). Logistic regression in a neural network makes a clear-cut demarcation between what is True and what is not True (for example, in the case of spam mail, it decides whether a mail is spam or not).
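
A minimal sketch of that demarcation (the names `w`, `b`, and the 0.5 threshold here are illustrative choices, not from the original post; the sigmoid unit is used exactly this way in the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Returns True (e.g. 'spam') when x lies on the positive side of the
    hyperplane w.T x + b = 0, i.e. when sigmoid(w.T x + b) > 0.5."""
    return sigmoid(np.dot(w, x) + b) > 0.5
```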

Here, the calculation finds the coefficients and the bias of a linear transformation that give us the minimal cost. Thus, we can express the decision boundary of a logistic regression, including the coefficients and the bias, as wᵀx + b = 0.
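
A compact sketch of finding w and b by gradient descent on the cross-entropy cost (the learning rate and iteration count are arbitrary picks for illustration; the gradient formulas follow the C1_W2 derivation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=1000):
    """X: (m, n) data, y: (m,) labels in {0, 1}.
    Gradient descent on the cross-entropy cost J(w, b)."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(iters):
        a = sigmoid(X @ w + b)      # predictions for all m samples
        dz = a - y                  # dJ/dz for the cross-entropy cost
        w -= lr * (X.T @ dz) / m    # dJ/dw
        b -= lr * dz.mean()         # dJ/db
    return w, b
```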

But when the data is not linearly separable, logistic regression doesn't work well. In that case, we first do a polynomial expansion of the data and then perform logistic regression over the expanded features. Thus, the data plays a very significant role in every respect, as there is no guarantee that after performing this process we will get a hyperplane that divides the samples in the desired way.
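
A sketch of that polynomial-expansion idea, assuming scikit-learn is available (the degree-2 choice and the circular toy data are just for illustration):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy circular data: not separable by any straight line in (x1, x2)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# Degree-2 expansion adds x1^2, x1*x2, x2^2; the decision boundary is then
# a hyperplane in the expanded space (a circle back in the original space)
model = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```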