Why do we consider $$z = \vec{w} \cdot \vec{x} + b$$ when implementing a classification model using a sigmoid function?

In the video titled “Logistic regression” at the timestamp 4:28, Andrew mentioned the following:

$$z = \vec{w} \cdot \vec{x} + b$$

Why is that?

It’s the simplest possible linear model.

And logistic regression simply takes that linear model and applies an activation function (the sigmoid) that limits the output to the range of 0 (False) to 1 (True).
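A minimal sketch of that idea in Python/NumPy, using made-up weights, bias, and input values purely for illustration (they are not from the lecture):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example values (not from the course)
w = np.array([0.5, -1.2])   # weight vector, one entry per feature
b = 0.3                     # bias term
x = np.array([2.0, 1.0])    # a single input example

z = np.dot(w, x) + b        # the linear part: z = w . x + b  (here z = 0.1)
prob = sigmoid(z)           # logistic regression output, interpreted as P(y = 1 | x)

print(z, prob)              # prints 0.1 and roughly 0.525
```

So the linear expression $z = \vec{w} \cdot \vec{x} + b$ is just the input to the sigmoid; the sigmoid turns that unbounded number into a value between 0 and 1 that can be read as a class probability.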


Got it! Thanks @TMosh :sparkles: