# C1_W3 Non-linear decision boundaries

In the picture above, I don’t understand why y_hat = 1 inside the ellipse (or the more complex-shaped decision boundary) and y_hat = 0 outside both decision boundaries, instead of the other way around.

Actually, I saw the same question asked before, but I still have a question.
In the case of z = w1x1 + w2x2 + b, if x1 = 0 and x2 = 0, then z is determined by b. And because the shape is an ellipse, b should be negative; if b were positive, every z would be positive. So why is y_hat = 1 inside the ellipse?

Can someone please help out and explain? Thanks!


Which region is 1 vs 0 depends on the labels in the training set.

You also cannot assume that w or b is always positive. They are real numbers, they can be positive or negative.


Thanks


Oh, I got it. Thanks for answering!!

The `y_hat = 1` and `y_hat = 0` are just examples here, and you can switch them around depending on how you choose to label `y_hat`.

For example, say you are trying to set up a decision boundary for a circle with `z = w1 * x1^2 + w2 * x2^2` using logistic regression. So you’ll need to apply the sigmoid activation function:

`y_predict = sigmoid(w1 * x1^2 + w2 * x2^2)`

What you’ll need next is to determine the threshold, and to make things easier in the lectures, we usually choose the threshold to be 0.5.

if `y_predict` > 0.5, then `y_hat = 0`
if `y_predict` <= 0.5, then `y_hat = 1`

In that case, the model is basically predicting `y_hat = 0` if:

`y_predict > 0.5`
`sigmoid(w1 * x1^2 + w2 * x2^2) > 0.5`
`w1 * x1^2 + w2 * x2^2 > 0`

(The last step follows because `sigmoid(0) = 0.5` and sigmoid is strictly increasing, so `sigmoid(z) > 0.5` is equivalent to `z > 0`.)
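The threshold equivalence can be checked numerically with a minimal sketch (plain Python, no ML libraries): `sigmoid(z)` crosses 0.5 exactly where `z` crosses 0.

```python
import math

def sigmoid(z):
    """Logistic sigmoid: maps any real z to the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# sigmoid(0) = 0.5, and sigmoid is strictly increasing, so
# sigmoid(z) > 0.5 holds exactly when z > 0.
for z in [-2.0, -0.1, 0.0, 0.1, 2.0]:
    print(f"z = {z:5.1f}  sigmoid(z) = {sigmoid(z):.3f}  "
          f"sigmoid(z) > 0.5: {sigmoid(z) > 0.5}  z > 0: {z > 0}")
```

This is why comparing `y_predict` to the 0.5 threshold is the same as checking the sign of `z`, whatever features make up `z`.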

And that’s the example you see on the slide above. Above, we chose `y_hat = 0` if `y_predict` > 0.5. However, you can choose the opposite:

if `y_predict` > 0.5, then `y_hat = 1`
if `y_predict` <= 0.5, then `y_hat = 0`

It’s your decision as the person designing the model.

In order for the decision boundary to look like an ellipse, you need the squared terms, similar to the example I gave above.
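Here is a minimal sketch of such an elliptical boundary. The parameters `w1 = 1`, `w2 = 4`, `b = -1` are hypothetical values chosen purely for illustration (note `b` is negative, as the original question observed), and this sketch uses the `y_hat = 1` when `y_predict > 0.5` convention:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical parameters for illustration only: the boundary z = 0
# is the ellipse x1^2 + 4 * x2^2 = 1.
w1, w2, b = 1.0, 4.0, -1.0

def predict(x1, x2, threshold=0.5):
    """Label a point using z = w1*x1^2 + w2*x2^2 + b and a sigmoid threshold."""
    z = w1 * x1**2 + w2 * x2**2 + b
    return 1 if sigmoid(z) > threshold else 0

print(predict(0.0, 0.0))  # inside the ellipse:  z = -1 < 0 -> y_hat = 0
print(predict(2.0, 0.0))  # outside the ellipse: z =  3 > 0 -> y_hat = 1
```

With these (made-up) parameters, points inside the ellipse get `y_hat = 0` and points outside get `y_hat = 1`; flipping the labeling convention, or the signs of the weights, swaps the two regions, which is the point of this thread.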

You have made a very good point! You are the first one here to raise it!

Oh, I meant z = w1x1^2 + w2x2^2 + b. I wrote it wrong.

So you mean that the key point of this lecture is that the decision boundary acts as the baseline for prediction?

Oh, it was just a minor mistake! Thanks for answering me!
