In the picture above, I don’t understand why y_hat = 1 inside the ellipse (or the more complex-shaped decision boundary) and y_hat = 0 outside both decision boundaries, rather than the other way around.

Actually I saw the same question before, but I still have a question.
In the case of z = w1x1 + w2x2 + b, if we consider x1 = 0 and x2 = 0, then z is determined by b. And because the shape is an ellipse, b should be negative; if b were positive, every z would be positive. So why is y_hat = 1 inside the ellipse?
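To make the sign argument concrete, here is a small sketch using quadratic features (as in the circle example from the lectures) with made-up weights and bias. It shows that at the origin z reduces to b, and that a negative b is what makes z change sign as you cross the boundary:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical positive weights and a negative bias (assumed values)
w1, w2, b = 1.0, 1.0, -4.0  # boundary: x1^2 + x2^2 = 4, a circle of radius 2

def z(x1, x2):
    return w1 * x1**2 + w2 * x2**2 + b

print(z(0.0, 0.0))             # at the origin, z = b = -4.0 (negative: inside)
print(z(3.0, 0.0))             # well outside: 9 - 4 = 5.0 (positive)
print(sigmoid(z(0.0, 0.0)))    # inside, sigmoid(z) is below 0.5
```

So with b negative, z < 0 (and sigmoid(z) < 0.5) everywhere inside the boundary, and z > 0 outside; whether the inside region is called y_hat = 1 or y_hat = 0 is then just a labeling choice.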

The y_hat = 1 and y_hat = 0 here are just examples; you can switch them around depending on how you choose to label y_hat.

For example, say you are trying to set up a decision boundary for a circle with z = w1 * x1^2 + w2 * x2^2 + b (with b negative, so that z can change sign across the circle) using logistic regression. So you’ll need to apply the sigmoid activation function:

y_predict = sigmoid(w1 * x1^2 + w2 * x2^2 + b)

What you’ll need next is to determine the threshold, and to make things easier in the lectures, we usually choose the threshold to be 0.5.

if y_predict > 0.5, then y_hat = 0
if y_predict <= 0.5, then y_hat = 1

(Note this is the reverse of the usual convention; with b negative it labels the points inside the boundary y_hat = 1, matching the picture.)
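Putting the pieces together, here is a minimal sketch of this thresholding. The weights and bias are made-up values (a bias b is assumed so the boundary is a circle of radius 1), and it uses the flipped labeling from above, where y_hat = 1 corresponds to y_predict <= 0.5:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical weights and a negative bias (assumed values)
w1, w2, b = 1.0, 1.0, -1.0  # boundary: x1^2 + x2^2 = 1

def predict(x1, x2):
    y_predict = sigmoid(w1 * x1**2 + w2 * x2**2 + b)
    # flipped labeling: y_hat = 1 inside the circle, y_hat = 0 outside
    return 0 if y_predict > 0.5 else 1

print(predict(0.0, 0.0))  # inside the circle  -> 1
print(predict(2.0, 0.0))  # outside the circle -> 0
```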

In that case, the model is basically predicting y_hat = 0 if: