Sigmoid function & Decision Boundary

I understand that the logistic regression model is represented by the sigmoid function, and that the decision boundary is the boundary below which an example is classified into one class and above which it is classified into the other. My question is: which of the two, the sigmoid function or the decision boundary, is used to make predictions? And if the sigmoid is used, its outputs range between zero and one without ever being exactly zero or one, so how are they turned into discrete zeros or ones?

The sigmoid is used for predictions. If its output is >= 0.5, the prediction is 1; otherwise it is 0.
The implementation is a simple logical comparison.
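As a sketch of that logical comparison (the weights, bias, and data here are made-up values for illustration, not the ones from the lab):

```python
import numpy as np

def sigmoid(z):
    # Map any real-valued z to the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, w, b, threshold=0.5):
    # Linear combination z = X.w + b, squashed through the sigmoid
    probs = sigmoid(X @ w + b)
    # The comparison turns continuous probabilities into discrete 0/1 labels
    return (probs >= threshold).astype(int)

# Toy two-feature example with hypothetical parameters
X = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5], [3.0, 0.5]])
w = np.array([1.0, 1.0])
b = -3.0
print(predict(X, w, b))  # array of discrete 0/1 predictions
```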

Then what is the purpose of the decision boundary?

0.5 is the decision boundary in probability space. It is what decides between 0 and 1.

The decision boundary is where wx + b = 0, so it is the orange line shown in the lab. We get it by setting z equal to zero most of the time. That's what I understood.
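To make that concrete: with two features x0 and x1, setting z = 0 can be solved for x1, which gives the line to plot. The weights and bias here are hypothetical, not the lab's values:

```python
import numpy as np

# Hypothetical weights and bias for a two-feature model (x0, x1)
w = np.array([1.0, 1.0])
b = -3.0

# Setting z = w[0]*x0 + w[1]*x1 + b = 0 and solving for x1
# gives the boundary line: x1 = -(w[0]*x0 + b) / w[1]
x0 = np.linspace(0, 3, 4)
x1_boundary = -(w[0] * x0 + b) / w[1]
print(x1_boundary)  # points on the line where the model outputs exactly 0.5
```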

Yes, I agree. The line in that plot is where the predicted y values are exactly 0.5. Notice that the plot has two features on the 2D axes (x0 and x1).

The earlier plot has only one feature, and the vertical axis shows the y values. That means the boundaries are drawn differently in the two plots.

That also works, since if z = 0, it’s the same as sigmoid(z) = 0.5.
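This equivalence is easy to check directly, since sigmoid(0) = 1 / (1 + e^0) = 1/2:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# z = 0 is exactly the point where the sigmoid outputs 0.5,
# so z = 0 and sigmoid(z) = 0.5 describe the same boundary
print(sigmoid(0.0))  # 0.5
```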