# A doubt in C1_W3_Lab06 Gradient descent for logistic regression

To find the decision boundary, we set z = 0

Let w = w_out, b = b_out (by following the code in this lab)

z = w_out[0]*x0 + w_out[1]*x1 + b_out

Let z = 0
w_out[0]*x0 + w_out[1]*x1 + b_out = 0

Solving for x0 and x1, I get:
x0 = -(w_out[1]*x1 + b_out)/w_out[0]
x1 = -(w_out[0]*x0 + b_out)/w_out[1]
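The rearrangement above can be checked numerically. This is a minimal sketch; the `w_out` and `b_out` values below are made up for illustration (in the lab they come from training):

```python
import numpy as np

# Hypothetical trained values, for illustration only
w_out = np.array([2.0, 3.0])
b_out = -6.0

# Pick any x1, then solve for x0 on the boundary with the rearranged formula
x1 = 1.5
x0 = -(w_out[1] * x1 + b_out) / w_out[0]

# The point (x0, x1) should satisfy z = 0
z = w_out[0] * x0 + w_out[1] * x1 + b_out
print(abs(z) < 1e-12)  # True: the point lies on the decision boundary
```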

However, the model answer shown in this lab is different.

Thank you.

Hi @JJaassoonn ,

w_out and b_out are the optimal values found after training, i.e., the outputs from calling gradient_descent(). What is your reason for using these values to calculate z?

Dear Mr Kin Cheung,

The lab materials show that I have to use the given formulas to plot the decision boundary after getting the optimal values of w_out and b_out.

My doubt is about how to derive the formulas given in the lab material:

x0 = -b_out/w_out[0]
x1 = -b_out/w_out[1]

These formulas look strange to me because they are totally different from what I have learnt in the lecture slides.

Thank you.

Hello @JJaassoonn

Those two equations find the two intercept points of the boundary, so the plotting code knows where to draw the boundary line.

I will leave it to you to think about why those two equations can get you the intercept points’ locations.
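(For readers who want to verify the hint numerically: setting one coordinate to zero in z = 0 yields each intercept. A minimal sketch, with made-up `w_out` and `b_out` values rather than the lab's trained ones:)

```python
import numpy as np

# Hypothetical trained values, for illustration only
w_out = np.array([2.0, 3.0])
b_out = -6.0

# Intercept with the x1 = 0 axis: solve w_out[0]*x0 + b_out = 0
x0_intercept = -b_out / w_out[0]
# Intercept with the x0 = 0 axis: solve w_out[1]*x1 + b_out = 0
x1_intercept = -b_out / w_out[1]

# Both intercept points lie on the decision boundary (z = 0)
z_a = w_out[0] * x0_intercept + w_out[1] * 0 + b_out
z_b = w_out[0] * 0 + w_out[1] * x1_intercept + b_out
print(z_a == 0.0 and z_b == 0.0)  # True
```

Joining the two intercept points with a straight line reproduces the boundary plot from the lab.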

Cheers,
Raymond

Dear Mr Raymond Kwok,

Thank you so much for your guidance.

No problem!

Cheers,
Raymond