Week 3 Lab 6: Decision Boundary for Logistic Regression

I am unable to understand the following code snippet to plot the decision boundary in the gradient descent for logistic regression exercise:

# Plot the decision boundary
x0 = -b_out/w_out[0]
x1 = -b_out/w_out[1]
ax.plot([0, x0], [x1, 0], c=dlc["dlblue"], lw=1)
plt.show()

I assume the model is f(x) = w0*x0 + w1*x1 + b.
Considering this, how are we equating x0 = -b_out/w_out[0]?

Can someone please explain?
Thanks!

Hi @ankitprashnani,

Yes, the model is f(x) = w0*x0 + w1*x1 + b. Here, x0 and x1 are the two features.

However, in x0 = -b_out/w_out[0], the x0 on the left represents an intercept value: the value of x0 on the decision boundary when x1 = 0. The same goes for x1 = -b_out/w_out[1]. If you prefer, we can rename the variables in the two lines of code to make this explicit:

x0_intercept = -b_out/w_out[0] # setting x1 = 0
x1_intercept = -b_out/w_out[1] # setting x0 = 0
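
To see where these formulas come from: the decision boundary is the set of points where f(x) = 0, i.e.

w0*x0 + w1*x1 + b = 0

Setting x1 = 0 gives w0*x0 + b = 0, so x0 = -b/w0. Likewise, setting x0 = 0 gives x1 = -b/w1. These are the two points where the boundary line crosses the axes.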

Also, the equal signs here do not mean "equal to"; they mean "assigned to". So instead of saying

how are we equating x0 = -b_out/w_out[0]

it should be: we are assigning the value of -b_out/w_out[0] to a variable called x0.

Why do we need to calculate the two intercept values? Because the decision boundary is a straight line, and two points are enough to plot a straight line. The two axis intercepts are simply the easiest points to compute.
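
If it helps, here is a minimal, self-contained sketch of the same plotting idea. It is not the course's exact code: the values of w_out and b_out are made up for illustration, and a plain matplotlib colour stands in for dlc["dlblue"].

import numpy as np
import matplotlib.pyplot as plt

w_out = np.array([2.0, 3.0])    # example trained weights (assumed values)
b_out = -6.0                    # example trained bias (assumed value)

x0_intercept = -b_out/w_out[0]  # boundary crosses the x0 axis here (x1 = 0)
x1_intercept = -b_out/w_out[1]  # boundary crosses the x1 axis here (x0 = 0)

fig, ax = plt.subplots()
# draw the straight line through the two intercept points (0, x1_intercept) and (x0_intercept, 0)
ax.plot([0, x0_intercept], [x1_intercept, 0], c="blue", lw=1)
ax.set_xlabel("x0")
ax.set_ylabel("x1")
plt.show()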

Cheers,
Raymond
