Cost function returning different answers

In the Optional Lab: Multiple Variable Linear Regression (Lab02, Week 02), there is a cost function implemented for a given dataset. When I execute the kernel cell, I get:

Cost at optimal w : 1.5578904045996674e-12

Whereas when I just create a Python file and write the same function with the same initial and training values, the output I get is this:

Cost at optimal w : 2.0487331486181772

Not sure what’s wrong.

This is the code that I wrote:

import numpy as np

x_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]])
y_train = np.array([460, 232, 178])
m = x_train.shape[0]   # number of training examples
n = x_train.shape[1]   # number of features

b_init = 785.1811367994083
w_init = np.array([0.39, 18.75, -53.36, -26.42])


def compute_cost(X, y, w, b):
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        f_wb_i = np.dot(X[i], w) + b       # prediction for example i
        cost = cost + (f_wb_i - y[i]) ** 2
    cost = cost / (2 * m)
    return cost


cost = compute_cost(x_train, y_train, w_init, b_init)
print(f'Cost at optimal w : {cost}')
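
For reference, with only 3 training examples and 4 weights plus a bias, an exact fit exists for this dataset, so a truly optimal w and b should give a cost of essentially zero, which is the ballpark of the lab's ~1e-12. A minimal self-contained check (np.linalg.lstsq here is just my own sanity check, not the lab's code):

import numpy as np

x_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]], dtype=float)
y_train = np.array([460.0, 232.0, 178.0])
m = x_train.shape[0]

# Solve [X, 1] @ [w, b] = y exactly: 3 equations, 5 unknowns, so an exact fit exists
A = np.c_[x_train, np.ones(m)]
params = np.linalg.lstsq(A, y_train, rcond=None)[0]
w_exact, b_exact = params[:-1], params[-1]

# Same cost formula as compute_cost above, written in vectorized form
cost = np.sum((x_train @ w_exact + b_exact - y_train) ** 2) / (2 * m)
print(cost)  # essentially zero, limited only by floating-point precision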

What would happen if you use the same set of w_init as the lab?

Long story short, I found my mistake.

The issue was that when I wrote the logic myself I had made a mistake; I later figured it out and fixed it, but I forgot to initialize w with its full precision.

Also, in the kernel I had just printed w_init, and it was displayed rounded to 2 decimal places, so I had copied that rounded value to see if it would change anything.
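
In case anyone else hits this: NumPy can display an array rounded while still storing the full-precision values, for example if the notebook sets a low display precision somewhere (that is just my guess for why I saw 2 decimals). A small sketch with made-up values:

import numpy as np

np.set_printoptions(precision=2)            # hypothetical display setting, my guess at the cause
w = np.array([0.123456789, -9.87654321])    # made-up values, purely for illustration
print(w)                                    # prints [ 0.12 -9.88] -- only the display is rounded

# To see (and safely copy) the full-precision values:
np.set_printoptions(precision=8)            # raise the display precision again
print(w)                                    # prints [ 0.12345679 -9.87654321]
print(w.tolist())                           # or print them as plain Python floats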

But yeah, I found the exact issue. Thanks for your time once again, Raymond :saluting_face: :smiling_face:

No problem, @Tarun_Kumar_S. The rounding contributes roughly 1% of error, which translates to about ~2 when the first feature of X is multiplied by its weight, and that ~2 is in the same order of magnitude as the difference between your two costs, so that's how I spotted it. Hope that will be useful for you in the future :wink:
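
If it helps to see that effect in numbers, here is a minimal sketch using your dataset. I don't reproduce the lab's w_init here, so I use an exact-fit w from least squares as a stand-in for a full-precision optimum; rounding it to 2 decimals moves the cost from essentially zero to a noticeably nonzero value, for the same reason as in your case:

import numpy as np

x_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]], dtype=float)
y_train = np.array([460.0, 232.0, 178.0])
m = x_train.shape[0]

def compute_cost(X, y, w, b):
    return np.sum((X @ w + b - y) ** 2) / (2 * X.shape[0])

# Stand-in for a full-precision optimum (exact fit on this tiny dataset);
# the lab's actual w_init differs, but the effect of rounding is the same idea.
A = np.c_[x_train, np.ones(m)]
params = np.linalg.lstsq(A, y_train, rcond=None)[0]
w_full, b_full = params[:-1], params[-1]

w_rounded = np.round(w_full, 2)   # what copying a 2-decimal printout gives you

print(compute_cost(x_train, y_train, w_full, b_full))     # essentially zero
print(compute_cost(x_train, y_train, w_rounded, b_full))  # noticeably nonzero
print(x_train[0] * (w_full - w_rounded))                   # per-feature shift in the first prediction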

Cheers,
Raymond
