Hello.

So, when I finished the theory videos of week one, I went ahead and started the labs, and I also found other code from the same course that I tried to work through.

One of the exercises I tried was the following:

ml-coursera-python-assignments/Exercise1 at master · dibgerge/ml-coursera-python-assignments (github.com). After finishing that assignment, the next one I did was lab05 of C1_W1, from the course's optional labs.

My issue is that, after working for some time on the first exercise I linked and then moving on to lab05, I found a lot of differences. For example:

In the first exercise, this is how I was asked to compute the cost:

```
import numpy as np

def computeCost(X, y, theta):
    # initialize some useful values
    m = y.size  # number of training examples
    # You need to return the following variables correctly
    J = (1 / (2 * m)) * np.sum(np.square(X.dot(theta) - y))
    return J
```

But in lab05 I was asked to do it differently:

```
def compute_cost(x, y, w, b):
    m = x.shape[0]
    cost = 0
    for i in range(m):
        f_wb = w * x[i] + b
        cost = cost + (f_wb - y[i]) ** 2
    total_cost = 1 / (2 * m) * cost
    return total_cost
```
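As far as I can tell, the two functions compute the same cost: the looped lab05 version with scalar `w` and `b` matches the vectorized GitHub version once `theta` stacks `[b, w]` and `X` gets a leading column of ones. Here is a small check with made-up toy data (`x`, `y`, `w`, `b` are my own example values, not from either assignment):

```python
import numpy as np

# Hypothetical toy data: x is a 1-D feature, y the target.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
w, b = 1.5, 0.5

# Looped style (lab05): scalar parameters, explicit sum over examples.
def compute_cost(x, y, w, b):
    m = x.shape[0]
    cost = 0
    for i in range(m):
        f_wb = w * x[i] + b
        cost = cost + (f_wb - y[i]) ** 2
    return 1 / (2 * m) * cost

# Vectorized style (GitHub assignment): theta = [b, w], and X has an
# intercept column of ones so that X @ theta == w * x + b element-wise.
def computeCost(X, y, theta):
    m = y.size
    return (1 / (2 * m)) * np.sum(np.square(X.dot(theta) - y))

X = np.column_stack([np.ones_like(x), x])  # prepend the intercept column
theta = np.array([b, w])

print(np.isclose(compute_cost(x, y, w, b), computeCost(X, y, theta)))  # True
```

So the difference seems to be notation and vectorization, not the underlying math.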

These differences are not limited to these specific functions: the way gradient descent is computed in lab05 and the way it is asked to be implemented in the GitHub assignment are totally different. So my questions are:

Which is the proper way to implement gradient descent, code-wise?

Should I always use and implement `w` and `b` as separate parameters?

I'm a little bit confused after doing these two assignments.
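To make the gradient-descent part of my question concrete, this is what I mean by the two styles (a sketch with made-up data and my own hyperparameters, just to show that the updates are the same math):

```python
import numpy as np

# Hypothetical toy data on a perfect line y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

# Style 1 (lab05): scalar w and b, explicit loops over examples.
def gradient_descent_wb(x, y, alpha=0.01, iters=1000):
    m = x.shape[0]
    w, b = 0.0, 0.0
    for _ in range(iters):
        dj_dw = sum((w * x[i] + b - y[i]) * x[i] for i in range(m)) / m
        dj_db = sum((w * x[i] + b - y[i]) for i in range(m)) / m
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b

# Style 2 (GitHub assignment): theta = [b, w], fully vectorized,
# with an intercept column of ones prepended to X.
def gradient_descent_theta(X, y, alpha=0.01, iters=1000):
    m = y.size
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= (alpha / m) * X.T.dot(X.dot(theta) - y)
    return theta

X = np.column_stack([np.ones_like(x), x])
w, b = gradient_descent_wb(x, y)
theta = gradient_descent_theta(X, y)
print(np.allclose([b, w], theta))  # True: both styles take identical steps
```

Both loops apply the same update rule, so starting from the same initialization they land on the same parameters; the only difference I can see is whether the intercept lives in `b` or in `theta[0]`.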