C1_W2_Linear_Regression: vectorised approach

Hi,

I’ve submitted the C1_W2_Linear_Regression lab and would like to share with you some thoughts about the approach I’ve taken.

As we learnt in weeks 1 and 2, the linear regression model’s predictions can be calculated with a for loop as:

m = x.shape[0]
f_wb = np.zeros(m)   # pre-allocate the predictions

for i in range(m):
    f_wb[i] = w * x[i] + b

Or, more efficiently, with the NumPy vectorised function np.dot as:

f_wb = np.dot(x, w) + b
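
As a quick sanity check, here is a minimal, self-contained sketch with made-up data (the values of x, w and b below are only illustrative, not from the lab) showing that both versions produce the same predictions:

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # example inputs (made up for illustration)
w, b = 2.0, 1.0                 # example parameters (made up for illustration)
m = x.shape[0]

# loop version
f_wb_loop = np.zeros(m)
for i in range(m):
    f_wb_loop[i] = w * x[i] + b

# vectorised version
f_wb_vec = np.dot(x, w) + b

print(np.allclose(f_wb_loop, f_wb_vec))   # prints True: identical predictions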

The cost function, which is half the mean squared error (MSE), is calculated as:

cost = 0

for i in range(m):
    f_wb = np.dot(x[i], w) + b        # prediction for example i
    cost += (f_wb - y[i]) ** 2
total_cost = cost / (2 * m)

By using another vectorised function, np.sum, we no longer need to calculate the summation in a for loop. Therefore, it’s more efficient and easier to write and read.
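
For reference, here is a minimal, self-contained sketch of what the vectorised cost might look like, assuming x has shape (m, n), w has shape (n,) and y has shape (m,); the example values below are made up:

import numpy as np

x = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])   # example inputs (made up)
y = np.array([5.0, 8.0, 11.0])                        # example targets (made up)
w = np.array([1.0, 2.0])                              # example weights (made up)
b = 0.0
m = x.shape[0]

f_wb = np.dot(x, w) + b                           # predictions for all m examples at once
total_cost = np.sum((f_wb - y) ** 2) / (2 * m)    # summation handled by np.sum, no loop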

Lastly, the gradients for gradient descent are calculated as:

m, n = x.shape
dj_dw = np.zeros(n)   # one partial derivative per feature
dj_db = 0.0

for i in range(m):
    err = (np.dot(x[i], w) + b) - y[i]
    for j in range(n):
        dj_dw[j] += err * x[i, j]
    dj_db += err
dj_dw = dj_dw / m
dj_db = dj_db / m

Let’s see how we can get rid of each for loop:

  1. The for loop of the gradient of b:

    for i in range(m):
        dj_db += err
    

    is replaced by the vectorised sum np.sum(err), as mentioned previously.

  2. The nested for loop of the gradient of w:

    for i in range(m):
        for j in range(n):
            dj_dw[j] += err * x[i, j]
    

    is replaced by the dot product of the transpose of x and the error vector, i.e. np.dot(x.T, err).

By following these steps, we calculate the gradients with just a handful of variables and vectorised functions, as sketched below.
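
Putting it together, here is a minimal sketch of the fully vectorised gradient computation, with the same assumed shapes and made-up example values as above:

import numpy as np

x = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])   # example inputs (made up)
y = np.array([5.0, 8.0, 11.0])                        # example targets (made up)
w = np.array([0.5, 1.0])                              # example weights (made up)
b = 0.0
m = x.shape[0]

err = (np.dot(x, w) + b) - y      # error for every example, shape (m,)
dj_dw = np.dot(x.T, err) / m      # gradient w.r.t. w, shape (n,)
dj_db = np.sum(err) / m           # gradient w.r.t. b, scalar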

I’ve done more than the exercise asked for, but I’ve learnt a lot of new things along the way.


You are sharing an older version of the same assignment, where you tried avoiding the for loop; both surely yield the same results.

Great work, @vgrz! The vectorized approach has better clarity and thus, to me at least, is easier to debug. :raised_hands: :raised_hands: :raised_hands: Though future assignments of this MLS will still use many for loops, if you ever do the more advanced Deep Learning Specialization, you will already be better prepared for it!

Onwards!

Raymond


I’d like to point out that this is the code from the C1_W2_Lab02_Multiple_Variable_Soln notebook.


It’s good practice to always experiment with the code we work on; it gives more understanding of the concepts and more clarity.


Thank you, Raymond.

I decided to take this approach and document it here as an additional learning experience.

I requested financial aid on Coursera to do this specialisation and it was granted to me. Depending on how things go, I’ll do the same for the Deep Learning one.


That’s good! I am looking forward to your next thread.

Cheers,
Raymond