How to understand gradient descent code?

I am looking at the last optional lab for week 1 and having trouble understanding the written code. Could I please get an explanation of this code, which is supposed to calculate the gradient:

dj_dw = 0
dj_db = 0
for i in range(m):
    f_wb = w * x[i] + b               # model prediction for example i
    dj_dw_i = (f_wb - y[i]) * x[i]    # example i's contribution to dJ/dw
    dj_db_i = f_wb - y[i]             # example i's contribution to dJ/db
    dj_db += dj_db_i                  # accumulate the sums over all examples
    dj_dw += dj_dw_i
dj_dw = dj_dw / m                     # divide once, after the loop,
dj_db = dj_db / m                     # to average over all m examples

and some code in the next step:

for i in range(num_iters):
    # Calculate the gradient and update the parameters using gradient_function
    dj_dw, dj_db = gradient_function(x, y, w, b)

    # Update Parameters using equation (3) above
    b = b - alpha * dj_db
    w = w - alpha * dj_dw

From the Calculus 3 class I took last year, I learned that the gradient is what you get when you take the derivative of a function with respect to each of its variables, producing a vector-valued function as the result.
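In that notation, for the cost function $J(w, b)$ in this lab the gradient is

$$\nabla J(w, b) = \left( \frac{\partial J}{\partial w},\ \frac{\partial J}{\partial b} \right),$$

i.e. exactly the two numbers dj_dw and dj_db that the code computes.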

In words, the steps I learned are: first find where the gradient is zero, then take the second derivative to check whether each such point is a maximum or a minimum, depending on whether it is greater or less than zero. And then, to find the absolute maximum, repeat these steps for all the zeroes of the partial derivatives. So how does this code check whether the cost is actually decreasing before the updated value is assigned back to the variable?

The calculus has already been performed; what you have to implement is the resulting equations for the gradients.
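For reference, for the linear model and squared-error cost used in week 1, those equations are (my transcription of the standard formulas; the lab may number them differently):

$$\frac{\partial J(w,b)}{\partial w} = \frac{1}{m}\sum_{i=0}^{m-1} \left( f_{w,b}(x^{(i)}) - y^{(i)} \right) x^{(i)}$$

$$\frac{\partial J(w,b)}{\partial b} = \frac{1}{m}\sum_{i=0}^{m-1} \left( f_{w,b}(x^{(i)}) - y^{(i)} \right)$$

where $f_{w,b}(x^{(i)}) = w x^{(i)} + b$. The loop accumulates the bracketed term for each training example, and the two divisions by m at the end turn the sums into averages.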

Basically, the lab first computes the cost function, which measures the error of the current prediction; the gradient descent function then uses the partial derivatives of that cost to update the parameters on each iteration, so the error shrinks step by step.
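To your question about checking that the cost is decreasing: plain gradient descent never performs such a check. It simply steps both parameters in the negative-gradient direction, and for a small enough learning rate alpha each step will not increase a convex cost like this one. The usual way to see it is to log the cost as you iterate. Below is a minimal, self-contained sketch of the whole procedure (my own illustration, not the lab's exact code; the names compute_gradient and compute_cost and the toy data are assumptions of mine):

import numpy as np

def compute_gradient(x, y, w, b):
    # Gradient of the squared-error cost for the model f(x) = w*x + b.
    m = x.shape[0]
    dj_dw, dj_db = 0.0, 0.0
    for i in range(m):
        err = (w * x[i] + b) - y[i]   # prediction error for example i
        dj_dw += err * x[i]
        dj_db += err
    return dj_dw / m, dj_db / m

def compute_cost(x, y, w, b):
    # Mean squared error with the conventional 1/(2m) factor.
    m = x.shape[0]
    return np.sum((w * x + b - y) ** 2) / (2 * m)

def gradient_descent(x, y, w, b, alpha, num_iters):
    for i in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w = w - alpha * dj_dw         # step opposite the gradient
        b = b - alpha * dj_db
        if i % 100 == 0:
            # Logging the cost is how you verify it is decreasing.
            print(f"iter {i:4d}: cost = {compute_cost(x, y, w, b):.6f}")
    return w, b

# Toy data generated from y = 2x + 1 (made up for illustration).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])
w, b = gradient_descent(x, y, w=0.0, b=0.0, alpha=0.05, num_iters=1000)
print(f"learned w = {w:.3f}, b = {b:.3f}")   # approaches w = 2, b = 1

If you run this, the printed cost falls steadily; with a learning rate that is too large it would oscillate or blow up instead, which is why the course stresses choosing alpha carefully. Note also that there is no second-derivative test anywhere: this cost is convex with a single minimum, so repeatedly stepping downhill is enough.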

Hope this helps you understand!

Happy Learning!

Regards
DP