No need for a 'deep copy' (or even a 'shallow copy') of `w_in`

This is not a big problem, but I found some unnecessary code in [C1_W2_Lab02_Multiple_Variable_Soln].

def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters): 
    # number of training examples
    m = len(X)
    # An array to store cost J and w's at each iteration primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in

We don't need to deep-copy `w_in` here: it is an array of floats, so a deep copy buys nothing over a shallow one. In fact, there is no need even for a shallow copy, because the function never mutates `w_in` in place; each gradient step rebinds `w` to a new array. A plain alias is enough:

    w = w_in
    b = b_in
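A minimal sketch of why the alias is safe (the function and variable names here are illustrative, not taken from the lab): an update written as `w = w - alpha * grad` rebinds `w` to a brand-new array, leaving the caller's `w_in` untouched. A copy would only matter if the function used an in-place update such as `w -= alpha * grad`.

```python
import numpy as np

def step_rebind(w_in, grad, alpha):
    w = w_in                 # alias, no copy
    w = w - alpha * grad     # rebinds w to a NEW array; w_in is untouched
    return w

def step_inplace(w_in, grad, alpha):
    w = w_in                 # alias
    w -= alpha * grad        # mutates the shared array: w_in changes too!
    return w

w0 = np.array([1.0, 2.0])
g = np.array([0.5, 0.5])

step_rebind(w0, g, 0.1)
print(w0)   # [1. 2.]       -- caller's array is unchanged

step_inplace(w0, g, 0.1)
print(w0)   # [0.95 1.95]   -- caller's array was modified
```

So as long as the lab's `gradient_descent` only ever rebinds `w`, the alias behaves exactly like the deep copy.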

Thank you, Jangsea. You are right, it is not needed. And welcome to our community!


Thanks for the welcome!


Hi all,

Nor is it necessary to explicitly load the dictionary values into temporary variables (dWaa, dWax, dWya, db, dby) and then recreate the dictionary. Looping over the dictionary directly is enough:

    def clip2(gradients, maxValue):
        for gradient in gradients:
            np.clip(gradients[gradient], xxx, xxx, out=xxx)
        return gradients
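A runnable sketch of the same idea. The concrete arguments I pass to `np.clip` are my assumption for the redacted `xxx` placeholders, following the usual clip-to-`[-maxValue, maxValue]` convention; this is not the official solution code.

```python
import numpy as np

def clip2(gradients, maxValue):
    # Clip every gradient array in place to [-maxValue, maxValue].
    # No need to unpack into dWaa, dWax, ... and rebuild the dict:
    # out= writes the clipped values back into the same array.
    for gradient in gradients:
        np.clip(gradients[gradient], -maxValue, maxValue, out=gradients[gradient])
    return gradients

grads = {"dWax": np.array([10.0, -10.0]), "db": np.array([0.5])}
clip2(grads, 5.0)
print(grads["dWax"])   # [ 5. -5.]
print(grads["db"])     # [0.5]
```

Because `out=` targets the existing arrays, the dictionary returned is the same object that was passed in, with its values clipped in place.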

I noticed this while revising the lessons. I imagine it was done for pedagogical purposes; it is not a Python specialization, after all.