This is not a big problem, but I found some unnecessary code in [C1_W2_Lab02_Multiple_Variable_Soln]:
```python
def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters):
    # number of training examples
    m = len(X)
    # An array to store cost J and w's at each iteration primarily for graphing later
    J_history = []
    w = copy.deepcopy(w_in)  # avoid modifying global w within function
    b = b_in
```
We don’t need to deep-copy w_in here: its elements are just immutable floats, so a deep copy gains nothing over a shallow one.
Actually, there is no need to copy it at all. This function never mutates w_in; the update step rebinds w to a new array instead of modifying it in place.
It’s enough to make w a plain alias, as in the sketch below.
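For example, a quick standalone check (not the lab code, just the rebinding behaviour with made-up values):

```python
import numpy as np

w_in = np.zeros(4)         # stands in for the caller's initial parameters
w = w_in                   # plain alias, no copy

alpha = 0.1
dj_dw = np.ones(4)         # pretend gradient
w = w - alpha * dj_dw      # builds a brand-new array and rebinds the local name w

print(w)                   # [-0.1 -0.1 -0.1 -0.1]
print(w_in)                # [0. 0. 0. 0.]  <- the caller's array is untouched
```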
Maybe. I am not sure, because gradient clipping is not covered by MLS (which this post belongs to), so I do not know which implementation you were referring to.
I was referring to Dinosaurus_Island_Character_level_language_model.ipynb, which doesn’t need deepcopy either. This old post from jnsp was suggested while I was drafting a new one. I should have created a separate topic so as not to confuse readers with a dictionary-copying usage that does not exist in Lab02.
It’s alright! With the assignment name, I can find it in DLS C5 W1 A2.
For that one, we do not need a copy in optimize. However, we do need a copy for clip_test, since it needs both the original values and the clipped values; otherwise the test probably would not be useful. There is an alternative that keeps clip_test useful while leaving deepcopy out of clip, but that does not really matter, because the assignment is one thing and how you build your own code is another. The key is that you know, and we know, what is going on.
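For example, something along these lines (not the assignment's actual code, just a sketch that assumes clip receives a dict of gradient arrays and clips them in place, while the test keeps its own deepcopy for the comparison):

```python
import copy
import numpy as np

def clip(gradients, max_value):
    # Clip every gradient in place; no deepcopy inside the function itself.
    for g in gradients.values():
        np.clip(g, -max_value, max_value, out=g)
    return gradients

def clip_test():
    # The test keeps its own deep copy of the inputs, so it can still compare
    # the original values against the clipped ones.
    gradients = {"dWax": np.array([[10.0, -0.3], [0.6, -8.0]])}
    originals = copy.deepcopy(gradients)

    clipped = clip(gradients, 5.0)

    # Everything is within [-5, 5] after clipping
    assert np.all(np.abs(clipped["dWax"]) <= 5.0)
    # Values that were already in range are unchanged
    in_range = np.abs(originals["dWax"]) <= 5.0
    assert np.allclose(clipped["dWax"][in_range], originals["dWax"][in_range])
    print("clip_test passed")

clip_test()
```

Either way works; it only shifts where the copy happens, which is why it does not really matter for the assignment.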