W4_A1_Ex-10_Invoking_L_model_backward parameters

Hello,

Requirement: $W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$

Assumption: dW[l] can be obtained from L_model_backward(AL, Y, caches)

Issue:

  1. I am not sure how to obtain the variables AL, Y, and caches inside update_parameters().

  2. Theoretical implementation:
    parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * L_model_backward(parameters["AL"], parameters["Y"], parameters["caches"])["dW"]

, where grads["dW"] is taken from L_model_backward(parameters["AL"], parameters["Y"], parameters["caches"]).
Is the above approach acceptable in Python?

I have referred to prior discussions, but have not been able to reach a conclusion.

Sincerely,
A

Hello A.

If you notice, a grads dictionary is already given to you:

    def update_parameters(params, grads, learning_rate):
        ...
        grads -- python dictionary containing your gradients, output of L_model_backward

You just need to grab dW and db from it, the same way you grab W and b from the parameters dictionary.
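
For concreteness, here is a minimal sketch of what that lookup and update look like, assuming the layer-indexed key names used earlier in the assignment ("W1", "b1", "dW1", "db1", and so on). Treat it as an illustration, not the graded solution:

    import numpy as np

    def update_parameters(params, grads, learning_rate):
        # One gradient-descent step per layer: W[l] = W[l] - alpha * dW[l], likewise for b[l].
        parameters = params.copy()
        L = len(parameters) // 2  # each layer contributes one "W" and one "b" entry
        for l in range(1, L + 1):
            parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
            parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]
        return parameters

    # Tiny usage example with made-up shapes:
    params = {"W1": np.ones((3, 2)), "b1": np.zeros((3, 1))}
    grads = {"dW1": np.full((3, 2), 0.5), "db1": np.full((3, 1), 0.1)}
    params = update_parameters(params, grads, learning_rate=0.1)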

Thanks, Saif, for clarifying.

From a Python-learning perspective, is there any other .py file (or function) in the back end of this course that retrieves 'grads' from L_model_backward?

I am trying to understand how 'grads' is retrieved.

Sincerely,
A

If I want to retrieve W from a dictionary named 'parameters', the correct way is parameters["W"]. You can do the same for any dictionary...
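
To see where grads comes from at the call site: in the assignment's model-training function, the usual pattern (sketched here from the function names used in the notebook; the actual function bodies are not reproduced) is to compute grads once per iteration and hand it to update_parameters:

    # Sketch of the training loop that connects the pieces. L_model_forward,
    # compute_cost, and L_model_backward are the assignment's own functions;
    # only the data flow between them is shown here.
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)   # forward pass
        cost = compute_cost(AL, Y)                    # scalar loss
        grads = L_model_backward(AL, Y, caches)       # dict: "dW1", "db1", ..., "dWL", "dbL"
        parameters = update_parameters(parameters, grads, learning_rate)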