Hello,
Requirement: W[l] = W[l] - α * dW[l]
Assumption: dW[l] can be obtained from L_model_backward(AL, Y, caches)
Issue:

I am not sure how to obtain the variables AL, Y, and caches inside update_parameters().

Theoretical implementation:
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * L_model_backward(parameters["AL"], parameters["Y"], parameters["caches"])["dW"]
, where grads["dW"] is taken from L_model_backward(parameters["AL"], parameters["Y"], parameters["caches"])
Is the above approach acceptable in Python?
I have referred to prior discussions, but I am not able to reach a conclusion.
Sincerely,
A
Hello A.
If you notice, a grads dictionary is already given to you:
def update_parameters(params, grads, learning_rate):
.
.
.
grads -- python dictionary containing your gradients, output of L_model_backward
You just need to grab dW and db from it, the same way you are grabbing W and b from the parameters dictionary.
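To make the point concrete, here is a minimal sketch of what that looks like. The function signature follows the snippet above; the loop structure and the key-naming convention ("W1", "dW1", etc.) are assumptions based on the usual layout of the parameters and grads dictionaries in this assignment:

```python
import numpy as np

def update_parameters(params, grads, learning_rate):
    """Gradient descent update: W[l] = W[l] - learning_rate * dW[l] (same for b)."""
    parameters = params.copy()
    L = len(parameters) // 2  # number of layers (each layer has one W and one b)
    for l in range(1, L + 1):
        # Gradients are looked up in grads exactly the way weights are in parameters
        parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]
    return parameters
```

No call to L_model_backward is needed inside update_parameters: the gradients were already computed and handed in as the grads argument.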
Thanks, Saif, for clarifying.
From a Python-learning perspective, is there any other .py file (or function) in the backend of this course which takes the "grads" output of L_model_backward?
I am trying to understand how "grads" is retrieved.
Sincerely,
A
If I want to retrieve W from a dictionary named "parameters", the correct way is parameters["W"]. You can do the same for any dictionary…
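A small illustration of that lookup, showing that weights and gradients are retrieved identically because both live in ordinary Python dictionaries (the dictionary contents here are made-up toy values):

```python
# Any Python dict is indexed by key, whether it holds weights or gradients.
parameters = {"W1": [[0.5, -0.2]], "b1": [[0.0]]}
grads = {"dW1": [[0.1, 0.3]], "db1": [[0.05]]}

W1 = parameters["W1"]   # retrieve a weight matrix by its key
dW1 = grads["dW1"]      # retrieve its gradient the same way

print(W1)   # [[0.5, -0.2]]
print(dW1)  # [[0.1, 0.3]]
```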