dA_prev Assignment 1 Week 4

Hi, I was just wondering whether the dA_prev variable is being passed into sigmoid_backward() and relu_backward() as dA?

I’m not sure I understand the question. The functions sigmoid_backward and relu_backward are called from linear_activation_backward, right? That function takes an argument dA, and that is the dA argument that you pass to sigmoid_backward and relu_backward. If your question is what those functions do, they are provided in an “import” file. You can examine the code by clicking “File → Open” from the notebook and then opening the appropriate utility Python file. There is a topic about that on the FAQ Thread, q.v.
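Roughly, the call chain looks like this. This is a minimal sketch, not the notebook’s exact code; the linear_backward body here is my own filler so the snippet runs on its own, but the signatures follow the assignment’s conventions:

```python
import numpy as np

def sigmoid_backward(dA, activation_cache):
    # activation_cache stores Z from forward prop; dZ = dA * sigma'(Z)
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def relu_backward(dA, activation_cache):
    # ReLU passes the gradient through where Z > 0 and zeroes it elsewhere
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def linear_backward(dZ, linear_cache):
    # Standard gradients for Z = W @ A_prev + b, averaged over m examples
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

def linear_activation_backward(dA, cache, activation):
    linear_cache, activation_cache = cache
    if activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)  # dA goes straight in here
    else:  # "relu"
        dZ = relu_backward(dA, activation_cache)     # ... and here
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db
```

So the dA that linear_activation_backward receives is handed unchanged to whichever activation-backward function applies; dA_prev is what comes back out of the linear step.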

In the linear_activation_backward() function, we return dA_prev. I was wondering: to which function do we pass dA_prev as an argument?

linear_activation_backward, right? It is the back propagation analog to what happens in forward propagation: the input to linear_activation_forward at layer l is A^{[l-1]} and the output is A^{[l]}. So in back propagation, two things are different:

  1. You’re going backward.
  2. What you’re passing is the gradient, not the underlying value. So the input to linear_activation_backward at layer l of back prop is dA^{[l]} and it produces dA^{[l - 1]}, which you then back propagate to the next layer. Rinse and repeat. :nerd_face:

Of course the real things you care about from linear_activation_backward at layer l are dW^{[l]} and db^{[l]}, but it is dA^{[l]} that drives the computation of those and is the mechanism for propagation across the layers.
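To make that loop concrete, here is a minimal sketch of the driving function, assuming an L_model_backward-style routine as in the assignment, a caches list built up during forward prop, and a cross-entropy cost; it reuses linear_activation_backward from the sketch above:

```python
def L_model_backward(AL, Y, caches):
    # caches[l-1] holds (linear_cache, activation_cache) for layer l
    grads = {}
    L = len(caches)
    # Gradient of the cross-entropy cost with respect to the output AL
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Output layer uses sigmoid; its dA_prev seeds the loop below
    dA_prev, dW, db = linear_activation_backward(dAL, caches[L - 1], "sigmoid")
    grads["dW" + str(L)], grads["db" + str(L)] = dW, db

    # Hidden layers use relu: the dA_prev returned at layer l + 1
    # is exactly the dA argument passed into layer l
    for l in reversed(range(1, L)):
        dA_prev, dW, db = linear_activation_backward(dA_prev, caches[l - 1], "relu")
        grads["dW" + str(l)], grads["db" + str(l)] = dW, db

    return grads
```

Notice that dA_prev never gets a function of its own: each iteration simply feeds the dA_prev returned by one layer back into linear_activation_backward as the dA for the layer below it.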
