Welcome to the community.
Whenever Andrew puts 'intuition' in a lecture title, it scares me a little…
Anyway, let me try to fill the gap.
Here is an overview of the network.
The equation marked in blue focuses on dz^{[l-1]} in the left neuron, given dz^{[l]} in the right neuron. We always need to keep in mind that everything starts from the loss function (cost function) at the rightmost portion and is "back-propagated" from there.
Let's start from dz^{[l-1]}. As you know, dz is a shortened form of \frac{\partial\mathcal{L}}{\partial z}. First, using the chain rule, we separate this into two partial derivatives:

dz^{[l-1]} = \frac{\partial\mathcal{L}}{\partial z^{[l-1]}} = \frac{\partial\mathcal{L}}{\partial a^{[l-1]}}\cdot\frac{\partial a^{[l-1]}}{\partial z^{[l-1]}}
To calculate the first term, we use the chain rule here as well. We also need to be aware that the input to a neuron in layer l comes from multiple neurons in layer l-1, each with its own weight.
Here, the j-th unit of z^{[l]} can be written as follows:

z_j^{[l]} = \sum_k w_{jk}^{[l]}\, a_k^{[l-1]} + b_j^{[l]}
Now, we are ready to calculate \frac{\partial \mathcal{L}}{\partial a^{[l-1]}} with the chain rule. Since a_k^{[l-1]} contributes to every z_j^{[l]}, we sum over j:

\frac{\partial \mathcal{L}}{\partial a_k^{[l-1]}} = \sum_j \frac{\partial \mathcal{L}}{\partial z_j^{[l]}}\cdot\frac{\partial z_j^{[l]}}{\partial a_k^{[l-1]}} = \sum_j w_{jk}^{[l]}\, dz_j^{[l]}
We can also rewrite this by using a dot product. But to use a dot product, we need to transpose one of the two factors. As we want to keep dz^{[l]} in its original (column) form, let's transpose W^{[l]}.
Now, the last equation can be rewritten as follows:

\frac{\partial \mathcal{L}}{\partial a^{[l-1]}} = W^{[l]T} dz^{[l]}
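If you want to convince yourself that the per-component summation and the dot-product form are the same thing, here is a tiny NumPy check (the shapes and variable names are just for illustration, not from the course code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_prev, n_curr = 3, 4                     # layer sizes for l-1 and l (arbitrary)
W = rng.normal(size=(n_curr, n_prev))     # W^{[l]}, shape (n^{[l]}, n^{[l-1]})
dz = rng.normal(size=(n_curr, 1))         # dz^{[l]}, a column vector

# Component form: dL/da_k = sum_j w_{jk} * dz_j
da_sum = np.array([[sum(W[j, k] * dz[j, 0] for j in range(n_curr))]
                   for k in range(n_prev)])

# Dot-product form: dL/da^{[l-1]} = W^{[l]T} dz^{[l]}
da_dot = W.T @ dz

print(np.allclose(da_sum, da_dot))        # True -- the two forms agree
```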
Then, let's move on to the 2nd term, \frac{\partial a^{[l-1]}}{\partial z^{[l-1]}}, which is relatively simple.
As we have the simple equation a^{[l-1]} = g^{[l-1]}(z^{[l-1]}), it can be calculated as follows:

\frac{\partial a^{[l-1]}}{\partial z^{[l-1]}} = g^{[l-1]\prime}(z^{[l-1]})
Now, we can put it all together:

dz^{[l-1]} = W^{[l]T} dz^{[l]} * g^{[l-1]\prime}(z^{[l-1]})

Here, * denotes an element-wise product.
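In code, this whole equation is essentially a one-liner. Below is a minimal NumPy sketch, assuming a sigmoid activation for g^{[l-1]} (the choice of activation and all shapes are illustrative assumptions, not fixed by the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):                      # g'(z) for a sigmoid: g(z)(1 - g(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop_step(W, dz, z_prev):
    """dz^{[l-1]} = W^{[l]T} dz^{[l]} * g'(z^{[l-1]}), with * element-wise."""
    return (W.T @ dz) * sigmoid_prime(z_prev)

# Usage for the l = 2 case discussed next: dz^{[1]} from dz^{[2]}
rng = np.random.default_rng(1)
W2 = rng.normal(size=(4, 3))               # W^{[2]}
dz2 = rng.normal(size=(4, 1))              # dz^{[2]}, coming back from the right
z1 = rng.normal(size=(3, 1))               # z^{[1]} cached from the forward pass
dz1 = backprop_step(W2, dz2, z1)           # dz^{[1]}, shape (3, 1)
```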
Then, you just need to set l=2, and you get:

dz^{[1]} = W^{[2]T} dz^{[2]} * g^{[1]\prime}(z^{[1]})
Hope this helps.