My question is exactly the same as discussed in this thread.

In the DLS Course 1 Week 3 and Week 4 video lessons and programming assignments, the backpropagation gradient formulas are given as follows.

dZ[L] = A[L] - Y

dW[i] = (1/m) dZ[i] A[i-1].T

db[i] = (1/m) np.sum(dZ[i], axis=1, keepdims=True)   (for 1 <= i <= L)
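For reference, here is the course version written out as NumPy for the output layer of a toy network (the shapes and random data are hypothetical, chosen only to make the snippet runnable):

```python
import numpy as np

np.random.seed(1)
m = 4                                         # hypothetical batch of 4 examples
A_prev = np.random.randn(3, m)                # A[L-1], 3 hidden units (made-up shape)
A = 1 / (1 + np.exp(-np.random.randn(1, m)))  # A[L], sigmoid output
Y = (np.random.rand(1, m) > 0.5).astype(float)

# The course-version formulas, exactly as written in the lectures
dZ = A - Y
dW = (1 / m) * (dZ @ A_prev.T)
db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)
print(dW.shape, db.shape)                     # (1, 3) (1, 1)
```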

Specifically, these formulas appear in the following places:

・Every part of the Week 3 and Week 4 lecture videos that deals with vectorized backpropagation

They first appear on the blackboard around 7:30 in this video: https://www.coursera.org/learn/neural-networks-deep-learning/lecture/Wh8NI/gradient-descent-for-neural-networks

・The clarification-for-what-does-this-have-to-do-with-the-brain document, which corrects a typo in this section's video

・The Exercise 6 backward_propagation cell in the Week 3 programming assignment (the L = 2 case)

・Reportedly, the same formulas also appear in the Week 4 assignments (I haven't done those yet, so I can't confirm this myself)

However, I suspect that these backpropagation formulas contain a typo: the 1/m factor appears to be in the wrong place.

I believe the correct version is:

dZ[L] = (1/m) (A[L] - Y)

dW[i] = dZ[i] A[i-1].T

db[i] = np.sum(dZ[i], axis=1, keepdims=True)   (for 1 <= i <= L)
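To check that the two placements of the 1/m factor really do produce the same parameter gradients, here is a minimal NumPy sketch of a 2-layer network (hypothetical random data; a tanh hidden layer and sigmoid output, as in the Week 3 assignment). Because every backward-pass operation is linear in dZ, dividing dZ[L] by m up front propagates through and yields identical dW and db:

```python
import numpy as np

np.random.seed(0)
m = 5                                   # hypothetical batch of 5 examples
X = np.random.randn(3, m)               # 3 input features (made-up shapes)
Y = (np.random.rand(1, m) > 0.5).astype(float)

W1 = np.random.randn(4, 3); b1 = np.zeros((4, 1))   # 4 hidden units
W2 = np.random.randn(1, 4); b2 = np.zeros((1, 1))   # 1 output unit

# Forward pass: tanh hidden layer, sigmoid output
Z1 = W1 @ X + b1; A1 = np.tanh(Z1)
Z2 = W2 @ A1 + b2; A2 = 1 / (1 + np.exp(-Z2))

def backward(dZ2, scale_inside):
    """Backward pass; scale_inside=True means 1/m was already folded
    into dZ2 (the proposed convention), else 1/m goes in dW/db."""
    s = 1.0 if scale_inside else 1 / m
    dW2 = s * (dZ2 @ A1.T)
    db2 = s * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)              # tanh derivative
    dW1 = s * (dZ1 @ X.T)
    db1 = s * np.sum(dZ1, axis=1, keepdims=True)
    return dW1, db1, dW2, db2

# Course convention: dZ2 = A2 - Y, 1/m inside dW/db
g_course = backward(A2 - Y, scale_inside=False)
# Proposed convention: dZ2 = (1/m)(A2 - Y), no 1/m in dW/db
g_alt = backward((A2 - Y) / m, scale_inside=True)

print(all(np.allclose(a, b) for a, b in zip(g_course, g_alt)))  # → True
```

So both conventions give the same dW and db for every layer; the disagreement is only about where the 1/m factor is written, not about the resulting gradients.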

The same apparent typo occurs in every place I listed above, so please confirm.

In particular, some learners who have done the Week 4 assignments report that points are deducted for writing the (logically correct) formulas above, so I would appreciate a confirmation as soon as possible.