Week 3: Backpropagation Intuition [improvement/correction]

Hi everyone,

Not sure if this is the right place to post this, but I’d like to point out a slight inconsistency in the second slide of week 3’s “Backpropagation Intuition”. It should be (if I’m not missing something)

a[1] = g(z[1])

instead of

a[1] = sigma(z[1])

You’re not missing anything major; it’s just a matter of notation tied to the specific activation function used in the example. Your suggestion of writing g(z[1]) is valid: g is the general notation for any activation function, and in the lecture series Prof. Andrew Ng uses it when the discussion applies to any activation, such as ReLU, tanh, or sigmoid. On that slide, the activation function σ refers specifically to the sigmoid, which is commonly used for binary classification (e.g. logistic regression) or in specific neural network architectures.
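In code the distinction is just which function you plug in for g. A tiny sketch (illustrative values, not taken from the slide):

```python
# Minimal sketch: the forward step a[1] = g(z[1]), where g can be any activation;
# sigma (the sigmoid) is just one possible choice of g.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

z1 = np.array([[0.5], [-1.2], [2.0]])   # example pre-activations z[1], shape (3, 1)

g = sigmoid        # swap in np.tanh or relu for a different layer/architecture
a1 = g(z1)         # a[1] = g(z[1]); equals sigma(z[1]) only because g is the sigmoid here
```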

Hi, thank you for your reply. Indeed, but then in his handwritten notes he calculates dz[1] as W[2]^T dz[2] * g[1]'(z[1]), i.e. with the general g[1]' rather than σ'.

That’s why I was suggesting changing the notation in the graph. But I guess it’s not worth updating the video for such a minor inconsistency.
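Just to make the point concrete, here is a rough NumPy sketch of that handwritten step (shapes and values are made up for illustration, not taken from the course code):

```python
# Sketch of the backprop step dz[1] = W[2]^T dz[2] * g[1]'(z[1]) being discussed.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

g1_prime = sigmoid_prime          # g[1]' is generic; the sigmoid is one possible choice

n1, n2, m = 4, 1, 5               # hidden units, output units, number of examples
rng = np.random.default_rng(0)
W2 = rng.standard_normal((n2, n1))
z1 = rng.standard_normal((n1, m))
dz2 = rng.standard_normal((n2, m))   # stand-in for a[2] - y

dz1 = (W2.T @ dz2) * g1_prime(z1)    # element-wise product; shape (n1, m)
```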
