Course 1, Week 3, Backpropagation Intuition (Optional)

In the slide titled “Neural network gradients”, Prof. Andrew Ng explains the computation using one example, but there are actually m examples, right?
I would like to make this clear, thank you so much.

Could you provide a screenshot of this place in the lecture? That would help in answering your question.

Hi, @Koyo_Fuji.

To illustrate how a network functions, it is typically useful to show how a single example (or data point, or “observation”, if you wish) passes through the various layers. This has the advantage of eliminating one of the dimensions, in the hope of aiding understanding, before moving on to the full m-example notation.

So yes, you are correct that there are m examples, but each takes a conceptually similar journey forward through the network (and back) at each iteration of gradient descent. Vectorizing the code to include all m examples simultaneously speeds up the execution (to say the least!), but the single-example treatment is meant to speed up your understanding.
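To make the loop-vs-vectorized point concrete, here is a minimal NumPy sketch (not the course's exact code; the layer sizes and weights are made up for illustration). It computes a one-layer forward pass first by looping over examples one at a time, then vectorized over all m examples at once, following the course convention that each column of X is one example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, m = 3, 4, 5               # input features, hidden units, examples (arbitrary)
W = rng.standard_normal((n_h, n_x))  # weight matrix for one layer
b = np.zeros((n_h, 1))               # bias, one per hidden unit
X = rng.standard_normal((n_x, m))    # each column x^(i) is one example

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-example treatment: process the m columns one at a time.
A_loop = np.zeros((n_h, m))
for i in range(m):
    x_i = X[:, i:i+1]                       # slice keeps shape (n_x, 1)
    A_loop[:, i:i+1] = sigmoid(W @ x_i + b)

# Vectorized treatment: all m examples in one matrix product;
# b broadcasts across the m columns.
A_vec = sigmoid(W @ X + b)

# Both routes give identical activations.
assert np.allclose(A_loop, A_vec)
```

The two results are identical; the only difference is that the vectorized version replaces the Python loop with a single matrix multiply, which is where the speed-up comes from.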

If you are still confused on this point, the screenshot that @Oleksandra_Sopova suggested would be helpful.

I understand now. Thank you, Kenb!

I already fixed this problem, thank you anyway