MLS - Course 2 - Neural Networks - Backpropagation discussion

Hi,

The MLS Course 2 (Advanced Algorithms) covers neural networks only up to forward propagation. I was surprised to see that there was no discussion of backpropagation at all. The old Machine Learning course covered backpropagation in fantastic detail, and I was looking forward to learning the nuts and bolts of backpropagation in the MLS course.

Any reason why backprop was completely avoided in the new MLS course?

In general, the MLS course seems to be a ‘downgraded’ version of the original course (no offence). I still love how superbly Prof Andrew Ng explains complicated concepts - I was really hoping that a “Specialization” in Machine Learning would be an ‘upgrade’ over the original course, not a downgrade!

3 Likes

Hi!

See this thread on backpropagation: C2 W2 backpropagation with simple math [TEACHING STAFF]

2 Likes

Hello @vishnubhatla,

In the new MLS Course 1, Professor Ng covered gradient descent, though not in the context of neural networks; backward propagation is essentially how the gradients for gradient descent are computed in a neural network. Also, if you are looking for more, please visit the first course of Professor Ng’s Deep Learning Specialization.
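For a quick picture of what gradient descent does on its own, here is a minimal sketch in Python (my own illustration, not the course’s code; the function, starting point, and learning rate are arbitrary choices):

```python
# Minimal gradient descent sketch (illustrative only, not from the MLS labs).
# Minimize f(w) = (w - 3)^2, whose derivative is df/dw = 2 * (w - 3).

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0        # arbitrary starting guess
alpha = 0.1    # learning rate (arbitrary for this example)

for step in range(100):
    w = w - alpha * grad(w)   # step against the gradient

print(w)  # approaches the minimizer w = 3
```

Backprop’s job in a neural network is just to produce those gradient values for every weight efficiently.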

Have fun!
Raymond

2 Likes

Hi @vishnubhatla, I’m still working with Andrew and Geoff on a version of backprop that will be suitable for our target learner. We recorded a version of it, but decided as a team that it needed further redesign. It will differ from the original in that we’ll avoid matrix multiplication and transposes, and instead discuss the computation graph and, hopefully, the chain rule, in order to focus on the visual intuition.

You can take a look at the lectures that Sam referred to as a preview, as we’ll incorporate some of this approach into the backprop lectures when we film them. We’ll work on this after we launch Course 3.

Also, you can take a look at these lectures on the derivative of the logistic loss: how to get the derivatives of the logistic cost / loss function [TEACHING STAFF].

If you get to the last video, you’ll see how the chain rule lets you reuse many of the intermediate derivative computations without needing to recalculate them, which is really important for efficiency. Backprop is just the application of the chain rule to neural networks (thanks to Geoffrey Hinton’s research), and it’s what makes efficient training of neural networks possible.
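To make the reuse concrete, here is a rough sketch (my own illustration, not the course’s notebook) of the chain rule on a single logistic-regression example, where the intermediate derivative dJ/dz is computed once and then reused for both dJ/dw and dJ/db:

```python
import numpy as np

# Chain-rule sketch for one logistic-regression example (illustrative only;
# the input, label, and parameter values are made up).
x, y = 2.0, 1.0    # one training example: input and label
w, b = 0.5, 0.1    # current parameters

# Forward pass: compute and cache the intermediate values.
z = w * x + b                                       # linear part
a = 1.0 / (1.0 + np.exp(-z))                        # sigmoid activation
loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))   # logistic loss

# Backward pass: apply the chain rule, reusing the cached dJ/dz.
dz = a - y      # dJ/dz simplifies to (a - y) for the logistic loss
dw = dz * x     # dJ/dw = dJ/dz * dz/dw, reusing dz
db = dz * 1.0   # dJ/db = dJ/dz * dz/db, reusing dz again

print(dw, db)
```

In a full network the same idea repeats layer by layer: each layer’s cached intermediate derivatives feed the layer before it, so nothing has to be recomputed.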

Thanks, -Eddy

2 Likes

Note: I have updated the title of this thread to include “Backpropagation”, so that students can find the topic more easily.

2 Likes

That’s reassuring, thanks Eddy! Will the new lectures on backprop (once they’re ready) be posted as part of Course 2 of the ML specialisation, or as a separate series?

2 Likes

Hi @RushIbrahim! Yes, any additional videos and labs that we’re working on going forward will be added to the classroom. :slight_smile:

3 Likes

Awesome, thanks @eddy. I was just observing on another discussion thread that backpropagation has been completely ignored… I’m really glad that your team is working on a fresh video on this. It’s a relatively complex topic, and it would be very helpful to get an intuitive understanding of how backprop works. Looking forward to the video.

Is there a way to get notified when this new video gets added to Course 2? Thanks!

1 Like

Yes @Prax, I’m pretty sure that once we add the additional content we will notify all learners by email.

Normally, whenever we build courses, we have to decide between getting everything we want into the course at launch and getting the core content to the community as soon as possible.
Another common trade-off is whether to release content that we don’t yet feel is the best we could do, or to hold it back until we think it’s in a final state.

I think we will get all the additional optional content into the course before the end of the year. Probably sooner. :crossed_fingers:

Super, thanks @eddy! Appreciate all the great work being done by you and your team behind the scenes :slight_smile:

1 Like