What is going on with the poor quality of the Week 4 lectures?

Week 4 lectures are laughably vague and terrible. I spent the last two days going over the LSH attention lab, and it was probably the most complex thing in the NLP specialization. Yet I think you decided to spend more lecture time on the mechanics of naïve Bayes and logistic regression than on Transformers. The links back to hashing from previous lectures are unhelpful, because the LSH component operates on 3-dimensional arrays, which is far more complex and requires additional functions to work in Trax. For example, I still do not really understand how the tie_in function works.
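
To be concrete about what tripped me up: my own rough reconstruction of the bucketing step is below. This is just a NumPy sketch of Reformer-style hashing as I understand it (random rotations, then picking the nearest of the mirrored directions); it is not the lab's actual Trax code, and the function name is mine.

```python
import numpy as np

def hash_vectors(vecs, n_hashes, n_buckets, seed=0):
    """Sketch of LSH bucketing: project onto random directions, take the nearest one."""
    rng = np.random.default_rng(seed)
    d = vecs.shape[-1]
    # One set of random projection directions per hash round.
    rotations = rng.normal(size=(d, n_hashes, n_buckets // 2))
    rotated = np.einsum("td,dhb->htb", vecs, rotations)      # (n_hashes, seq_len, n_buckets/2)
    rotated = np.concatenate([rotated, -rotated], axis=-1)   # mirror so every vector gets a bucket
    return np.argmax(rotated, axis=-1)                       # bucket id per vector, per hash round

vecs = np.random.default_rng(1).normal(size=(8, 4))          # (seq_len, d_model)
print(hash_vectors(vecs, n_hashes=2, n_buckets=4))           # nearby vectors tend to share buckets
```

If that is roughly what the lab is doing under the hood, a slide or two walking through it would have saved a lot of time.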

In the Deep Learning Specialization, Andrew went over the mechanics of each algorithm directly in the lectures, which I appreciated. The lectures this week seem lazy and rushed. Very much disappointed.

In another comparison to the Deep Learning Specialization: Andrew devoted an entire course to best practices for creating and training models, discussing issues with bias and variance and how best to approach real-world problems. That is sorely missed with Transformers, especially since most people seem to use transfer learning on established models. More could have been said about one-shot vs. few-shot learning approaches. Hugging Face provides excellent documentation, but I would have expected the course to dwell on this more.
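
For what it's worth, the kind of thing I was hoping the lectures would walk through is as simple as the sketch below: loading a pretrained checkpoint from Hugging Face and fine-tuning a classification head on labelled examples. The checkpoint name and the two-label task here are placeholders I picked, not something from the course.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint and task, purely to illustrate the transfer-learning pattern.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

batch = tokenizer(["clear, detailed lecture", "vague, rushed lecture"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)   # pretrained encoder + freshly initialised head
outputs.loss.backward()                   # from here, an ordinary optimizer step fine-tunes it
```

Even one lecture contrasting this with one-shot and few-shot approaches would have covered the gap I am describing.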

I was excited about this course in the beginning, but now I feel sad and disappointed :frowning_face:. At least it is a start.

Hi @Jose_James,

Thank you for your feedback.

We shall use it to improve our content in the future.

Best,
Mubsi

Hey @Jose_James,
Thanks a lot for your valuable feedback. To help you out regarding one of your points, please check out the following thread; I hope your query will be resolved after taking a look at it.

Additionally, if you have any queries regarding specific points in the Week 4 labs, please do let us know and we will help you out as much as we can.

Cheers,
Elemento