Is this course related to the ideas in the Sequence Models specialization?

Hi everybody,
I have started Natural Language Processing in TensorFlow, and I wanted to know how the ideas below from the Sequence Models specialization (another course) are implemented in TensorFlow in this course. Could someone please help? What is the scope of this course?

  • Embedding parameter matrix dot product with a one-hot representation of words
  • Ideas of word2vec
  • Gradient clipping in RNNs, vanishing gradients
  • Beam search, attention models
  • Transformers

The Natural Language Processing in TensorFlow course focuses on building TensorFlow models. The emphasis is on applying TensorFlow; it doesn’t go into much theory. Having a background in the topics you’ve mentioned helps.

  1. Embedding parameter matrix dot product with a one-hot representation of words
  • We train an embedding layer from scratch or initialize its weights via transfer learning. No manual dot product is needed, since the embedding layer maps token ids directly to embedding vectors (see the first sketch after this list).
  2. Ideas of word2vec
  • We don’t implement skip-gram or CBOW here. The focus is on training the embedding layer’s weights to fit the given corpus; no statistical techniques are covered.
  3. Gradient clipping in RNNs, vanishing gradients
  • Not covered explicitly. Knowing about these will help you pick reasonable values for the number of RNN units. We don’t build very deep models in this course (if you ever need clipping, see the second sketch after this list).
  4. Beam search, attention models
  • There’s a text-generation task using next-word prediction, but beam search isn’t needed for that exercise; greedy (argmax) decoding is enough (see the third sketch after this list). Attention models aren’t covered in this specialization.
  5. Transformers
  • Not covered.
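
To make the embedding point concrete, here is a minimal sketch of how an Embedding layer replaces the one-hot dot product. The vocabulary size, embedding width, and token ids are made up for illustration:

```python
import numpy as np
import tensorflow as tf

vocab_size = 10000   # illustrative vocabulary size
embedding_dim = 16   # illustrative embedding width

# The Embedding layer is a trainable lookup table: row i of its weight
# matrix is the vector for token id i.
embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)

token_ids = tf.constant([[4, 20, 51]])  # a batch of one 3-token sequence
vectors = embedding(token_ids)          # shape: (1, 3, 16)

# Mathematically equivalent (but wasteful) one-hot formulation:
one_hot = tf.one_hot(token_ids, depth=vocab_size)  # shape: (1, 3, 10000)
same_vectors = tf.tensordot(one_hot, embedding.embeddings, axes=[[2], [0]])
print(np.allclose(vectors.numpy(), same_vectors.numpy()))  # True
```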
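
And although the course doesn’t cover gradient clipping, if you ever hit exploding gradients in an RNN, Keras optimizers expose it directly via the `clipnorm` (or `clipvalue`) argument. A minimal sketch, with illustrative layer sizes:

```python
import tensorflow as tf

# clipnorm rescales any gradient whose norm exceeds the threshold before
# the update step, which guards against exploding gradients in RNNs.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=optimizer, loss="binary_crossentropy")
```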
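
Finally, the text-generation exercise uses greedy decoding rather than beam search: at each step you take the single most likely next token. A sketch of that loop, assuming a trained next-word model and a fitted `tf.keras.preprocessing.text.Tokenizer` (the function name and arguments are illustrative):

```python
import numpy as np
import tensorflow as tf

def generate_greedy(model, tokenizer, seed_text, num_words, max_len):
    """Greedy decoding: repeatedly append the argmax next token.
    Beam search would keep several candidate sequences instead."""
    text = seed_text
    for _ in range(num_words):
        token_ids = tokenizer.texts_to_sequences([text])[0]
        padded = tf.keras.preprocessing.sequence.pad_sequences(
            [token_ids], maxlen=max_len, padding="pre")
        probs = model.predict(padded, verbose=0)[0]   # (vocab_size,)
        next_id = int(np.argmax(probs))               # greedy choice
        text += " " + tokenizer.index_word.get(next_id, "")
    return text
```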

Thanks!