Word embeddings for machine translation

Hello, I am looking at the presentation “Setup for Machine Translation” from week 1. It says that in real-life scenarios, one usually uses embeddings to represent words instead of a one-hot encoding.

So, does that mean one uses something like word2vec to represent each of the words in both English and French? Which kinds of word embeddings are currently used? I guess word2vec is outdated, or is it still used?

Hi @Mauricio_Toro

You’re correct that word2vec embeddings are outdated. Later in the Specialization you will learn about context-dependent (contextual) embeddings, which are pretty much today’s standard: unlike word2vec, which assigns each word a single fixed vector, a contextual model gives a word a different vector depending on the sentence it appears in.
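
To make the distinction concrete, here is a minimal sketch of static vs. contextual embeddings. This is just my illustration, assuming gensim and HuggingFace transformers are installed; the checkpoint names (`word2vec-google-news-300`, `bert-base-uncased`) are common public models, not anything this course prescribes:

```python
# Static vs. contextual embeddings: a rough sketch.
import gensim.downloader as api
import torch
from transformers import AutoModel, AutoTokenizer

# Static (word2vec-style): one fixed vector per word, no matter the context.
w2v = api.load("word2vec-google-news-300")   # pretrained word2vec vectors
print(w2v["bank"].shape)                     # (300,) -- same in every sentence

# Contextual (BERT-style): the vector for "bank" depends on the sentence.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, word: str) -> torch.Tensor:
    """Hidden state of `word`'s token in `sentence` (word must be one token)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

v1 = embed("She sat on the river bank.", "bank")
v2 = embed("He deposited cash at the bank.", "bank")
# The two "bank" vectors differ; cosine similarity is noticeably below 1.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```

With word2vec, “bank” gets the same 300-dimensional vector in both sentences; with a contextual model, the two occurrences get different vectors, which is exactly what makes such embeddings useful for translation.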

Cheers
