Word embeddings for words with possibly different meanings

Hello everyone, in the Deep Learning Specialization (Course 5, Week 2), we use GloVe word embeddings for the emojifier. More generally, loading pre-trained word embeddings is a form of transfer learning that helps deep learning models perform various NLP tasks. I was wondering about words with potentially different meanings or nuances. For example, "glass" may refer to a pair of glasses, a container for drinks, the material itself, etc., and "spring" may refer to the act of jumping, a season, a water source, an elastic device, etc. In that regard, isn't it restrictive to assign a single vector to each word? My intuition is that allowing several possible vectors/embeddings per word (while adapting the downstream models, such as neural networks, accordingly) could improve performance on NLP tasks. Is there any research pointing in that direction? What are your thoughts on the topic? I am not an NLP specialist, but I am curious.
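To make the concern concrete, here is a minimal sketch (with made-up 3-d vectors, not real GloVe values) showing why a static embedding table is "one vector per word": the lookup is a plain dictionary access, so the surrounding context cannot change the vector a word receives.

```python
import numpy as np

# Hypothetical static embedding table: one fixed vector per word,
# regardless of which sense of the word is intended.
embeddings = {
    "glass":  np.array([0.2, -0.5, 0.7]),
    "spring": np.array([0.9, 0.1, -0.3]),
}

def embed(sentence):
    """Look up each known word's vector; context has no effect on the result."""
    return [embeddings[w] for w in sentence.lower().split() if w in embeddings]

# "glass" gets the identical vector in both sentences,
# even though the intended sense differs.
v_drink = embed("she drank from the glass")[0]
v_material = embed("the window glass shattered")[0]
assert np.array_equal(v_drink, v_material)  # same vector for both senses
```

Under this scheme, any sense disambiguation has to happen downstream of the lookup, which is exactly the limitation the question is about.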

Hi @green_sunset,

As the course progresses, you'll study models and techniques that take this ambiguity into account.