Understanding the "Transforming word vectors" model

I still don’t really understand why we need to build a "transforming word vectors" model. Why don’t we just build a dictionary and look translations up by key? In the video, the instructor explains that we can build a transformation matrix from a small set of vocabulary, but that still doesn’t answer the question: using a transformation matrix is more complicated than a key-value lookup. I’m wondering whether this model is mainly academic, a foundation for more complex models. Is that true?
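For concreteness, here is a minimal sketch of the difference, using toy 2-D embeddings I made up (real word embeddings are a few hundred dimensions, and the word labels below are purely illustrative). A key-value dictionary can only return pairs it has stored, while a matrix R fitted on a small set of translation pairs also maps words that were never in that set:

```python
import numpy as np

# Hypothetical toy embeddings (2-D for illustration only).
# Rows are English word vectors; the "French" side is, by construction,
# a 90-degree rotation of the English side.
X = np.array([[1.0, 0.0],   # e.g. "cat"
              [0.0, 1.0],   # e.g. "dog"
              [1.0, 1.0]])  # e.g. "house"
true_rotation = np.array([[0.0, 1.0],
                          [-1.0, 0.0]])
Y = X @ true_rotation       # the matching French vectors

# Learn R by least squares: minimize ||X R - Y||^2.
R, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The payoff: R also maps a word that was NOT in the training dictionary.
new_word = np.array([2.0, 3.0])   # some unseen English word's vector
projected = new_word @ R          # [-3, 2], consistent with the rotation
print(projected)
```

A plain dictionary built from the three training pairs would simply miss `new_word`; the matrix generalizes because translation pairs tend to be related by (roughly) one shared linear map.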

Searching is an extremely slow process, especially if the vocabulary is very large.

Even if we use the transformation matrix, we still have to search once we have the embedding vector of the translated word. Moreover, the search is not too slow if supporting techniques such as hash tables are used.
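To illustrate the hash-table point, here is a rough sketch with random toy embeddings, using a simple random-hyperplane hash as one possible bucketing scheme (all sizes and names here are my own assumptions, not from the lecture). Instead of comparing the translated vector against every vocabulary word, we only compare against words that landed in the same bucket:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical French vocabulary: 10,000 words with 50-D embeddings.
vocab = rng.normal(size=(10_000, 50))
# A "translated" vector: close to word 42, plus a little noise.
query = vocab[42] + 0.01 * rng.normal(size=50)

def nearest_exact(q, V):
    """Brute force: one cosine similarity per row of V."""
    sims = (V @ q) / (np.linalg.norm(V, axis=1) * np.linalg.norm(q))
    return int(np.argmax(sims))

# Hash-based shortcut: the sign pattern against random hyperplanes
# tends to put similar vectors into the same bucket.
planes = rng.normal(size=(50, 12))     # 12 hyperplanes -> up to 4096 buckets

def bucket(v):
    return tuple(bool(b) for b in (v @ planes) > 0)

buckets = {}
for i, v in enumerate(vocab):
    buckets.setdefault(bucket(v), []).append(i)

# Only search the query's own bucket (falling back to the full
# vocabulary if the bucket happens to be empty).
candidates = buckets.get(bucket(query), list(range(len(vocab))))
best = candidates[nearest_exact(query, vocab[candidates])]
print(best, len(candidates), len(vocab))
```

The bucket typically holds a few words instead of ten thousand, so the lookup is much cheaper; the trade-off is that hashing is approximate, and the true neighbour can occasionally fall into a different bucket.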

Which lecture are you referring to here? You filed this under DLS Course 4, but this doesn’t look familiar from DLS C4. Are you sure this isn’t something from Course 5?
