Search Document Video

In the Search Document video, the example code contains word embeddings for the words:
"I", "love", "learning"
How did he come up with these vectors?

Hi @gkouro

In the video they are just made up. In reality, there are many ways to create word embeddings (numerical representations of words). Usually they are initialized randomly and then updated through gradient descent to arrive at the "best" vector values for each word (or token).
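To make "initialized randomly" concrete, here is a minimal sketch in Python (the words are from the video, but the embedding dimension and the random values are arbitrary choices for illustration):

```python
import numpy as np

# An embedding table is just a lookup: one vector per word.
# The vectors start out random; training then adjusts them.
# The dimension (4) is made up for illustration; real models
# typically use hundreds of dimensions.
vocab = ["I", "love", "learning"]
embedding_dim = 4

rng = np.random.default_rng(seed=0)
embeddings = {word: rng.normal(size=embedding_dim) for word in vocab}

print(embeddings["love"])  # a random 4-dimensional vector
```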

Thanks. In order to use gradient descent there has to be some kind of output/label against which a cost is calculated. What kind of output would I be looking for here?

Later in the course you will learn about different techniques for learning word embeddings in more detail. But to give you a quick idea: you take some words and try to predict the next one (e.g., "I love ______"). Depending on whether the prediction is correct or incorrect, you update your embedding weights accordingly. You repeat this over many trials, and eventually you have "trained" your embeddings. A toy sketch of this idea is below.
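Here is a toy next-word-prediction sketch in PyTorch. Everything here is an assumption for illustration (the 3-word vocabulary, the tiny dimensions, the single training sentence), not the course's actual method, but it shows where the "label" comes from: the next word itself is the target, and the cross-entropy loss on that prediction drives the gradient updates to the embedding weights.

```python
import torch
import torch.nn as nn

# Toy setup: 3-word vocabulary and a tiny embedding size.
vocab = {"I": 0, "love": 1, "learning": 2}
embedding_dim = 4

model = nn.Sequential(
    nn.Embedding(len(vocab), embedding_dim),  # the trainable word vectors
    nn.Linear(embedding_dim, len(vocab)),     # scores for the next word
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training pairs from the sentence "I love learning":
# the label for each word is simply the word that follows it.
inputs = torch.tensor([vocab["I"], vocab["love"]])
targets = torch.tensor([vocab["love"], vocab["learning"]])

for step in range(100):
    logits = model(inputs)           # predicted next-word scores
    loss = loss_fn(logits, targets)  # how wrong the predictions were
    optimizer.zero_grad()
    loss.backward()                  # gradients flow back into the embeddings
    optimizer.step()                 # gradient descent updates the vectors

# The learned embeddings are the rows of this weight matrix.
print(model[0].weight.data)
```

So the "output/label" you asked about is just the next word in the text: the corpus supervises itself, and no hand-made labels are needed.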
