How is the Embedding Matrix (E) calculated?

I am a bit confused about the Embedding Matrix (E) and how its values are obtained. For example:

when the one-hot vector of a word is multiplied by the Embedding Matrix E, we get the embedding e of that word.

How do we get the values of E?

Similarly, in negative sampling we get the embedding of a word by multiplying its one-hot vector with the Embedding Matrix (E).

Forgive me if this is a silly question; please help me understand. Thanks in advance!

The embedding matrix is just given to us as an input for the process in that slide. Needless to say, creating that embedding matrix was a very expensive training process that someone else took care of for us, but we can simply load their trained result and use it. There are a number of pretrained "word embedding" systems, e.g. Word2Vec and GloVe. The first 9 lectures in Week 2 of Course 5 discuss word embeddings, how they are trained and how to use them. If you missed that, you can go back and review that material.
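As a side note on the mechanics: multiplying by a one-hot vector is just a lookup into E. Here is a minimal NumPy sketch, assuming the course's convention that E has shape (embedding dim, vocab size) and e_w = E · o_w; the sizes are toy values chosen for illustration:

```python
import numpy as np

# Toy sizes (assumed for illustration): 5-word vocabulary, 3-dim embeddings.
vocab_size, embed_dim = 5, 3
rng = np.random.default_rng(0)

# Course convention: E is (embed_dim, vocab_size), so e_w = E @ o_w.
E = rng.standard_normal((embed_dim, vocab_size))

w = 2                      # index of some word in the vocabulary
o_w = np.zeros(vocab_size)
o_w[w] = 1.0               # one-hot vector for word w

e_w = E @ o_w              # the embedding of word w

# Multiplying by the one-hot vector just selects column w of E,
# which is why real implementations use an index lookup instead.
assert np.allclose(e_w, E[:, w])
```

In practice frameworks never do the full matrix multiply; they index directly into E, which gives the same result much faster.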

If I am missing your point in my response above, please give us a more specific reference to which lecture (and the time offset) you are asking about.

I took the question to mean “How is the E matrix created?”

If that is the question, then it is created by training a word embedding model. Prof Ng covers several techniques for doing that in the lectures in Week 2.
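To make "trained" concrete, here is a toy NumPy sketch of one negative-sampling update, in the spirit of the Week 2 lectures. This is an assumed illustration, not the course assignment code: E and the output-side parameters theta start random, and each SGD step nudges them so a logistic classifier predicts 1 for a true (context, target) pair and 0 for sampled negative words.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, embed_dim = 10, 4   # toy sizes, assumed for illustration

# Both matrices start as small random values and are learned by SGD.
E = rng.standard_normal((embed_dim, vocab_size)) * 0.01      # embedding matrix
theta = rng.standard_normal((embed_dim, vocab_size)) * 0.01  # output-side weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negative_sampling_step(context, target, negatives, lr=0.1):
    """One SGD step: push sigmoid(theta_w . e_c) toward 1 for the true
    target word and toward 0 for each sampled negative word."""
    e_c = E[:, context].copy()
    grad_e = np.zeros(embed_dim)
    for word, label in [(target, 1.0)] + [(n, 0.0) for n in negatives]:
        pred = sigmoid(theta[:, word] @ e_c)
        grad = pred - label                 # dLoss/dz for logistic loss
        grad_e += grad * theta[:, word]     # accumulate gradient w.r.t. e_c
        theta[:, word] -= lr * grad * e_c
    E[:, context] -= lr * grad_e            # this update is what "trains" E

# One update for a hypothetical training pair: context word 3, target word 7,
# with words 1 and 5 drawn as negative samples.
negative_sampling_step(context=3, target=7, negatives=[1, 5])
```

Repeating such steps over millions of (context, target) pairs from a large corpus is the expensive training process mentioned above; the resulting E is what pretrained systems like Word2Vec distribute.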

I got it :smile:. Thank you!!!