Extracting Word Embedding Vectors — what do we do?

As was said in the video:

But how do you get the word embeddings out of your trained neural nets? As you may remember, word embeddings are not directly output by the training process; they are a by-product of it. I'll now explain how you can extract word embeddings from a trained neural net.
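A minimal sketch of what that extraction could look like for a CBOW-style network. The names `W1`, `W2`, the toy vocabulary, and the shapes are all illustrative assumptions, not the course's actual code:

```python
import numpy as np

# Hypothetical trained weights of a CBOW-style network:
# W1 has shape (N, V): input-to-hidden weights
# W2 has shape (V, N): hidden-to-output weights
V, N = 5, 3  # toy vocabulary size and embedding dimension
rng = np.random.default_rng(0)
W1 = rng.normal(size=(N, V))
W2 = rng.normal(size=(V, N))

# Option 1: each column of W1 is the embedding of one word
embeddings_w1 = W1.T             # shape (V, N)

# Option 2: each row of W2 is the embedding of one word
embeddings_w2 = W2               # shape (V, N)

# Option 3: average the two representations
embeddings_avg = (W1.T + W2) / 2

# Look up one word's vector (toy word-to-index mapping)
word2index = {"king": 0, "queen": 1, "man": 2, "woman": 3, "apple": 4}
king_vec = embeddings_avg[word2index["king"]]
print(king_vec.shape)  # (3,)
```

The point is that the embeddings live in the weight matrices themselves; which matrix (or combination) you take is a design choice.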

So, do I understand correctly that word embeddings are just the weights of the neural network? I thought they should be something like probabilities of the surrounding words, like what we got in the previous examples, no?

Word embeddings are vectors in a latent (high-dimensional) space: each word gets a position in that space, and training arranges those positions so that similar words end up close together.
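"Close in that space" can be made concrete with cosine similarity. A small sketch with made-up 2-D vectors (the numbers are purely illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors:
    # near 1 when they point the same way, lower otherwise.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 2-D "embeddings", invented for illustration
emb = {
    "cat": np.array([0.9, 0.8]),
    "dog": np.array([0.85, 0.75]),
    "car": np.array([-0.7, 0.6]),
}

print(cosine_similarity(emb["cat"], emb["dog"]))  # high (similar words)
print(cosine_similarity(emb["cat"], emb["car"]))  # lower (dissimilar words)
```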

Honestly, I don't fully understand your answer. How is it connected to my question?

And I would like to ask something more: why do we need a neural network at all? Predicting the center word when we know the sum of the one-hot vectors of the surrounding words looks like an easy task at first glance.
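For reference, the input described in the question (the sum of one-hot vectors of the context words) might be sketched like this, with a hypothetical toy vocabulary:

```python
import numpy as np

vocab = ["i", "like", "green", "tea", "very"]
word2idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word, size):
    # One-hot vector: all zeros except a 1 at the word's index
    v = np.zeros(size)
    v[word2idx[word]] = 1.0
    return v

# Context words around a hypothetical center word "green"
context = ["i", "like", "tea", "very"]
context_sum = sum(one_hot(w, len(vocab)) for w in context)
print(context_sum)  # [1. 1. 0. 1. 1.]
```

This sum is only the input representation; mapping it to a useful prediction of the center word is what the network's weights are trained to do.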