The Probabilistic Language Model has dense layers preceding the softmax layer. Such dense layers are not shown in the picture that Andrew drew for word2vec:
Is that on purpose or there are really dense layers in word2vec as well?
Yes, there is an implied dense layer there, that’s where the theta values come from.
In the Probabilistic Language Model, there is more than one. Why would it not be the case in Word2Vec?
Perhaps there is.
Yes, I’ll ask another mentor to handle your question.
Hey @Meir,
In the Word2Vec paper, they presented two model architectures: the CBOW model and the continuous skip-gram model. Both have a single hidden layer that aims to learn word representations.
In general, you are free to use more hidden layers. As far as I understood, they settled on a single layer for the sake of efficiency.
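To make the "single hidden layer" concrete, here is a minimal NumPy sketch of a skip-gram forward pass, assuming hypothetical sizes (a 10000-word vocabulary, 300-dimensional embeddings) and an output matrix `W_out` playing the role the thread later calls \theta_{t}:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 10000, 300  # hypothetical sizes, as in the lecture
E = rng.standard_normal((embed_dim, vocab_size)) * 0.01      # the single hidden layer
W_out = rng.standard_normal((vocab_size, embed_dim)) * 0.01  # output (softmax-layer) weights

def skip_gram_forward(center_word_id):
    """One forward pass: one-hot input -> single hidden layer -> softmax."""
    e_c = E[:, center_word_id]          # multiplying by a one-hot vector = column lookup, shape (300,)
    logits = W_out @ e_c                # shape (10000,)
    exp = np.exp(logits - logits.max()) # stable softmax
    return exp / exp.sum()              # probabilities over the vocabulary

p = skip_gram_forward(42)
```

Because the input is one-hot, the hidden layer reduces to a column lookup in E, which is why a single layer is enough to learn word representations.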
Just to clarify the video example,
\hat{y} = \textrm{softmax}(\theta_{t} e_{c})
is equal to
\hat{y} = \textrm{softmax}(\theta_{t} E o_{c})
where e_{c} = E o_{c}.
Hi @manifest, this is very helpful. Can you clarify some more?
How does the 300-element feature vector become the 10,000-element vector of probabilities? Are the two trainable weight matrices E different from each other?
I feel that this has something to do with \theta_{t}, but I haven’t quite grasped it.
Hey @CharmingQuark,
You are right. There are actually two models stacked together. I guess in the lecture they just didn’t want to complicate things.
Given a vocabulary of size 10000, m being the batch size and T_{x} the sequence length, we have the following shapes for the input one-hot vector o_{c} and the weights matrix E:
o_{c} \in \mathbb{R}^{m \times T_{x} \times 10000}
E \in \mathbb{R}^{300 \times 10000}
The dot product of these matrices, e_{c} = E o_{c}, will be of the following shape:
e_{c} \in \mathbb{R}^{m \times T_{x} \times 300}
In the lecture, \textrm{softmax} is not just a function but a layer, meaning that it includes a linear (or dense) layer with trainable weights \theta_{t}. It is of the following shape:
\theta_{t} \in \mathbb{R}^{10000 \times 300}
The dot product of these matrices, \hat{y} = \theta_{t} e_{c}, gives us the 10000-element vector of probabilities:
\hat{y} \in \mathbb{R}^{m \times T_{x} \times 10000}
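These shapes are easy to verify numerically. A minimal NumPy sketch, assuming toy values m = 2 and T_{x} = 5 (the matrix products over the batch dimension are written with `@`, which broadcasts over the leading axes):

```python
import numpy as np

m, T_x, vocab, emb = 2, 5, 10000, 300   # toy batch/sequence sizes; vocab and emb as in the post

o_c = np.zeros((m, T_x, vocab))          # one-hot inputs
o_c[..., 0] = 1.0                        # pretend every word is token 0
E = np.random.randn(emb, vocab) * 0.01       # embedding matrix, R^{300 x 10000}
theta_t = np.random.randn(vocab, emb) * 0.01 # softmax-layer weights, R^{10000 x 300}

e_c = o_c @ E.T        # (m, T_x, 300): the 300-element feature vectors
y_hat = e_c @ theta_t.T  # (m, T_x, 10000): logits, before the softmax normalization
```

So \theta_{t} is what turns the 300-element feature vector back into a 10000-element vector, and it is a different trainable matrix from E.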
So, there are actually two models stacked together:
e_{c} = E o_{c} + b_{1}, which is basically the model we call word2vec.
\hat{y} = \textrm{softmax}(\theta_{t} e_{c} + b_{2}), a language model that we use to train the weights of the word2vec model.
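The two stacked models can be sketched end to end for a single word. This is a minimal NumPy illustration under the same assumed sizes (10000-word vocabulary, 300-dimensional embeddings); the bias vectors b_{1} and b_{2} are initialized to zero here just to keep the example short:

```python
import numpy as np

vocab, emb = 10000, 300
rng = np.random.default_rng(1)

# Model 1 (word2vec): e_c = E o_c + b_1
E = rng.standard_normal((emb, vocab)) * 0.01
b1 = np.zeros(emb)

# Model 2 (language model): y_hat = softmax(theta_t e_c + b_2)
theta_t = rng.standard_normal((vocab, emb)) * 0.01
b2 = np.zeros(vocab)

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

o_c = np.zeros(vocab)
o_c[7] = 1.0                          # one-hot vector for an arbitrary word

e_c = E @ o_c + b1                    # word2vec embedding, shape (300,)
y_hat = softmax(theta_t @ e_c + b2)   # probabilities over the vocabulary, shape (10000,)
```

Training the language model on context/target pairs updates E by backpropagation; after training, the language-model head is discarded and E alone is kept as the word embeddings.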