
In the above image, can I ask what Θt is? Is it the embedding vector (300x1) of the target word?
For example, for the context word "orange" the target word is "juice", so is Θt the embedding vector of "juice"?
Θt is not an embedding; it is part of the learned weights of the output layer. In this case, the output layer is
Dense(10000, activation='softmax')
so ec is the previous layer's output, i.e. the embedding vector of the context word, and Θt is the column of the Dense layer's weight matrix that corresponds to target word t. The softmax then scores each candidate target as p(t|c) = exp(Θtᵀ ec) / Σj exp(Θjᵀ ec).
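
For concreteness, here is a minimal Keras sketch, assuming a 10,000-word vocabulary, 300-dimensional embeddings, and a made-up target index (these numbers just mirror the example above). It shows where ec and Θt live in the layers:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    vocab_size = 10_000   # vocabulary size
    embed_dim = 300       # embedding dimension (300x1 vectors)

    model = keras.Sequential([
        keras.Input(shape=(1,)),                  # index of the context word
        layers.Embedding(vocab_size, embed_dim),  # looks up e_c, the context embedding
        layers.Flatten(),                         # (1, 300) -> (300,)
        layers.Dense(vocab_size, activation='softmax'),  # its kernel holds one theta_t per target word
    ])

    # theta_t for a target word t is column t of the Dense layer's kernel.
    dense_kernel, dense_bias = model.layers[-1].get_weights()  # kernel shape: (300, 10000)
    t = 42                        # hypothetical index of a target word, e.g. "juice"
    theta_t = dense_kernel[:, t]  # shape (300,), learned during training
    print(theta_t.shape)

So the Embedding layer's rows are the word embeddings, while Θt comes from the Dense layer's kernel; they are two different sets of parameters, both learned during training.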