In Course 5, Week 2 of “Sequence Models”, in the videos ‘Word2Vec’ and ‘Negative Sampling’, Andrew appears to use the terms (context) and (target) as follows:
(context) word = “from/input” word
(target) word = “to/output” word
i.e., in the videos the two terms appear to be defined by their roles as the input and output of the neural network.
In contrast, the original Word2Vec paper (Mikolov et al., 2013, “Efficient estimation of word representations in vector space”) and every other paper and blog post I have read on Word2Vec seem to use a different definition of (context) and (target):
(target) word = a single central focus word
(context) words = the collection of words (typically more than one) surrounding the (target) word, which provide the ‘context’ in which the (target) word sits.
i.e., the two terms are defined relative to a single focus word sitting within a larger collection of surrounding words.
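To make the contrast concrete, here is a minimal Python sketch of skip-gram training-pair generation (my own illustration, not code from the course or the paper; the sentence, window size, and dictionary labels are just for demonstration). The same (center word, nearby word) pairs get opposite labels under the two conventions:

```python
# Generate skip-gram training pairs from a toy sentence and label each pair
# under both naming conventions. Illustrative only.
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2  # hypothetical window size

for i, center in enumerate(sentence):
    # All words within `window` positions of the center word.
    surrounding = [
        sentence[j]
        for j in range(max(0, i - window), min(len(sentence), i + window + 1))
        if j != i
    ]
    for nearby in surrounding:
        # Video convention: the input word fed to the network is the
        # "context"; the word it must predict is the "target".
        video_pair = {"context": center, "target": nearby}
        # Paper convention (Mikolov et al.): the single center word is the
        # "target"; the surrounding words form its "context".
        paper_pair = {"target": center, "context": nearby}
        print(video_pair, paper_pair)
```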
Q: Can anyone comment on whether my assessment is correct that the videos use a different definition of (context) and (target) than the one used elsewhere?