Word2Vec: Confusion over the specific definition of 'context' and 'target' words

In Course 5 Week 2 of “Sequence Models” in the videos ‘Word2Vec’ and ‘Negative Sampling’, Andrew seems to refer to the words (context) and (target) in the following manner:

(context) word = “from/input” word
(target) word = “to/output” word

i.e., in the videos the two terms appear to be defined by the word's role as the input or output of the neural network.

In contrast, the original Word2Vec paper by Mikolov et al. (2013, “Efficient Estimation of Word Representations in Vector Space”), and every other paper and blog post I have read on Word2Vec, seem to use a different definition of the words (context) and (target):

(target) word = a single central focus word
(context) words = the words (typically more than one) surrounding the (target) word, which provide the ‘context’ in which the (target) word sits.

i.e., the two terms are defined relative to a single focus word sitting within a larger collection of surrounding words.
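To make the contrast concrete, here is a minimal sketch (my own toy example, not code from the course or the paper) of skip-gram training-pair generation, with comments noting how each convention labels the two words:

```python
# Toy skip-gram pair generation over a short sentence.
sentence = "the quick brown fox jumps".split()
window = 2  # number of words taken on each side of the center word

pairs = []
for i, center in enumerate(sentence):
    # Mikolov-style naming: `center` is the (target) word, and the
    # surrounding words within the window are its (context) words.
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            # In the videos, the naming is the other way around: the word
            # fed into the network is called the (context) word, and the
            # word it predicts is called the (target) word.
            pairs.append((center, sentence[j]))

print(pairs[:4])
```

The (word, word) pairs that reach the network are the same either way; only the labels attached to each side of the pair differ between the two conventions.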

Q: Can anyone comment on whether my assessment is correct, that the videos use a different definition of the terms (context) and (target) than that used elsewhere?


Yes, I think you are correct.

Thanks for your reply TMosh.

FYI, I found the following two blog posts by Eric Kim to be incredibly helpful in developing a better understanding of Word2Vec and Negative Sampling:

Just have to keep in mind that they use the terms (context) and (target) differently from how the videos here use them.