Confusing questions about positional encoding in the quiz

I think I understand positional encoding correctly, but these quiz questions seem ambiguous to me.

  1. What does ‘locate’ mean here? I think positional encoding is a unique label for a word’s position, and it helps the model to ‘locate’ the word in the sentence. What’s wrong with this choice?
  2. What does ‘common encoding’ in the fourth option mean? And for the third option, is it correct in the sense that the distance between p_i and p_j (with i, j fixed) is also fixed regardless of sentence length?

The question asks whether positional encoding can “locate every word in the sentence”. This mostly means: given a positional embedding, can you find (“locate”) the position of the word in the input? That is not true. It is a representation that lets the model take nearby words into account for context, but it cannot be used to locate a word.

For question 2, the third option is correct in the sense you describe. But the question asks which is *not* a good criterion. If the distance between positions is consistent across time steps, that is a good criterion for a decent encoding algorithm, as the sketch below illustrates.
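To make the distance property concrete, here is a minimal sketch, assuming the quiz refers to the sinusoidal encoding from “Attention Is All You Need” (the function name and dimensions below are illustrative). The encoding of position i does not depend on the sentence length, so the distance between p_i and p_j is the same whether the sentence has 20 tokens or 200:

```python
import numpy as np

def sinusoidal_encoding(num_positions, d_model=16):
    """Sinusoidal positional encoding from 'Attention Is All You Need'."""
    pe = np.zeros((num_positions, d_model))
    pos = np.arange(num_positions)[:, None]              # shape (num_positions, 1)
    # Frequencies 10000^(-2i/d_model) for each pair of dimensions
    div = np.exp(-np.log(10000.0) * np.arange(0, d_model, 2) / d_model)
    pe[:, 0::2] = np.sin(pos * div)                      # even dimensions
    pe[:, 1::2] = np.cos(pos * div)                      # odd dimensions
    return pe

pe_short = sinusoidal_encoding(20)    # a 20-token sentence
pe_long = sinusoidal_encoding(200)    # a 200-token sentence

i, j = 3, 8
# p_i is computed from the position alone, not from the sentence length,
# so ||p_i - p_j|| prints the same value for both sentences.
print(np.linalg.norm(pe_short[i] - pe_short[j]))
print(np.linalg.norm(pe_long[i] - pe_long[j]))
```

In fact, for this encoding the distance depends only on the offset j - i, which is exactly the kind of consistency the third option describes.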