Positional Encoding formula in Transformer

Here is the screenshot of the positional encoding formulas in the Transformer. There are two: one called PE(pos, 2i) and another called PE(pos, 2i+1). Per the lecture, PE(pos, 2i) is applied at the even dimensions of the positional vector and PE(pos, 2i+1) at the odd dimensions. But i also denotes a dimension index, so is the i in the PE formulas the same i as the dimension? I find it hard to map the i in the formulas (i.e. 2i / 2i+1) to the dimension index i.

Please open the Week 4 assignment and look at the first exercise. The markdown there shows all the details you're looking for. You'll even implement the positional_encodings function yourself, which should make things clear.
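In the meantime, here is a minimal NumPy sketch of the standard sinusoidal encoding from "Attention Is All You Need" (the function name and signature here are illustrative, not necessarily what the assignment uses; it assumes an even d_model). The key point for your question: i in the formula indexes a *pair* of dimensions, so the actual dimension index is 2i (even, sine) or 2i+1 (odd, cosine), not i itself.

```python
import numpy as np

def positional_encoding(max_pos, d_model):
    """Sinusoidal positional encoding (assumes even d_model).

    For position pos and pair index i (0 <= i < d_model/2):
      PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))   # even dimension
      PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))   # odd dimension
    Note the dimension index is 2i or 2i+1 -- i alone only counts the pairs.
    """
    pe = np.zeros((max_pos, d_model))
    pos = np.arange(max_pos)[:, np.newaxis]        # shape (max_pos, 1)
    i = np.arange(d_model // 2)[np.newaxis, :]     # pair index, shape (1, d_model/2)
    angle = pos / np.power(10000, (2 * i) / d_model)
    pe[:, 0::2] = np.sin(angle)   # dimensions 0, 2, 4, ... get sine
    pe[:, 1::2] = np.cos(angle)   # dimensions 1, 3, 5, ... get cosine
    return pe

pe = positional_encoding(50, 16)
print(pe.shape)   # (50, 16)
```

So for d_model = 16, the pair index i runs 0..7, while the dimension index runs 0..15; dimensions 2i and 2i+1 share the same frequency 1/10000^(2i/d_model).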