Here is a screenshot of the positional-encoding formulas from the Transformer lecture: one is PE(pos, 2i) and the other is PE(pos, 2i+1). Per the lecture, PE(pos, 2i) is applied at the even dimensions of the positional vector and PE(pos, 2i+1) at the odd dimensions. But i also seems to denote a dimension index. Is the i in the PE formulas the same i as the dimension index? I find it hard to map the i in the formulas (i.e. 2i / 2i+1) to the dimension index.

See the `positional_encodings` function, which should make things clear.
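To make the mapping concrete, here is a minimal sketch of such a function (the name `positional_encodings` and an even `d_model` are assumptions for illustration). The key point: the i in the formulas is not the dimension index itself. A dimension index d of the vector maps to i = d // 2, so each value of i covers one sin/cos pair: even dimension 2i gets the sine, odd dimension 2i+1 gets the cosine, and both share the same frequency 1 / 10000**(2i / d_model).

```python
import numpy as np

def positional_encodings(max_len, d_model):
    """Sinusoidal positional encodings (sketch; assumes d_model is even).

    For dimension index d of the d_model-sized vector, i = d // 2:
      PE(pos, 2i)   = sin(pos / 10000**(2i / d_model))  -> even dimensions
      PE(pos, 2i+1) = cos(pos / 10000**(2i / d_model))  -> odd dimensions
    """
    pe = np.zeros((max_len, d_model))
    pos = np.arange(max_len)[:, None]          # positions, shape (max_len, 1)
    i = np.arange(d_model // 2)[None, :]       # the formula's i, shape (1, d_model//2)
    angle = pos / 10000 ** (2 * i / d_model)   # one frequency per sin/cos pair
    pe[:, 0::2] = np.sin(angle)                # even dims: indices 2i
    pe[:, 1::2] = np.cos(angle)                # odd dims:  indices 2i+1
    return pe
```

For example, with `d_model = 8`, dimension 5 is odd, so i = 5 // 2 = 2 and the value there is cos(pos / 10000**(4/8)); dimension 4 is even with the same i = 2 and gets the matching sine.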