Embedding Layer Transfer Learning

When we copied the weights from the GLOVE_EMBEDDING dictionary into the EMBEDDING_MATRIX, we added one extra row at the top of the matrix (index 0).

In the assignment description, they said it's because the word_index starts at 1 (0 is reserved for padding).

But I didn't quite understand why that is, why we add 1 to the vocab_size, and what the relation is between the word_index starting at 1 and the vocab size.

Can someone please explain this in more detail?

Please see this link.
Pay attention to this text: "0 is a reserved index that won't be assigned to any word."
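To make this concrete, here is a minimal sketch (using a toy 3-word vocabulary and made-up 2-dimensional vectors, not the real GloVe data) of why the matrix needs vocab_size + 1 rows: the Keras-style word_index assigns indices 1..vocab_size, so the row indices must run 0..vocab_size, and row 0 is left as zeros because no word ever maps to it — it only ever absorbs padding tokens.

```python
import numpy as np

# Hypothetical toy "GloVe" vectors (2-dimensional, for illustration only).
glove_embedding = {
    "the": np.array([0.1, 0.2]),
    "cat": np.array([0.3, 0.4]),
    "sat": np.array([0.5, 0.6]),
}

# A Keras-style word_index: indices start at 1; 0 is reserved for padding.
word_index = {"the": 1, "cat": 2, "sat": 3}

vocab_size = len(word_index)   # 3 real words
embedding_dim = 2

# vocab_size + 1 rows, so valid row indices are 0..vocab_size.
# Row 0 stays all zeros: it is never assigned to any word.
embedding_matrix = np.zeros((vocab_size + 1, embedding_dim))
for word, idx in word_index.items():
    vector = glove_embedding.get(word)
    if vector is not None:        # words missing from GloVe stay zero
        embedding_matrix[idx] = vector

print(embedding_matrix.shape)   # (4, 2): one extra row for index 0
print(embedding_matrix[0])      # [0. 0.]: the reserved padding row
```

If you sized the matrix as plain (vocab_size, embedding_dim), the word with index vocab_size would fall outside the matrix, which is exactly why the "+ 1" is needed.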