Attention mask and pad token id warning during generation

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input’s attention_mask to obtain reliable results. Setting pad_token_id to eos_token_id:0 for open-end generation.

Whenever I run inference, I get the warning above. Can anyone explain what it means and how to implement the suggested fixes?
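
From reading the message, my guess is that the fix involves passing the tokenizer's attention_mask into generate and setting pad_token_id explicitly. Below is a minimal sketch of what I think that would look like; note that gpt2, the prompt, and max_new_tokens are placeholders for illustration, not my actual setup. Is this the right approach?

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# "gpt2" is a stand-in for my actual model
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# GPT-2-style models have no pad token by default; reuse the EOS token,
# which seems to be what the warning does automatically
tokenizer.pad_token = tokenizer.eos_token

# return_tensors="pt" gives both input_ids and attention_mask
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# Pass attention_mask and pad_token_id explicitly instead of relying
# on the defaults that trigger the warning
output_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=20,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In particular, I'm not sure whether reusing eos_token_id as the pad_token_id (as the warning suggests it does) is safe, especially for batched inputs of different lengths.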