How to resolve the padding-token error when using the Llama-2 model with HuggingFaceEmbeddings in LangChain?

I am trying to use the Llama-2 model as an embedding model in LangChain, and have loaded it through the HuggingFaceEmbeddings API.
However, the tokenizer raises the error "Asking to pad but the tokenizer does not have a padding token." Here is the code I have run:

hf = HuggingFaceEmbeddings(
    model_name="meta-llama/Llama-2-7b-hf",
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": False},
)
vectordb = Chroma.from_documents(
    documents=splits,
    embedding=hf,
    persist_directory=persist_directory,
)

ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as pad_token (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via tokenizer.add_special_tokens({'pad_token': '[PAD]'}).
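For context, the fix the traceback itself suggests is to assign the model's EOS token as the pad token, since Llama-2 ships without one. Below is a minimal offline sketch of that pattern; `FakeTokenizer` is a stand-in I made up so the snippet runs without downloading the gated model, but with the real `transformers` library the same assignment applies to the object returned by `AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")`.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a transformers tokenizer, used only to
# illustrate the attribute assignment the error message recommends.
@dataclass
class FakeTokenizer:
    eos_token: str = "</s>"          # Llama-2's end-of-sequence token
    pad_token: Optional[str] = None  # Llama-2 ships without a pad token

tokenizer = FakeTokenizer()
# The one-line fix from the error message: reuse EOS as the pad token.
tokenizer.pad_token = tokenizer.eos_token
print(tokenizer.pad_token)  # </s>
```

The open question is how to get this assignment applied to the tokenizer that `HuggingFaceEmbeddings` constructs internally, since the wrapper does not expose the tokenizer directly in its constructor.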