Using local Llama2 to chat with data

I am attempting to recreate some of the chat-with-your-data examples using a locally running Llama2 model. For the most part it works well. However, the OpenAI model used in the course was better at indicating when information was not contained in the data source; the Llama model seems to make things up instead. I have set temperature=0.0, but that doesn't prevent it. Do I need some sort of score filtering on my similarity search, perhaps?
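One approach along the lines you suggest: filter the retrieved chunks by similarity score, and if nothing clears the threshold, return a canned "not in the data source" answer instead of handing weak context to the model. This is only a sketch of the idea in plain Python — `embed_docs`, the example vectors, and the `0.75` threshold are stand-ins you would replace with your real embedding model and a value tuned on your own data:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, threshold=0.75):
    """Return (score, text) pairs whose similarity clears the threshold,
    best first. docs is a list of (text, embedding_vector) pairs."""
    scored = [(cosine(query_vec, vec), text) for text, vec in docs]
    return [(s, t) for s, t in sorted(scored, reverse=True) if s >= threshold]

def answer(query_vec, docs):
    hits = retrieve(query_vec, docs)
    if not hits:
        # Refuse up front instead of letting the model improvise from
        # irrelevant context -- this is where the hallucination guard lives.
        return "I could not find that in the data source."
    context = "\n".join(t for _, t in hits)
    # In the real pipeline this context would go into the Llama2 prompt.
    return f"(context passed to Llama2)\n{context}"
```

Even with this guard, it also helps to tell the model explicitly in the prompt to answer only from the provided context and to say "I don't know" otherwise; temperature=0.0 alone won't stop it from filling gaps when the retrieved chunks are only loosely related.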