How to deal with Hallucination in L5 Process Inputs: Chaining Prompts

When I went off script and asked for a product not in the inventory, ChatGPT came back with a very credible answer. So convincing you may want to click “Buy”.

Q: What is the best practice to keep the LLM from going out of bounds (e.g. misquoting prices)?

[Attached screenshot: LLM_L5_iPhoneSE]

ChatGPT is prone to inventing answers, since it is just a language model and not a truth-telling machine.

There is a lot of work happening in the industry right now to try to address this issue.

I think the LLM will always try to give an answer, whatever the question, unless it is instructed not to answer when the response doesn’t meet certain criteria. The prompt should be modified to make sure it doesn’t answer about products that are not in the allowed product list, for example as sketched below.
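Here is a minimal sketch of that idea, in the spirit of the chaining-prompts lesson: put the allowed product list (with prices) into the system message and tell the model to refuse anything else, then add a cheap second step that checks the reply before showing it to the user. The product names, prices, model name, and helper function are all hypothetical, and the client usage assumes the current openai Python package; adapt them to your own setup.

```python
# Sketch: constrain answers to an allowed product list and verify
# quoted prices before returning the reply. Products/prices are made up.
import json
import re
from openai import OpenAI  # assumes the openai 1.x Python client

client = OpenAI()

ALLOWED_PRODUCTS = {
    "SmartX ProPhone": 899.99,
    "FotoSnap DSLR Camera": 599.99,
}

SYSTEM_PROMPT = f"""
You are a customer service assistant for an electronics store.
Only answer questions about the products listed below, using exactly
the prices given. If the user asks about any other product (for
example an iPhone SE), say it is not in our catalog and do NOT invent
a price.

Available products (JSON):
{json.dumps(ALLOWED_PRODUCTS, indent=2)}
"""

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0,  # lower temperature reduces creative guessing
    )
    answer = response.choices[0].message.content

    # Second step in the chain: sanity-check any quoted price against
    # the catalog. If the model invented a number, return a safe
    # fallback instead of the hallucinated answer.
    quoted = {float(p) for p in re.findall(r"\$(\d+(?:\.\d{1,2})?)", answer)}
    if not quoted.issubset(set(ALLOWED_PRODUCTS.values())):
        return "Sorry, I can only quote prices for products in our catalog."
    return answer

if __name__ == "__main__":
    print(ask("How much is the iPhone SE?"))
```

The key design choice is not to trust a single prompt: the instruction in the system message makes hallucinated prices less likely, and the post-check catches the cases where the model goes off script anyway.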