ChatGPT model doesn't follow instructions at all times

What is the best way to ensure that the instructions given in the prompt are always followed and that the model doesn't hallucinate? (Answers should always come from the given context.)
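To make the constraint concrete, here is a minimal sketch of the kind of context-restricted prompt I mean, assuming the OpenAI Python SDK (v1); the model name and the placeholder text are illustrative only:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

context = "...the reference text the answer must come from..."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY using the context below. "
                "If the answer is not in the context, reply exactly: I don't know.\n\n"
                f"Context:\n{context}"
            ),
        },
        {"role": "user", "content": "...the user's question..."},
    ],
)
print(response.choices[0].message.content)
```

Even with a system message like this, the model sometimes ignores the restriction, which is what prompted the question.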

I don’t think that’s possible with current chat technology. ChatGPT is just a language model - it’s not a truth machine.


Treat it like a naughty kid. Discipline (a follow-up prompt asking it to check its own answer) is necessary.
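A rough sketch of that "prompt to check" idea, assuming the OpenAI Python SDK (v1); the checker wording is just an illustration, not a guaranteed fix:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def check_answer(context: str, answer: str) -> str:
    """Second pass: ask the model to verify the first answer against the context."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a strict checker. Reply PASS if every claim in the "
                    "answer is supported by the context, otherwise reply FAIL."
                ),
            },
            {"role": "user", "content": f"Context:\n{context}\n\nAnswer:\n{answer}"},
        ],
    )
    return response.choices[0].message.content

# If the check returns FAIL, regenerate the answer or fall back to "I don't know".
```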


I have also noticed that ChatGPT is not idempotent in some cases: the same prompt can produce different results. I rewrote my commands to be more specific for those cases, but it still doesn't look stable, and behaviour could change after an update on OpenAI's side.
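If you are calling the API rather than the ChatGPT UI, you can at least reduce the variation; a rough sketch, assuming the OpenAI Python SDK (v1) and the optional seed parameter (documented as best-effort, not a guarantee):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # temperature=0 removes sampling randomness; seed asks the API to make
    # repeated calls reproducible where the backend supports it.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        temperature=0,
        seed=42,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarize the context in one sentence."))
print(ask("Summarize the context in one sentence."))  # usually, but not always, identical
```

Model updates on OpenAI's side can still change the output between calls, which matches what you are seeing.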