How to stop the LLM from losing context in a multi-turn game conversation?

I’m developing an AI application for conversational brain-teaser games and have run into a challenge with maintaining context.

Here’s the setup:

  • The AI hosts a series of brain-teaser games. For each new conversation, the model receives a system prompt containing all the background information and an example dialogue.
  • The user’s goal is to solve the game through a multi-turn conversation with the AI.

The main problem is that as the conversation gets longer, the model tends to drift from the established game context. This happens most often when the user’s responses don’t align with the provided example dialogue. The model then starts to introduce inconsistent information, which breaks the game logic and prevents the user from solving the puzzle.

These games are open-ended by design: the goal is to reward creative thinking, not to find a single correct answer. I’ve instructed the model to adapt to the user’s input while staying logically consistent with the game’s context, but it still struggles.

Does anyone have strategies or techniques to help an LLM reliably “remember” and adhere to a fixed context throughout a conversation, even when faced with unexpected user input?

Any advice would be greatly appreciated.

Many thanks!

Here are some approaches you can take:

  • Include a “context summary” with core rules/facts in every prompt
  • Use structured state tracking (JSON format works well; see the first sketch after this list)
  • Add explicit consistency checking instructions
  • Regularly reinforce the AI’s role and constraints
  • Handle unexpected input by acknowledging creativity while redirecting to established facts
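
To make the first two bullets concrete, here is a minimal Python sketch of the “rebuild the prompt every turn” pattern, assuming an OpenAI-style chat messages format. The field names (`established_facts`, `facts_revealed_to_user`, etc.) and the example puzzle are illustrative placeholders, not anything from your application:

```python
import json

# Hypothetical game state -- field names and puzzle are illustrative only.
game_state = {
    "puzzle": "The man in the elevator",
    "established_facts": [
        "The man lives on the 10th floor.",
        "He takes the elevator down every morning.",
    ],
    "facts_revealed_to_user": [],
    "solved": False,
}

SYSTEM_PROMPT = "You are the host of a lateral-thinking puzzle game..."  # your existing prompt

def build_messages(history, user_message):
    """Rebuild the full message list every turn, re-injecting the core rules
    and a compact JSON snapshot of the game state so the model never has to
    'remember' them from earlier turns."""
    context_summary = (
        "CURRENT GAME STATE (authoritative, do not contradict):\n"
        + json.dumps(game_state, indent=2)
        + "\nRules: stay consistent with established_facts; if the user's idea "
        "conflicts with them, acknowledge the creativity, then steer back."
    )
    return (
        [{"role": "system", "content": SYSTEM_PROMPT},
         {"role": "system", "content": context_summary}]
        + history
        + [{"role": "user", "content": user_message}]
    )
```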

The main approach is layering multiple techniques rather than relying on one method.
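
Building on the sketch above, a turn handler can layer the remaining points: an explicit consistency instruction appended to every user turn, a trimmed visible history, and redirection of off-script input driven by the rules re-injected in the context summary. `call_llm` is a placeholder for whichever client you already use, and the 20-message window is an arbitrary choice to tune for your model:

```python
def call_llm(messages):
    """Placeholder -- swap in whatever client you already use
    (e.g. an OpenAI-style chat completion call) and return the reply text."""
    raise NotImplementedError

CONSISTENCY_CHECK = (
    "Before answering, check your draft against established_facts. "
    "Any new detail you introduce must not contradict them."
)

def play_turn(history, user_message):
    messages = build_messages(history, user_message)
    # Layer the explicit consistency instruction onto every user turn.
    messages[-1]["content"] += "\n\n" + CONSISTENCY_CHECK
    reply = call_llm(messages)

    # Keep the visible transcript short; the authoritative context lives in
    # game_state, which build_messages re-injects on every turn.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    if len(history) > 20:          # arbitrary window -- tune for your model
        history[:] = history[-20:]
    return reply
```

The key design choice is that long-term memory lives in `game_state` (which you update from each reply, or via a separate extraction step), not in the raw transcript, so conversational drift cannot erase the core facts.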


Thank you for the detailed explanation and advice.