I’m developing an AI application for conversational brain-teaser games and have run into a challenge with maintaining context.
Here’s the setup:
- The AI hosts a series of brain-teaser games. For each new conversation, the model receives a system prompt containing all the background information and an example dialogue (see the sketch after this list).
- The user’s goal is to solve the game through a multi-turn conversation with the AI.
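For concreteness, here's a minimal sketch of how each conversation is assembled. This is illustrative only: I'm using an OpenAI-style chat API as an example, and names like `client`, `GAME_CONTEXT`, and the model string are placeholders, not my actual code.

```python
from openai import OpenAI

client = OpenAI()

# Fixed per-game background, sent as the system prompt on every turn.
GAME_CONTEXT = """You are the host of a brain-teaser game.
Background: <full game background goes here>.
Stay logically consistent with this background at all times,
even when the player's guesses diverge from the example dialogue."""

# Example dialogue included after the system prompt (placeholder content).
EXAMPLE_DIALOGUE = [
    {"role": "user", "content": "Is the answer related to water?"},
    {"role": "assistant", "content": "Good thinking, yes: water plays a role."},
]

def ask(history: list[dict], user_turn: str) -> str:
    """Send the fixed context plus the running history on every turn."""
    messages = (
        [{"role": "system", "content": GAME_CONTEXT}]
        + EXAMPLE_DIALOGUE
        + history
        + [{"role": "user", "content": user_turn}]
    )
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```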
The main problem is that as the conversation grows longer, the model tends to drift from the established game context. This happens most often when the user's responses don't align with the provided example dialogue. The model then starts introducing details that contradict the established game facts, which breaks the game logic and prevents the user from solving the puzzle.
These games are open-ended by design: the goal is to reward creative thinking, not to find a single correct answer. I've instructed the model to adapt to the user's input while staying logically consistent with the game's context, but it still struggles.
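For reference, the consistency instruction I've added to the system prompt looks roughly like this (paraphrased, not my exact wording):

```python
# Paraphrase of the consistency rule appended to the system prompt
# (illustrative wording only).
CONSISTENCY_RULE = (
    "Adapt to whatever the player says, but never contradict the game "
    "background above. If a player's guess conflicts with the background, "
    "respond creatively while keeping all established facts intact."
)
```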
Does anyone have strategies or techniques to help an LLM reliably “remember” and adhere to a fixed context throughout a conversation, even when faced with unexpected user input?
Any advice would be greatly appreciated.
Many thanks!