LangChain RetrievalQA Chain: Challenges with Context Adaptability

Hi LangChain Community,

I’ve been experimenting with a dialog system built on the RetrievalQA chain, as described in this module, passing the chat history in as part of the input. It performs well when follow-up queries are closely related to the initial query. However, when there’s a significant change in context, such as a query like “hello” after a detailed question, the system struggles. My understanding is that because the accumulated history is part of the input, the chain keeps retrieving vectors similar to the initial query, and subsequent queries effectively search within that same neighborhood. This behavior seems to limit the system’s ability to adapt to changing contexts.
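To make the suspected failure mode concrete, here’s a toy sketch in plain Python (this is not LangChain’s internals, just my mental model): when the chat history is concatenated into the retrieval query, the detailed first question dominates the similarity score, so an unrelated follow-up like “hello” still pulls back the old context.

```python
# Toy illustration of the suspected behavior: history text concatenated into
# the retrieval query dominates similarity, so "hello" retrieves the wrong doc.

def score(query: str, doc: str) -> float:
    """Crude similarity: fraction of query words that appear in the doc."""
    q = query.lower().split()
    d = set(doc.lower().split())
    return sum(w in d for w in q) / len(q)

docs = [
    "transformers use self attention to process token sequences",
    "hello greetings and small talk handling for chatbots",
]

history = "how does self attention work in transformers"
follow_up = "hello"

# Retrieval with history prepended: the detailed first question dominates.
combined = history + " " + follow_up
best_with_history = max(docs, key=lambda d: score(combined, d))

# Retrieval on the follow-up alone matches the greeting document instead.
best_alone = max(docs, key=lambda d: score(follow_up, d))

print(best_with_history)  # the transformers doc wins despite "hello"
print(best_alone)         # the small-talk doc wins
```

This mirrors what I observe: only after the history is cleared does the greeting-style query retrieve relevant material.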

Expected Behavior: The system should be able to handle diverse subsequent queries without being overly constrained by the initial query’s context.
Actual Behavior: The system performs well for closely related subsequent queries but struggles when there’s a significant change in context, unless the chat history is cleared.
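One workaround I’ve been considering, sketched below with a hypothetical helper (this is my own heuristic, not a LangChain API): only fold the chat history into the retrieval query when the follow-up actually overlaps with it, and otherwise retrieve on the follow-up alone.

```python
# Rough sketch (hypothetical helper, not a LangChain API): drop the history
# from the retrieval query when the follow-up looks like a context shift.

def build_retrieval_query(history: list[str], follow_up: str,
                          threshold: float = 0.2) -> str:
    """Keep the history in the query only if the follow-up relates to it."""
    past = " ".join(history).lower().split()
    new = follow_up.lower().split()
    overlap = sum(w in past for w in new) / max(len(new), 1)
    if overlap >= threshold:
        return " ".join(history + [follow_up])  # related: keep the context
    return follow_up  # context shift: search fresh

history = ["how does self attention work in transformers"]
print(build_retrieval_query(history, "what about attention masks"))  # keeps history
print(build_retrieval_query(history, "hello"))                       # fresh query
```

Word overlap is obviously crude; with real embeddings you’d compare the follow-up’s vector against the history instead. I’ve also seen ConversationalRetrievalChain mentioned, which I believe rewrites the history plus follow-up into a standalone question before retrieval, but I’m not sure whether that fully addresses this case.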

Has anyone else encountered this issue or found a workaround? I’d appreciate any insights or suggestions. Thanks in advance!