Retrieval with memory still doesn't work well

Thank you for the new course on “chatting with your documents”! I was struggling to add memory to RetrievalQA after the first LangChain course, and I’m finding that memory still doesn’t seem to work well. I definitely see this on my own docs, but it happens in the embedded notebook, too. This conversation doesn’t really demonstrate memory:

It also seems to lose other basic capabilities. If I ask it to remember my name, the question gets reformulated so that it thinks I’m asking for the chatbot’s name. (Also, I had to refresh the page to try this, because it got stuck answering about prerequisites even after I gave it my name.)

Any ideas from anybody on how to understand this behavior and elicit better answers?

Just some further info: I’m trying to build a chatbot that can converse about the design principles we give to developers at my workplace. I asked it how many principles there are (there are 5), and it gave me 3. This makes sense, because k=3 on document retrieval. But when I asked it to “explain more about the second one” (testing its memory), it failed and gave me information about a principle that wasn’t even among the initial 3 it listed.
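
For reference, raising k on the retriever at least gets all 5 principles into the retrieved context. A minimal sketch, assuming `vectordb` is whatever vector store you built from your docs (the name is illustrative):

```python
# vectordb is assumed to be an existing vector store (e.g. Chroma)
# built from the documents. The notebook's default of k=3 is why
# only 3 principles came back; k=5 returns more chunks per query.
retriever = vectordb.as_retriever(search_kwargs={"k": 5})
```

That fixes the count question, but not the memory problem with follow-ups.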

Hi!

I’m working on a similar thing. I’ve got the basic RetrievalQA chain working, but without memory, and of course that’s where the juice is…

In case you haven’t seen it yet, check out the docs on QA in general. They provide some examples of doing retrieval with memory.

What’s frustrating here is that, at first glance, the JS and Python APIs are different. For example, JS has ConversationalRetrievalQAChain, which includes memory, but I’m not finding quite the same thing for Python. The Python docs reference ConversationalRetrievalChain (no “QA” in the name), which does seem to have a memory parameter, so maybe that can help?
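
Here’s a minimal sketch of the Python version as I understand it, assuming you already have a `vectordb` vector store built from your documents (that name, the k=5, and the model choice are just illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Memory object that accumulates the chat history across calls.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)

# vectordb is assumed to be an existing vector store (e.g. Chroma).
qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectordb.as_retriever(search_kwargs={"k": 5}),
    memory=memory,
)

# Follow-up questions can now refer back to earlier turns.
result = qa({"question": "How many design principles are there?"})
print(result["answer"])

result = qa({"question": "Explain more about the second one."})
print(result["answer"])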

Don’t know if anyone is reading these, but this is still basically not working. Any tips?

Hello, did you ever get this working? I have the same problem.