Expanding on appending to the context in the chatbot example: what are efficient ways to implement memory so the LLM can remember interactions over time?
Is this where a vector database could be used?
Thanks!
Hi @Buddhima, yes, a vector database is a good fit here. There are many available; a search for 'embeddings database' will turn up the common options.
Alternatively, for small projects, you can store the embeddings in local files and treat those files as a simple embedding database.
Embeddings can work as memory for certain LLM tasks; whether that is enough will depend very much on your use case.
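To make the local-file idea concrete, here is a minimal sketch of that kind of memory: store each past interaction with its embedding in a JSON file, then retrieve the most similar entries by cosine similarity and prepend them to the prompt. The file name `chat_memory.json` and the `embed()` stand-in are assumptions for illustration; in practice you would replace `embed()` with a real embedding model (an embeddings API or a sentence-transformer).

```python
import hashlib
import json
from pathlib import Path

import numpy as np

MEMORY_FILE = Path("chat_memory.json")  # hypothetical local store


def embed(text: str) -> list[float]:
    """Stand-in embedding: a deterministic pseudo-random vector per text.
    Replace with a real embedding model for meaningful similarity."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(128).tolist()


def remember(text: str) -> None:
    """Append a (text, embedding) record to the local memory file."""
    records = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    records.append({"text": text, "embedding": embed(text)})
    MEMORY_FILE.write_text(json.dumps(records))


def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored texts most similar to the query (cosine similarity)."""
    if not MEMORY_FILE.exists():
        return []
    records = json.loads(MEMORY_FILE.read_text())
    q = np.array(embed(query))

    def score(record: dict) -> float:
        v = np.array(record["embedding"])
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))

    return [r["text"] for r in sorted(records, key=score, reverse=True)[:k]]


# Usage: store past turns, then retrieve the most relevant ones
# and prepend them to the context before calling the LLM.
remember("User prefers Python examples.")
print(recall("What language should code samples use?"))
```

Once the memory grows beyond what a single file handles comfortably, the same store/retrieve pattern carries over directly to a dedicated vector database.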