Ways to implement LLM memory

Expanding on appending to the context in the chatbot example: what are some efficient ways to implement memory so the LLM can remember interactions over time?

Is this where a vector database could be used?

Thanks!


Hi @Buddhima, yes, you can use vector databases. There are many available; just search for ‘embeddings database’.

Alternatively, for small projects, you can store embeddings in local files and treat them as an embeddings database (see the sketch below).

Embeddings can work as memory for certain tasks in an LLM application, but it will depend very much on your use case.
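
Here is a minimal sketch of that local-file approach, assuming the OpenAI Python client (`openai>=1.0`) and the `text-embedding-3-small` model; the file name and helper functions are just placeholders you'd adapt. Each interaction is embedded and appended to a JSONL file, and the most relevant past entries are retrieved by cosine similarity and can be prepended to the prompt instead of the full history.

```python
# Minimal file-backed "embeddings database" for conversational memory.
# Assumes the OpenAI Python client (openai>=1.0) and text-embedding-3-small;
# swap in any embedding function you prefer.
import json
from pathlib import Path

import numpy as np
from openai import OpenAI

client = OpenAI()
MEMORY_FILE = Path("memory.jsonl")  # placeholder local store


def embed(text: str) -> list[float]:
    """Return the embedding vector for a piece of text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return response.data[0].embedding


def remember(text: str) -> None:
    """Append an interaction and its embedding to the local file."""
    record = {"text": text, "embedding": embed(text)}
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")


def recall(query: str, top_k: int = 3) -> list[str]:
    """Return the top_k stored interactions most similar to the query."""
    if not MEMORY_FILE.exists():
        return []
    records = [json.loads(line) for line in MEMORY_FILE.open()]
    vectors = np.array([r["embedding"] for r in records])
    q = np.array(embed(query))
    # Cosine similarity between the query and every stored embedding.
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(scores)[::-1][:top_k]
    return [records[i]["text"] for i in best]


# Usage: store past turns, then pull only the most relevant ones
# back into the prompt instead of appending the whole history.
remember("User prefers metric units.")
print(recall("What units should I use?"))
```

A dedicated vector database does the same retrieval step but with indexing that scales to far more records; for a small project, a flat file like this is often enough.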
