✨ New course! Enroll in LLMs as Operating Systems: Agent Memory

:arrow_forward: Enroll Now

About this course

Learn how to build agentic memory into your applications in this short course, LLMs as Operating Systems: Agent Memory, created in partnership with Letta and taught by its founders, Charles Packer and Sarah Wooders.

An LLM can use any information stored in its input context window, but that window has limited space. A longer input context also costs more and slows processing, so deciding what to place in the context window, and managing it over time, becomes very important.

The course builds on the approach introduced in the MemGPT research paper, “Towards LLMs as Operating Systems.” Its authors, two of whom are Charles and Sarah, proposed using an LLM agent to manage the context window itself, building a management system that provides applications with managed, persistent memory.
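The core idea can be sketched in a few lines: keep a small, editable "core memory" block that is always assembled into the prompt, and expose a tool the LLM can call to rewrite that block so facts persist across turns. This is a minimal illustration of the concept, not Letta's actual API; all class and method names here are hypothetical.

```python
# Minimal sketch (hypothetical, not Letta's API) of the MemGPT idea:
# the agent's prompt is assembled from a compact, editable core memory,
# and the LLM can call a tool to rewrite that memory, persisting facts.

class AgentMemory:
    def __init__(self):
        # Core memory: small labeled sections that always fit in context.
        self.core = {"persona": "helpful assistant", "human": ""}

    def core_memory_replace(self, section: str, new_value: str) -> None:
        # A tool the LLM agent could invoke to edit its own context.
        self.core[section] = new_value

    def build_context(self, user_message: str) -> str:
        # Only the compact core memory enters the context window,
        # regardless of how long the conversation history grows.
        header = "\n".join(f"[{k}] {v}" for k, v in sorted(self.core.items()))
        return f"{header}\n\nUser: {user_message}"

mem = AgentMemory()
mem.core_memory_replace("human", "Name: Alice")
print(mem.build_context("What's my name?"))
```

After the tool call, every future prompt carries "Name: Alice" in its header, so the fact survives even though the original message has scrolled out of context.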


Great course! One quick question: for simple tasks, such as updating a name, can we adjust the system to update core memory directly without calling the LLM, to save round trips? Ideally, we could implement a CoreMemoryUpdateRule agent that decides which actions require an LLM call and which only need a local Python function. If this is of interest to you, I can design an improved architecture and sample code.
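The commenter's proposal could be sketched as a rule-based router: simple, pattern-matchable updates are applied to core memory locally, and everything else falls back to the normal LLM-driven memory edit. Everything below is a hypothetical illustration of that idea; none of the names are part of Letta's API.

```python
import re

# Hypothetical sketch of the commenter's CoreMemoryUpdateRule idea:
# handle trivial updates with a local rule, defer the rest to the LLM.

# A trivial rule: "my name is X" needs no LLM call to interpret.
NAME_PATTERN = re.compile(r"my name is (\w+)", re.IGNORECASE)

def route_update(message: str, core_memory: dict) -> str:
    """Return 'local' if a rule applied the update directly,
    or 'llm' to signal that an LLM call is still needed."""
    match = NAME_PATTERN.search(message)
    if match:
        core_memory["human_name"] = match.group(1)  # local Python update
        return "local"
    return "llm"  # fall back to the agent's usual LLM memory edit

memory = {}
print(route_update("Hi, my name is Alice", memory))  # -> local
print(memory["human_name"])                          # -> Alice
print(route_update("I moved last year", memory))     # -> llm
```

The trade-off is the usual one for rule-based shortcuts: each rule saves an LLM round trip for its narrow case, but brittle patterns can silently miss or misparse updates that the LLM would have handled correctly.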

Hi! Find the LLMs as Operating Systems: Agent Memory category in the Short Course Q&A section!