AgentState and checkpoint

Is it correct to state that the agent state stores all the queries so far but does not interpret new queries in light of old ones, whereas checkpoints allow persistence of memory, enabling follow-up questions on the same topic?

Example
from langchain_core.messages import HumanMessage

messages = [HumanMessage(content="What is the weather in sf")]
result = abot.graph.invoke({"messages": messages})
result["messages"][-1].content

The weather in San Francisco is currently clear with a temperature of 54.0°F (12.2°C). The wind is blowing at 5.6 mph from the NNW direction. The humidity is at 97%.

messages = [HumanMessage(content="What about la")]
result = abot.graph.invoke({"messages": messages})
result["messages"][-1].content

Los Angeles, often referred to as LA, is the second-largest city in the United States. It is known for its sports teams like the Los Angeles Dodgers, Los Angeles Rams


With a checkpointer, the agent is able to retain the context, so the question 'What about la' is still understood as referring to the weather.
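
For comparison, here is a minimal sketch of the same two questions asked through a checkpointer and a thread_id. It assumes the same Agent class, model, tool and prompt as in the lesson; the lesson itself uses SqliteSaver, while MemorySaver is used here as a simpler in-memory alternative:

from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage

memory = MemorySaver()  # in-memory checkpointer; the lesson uses SqliteSaver instead
abot = Agent(model, [tool], system=prompt, checkpointer=memory)  # Agent, model, tool, prompt as defined in the lesson

thread = {"configurable": {"thread_id": "1"}}

messages = [HumanMessage(content="What is the weather in sf")]
abot.graph.invoke({"messages": messages}, thread)

messages = [HumanMessage(content="What about la")]
result = abot.graph.invoke({"messages": messages}, thread)
result["messages"][-1].content
# -> a weather answer for Los Angeles, because the earlier exchange is restored from the checkpoint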

Hello Vdurga. Yes, the agent is able to retain information if state and persistence are implemented.

If I am not wrong (note: I am still learning LangGraph), LangGraph does not give you an agent with state and persistence by default: you have to build your own agent, assign it state, and give it persistence behavior, which you can customize as explained in the related lessons (I believe 'Persistence and Streaming' is the first lesson that covers this).
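
For reference, a sketch of the kind of state definition the lessons build; the reducer on the messages field is what makes the state accumulate the conversation instead of overwriting it on every update:

import operator
from typing import Annotated, TypedDict
from langchain_core.messages import AnyMessage

class AgentState(TypedDict):
    # operator.add acts as a reducer: messages returned by a node are
    # appended to the existing list rather than replacing it
    messages: Annotated[list[AnyMessage], operator.add]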

Being able to define agents and persistence behavior yourself is an important advantage of LangGraph: you can build agents and their state persistence according to your needs. There are lots of scenarios where this is useful, but maybe your scenario could be solved without this level of depth. If that is your case, I suggest looking at other frameworks like CrewAI (in my humble experience still immature for medium-to-complex cases, but fine for basic, linear, or hierarchical arrangements of flows and agents) or AutoGen. You will not find the flexibility that LangGraph gives you to build your flows, agents, and behaviors, but state, persistence, threads, etc. are already predefined (this convenience comes at a cost in flexibility).

I hope this information is useful. If anyone considers it incorrect, please feel free to add your point of view to improve or fix it.

I think yes, it is correct to state that.

Further argumentation:
It seems to me that the agent state, as implemented, only stores the information for as long as the process is ongoing, i.e. for each user request. Once the request has been completed, that conversation is lost, since further requests would call the OpenAI model without providing the information from previous requests.

By adding the checkpointer in the Agent's __init__, I assume we're passing the information of each run to the memory object, which can then be retrieved by the agent on each run. The details of how past messages are integrated into the current run are not totally clear to me, since it's abstracted away. What I can tell is that it happens during the compilation of the graph:

class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        ...
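        # the checkpointer is attached at compile time; from then on every
        # state update is saved to it automatically, keyed by thread_id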
        self.graph = graph.compile(checkpointer=checkpointer)
        ...
    ...
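
One way to see what the checkpointer actually stores is to read back the saved state for a thread. A small sketch, assuming the abot and thread objects from the snippets above and a graph compiled with a checkpointer:

# fetch the latest checkpoint saved for this thread
snapshot = abot.graph.get_state({"configurable": {"thread_id": "1"}})

# snapshot.values holds the persisted AgentState, including all accumulated messages
for m in snapshot.values["messages"]:
    print(type(m).__name__, ":", m.content)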

I understand the process of storing information works in combination with the thread variable. This allows the information of each run to be stored in the database, grouped into distinct conversations if necessary by using the "thread_id".

thread = {"configurable": {"thread_id": "1"}}
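
A short sketch of how the thread_id keeps conversations separate (assuming the same checkpointer-backed abot and the HumanMessage import from above); a different thread_id starts from an empty checkpoint, so the follow-up loses its context again:

thread_1 = {"configurable": {"thread_id": "1"}}
thread_2 = {"configurable": {"thread_id": "2"}}

# conversation 1: the weather question and its follow-up share a thread_id
abot.graph.invoke({"messages": [HumanMessage(content="What is the weather in sf")]}, thread_1)
abot.graph.invoke({"messages": [HumanMessage(content="What about la")]}, thread_1)  # answered as weather

# conversation 2: the same follow-up under a new thread_id has no prior context,
# so the agent gives a generic answer about Los Angeles instead
abot.graph.invoke({"messages": [HumanMessage(content="What about la")]}, thread_2)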