Logging Differences Between LangChain's AgentExecutor and LangGraph Components

When using LangChain’s AgentExecutor with verbose=True, I could see all the intermediate steps the agent took: the tool invocations, the results they returned, and the transitions to the next steps.

With LangGraph, however, the logging seems much more limited. My understanding from the chapter on LangGraph Components is that the graph itself acts as a form of AgentExecutor, so we don’t use AgentExecutor directly.

For example, when invoking the graph with a question like ‘Who won the Super Bowl in 2024? What is the GDP of that state?’, we can see the tool calls happening in sequence (e.g., the exact phrase used in the search), but we can’t see the outputs of those searches or the model’s subsequent reasoning. We jump straight to the final answer instead.
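For what it’s worth, one pattern I’ve been experimenting with is iterating over the graph’s streamed updates instead of calling `invoke()` once, and printing each node’s output as it arrives. The sketch below is hypothetical: `FakeGraph`, `log_steps`, and the message strings are stand-ins I made up, but a compiled LangGraph graph exposes a similar `stream()` method that (with `stream_mode="updates"`, if I understand the API correctly) yields one `{node_name: state_update}` dict per step:

```python
# Hypothetical sketch: surface intermediate steps by streaming per-node
# updates. FakeGraph is a stub standing in for a compiled LangGraph graph;
# the yielded dicts imitate the {node_name: state_update} shape that
# stream(..., stream_mode="updates") is documented to produce.

class FakeGraph:
    def stream(self, inputs, stream_mode="updates"):
        # Simulated trace of the two-step question from the post;
        # tool outputs are placeholders, not real search results.
        yield {"agent": {"messages": ["tool call: search('Super Bowl 2024 winner')"]}}
        yield {"tools": {"messages": ["tool output: ..."]}}
        yield {"agent": {"messages": ["tool call: search('<state> GDP')"]}}
        yield {"tools": {"messages": ["tool output: ..."]}}
        yield {"agent": {"messages": ["final answer: ..."]}}

def log_steps(graph, inputs):
    """Print and collect every intermediate step the graph emits."""
    steps = []
    for update in graph.stream(inputs, stream_mode="updates"):
        for node, state in update.items():
            for msg in state["messages"]:
                line = f"[{node}] {msg}"
                print(line)          # visible trace, AgentExecutor-style
                steps.append(line)   # kept for later inspection
    return steps

question = {"messages": ["Who won the Super Bowl in 2024? What is the GDP of that state?"]}
log_steps(FakeGraph(), question)
```

With a real graph this loop would show the tool outputs and intermediate model messages that `invoke()` hides, which is exactly the visibility I’m missing. (I believe `set_debug(True)` from `langchain.globals` also turns on a verbose trace, but I’d appreciate confirmation on the intended approach.)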

Why is this the case? As a developer, should I handle it by designing a more extensive prompt, or by implementing additional logging myself?

Thanks for your replies.