Streaming the final output ONLY

My question relates to Persistence and Streaming → Streaming tokens.

How can I stream only the final response to the user's question? My current code, given below, also streams the intermediate 'Calling: ...' (tool call) and 'Back to the model!' lines.

from langchain_core.messages import HumanMessage

messages = [HumanMessage(content="What is the weather in SF?")]
thread = {"configurable": {"thread_id": "4"}}
async for event in abot.graph.astream_events({"messages": messages}, thread, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="")

Output:

Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_d1uenpZrgwcruC5UZpiWEyYx'}
Back to the model!
The current weather in San Francisco involves considerable cloudiness with occasional rain showers. The humidity is around 66%, and the wind is coming from the SSW at 10 mph. The high temperature is expected to be in the upper 60s.
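
In case it helps frame the question, here is an untested sketch of one workaround I considered: buffer the tokens of each model pass and only flush the pass that ends without requesting a tool, on the assumption that the final answer is the only such pass. This assumes the "on_chat_model_end" payload exposes that pass's message as event["data"]["output"] with a tool_calls attribute, which I have not verified for version="v1"; if there is an idiomatic way to do this, I'd prefer that.

buffer = []
async for event in abot.graph.astream_events({"messages": messages}, thread, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_start":
        # A new model pass begins; discard anything buffered so far.
        buffer = []
    elif kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            buffer.append(content)
    elif kind == "on_chat_model_end":
        # Assumption: the pass's final message is available here.
        output = event["data"]["output"]
        # A pass that requests tools is an intermediate step; only a
        # pass with no tool calls should be the final answer.
        if not getattr(output, "tool_calls", None):
            print("".join(buffer), end="")

The obvious downside is that the final answer is no longer printed token by token; it only appears once its model pass completes, which is why I'm hoping there is a better way.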