Planning Workflows for Highly Autonomous Agents: When early tool results change the rest of the plan

Hi all,

I enjoyed the course! I have a quick question about the Planning Workflows for Highly Autonomous Agents material.

Example (customer‑service agent)

Tools: get_item_descriptions, check_inventory, get_item_price, check_past_transactions, process_item_sale, process_item_return.

User query: “Do you have any round sunglasses in stock that are under $100?”

The planner generates:

  1. get_item_descriptions → filter “round sunglasses”.

  2. check_inventory on the filtered SKUs.

  3. get_item_price on SKUs that are in stock.

  4. Return the first SKU with a price under $100.
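
To make this concrete, here is roughly how I picture the plan as the planner would emit it: a static list of tool calls. The field names and the `<...>` placeholders are just my own notation for illustration, not anything from the course.

```python
# The plan as a static data structure: a fixed sequence of tool calls.
# Field names and placeholder values are assumptions for illustration.
plan = [
    {"step": 1, "tool": "get_item_descriptions", "args": {},
     "note": "filter the results for 'round sunglasses'"},
    {"step": 2, "tool": "check_inventory",
     "args": {"skus": "<matching SKUs from step 1>"}},
    {"step": 3, "tool": "get_item_price",
     "args": {"skus": "<in-stock SKUs from step 2>"}},
    {"step": 4, "tool": None,
     "note": "answer with the first SKU priced under $100"},
]
```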

Problem:
If step 1 returns no matching items, or step 2 shows all are out of stock, or step 3 shows all prices ≥ $100, the plan must change (e.g., broaden the search, suggest alternatives, ask the user for clarification). The next tool to call depends on the actual output of the previous step.
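
For example, I think the actual control flow would have to look more like the rough sketch below, with a check after every step. The `tools` wrapper, the returned dicts, and the fallback actions are all assumptions on my part, just to show the shape of the branching:

```python
# Rough sketch of the branching that seems necessary once tool results
# can deviate from expectations. All names and return shapes are assumed.

def answer_query(tools, query="round sunglasses", max_price=100):
    # Step 1: filter descriptions; if nothing matches, the plan must change.
    items = tools.get_item_descriptions()
    candidates = [i for i in items if query in i["description"].lower()]
    if not candidates:
        return {"action": "broaden_search", "reason": "no matching items"}

    # Step 2: check inventory; if everything is out of stock, change again.
    in_stock = [i for i in candidates
                if tools.check_inventory(i["sku"])["in_stock"]]
    if not in_stock:
        return {"action": "suggest_alternatives", "reason": "all out of stock"}

    # Step 3: check prices; if nothing is under the limit, ask the user.
    affordable = [i for i in in_stock
                  if tools.get_item_price(i["sku"])["price"] < max_price]
    if not affordable:
        return {"action": "ask_user", "reason": f"nothing under ${max_price}"}

    # Step 4: the happy path the original plan describes.
    return {"action": "answer", "sku": affordable[0]["sku"]}
```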

Question:
How are we supposed to handle such dynamic replanning?

Thanks!

Perhaps we could add extra tools like “suggest_items_based_on_description” that rank the available SKUs according to the description? Either way, we would need to redesign the workflow.
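
Something like the keyword-overlap ranking below is what I have in mind for that tool. The function name matches my suggestion above, but the scoring is made up purely for illustration; a real version might use embeddings:

```python
def suggest_items_based_on_description(items, query):
    # Hypothetical tool: rank items by how many query words appear in
    # their description. Purely a sketch of the idea, not a real ranker.
    query_words = set(query.lower().split())

    def score(item):
        return len(query_words & set(item["description"].lower().split()))

    return sorted(items, key=score, reverse=True)
```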

And I think this is necessary anyway, because the chatbot does not actually execute any tool function: we receive a response from the chatbot, check whether it is requesting a tool, execute the requested tool in our own code, and then prompt the chatbot with the tool’s result. Since this loop is required in any case, I won’t consider it part of the problem.
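
Roughly, the loop I mean looks like the sketch below. The `chatbot` object, its `send`/`send_tool_result` methods, and the response fields are assumptions, not any particular framework’s API:

```python
# Rough sketch of the tool-execution loop around the chatbot.
# `chatbot`, `send`, `send_tool_result`, and the response fields
# ("tool", "args", "text") are assumptions, not a specific API.

def run_conversation(chatbot, tools, user_message, max_turns=10):
    response = chatbot.send(user_message)
    for _ in range(max_turns):
        if response.get("tool") is None:
            return response["text"]              # plain answer, we're done

        # The model only *requests* a tool; our own code actually runs it.
        tool_fn = getattr(tools, response["tool"])
        result = tool_fn(**response.get("args", {}))

        # Feed the tool result back so the model can decide the next step.
        response = chatbot.send_tool_result(response["tool"], result)

    return "Stopped after reaching the tool-call limit."
```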

What I mean is that the plan is the happy flow: if all the tools return what is expected, the plan works.
But if a tool does not return what is expected, the plan might need to change midway through.

It’s a happy flow and an example workflow; it only demonstrates one possible path through a more complete workflow. I believe a real system requires that more complete workflow, one that takes care of all the possibilities.