Hi Everyone,
In Module 5 we learned about the “planning with code execution” pattern. In real-world systems, how does it compare to building “tools” for safe actions and letting the LLM choose the correct tool, rather than having the LLM write the whole code itself? The former is restrictive because it only supports predefined operations, whereas the latter is powerful but introduces safety risks.
I’m not clear on the value add of this pattern. It looks more like a GenAI pattern to me than an Agentic AI pattern.
Hi amkhanna,
Great discussion point.
It seems to me that this would be something to brainstorm about for particular use cases.
One reason to use planning with code execution is that devs may not know how to build particular tools, or that it may not be practically feasible to build every tool that could apply in a given use case.
You are absolutely right that this brings risks, which means this approach may not suit use cases where such risk is unacceptable, or it may be essential to bring humans into the loop at critical points. For use cases where the risk is acceptable, planning with code execution can add functionality and flexibility.
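To make the contrast concrete, here is a minimal sketch (all names and helpers are hypothetical, not from the course material) of the two approaches: a tool-based agent restricted to a registry of vetted functions, versus a code-executing agent that runs LLM-generated Python behind a human-approval gate.

```python
# Hypothetical sketch contrasting the two patterns discussed above.

# --- Pattern 1: tool calling -----------------------------------------
# The agent may only pick from a registry of vetted functions, so the
# action space is safe but limited to what developers have built.
TOOL_REGISTRY = {
    "add": lambda a, b: a + b,
    "mean": lambda xs: sum(xs) / len(xs),
}

def run_tool_call(tool_name, *args):
    """Dispatch an LLM-chosen tool; unknown tools are rejected."""
    if tool_name not in TOOL_REGISTRY:
        raise ValueError(f"Tool {tool_name!r} is not in the registry")
    return TOOL_REGISTRY[tool_name](*args)

# --- Pattern 2: planning with code execution -------------------------
# The agent writes arbitrary code, which is more flexible but riskier,
# so here it is gated behind human approval and run with a trimmed-down
# builtins namespace. NOTE: this is NOT a secure sandbox; a real system
# would use an isolated runtime such as a container.
SAFE_BUILTINS = {"sum": sum, "len": len, "range": range, "min": min, "max": max}

def run_generated_code(code, approved_by_human):
    """Execute LLM-generated code that assigns its answer to `result`."""
    if not approved_by_human:
        raise PermissionError("Generated code requires human approval")
    scope = {"__builtins__": SAFE_BUILTINS}
    exec(code, scope)
    return scope["result"]

# The tool route handles only predefined operations...
print(run_tool_call("mean", [1, 2, 3]))  # 2.0

# ...while the code route can handle an operation no tool covers.
snippet = "result = sum(x * x for x in range(4))"
print(run_generated_code(snippet, approved_by_human=True))  # 14
```

The design point is simply that the registry bounds the blast radius up front, while the code path trades that guarantee for flexibility and so needs its safety added back (approval gates, sandboxing, audit logs).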
Hi @amkhanna,
This is an important concern to address, especially now that agentic AI is being advertised in the real world as a trump card for revolutionising different industry sectors.
If you ask me, either approach should be paired with human feedback whenever the safety risk is critical, for example an agent handling healthcare records, or an agent handling flight scheduling, where an altered schedule could halt a running airport, causing chaos and both professional and personal loss. The choice between them depends on knowing the LLM’s ability to execute a task and generate code coherent with the workflow and production outcome, as well as its limitations, restrictions, and strategic approach.
I recently heard a beautiful Q&A session by Andrew Ng and Laurence Moroney at Stanford, shared on LinkedIn, where he clearly says that when a company or project is considering investing in an agentic tool, the most important things to address are: why they want to invest in it, what the major concern is with the current human involvement and resources, and what they are looking to establish by introducing the tool.
Once a team or individual handling agents or LLMs has this core foundation, the risk and safety analysis of any agentic tool can be approached in a constructive and progressive way, no matter what challenges, setbacks, or losses they face.
In case you want to listen to the session, here is the link:
Q&A by Andrew Ng and Laurence Moroney at Stanford: “An absolute must for everyone looking for career advice in AI”
Regards
Dr Deepti