While working through the Agentic AI course content here, I found myself repeatedly returning to the same realization: many of the hard problems in agentic AI are not about the model or the prompt, but about the system around them.
As I started teaching Agentic AI concepts in practice, especially to engineers building real systems, the same questions kept coming up:
Where should reasoning stop and execution begin?
How do we evaluate behavior over long horizons?
Where do safety, permissions, and governance actually live?
This led me to formalize these ideas into AISA (Agentic AI Systems Architecture): a layered, implementation-neutral reference architecture that treats agentic AI as a composed system rather than a monolithic agent.
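To make the layering idea concrete, here is a minimal sketch of how a composed system might separate reasoning from execution, with governance and permissions sitting between them. The class names, methods, and tool registry below are illustrative assumptions for this post, not AISA's actual interfaces.

```python
# Illustrative sketch of a layered composition: reasoning proposes, governance
# filters, execution performs. All names here are assumptions, not AISA's API.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    """A proposed step: a tool name plus its arguments."""
    tool: str
    args: Dict[str, str]


class ReasoningLayer:
    """Decides *what* to do next; never touches the outside world."""

    def propose(self, goal: str) -> List[Action]:
        # A real system would call a model here; this plan is hard-coded.
        return [
            Action(tool="search", args={"query": goal}),
            Action(tool="delete_file", args={"path": "/tmp/scratch.txt"}),
        ]


class GovernanceLayer:
    """Holds permissions and policy; sits between reasoning and execution."""

    def __init__(self, allowed_tools: List[str]):
        self.allowed_tools = set(allowed_tools)

    def filter(self, actions: List[Action]) -> List[Action]:
        return [a for a in actions if a.tool in self.allowed_tools]


class ExecutionLayer:
    """Performs side effects via a registry of tool functions."""

    def __init__(self, tools: Dict[str, Callable[[Dict[str, str]], str]]):
        self.tools = tools

    def run(self, actions: List[Action]) -> List[str]:
        return [self.tools[a.tool](a.args) for a in actions]


if __name__ == "__main__":
    reasoning = ReasoningLayer()
    governance = GovernanceLayer(allowed_tools=["search"])  # deletes not permitted
    execution = ExecutionLayer(
        tools={"search": lambda args: f"results for {args['query']}"}
    )

    plan = reasoning.propose("summarize recent agentic AI papers")
    approved = governance.filter(plan)   # permissions live between the layers
    print(execution.run(approved))       # only the permitted action executes
```

The point of the sketch is only that each concern lives in its own layer, so questions like "where do permissions live?" have a structural answer rather than being buried in a prompt.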
The extended technical write-up is here:
I'd be very interested to hear whether others here find this kind of system-level architectural perspective useful.
One could go a step further and conceptualize intelligence more holistically than is currently done. This would lead to a more social/processual/systemic view of AI in general. One could say that using the term "agentic AI" rather than "AI agents" already takes a step in that direction.
I agree that treating intelligence as something emergent from processes and interactions, rather than localized within an "agent," is an important shift. In many ways, AISA tries to move in that direction by framing agentic behavior as a property of composed systems rather than individual entities.
The distinction you point out between "agentic AI" and "AI agents" resonates strongly with our motivation: focusing on agentic behavior and system dynamics rather than reifying agents as standalone units.