From Learning Agentic AI to Teaching It: A Systems-Level Perspective

Hello Community,

While working through the Agentic AI course content here, I kept returning to the same realization: many of the hard problems in agentic AI are not about the model or the prompt, but about the system built around them.

As I started teaching agentic AI concepts in practice, especially to engineers building real systems, the same questions kept coming up:

  • Where should reasoning stop and execution begin?
  • How do we evaluate behavior over long horizons?
  • Where do safety, permissions, and governance actually live?

This led me to formalize these ideas into AISA (Agentic AI Systems Architecture) — a layered, implementation-neutral reference architecture that treats agentic AI as a composed system rather than a monolithic agent.
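
To make the layering concrete, here is a minimal Python sketch of the idea. All class and method names are my own illustrative choices, not AISA's actual vocabulary: the point is only that the reasoning layer stops at a *proposed* action, governance owns the permission check, and execution begins only after approval.

```python
from dataclasses import dataclass
from typing import Callable

# NOTE: hypothetical layer names for illustration; AISA's actual layers may differ.

@dataclass
class ProposedAction:
    tool: str
    args: dict

class ReasoningLayer:
    """Decides *what* to do; never executes anything itself."""
    def propose(self, goal: str) -> ProposedAction:
        # Stand-in for an LLM planning call that emits the next step.
        return ProposedAction(tool="search", args={"query": goal})

class GovernanceLayer:
    """Owns permissions: a proposed action must pass policy before execution."""
    def __init__(self, allowed_tools: set[str]):
        self.allowed_tools = allowed_tools

    def authorize(self, action: ProposedAction) -> bool:
        return action.tool in self.allowed_tools

class ExecutionLayer:
    """Performs side effects, but only for actions that were authorized."""
    def __init__(self, tools: dict[str, Callable[..., str]]):
        self.tools = tools

    def run(self, action: ProposedAction) -> str:
        return self.tools[action.tool](**action.args)

def step(goal, reasoning, governance, execution):
    """One agentic step: reasoning ends at a proposal; execution starts
    only after governance approves. The boundary is explicit and auditable."""
    action = reasoning.propose(goal)
    if not governance.authorize(action):
        return f"blocked: {action.tool} not permitted"
    return execution.run(action)

if __name__ == "__main__":
    system = (
        ReasoningLayer(),
        GovernanceLayer(allowed_tools={"search"}),
        ExecutionLayer(tools={"search": lambda query: f"results for {query!r}"}),
    )
    print(step("find recent papers on agentic AI", *system))
```

The value of the explicit boundary, in this framing, is that permissions and audit logic live in one layer instead of being scattered across prompts and tool code.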

The extended technical write-up is here:

I’d be very interested to hear whether others here find this kind of system-level architectural perspective useful.


Hi omarnj,

Your position makes good sense to me.

One could go a step further and conceptualize intelligence more holistically than is currently done. This would lead to a more social/processual/systemic view of AI in general. One could say that using the term ‘agentic AI’ rather than ‘AI agents’ already takes a step in that direction.

Thanks — I really appreciate that perspective.

I agree that treating intelligence as something emergent from processes and interactions, rather than localized within an “agent,” is an important shift. In many ways, AISA tries to move in that direction by framing agentic behavior as a property of composed systems rather than individual entities.

The distinction you point out between “agentic AI” and “AI agents” resonates strongly with our motivation: focusing on agentic behavior and system dynamics rather than reifying agents as standalone units.
