Confusing lack of separation between abstractions (Agent vs. Task, and Goal vs. Backstory)

There is no clear, intuitive separation between abstractions such as:

  1. Agent vs. Task
  2. Within an Agent: Goal vs. Backstory
  • What logic or intent belongs intrinsically to the Agent (via role, goal, backstory), versus what should be expressed in the Task description? In many examples throughout the course there is significant overlap. For example, in the Module 1 graded notebook, part of a task says:

review_security = Task(
    description=(
        . . .
        "Use the SerperDevTool to find the most relevant security best practices from OWASP "
        "and pass the URLs to the ScrapeWebsiteTool to get detailed information."
    ),

It doesn't make sense to me why you are defining what tool to use in the TASK abstraction. Which tool to use for a given task is something the AGENT should figure out! Why are you putting that in the TASK? It belongs in the AGENT abstraction.
By analogy:
Task = join two pieces of wood.
Agent = figures out "I need to use screws, a hammer, etc." TO DO THIS GIVEN TASK.
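To make the analogy concrete, here is a toy sketch in plain Python (deliberately not the CrewAI API; all names are illustrative): the task only states the desired outcome, while the agent owns the toolbox and decides which tools fit the job.

```python
# Toy illustration of the analogy: the task states *what*, the agent decides *how*.
TOOLBOX = {
    "join two pieces of wood": ["screwdriver", "screws"],
    "hang a picture": ["hammer", "nail"],
}

class Carpenter:
    def __init__(self, toolbox):
        self.toolbox = toolbox

    def pick_tools(self, task_description):
        # The agent, not the task, maps the job to the tools it needs.
        return self.toolbox.get(task_description, [])

carpenter = Carpenter(TOOLBOX)
print(carpenter.pick_tools("join two pieces of wood"))  # ['screwdriver', 'screws']
```

Under this intuition, naming SerperDevTool and ScrapeWebsiteTool inside the Task description feels like writing "use a screwdriver" into the work order.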

There is an obvious lack of semantic separation between these abstractions, which is very confusing.

  • Encoding behavior in a Task, as shown in the lecture, becomes redundant with or conflicts with the Agent's goal or backstory.

  • How do you avoid duplication or ambiguity when both abstractions can influence reasoning and behavior?

Let me illustrate another instance of the same 'lack of distinction between Agent and Task abstractions' problem. Think of a guardrail. Intuitively, a guardrail is meant to control an AGENT (not a TASK) by selectively blocking its output, so guardrails should be applied to Agents; it doesn't make sense why you'd pass a guardrail to the TASK abstraction instead of the AGENT abstraction. The framework is supposed to make it intuitively easy for developers to put these pieces together. Here the blurring between the Agent and Task abstractions makes it more confusing.

As an ML engineer looking to learn this framework, I need a proper, UNAMBIGUOUS recipe for drawing a distinction/boundary between these abstractions that matches real-life developer intuition.

Hi,

I do share some of your concern, especially the overlap between goal/backstory and the task.

While I am not the instructor or a mentor of this course, here are my $0.02:

Goal / backstory are the general characteristics/capabilities of the agent, not always specific to one task; the agent can have a wider scope. For example: you are a senior developer, so you know how to review PRs, do design, and so on. The agent can then work in different scenarios, all related to its capabilities.
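As a sketch of that split (a plain dict standing in for an agent definition; the field names role/goal/backstory mirror the ones used in the course, but the wording is made up): nothing in it mentions a particular PR or document, only broad capability.

```python
# Illustrative only: broad, reusable agent identity; no task-specific detail.
senior_dev = {
    "role": "Senior Developer",
    "goal": "Review code and designs to a high engineering standard",
    "backstory": (
        "You have 10 years of experience: you know how to review PRs, "
        "write design docs, and reason about architecture."
    ),
}

# Task-specific detail ("review PR #42 for security issues only") would
# live in a Task description, not here.
print(sorted(senior_dev))  # ['backstory', 'goal', 'role']
```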

The task is the definition of the work, plus any hints that help achieve it under certain constraints. For example, you may want the agent to use a specific tool even if many tools are available. This helps make the agent's outcome more predictable.

Guardrails ensure a task is done according to some criteria, so this is definitely a task-level relation. You could argue the criteria are already part of the task definition, so there is no need for a separate mechanism: "summarize the input document in 50 words" already contains the 50-word constraint, so why have a guardrail (GR) that validates the word count is 50? Just because LLMs do not always follow instructions to the letter; so you add a validation via guardrails.

They are at the task level because one task can say "summarize in 50 words max" and another "summarize in 100 words only", so you will have different GRs for different tasks. Remember that the result of the GR feeds back to the LLM, saying, for example: yes, you summarized the doc, but you did not pay attention to the "50 words" ask.
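That feedback loop can be sketched in plain Python (a simplification of the pattern, not the CrewAI API; `fake_llm`, `run_task`, and the retry count are all made up for illustration): the guardrail returns an (ok, payload) pair, and on failure the payload becomes feedback for the next attempt.

```python
# Guardrail: validates the task's "50 words" constraint.
def fifty_word_guardrail(output):
    n = len(output.split())
    if n <= 50:
        return True, output
    return False, f"Summary has {n} words; you did not respect the 50-word limit."

# Retry loop: feed the guardrail's complaint back to the (fake) LLM.
def run_task(generate, guardrail, max_retries=2):
    feedback = None
    for _ in range(max_retries + 1):
        output = generate(feedback)      # stand-in for the real LLM call
        ok, payload = guardrail(output)
        if ok:
            return payload
        feedback = payload               # the feedback loop to the LLM
    raise RuntimeError(f"Guardrail still failing: {payload}")

# Fake "LLM": ignores the limit at first, obeys once it sees feedback.
def fake_llm(feedback):
    return "word " * (50 if feedback else 80)

result = run_task(fake_llm, fifty_word_guardrail)
print(len(result.split()))  # 50
```

A second task with a 100-word constraint would simply pass a different guardrail function, which is why the guardrail naturally hangs off the Task rather than the Agent.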

Similarly, you can have different summarization tasks, each one very specific, but a single agent whose goal is to summarize documents. The agent knows how to summarize; the task fixes certain constraints. This also lets you reuse agents instead of defining very, very specific/narrow ones.
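The reuse point can be sketched the same way (illustrative plain Python, not the CrewAI API; the truncating "summarizer" is a stand-in for an LLM): one broad agent, while each task carries its own word limit.

```python
# Illustrative only: one generic agent, reused across narrow tasks.
class SummarizerAgent:
    """Knows *how* to summarize; no per-task limit baked in."""
    def summarize(self, text, max_words):
        # Crude stand-in for an LLM: keep the first max_words words.
        return " ".join(text.split()[:max_words])

agent = SummarizerAgent()

# Each task fixes its own constraint; the agent is reused unchanged.
tasks = [
    {"text": "one two three four five six", "max_words": 3},
    {"text": "one two three four five six", "max_words": 5},
]

for t in tasks:
    print(agent.summarize(t["text"], t["max_words"]))
# one two three
# one two three four five
```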