System and User prompt in CrewAI

Are the agent’s role, goal, and backstory properties combined into the system prompt of a large language model (LLM)?

Are the task’s description and expected output properties combined into the user prompt of a large language model (LLM)?

The course author used GPT-3.5 in one place and GPT-4 in another for this agent's role, which is played by a large language model.

I think you will be able to see this part when you go through his explanation videos.

I am not sure what you mean by this choice of words?


Hi @mhossain,

Do you mean this specifically for the notebook code in the course, or generally for LLMs?

from crewai import Agent, Task

planner = Agent(
    role="Content Planner",
    goal="Plan engaging and factually accurate content on {topic}",
    backstory="You're working on planning a blog article "
              "about the topic: {topic}. "
              "You collect information that helps the "
              "audience learn something "
              "and make informed decisions. "
              "Your work is the basis for "
              "the Content Writer to write an article on this topic.",
)

plan = Task(
    description=(
        "1. Prioritize the latest trends, key players, "
        "and noteworthy news on {topic}.\n"
        "2. Identify the target audience, considering "
        "their interests and pain points.\n"
        "3. Develop a detailed content outline including "
        "an introduction, key points, and a call to action.\n"
        "4. Include SEO keywords and relevant data or sources."
    ),
    expected_output="A comprehensive content plan document "
                    "with an outline, audience analysis, "
                    "SEO keywords, and resources.",
    agent=planner,
)

If we use ChatGPT (or any other LLM), we need to craft a prompt and provide it to the LLM; the LLM's response depends on that prompt.

How are prompts crafted in CrewAI? Are they based on the agent's three properties (role, goal, and backstory) as the SYSTEM PROMPT, and the task's two properties (description and expected_output) as the USER PROMPT?

No, they just go into the user prompt (i.e. not the system prompt).
For details, check out [link], pretty easy to read.
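To make the answer above concrete, here is a minimal sketch of how such prompt assembly could look. This is illustrative only, not CrewAI's actual source code: the helper name `build_user_prompt` and the exact wording of the template are assumptions. The point it demonstrates is the claim above, that the agent's role/goal/backstory and the task's description/expected_output are all concatenated into a single user message, with no separate system prompt.

```python
def build_user_prompt(role, goal, backstory, description, expected_output):
    """Combine agent and task properties into one user-style prompt.

    Hypothetical helper for illustration; CrewAI's real templates differ.
    """
    agent_part = (
        f"You are {role}. {backstory}\n"
        f"Your personal goal is: {goal}\n"
    )
    task_part = (
        f"Current Task: {description}\n"
        f"This is the expected criteria for your final answer: {expected_output}"
    )
    return agent_part + task_part


# Everything lands in a single "user" message; no "system" message is sent.
messages = [
    {
        "role": "user",
        "content": build_user_prompt(
            role="Content Planner",
            goal="Plan engaging and factually accurate content on {topic}",
            backstory="You're working on planning a blog article about {topic}.",
            description="Prioritize the latest trends on {topic}.",
            expected_output="A comprehensive content plan document.",
        ),
    },
]

print(messages[0]["content"])
```

A list like `messages` is what would ultimately be handed to the chat-completion API; the question of "system vs. user prompt" comes down to which `role` each piece of text is assigned in that list.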