I completed the lab per the instructions and passed the tests and the autograder. However, the final E2E output did not look right to me - the refined_report from Exercise 1 was not reflected in the output from Exercises 2 and 3. Instead, Exercises 2 and 3 were producing generic content (unrelated to the prompt), but in the right format.
I was able to fix this by adding an additional assistant-role message and passing the report value in messages in Exercises 2 and 3, as follows. This step is not covered in the instructions. Is my approach correct?
messages=[
    # System prompt is already defined
    {"role": "system", "content": system_prompt},
    # Add user prompt
    {"role": "user", "content": user_prompt},
    # This is what I added
    {"role": "assistant", "content": report},
],
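For context, here is a minimal, self-contained sketch of how that messages list could be assembled as a helper function (the function name and the placeholder prompt strings are my own, not from the lab):

```python
def build_messages(system_prompt: str, user_prompt: str, report: str) -> list[dict]:
    """Assemble the chat messages, carrying the report produced in
    Exercise 1 forward as a prior assistant turn so the next call
    can build on it instead of generating generic content."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
        # The extra assistant turn: the refined report from Exercise 1.
        {"role": "assistant", "content": report},
    ]

# Illustrative placeholder values only:
messages = build_messages(
    "You are a research assistant.",
    "Convert the report to HTML.",
    "Refined report text from Exercise 1...",
)
```

The list returned here is what gets passed as the messages argument of the chat-completion call in Exercises 2 and 3.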
Please share a screenshot of the E2E output that you said didn't look right to you, and then post the output after you added the assistant-role message, because the same content is being used for the system role.
Your approach has converted the agentic tool setup into a more autonomous one: the assistant turn now handles reflection and carries the report into Exercises 2 and 3, acting as a separate agentic prompt. That of course gives a better output response, which is perfectly fine.
If you read the first paragraph of the lab, the explanation focuses on a single-role approach: use external tools to run the research workflow through web search, then use the GPT LLM to generate the report. That is most likely why the assignment assigns a role only to the system message.
It's like both approaches (yours as well as the lab's) are RAG architectures: yours assigns a separate assistant turn to revise, reflect, and convert the report to HTML, whereas the original lab relies on the system prompt to drive all three tasks. Both approaches are perfectly valid.
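To make the contrast concrete, here is a hedged sketch of the two message layouts being compared (both helper names and the exact way the lab carries the report are my assumptions, not taken from the lab code):

```python
def messages_report_in_user(system_prompt: str, user_prompt: str, report: str) -> list[dict]:
    # Lab-style single-role layout: the report text is folded into
    # the user message, and only the system prompt sets the role.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{user_prompt}\n\nReport:\n{report}"},
    ]

def messages_report_as_assistant(system_prompt: str, user_prompt: str, report: str) -> list[dict]:
    # The poster's variant: the report is carried as a prior
    # assistant turn, so the model treats it as its own earlier output.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
        {"role": "assistant", "content": report},
    ]
```

Either layout gets the report into the model's context; they differ only in which role the model attributes the report to.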