How do I call print_llm_response? It worked in other lessons, but it’s not working in this assignment.
Do you need to use that function in this lesson?
Everything you need should already be provided with the lesson notebook.
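For reference, in earlier lessons the function is usually imported from the course helper module and called with a prompt string. A minimal sketch, assuming the notebook provides the same helper_functions module as previous lessons (the exact module name may differ in this assignment):

```python
# Minimal sketch, assuming the helper_functions module used in earlier
# lessons is available in this notebook.
from helper_functions import print_llm_response

# print_llm_response sends the prompt to the model and prints its reply.
print_llm_response("Write a one-sentence greeting for a new student.")
```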
Hi @gracekwak,
If something is not present in the assignment, it means it is not provided and you do not need it. Everything you need to complete the assignment is already given to you in the notebook.
On that note, please be sure not to include extra code lines in the assignment; otherwise, you’ll run into errors in the grader.
Thanks,
Mubsi
The instructions say to write a prompt asking the LLM to generate an email. The “get_llm_response” command produces an error. I imagine this is the same issue Grace found.
What’s not clear is that, on one hand, we’re supposed to use an LLM to write an email, and yet there’s an example of how we might word the email, along with the flexibility to craft a response ourselves. Given the example provided, we could conceivably bypass the LLM and have Python build a standard email from a template we write using the required data fields. Or, if we’re meant to use an LLM as the instructions specify, the “print” or “get” commands should work.
Please clarify.
By the way, I see the optional section at the bottom, but it seems like it should have been included above.
Hi @KKV,
The optional LLM generation has been put at the bottom intentionally.
In case the OpenAI services run into an error, we don’t want learners to feel stuck or blocked until those services are fixed. This is why building the prompt is graded, while generating the response is optional.
Also, the suggested phrasing for the prompt is intentional as well, so that it is easily gradable. Otherwise, prompts are subjective, with endless ways to write them, which makes them hard to grade.
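To illustrate the split, here is a minimal sketch of the workflow, not the exact starter code: the field names below are hypothetical placeholders, and get_llm_response is assumed to come from the notebook’s helper module, as in earlier lessons.

```python
# Minimal sketch of the assignment structure (not the exact starter code).
# The data fields below are hypothetical placeholders.
name = "Alex"
event = "community picnic"
date = "Saturday, June 14"

# Graded part: build the prompt string from the required data fields.
prompt = f"""Write a short, friendly email inviting {name}
to the {event} on {date}."""
print(prompt)

# Optional part: generate the email, assuming get_llm_response is
# provided by the notebook's helper module.
# from helper_functions import get_llm_response
# email = get_llm_response(prompt)
# print(email)
```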
If this clears things up and you are still encountering an error, I have sent you a direct message; please follow the instructions there.
Best,
Mubsi