I’m doing the “Building Agentic RAG with LlamaIndex” short course.
It would be very interesting to see the actual prompts sent to the LLM.
I have been able to see the “template” prompts used by the query engines with:

```python
vector_query_engine.get_prompts()
```
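For context, here is roughly how I am inspecting those templates (a minimal sketch; the `data` directory and the `llama_index.core` imports are my assumptions and may differ from the course notebooks):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build a small vector index and a query engine over local documents
documents = SimpleDirectoryReader("data").load_data()  # "data" is a placeholder path
index = VectorStoreIndex.from_documents(documents)
vector_query_engine = index.as_query_engine()

# get_prompts() returns the *template* prompts, with placeholders such as
# {context_str} and {query_str} still unfilled, not the rendered prompts
for name, prompt in vector_query_engine.get_prompts().items():
    print(name)
    print(prompt.get_template())
```

This only shows the templates with their placeholders, though, not what actually goes over the wire.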
Is there a way to see the actual, fully rendered prompts that were sent to the LLM to produce the responses (and, if possible, the raw responses that came back from the LLM as well)?
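In case it helps show what I am after: I believe LlamaIndex has a callback system with a `LlamaDebugHandler` that records each LLM call as a start/end event pair, so something like the sketch below might expose the rendered inputs and raw outputs (the payload keys are my best guess from the docs, and I may be using it wrong):

```python
from llama_index.core import Settings
from llama_index.core.callbacks import (
    CallbackManager,
    CBEventType,
    EventPayload,
    LlamaDebugHandler,
)

# Attach a debug handler globally so every LLM call is recorded as an event
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
Settings.callback_manager = CallbackManager([llama_debug])

# ... run a query through the engine as usual ...
# response = vector_query_engine.query("What are the key findings?")

# Each LLM event pair should carry the rendered input and the raw output;
# chat-style calls seem to use MESSAGES/RESPONSE, completion-style calls
# PROMPT/COMPLETION, so I check both
for start_event, end_event in llama_debug.get_event_pairs(CBEventType.LLM):
    start_payload = start_event.payload or {}
    end_payload = end_event.payload or {}
    print(start_payload.get(EventPayload.MESSAGES)
          or start_payload.get(EventPayload.PROMPT))   # actual prompt sent
    print(end_payload.get(EventPayload.RESPONSE)
          or end_payload.get(EventPayload.COMPLETION))  # raw LLM response
```

I have also seen `set_global_handler("simple")` (from `llama_index.core`) mentioned as a way to just print LLM inputs and outputs, but I have not confirmed it in this course’s setup.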
Thanks!