Lesson 3, "Let's add some other tools!" section - response is very different from the video

When running the last cell,

response = llm.predict_and_call(
    [vector_query_tool, summary_tool], 
    "What is a summary of the paper?", 
    verbose=True
)

I am getting this response:

=== Calling Function ===
Calling function: summary_tool with args: {"input": "The paper discusses the impact of climate change on biodiversity and ecosystems."}
=== Function Output ===
The paper does not discuss the impact of climate change on biodiversity and ecosystems.

which is not even close to the response in the video. The problem is the input passed to summary_tool. I ran the notebook multiple times and got the same response every time.
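
For reference, this is roughly how summary_tool is wired up in the lesson (a sketch from memory, so the file name, chunk size, and description text may not match the notebook exactly):

import nest_asyncio
nest_asyncio.apply()  # the notebook does this so the async summary engine works in Jupyter

from llama_index.core import SimpleDirectoryReader, SummaryIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

documents = SimpleDirectoryReader(input_files=["metagpt.pdf"]).load_data()
nodes = SentenceSplitter(chunk_size=1024).get_nodes_from_documents(documents)

summary_index = SummaryIndex(nodes)
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize", use_async=True
)
summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    name="summary_tool",
    description="Useful for summarization questions related to the MetaGPT paper",
)

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)

# predict_and_call asks the LLM to pick a tool and to generate its arguments.
# Whatever string it produces for "input" becomes the query that
# summary_query_engine actually runs, so a hallucinated input about climate
# change yields the unrelated answer shown above.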

In the video, the summary_tool is called with

=== Calling Function ===
Calling function: summary_tool with args: {"input": "Please provide a summary of the paper."}

What is at fault here: gpt-3.5-turbo or LlamaIndex?
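
One way to narrow it down is to bypass the model's argument generation and call the tool and the underlying query engine directly (assuming summary_tool and summary_query_engine from the notebook are still in scope):

# Direct calls bypass the LLM's tool-argument generation, so they exercise
# only the SummaryIndex side of the pipeline.
print(summary_tool.call("What is a summary of the paper?"))
print(summary_query_engine.query("What is a summary of the paper?"))

If both direct calls return a sensible MetaGPT summary, the index and the LlamaIndex plumbing are fine, and the bad response comes from the input string that gpt-3.5-turbo generates for the tool call.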


I ran into the exact same issue. Is it a result of hallucination? Hopefully someone here can give us an answer.


Still the same problem in 2025. It seems as if the LLM behind the SummaryIndex is “contaminated”: if I prompt with “the MetaGPT paper”, the output is correct, but “the paper”/“this paper” always returns this strange climate-change text. I also tested the tools individually; only summary_tool has this issue.
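
One thing worth trying (just a suggestion, not something from the course) is to make the tool description tell the model to forward the question verbatim, and/or to use a newer model; both target the argument-generation step rather than the index:

from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

# summary_query_engine and vector_query_tool come from the notebook.
summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    name="summary_tool",
    description=(
        "Useful for summarization questions about the MetaGPT paper. "
        "Always pass the user's question verbatim as the input argument."
    ),
)

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)  # or a newer model, e.g. "gpt-4o"
response = llm.predict_and_call(
    [vector_query_tool, summary_tool],
    "What is a summary of the paper?",
    verbose=True,
)
print(str(response))

There is no guarantee gpt-3.5-turbo follows that instruction every time, but it gives the model a concrete hint about what to put in input instead of leaving it free to invent one.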

Btw, I am very disappointed by the deteriorating quality of the courses by deeplearning.ai—not only are the courses becoming more like advertisements for tools, but the user interface also discourages interaction and discussion among learners.