When running the last cell,
response = llm.predict_and_call(
[vector_query_tool, summary_tool],
"What is a summary of the paper?",
verbose=True
)
I am getting this response:
=== Calling Function ===
Calling function: summary_tool with args: {"input": "The paper discusses the impact of climate change on biodiversity and ecosystems."}
=== Function Output ===
The paper does not discuss the impact of climate change on biodiversity and ecosystems.
which is not even close to the response in the video. The problem is with the input passed to the summary_tool. I ran the notebook multiple times and got the same response every time.
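For what it's worth, the generated arguments can also be inspected directly on the returned object rather than relying on the verbose printout. A quick sketch, assuming predict_and_call returns an AgentChatResponse whose .sources list holds ToolOutput objects (which is what the current llama_index.core API exposes):

# Peek at the tool call(s) the LLM actually made, independent of verbose logging.
for tool_output in response.sources:  # list of ToolOutput objects
    print(tool_output.tool_name)   # which tool was selected, e.g. "summary_tool"
    print(tool_output.raw_input)   # the kwargs gpt-3.5-turbo generated for that tool
    print(str(tool_output.content)[:200])  # start of the tool's answer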
In the video, the summary_tool is called with:
=== Calling Function ===
Calling function: summary_tool with args: {"input": "Please provide a summary of the paper."}
Who is at fault here: gpt-3.5-turbo or LlamaIndex?
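For reference, this is roughly how the tools and the LLM are set up earlier in the notebook. It is reconstructed from memory, so the file name, chunk size, and tool descriptions are placeholders rather than the exact lesson code:

from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

# Load and chunk the paper ("paper.pdf" is a placeholder name).
documents = SimpleDirectoryReader(input_files=["paper.pdf"]).load_data()
nodes = SentenceSplitter(chunk_size=1024).get_nodes_from_documents(documents)

# One index per tool: a summary index and a vector index over the same nodes.
summary_index = SummaryIndex(nodes)
vector_index = VectorStoreIndex(nodes)

summary_tool = QueryEngineTool.from_defaults(
    name="summary_tool",
    query_engine=summary_index.as_query_engine(response_mode="tree_summarize"),
    description="Useful for summarization questions about the paper.",
)
vector_query_tool = QueryEngineTool.from_defaults(
    name="vector_query_tool",
    query_engine=vector_index.as_query_engine(similarity_top_k=2),
    description="Useful for retrieving specific context from the paper.",
)

llm = OpenAI(model="gpt-3.5-turbo")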