I downloaded all the files to run locally. However, for "3.4 Manually defining tools", the response I get always has "function_call": null, as if it just can't detect the tools I passed in. The in-class notebook runs without this issue.
I don't know how to debug this. Can you help? Thanks!
Looking at the aisuite code here, I get the impression that unless max_turns is set, the tools are not passed to the model call. This may be due to a change in the current aisuite package compared to the version used in the lab (maybe the kwargs being popped?):
# Check environment variable before allowing multi-turn tool execution
if max_turns is not None and tools is not None:
    return self._tool_runner(
        provider,
        model_name,
        messages.copy(),
        tools,
        max_turns,
        **kwargs,
    )

# Default behavior without tool execution
# Delegate the chat completion to the correct provider's implementation
response = provider.chat_completions_create(model_name, messages, **kwargs)
return self._extract_thinking_content(response)
So if I were to implement this locally, I would just always set max_turns (e.g., to 1). But maybe there is another explanation.
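To make the failure mode concrete, here is a pure-Python sketch of the dispatch logic quoted above (not the real aisuite code; the return values are just illustrative): the tools only reach the model when max_turns is also given.

```python
# Illustrative model of the dispatch in the quoted aisuite snippet:
# the `tools` argument is only forwarded when max_turns is also set.
def route(tools=None, max_turns=None, **kwargs):
    if max_turns is not None and tools is not None:
        # Multi-turn path: tools are forwarded to the tool runner
        return {"path": "tool_runner", "tools": tools}
    # Default path: `tools` is never handed to the provider call
    return {"path": "plain_completion", "tools": None}

# Without max_turns, the tool list is silently dropped:
print(route(tools=["get_current_time"]))
# With max_turns=1, the tool-running path is taken:
print(route(tools=["get_current_time"], max_turns=1))
```

If this model is accurate, adding max_turns=1 to the create(...) call is enough to make the tools reach the model.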
He did set max_turns, as the assignment at the end mentions it clearly, and he is able to execute the code; only the function calls are null. I think he needs to explicitly add those two agent tools to be recalled separately in the notebook he is running locally.
As shuangxia states, the exact same notebook is run locally. When max_turns is excluded, as in section 3.4 of the notebook, the model does not detect the tool. The only tool in [tools] is get_current_time, which was included, as shuangxia states in a previous comment. So there is something about the packages / local environment (I don't believe the mcp package is required, though it is used in aisuite). When I check the code in aisuite, it appears no tool is passed to the model when max_turns is None. Given that the version of aisuite installed by shuangxia may not be the same as the one used in the lab, and given that the kwargs are popped rather than simply used, this could explain why no tool is passed to the model (they are not passed explicitly through a tools variable, nor through the kwargs). Anyway, so much for my two cents. As stated, I would simply always set max_turns when using this version of aisuite. But let's see what shuangxia reports.
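Since the installed version seems to be the key variable, a quick way to check which aisuite is actually in the local environment (a debugging sketch using only the standard library; no aisuite-specific APIs assumed):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str):
    """Return the installed version string for a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Compare this against the version used in the lab
print(installed_version("aisuite"))
```

If this prints something other than the lab's version, that difference alone could explain the divergent behavior.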
I cannot get the manual calls to work no matter what I try with aisuite 0.1.14. I know the course uses 0.1.11. I have posted my non-working notebook to a gist here. I tried a few times to get the manual schema definition to work, including using tool_choice="required" and other workarounds suggested by Gemini AI, but no matter what I tried, aisuite 0.1.14 strips out the tool.
I also had to apply PR 262 to aisuite 0.1.14, since there was a bug that called ismcp_config.
Assign response_manual the result of an API call in the cell where you got "response_manual not defined". It should work:
import json

response2 = None

# --- Assign 'response_manual' the result of an API call ---
# Example placeholder for the cell where you got "response_manual not defined":
# response_manual = myaisuite_client.chat.completions.create(
#     model="openai:gpt-4o",
#     messages=messages,
#     tools=tools,
# )

# Branch on whether tool_calls is present in the response object
if response_manual.choices[0].message.tool_calls:
    # Pull out the specific tool metadata from the response
    tool_call = response_manual.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)

    # Run the tool locally (ensure the get_current_time function is defined)
    tool_result = get_current_time(args['timezone_str'])

    # Append the assistant message and the tool result to the messages list
    messages.append(response_manual.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": str(tool_result)
    })

    # Send the messages, with the newly appended results, back to the LLM
    response2 = myaisuite_client.chat.completions.create(
        model="openai:gpt-4o",
        messages=messages,
        tools=tools,
    )
    print(response2.choices[0].message.content)
else:
    # Handle the case where no tool calls were made in the first response
    print(response_manual.choices[0].message.content)
I tried repeatedly with a complete manual schema; no matter what, aisuite 0.1.14 strips the function call. Using max_turns and forcing the newer implementation makes it work.
I updated the gist with the non-working schema; it does not produce a tool_calls entry. For now I am not going to try to get manual tool calling to work with 0.1.14.
JSON obtained with the attempt at a manual tool call.
Updated gist: M3UngradedLab_tool_hjlearn.ipynb · GitHub
{
  "id": "chatcmpl-D0TiTfNR1V1cWvXQhweufaIhVRDxi",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "I'm unable to provide real-time information or current time updates. However, you can easily find the current time in Boston, MA, by checking a world clock website, using a smartphone, or searching online. Boston is in the Eastern Time Zone (ET), which is UTC-5 hours during Standard Time or UTC-4 hours during Daylight Saving Time, depending on the time of year. Daylight Saving Time usually starts on the second Sunday in March and ends on the first Sunday in November.",
        "refusal": null,
        "role": "assistant",
        "annotations": [],
        "audio": null,
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1769006601,
  "model": "gpt-4o-2024-08-06",
  "object": "chat.completion",
  "service_tier": "default",
  "system_fingerprint": "fp_deacdd5f6f",
  "usage": {
    "completion_tokens": 99,
    "prompt_tokens": 17,
    "total_tokens": 116,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    }
  }
}
Hi, I could reproduce the code in the course with aisuite 0.1.11, exactly as recommended. The manual tool call notebook where I fixed my requirement for max_turns is below.
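For anyone else hitting this, pinning the course version before running the notebook avoids the issue entirely (the [openai] extra is an assumption here; adjust it for whichever provider you use):

```shell
# Pin aisuite to the version used in the course lab
pip install "aisuite[openai]==0.1.11"
```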