Hi,
I am trying to understand whether the function-calling capability of LLMs lets them call the same function/tool iteratively within a single model invocation, or whether it doesn't work that way. Here is an example:
I am creating an Agent with LangGraph. I have a node in which an LLM should call a tool to perform some work.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", temperature=0)

# Bind the tool to the model
model_with_tools = model.bind_tools(
    [get_company_info_tool],   # list of tools
    parallel_tool_calls=False  # disable parallel tool calls
)
In an Agent node where the model should call the tool, I am invoking the model in the following way:
from langchain_core.messages import SystemMessage, HumanMessage

company_info_messages = [
    SystemMessage(content=GET_COMPANY_INFO_PROMPT),
    HumanMessage(content=f"Ticker symbols list: {ticker_list}")
]
company_response = model_with_tools.invoke(company_info_messages)
What I expect, and what the SystemMessage prompt also states explicitly, is that the model will iterate over ticker_list, call get_company_info_tool for each element, and then provide a combined output. Instead, it calls the tool only for the first element in the list.
I could easily iterate over the list and call the tool from the framework without the LLM's help, but I wanted to see and understand whether the LLM can do it itself.
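For context, the loop I would otherwise write by hand looks roughly like the sketch below. It uses plain-Python stubs in place of the real model and tool so the control flow is visible: the model requests one tool call per turn, the caller executes it, feeds the result back as a message, and re-invokes the model until no more tool calls are requested. With a real LangChain model you would call model_with_tools.invoke(messages), append the returned AIMessage, append a ToolMessage per executed tool call, and repeat; the stub names (fake_model, fake_get_company_info, run_loop) are mine, not part of any library.

```python
# Sketch of a multi-turn tool-calling loop with stub objects standing in
# for the real model and tool (assumption: one tool call per turn, which
# mirrors parallel_tool_calls=False).

def fake_get_company_info(ticker):
    # Stub for get_company_info_tool: returns canned data per ticker.
    return {"ticker": ticker, "name": f"{ticker} Corp"}

def fake_model(messages, pending):
    # Stub model: requests one tool call per turn while tickers remain,
    # then produces a final combined answer.
    if pending:
        return {"tool_call": pending[0]}
    return {"content": "combined summary of all tickers"}

def run_loop(ticker_list):
    messages, results, pending = [], [], list(ticker_list)
    while True:
        response = fake_model(messages, pending)
        call = response.get("tool_call")
        if call is None:
            # No further tool calls requested: the loop is done.
            return results, response["content"]
        results.append(fake_get_company_info(call))  # execute the tool
        messages.append(("tool", call))              # feed the result back
        pending.pop(0)

results, answer = run_loop(["AAPL", "MSFT", "GOOG"])
```

The point of the sketch is that a single invoke() yields only one assistant turn; iterating over all tickers requires either multiple turns like this loop, or the model emitting all the tool calls in one response.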
Any feedback would be appreciated.