LLM function calling question

Hi,

I am trying to understand whether the function calling capability of LLMs lets them call the same function/tool iteratively within a single model invocation, or whether it doesn't work that way. Here is an example:

I am creating an Agent with LangGraph. I have a node in which an LLM should call a tool to perform some work.
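For reference, here is a minimal sketch of what the tool could look like. The real get_company_info_tool isn't shown here, so the ticker argument and return value are just assumptions:

from langchain_core.tools import tool

@tool
def get_company_info_tool(ticker: str) -> str:
    """Return basic company information for a single ticker symbol."""
    # Placeholder body; the real tool would look the ticker up somewhere.
    return f"Company info for {ticker}"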

from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4o", temperature=0)

Binding the tool to the model

model_with_tools = model.bind_tools(
    [get_company_info_tool],       # List of tools
    parallel_tool_calls=False      # Disable parallel tool calls (at most one call per response)
)

In an Agent node where the model should call the tool, I am invoking the model in the following way:

from langchain_core.messages import HumanMessage, SystemMessage

company_info_messages = [
    SystemMessage(content=GET_COMPANY_INFO_PROMPT),
    HumanMessage(content=f"Ticker symbols list: {ticker_list}"),
]

company_response = model_with_tools.invoke(company_info_messages)

What I am expecting, and what is also clearly stated in the system prompt, is that the model will iterate over ticker_list, call get_company_info_tool for each element, and then provide a combined output. It does this, but only for the first element in the list.
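A quick way to see this is to inspect the tool_calls on the AIMessage the model returns, something like this (just a debugging sketch):

print(len(company_response.tool_calls))   # 1 when parallel_tool_calls=False
for call in company_response.tool_calls:
    print(call["name"], call["args"])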

I can easily iterate over the list and call the tool from the framework without the LLM's help, but I wanted to see and understand whether the LLM can do it.
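For what it's worth, the framework-side version is just a plain loop, roughly like this (assuming the tool takes a single ticker argument):

company_infos = [get_company_info_tool.invoke({"ticker": t}) for t in ticker_list]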

Any feedback would be appreciated.

And to answer my own question, since I figured it out. :sweat_smile:

An LLM can call the same tool multiple times in the same model invocation, as long as parallel_tool_calls=True is set in the model-tool binding.

model_with_tools = model.bind_tools(
    [get_company_info_tool],      # List of tools
    parallel_tool_calls=True      # Allow multiple tool calls in a single response
)
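With that in place, the model returns several tool calls in one response, and you still have to execute them and feed the results back to get the combined answer. Roughly like this (a sketch rather than my exact code, reusing the tool and messages from the earlier snippets):

from langchain_core.messages import ToolMessage

# Run each tool call the model requested and wrap the results as ToolMessages.
tool_messages = []
for call in company_response.tool_calls:
    result = get_company_info_tool.invoke(call["args"])
    tool_messages.append(ToolMessage(content=str(result), tool_call_id=call["id"]))

# A follow-up invocation lets the model combine the per-ticker results.
final_response = model_with_tools.invoke(
    company_info_messages + [company_response] + tool_messages
)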

You need to adjust the SystemMessage to specify that the model should call the tool for each ticker in the list and aggregate the results. Alternatively, iterating over the list in your code might be more reliable for now.

Hey @DriftLau ,

The prompt was fine; what caused the problem was that parallel_tool_calls was set to False, which resulted in just one tool call.