I want to understand whether there is any specific advantage to extracting/tagging data through function calls when you can ask for the same thing through a normal prompt to the LLM. I find it a bit odd that the output is extracted by asking the LLM to identify the input arguments for a given function. Instead of going through such a convoluted path, why not just ask the LLM directly to do these tasks?
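For concreteness, here is a minimal sketch of the two paths I mean, assuming the OpenAI Python SDK; the model name, the example sentence, and the record_person function are placeholders for illustration, not anything from a real codebase:

```python
from openai import OpenAI

client = OpenAI()
text = "Alice turned 31 last week."  # placeholder input

# Path 1: normal prompt -- the extraction instructions live in the messages.
plain = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Extract the person's name and age from the text below and "
            'reply with JSON like {"name": ..., "age": ...}.\n\n' + text
        ),
    }],
)
# The reply is free-form text that you hope contains valid JSON.
print(plain.choices[0].message.content)

# Path 2: the functions argument -- the same instructions, expressed as a
# JSON schema. The function is never executed; it only shapes the output.
structured = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": text}],
    functions=[{
        "name": "record_person",  # hypothetical function name
        "description": "Record a person mentioned in the text.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    }],
    function_call={"name": "record_person"},  # force the "call"
)
# The reply is a JSON string of arguments keyed to the declared schema.
print(structured.choices[0].message.function_call.arguments)
```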
Do you trust an LLM to do this for you?
Why would I trust the function calling option more? Both approaches use the same LLM to extract the same info; the only difference is the format in which you ask the model to evaluate the text and respond.
If you have the LLM label its own data, you’re letting it grade its own homework.
Where are we asking the LLM to grade its own work?
Just to make sure there is no confusion, let me restate my question. At a high level, we send the LLM instructions to tag/extract text from the given content. We can provide those instructions either through the normal user input (the messages argument) or through the functions argument.
I want to understand if there is any specific benefit to providing instructions through the functions argument instead of just using the messages argument.
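To show where the difference actually lands downstream, here is a sketch of consuming each response, again assuming the OpenAI Python SDK response shapes; parse_plain and parse_function_call are hypothetical helper names:

```python
import json

def parse_plain(resp):
    """Messages-only path: the JSON arrives as free text in the reply, so
    parsing can fail if the model wraps it in prose or code fences."""
    try:
        return json.loads(resp.choices[0].message.content or "")
    except json.JSONDecodeError:
        return None

def parse_function_call(resp):
    """functions path: the arguments arrive as a JSON string shaped by the
    schema declared up front, so consumption is one well-defined step."""
    return json.loads(resp.choices[0].message.function_call.arguments)
```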