Please help me understand the purpose of agentic search. That is, why does the LLM node query Tavily?
For example, I could execute the same query in ChatGPT, so can we not query OpenAI directly instead of performing an agentic search?
Hello Vdurga. Large Language Models do not come with the behavior you saw in OpenAI's ChatGPT. What you are seeing in ChatGPT follows a similar pattern to the one explained in lessons 2 and 3. Maybe OpenAI is using its own ‘homemade’ tool with a purpose similar to Tavily's.
For the weather scenario in the examples of lessons 2 and 3, ChatGPT already has tools available, but for scenarios where no such tool exists, lessons 2 and 3 show an example of how to solve those cases.
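To illustrate the point that the bare model has no built-in search, here is a rough LangChain-style sketch (my own, not the exact lesson code; it assumes the `langchain-openai`, `langchain-community`, and `tavily-python` packages and `OPENAI_API_KEY` / `TAVILY_API_KEY` in the environment). The model only emits a tool call; our agent loop has to actually run Tavily:

```python
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults

search_tool = TavilySearchResults(max_results=2)  # Tavily exposed as a tool
llm = ChatOpenAI(model="gpt-4o")                  # the bare model: no web access on its own

# Without bind_tools, the model can only answer from its training data.
llm_with_tools = llm.bind_tools([search_tool])

response = llm_with_tools.invoke("What is the weather in San Francisco right now?")
# The model does not fetch the weather itself; it returns a tool call that
# the surrounding agent (e.g. the LangGraph loop from lesson 2) must execute.
print(response.tool_calls)
```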
I’m not sure, but I think the purpose of using agentic search is that it relies on specialized tools such as Tavily to provide curated search results relevant to each step of our agentic workflow where web search is required.
In the example from “Lesson 2: LangGraph components”, a prompt asking about the weather is used. That case is simple enough that a ChatGPT prompt like the one you shared would solve the problem. However, agentic search can be integrated into a more complex multi-step process, where the list of URLs returned by Tavily feeds other parts of the application, together creating capabilities beyond what an LLM alone has.
In other words, I think the purpose of agentic search is to provide curated, relevant links to each step of our agentic workflow where that information is needed.
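For example, something like the following sketch (assuming the `tavily-python` package and a `TAVILY_API_KEY` environment variable; the query is only an illustration) shows the kind of curated result Tavily returns, which later nodes in the graph can consume:

```python
import os
from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
result = client.search(query="current weather in San Francisco", max_results=3)

# Each hit carries a source URL plus an extracted text snippet, so a later
# step can cite the URL or feed the snippet back to the LLM as context.
for hit in result["results"]:
    print(hit["url"])
    print(hit["content"][:200])
```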
Why use this new tool, Tavily, to do that curated search instead of the GPT model itself? I’m not sure. I think it is perhaps because when we connect through OpenAI’s API to the gpt-4o model, what we get is access only to the model, not to the tools that the ChatGPT interface uses in combination with the model to connect to the internet and provide current information.
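As a hedged sketch of that idea (the `search_context` string is made up here; in the lessons it would come from a Tavily search), calling the API directly gives us only the model, so any current information has to be retrieved separately and passed in as context:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder standing in for text retrieved by Tavily at runtime.
search_context = "Tavily result: San Francisco, 62°F, partly cloudy (example text)"

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the provided search results."},
        {"role": "user", "content": f"Search results:\n{search_context}\n\nWhat is the weather in San Francisco?"},
    ],
)
print(completion.choices[0].message.content)
```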