> Compared to the information it already has based on the training data.
Hi @Sid72
In a RAG implementation, the LLM knows to look at external sources based on:
- Query Analysis: Detects queries that require up-to-date or specific information. In this scenario the LLM identifies that the query asks for current information not present in its training data, which is static up to a cutoff date.
- Confidence Scoring: Uses internal confidence metrics; low confidence in its own knowledge triggers external retrieval.
- Fallback Mechanisms: Automatically falls back to external sources if the initial response is inadequate or fails (you can configure these in frameworks like LangChain).
- Explicit Instructions: The system prompt or orchestration code explicitly tells the model to answer using the retrieved context rather than relying on its parametric knowledge.
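The first three mechanisms above can be sketched as a simple routing function. This is a minimal illustration, not a specific framework's API: `needs_retrieval`, the keyword list, and the confidence threshold are all hypothetical placeholders for what a real RAG pipeline (e.g. in LangChain) would implement.

```python
# Hypothetical sketch of a retrieve-or-answer decision in a RAG pipeline.
# Names, keywords, and thresholds are illustrative assumptions only.

RECENCY_KEYWORDS = {"today", "latest", "current", "news"}  # query analysis
CONFIDENCE_THRESHOLD = 0.7                                 # confidence scoring

def needs_retrieval(query: str, model_confidence: float) -> bool:
    """Decide whether to consult external sources.

    Combines simple query analysis (recency keywords) with a
    confidence score assumed to come from the model.
    """
    mentions_recency = any(word in query.lower() for word in RECENCY_KEYWORDS)
    low_confidence = model_confidence < CONFIDENCE_THRESHOLD
    return mentions_recency or low_confidence

def answer(query: str, model_confidence: float) -> str:
    # Fallback mechanism: route to retrieval when the checks fire.
    if needs_retrieval(query, model_confidence):
        docs = "retrieved passages"  # placeholder for a vector-store lookup
        return f"Answer grounded in: {docs}"
    return "Answer from parametric (training) knowledge"
```

In practice the "confidence" signal might come from token log-probabilities or a self-evaluation prompt, and the keyword check would be replaced by a classifier or the retriever's similarity scores.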
Hope this helps! If you need further assistance, feel free to ask.