I am a bit lost. I hope someone can help me understand this better. As demonstrated in the video, when the prompt is not related to any subchain in the router chain, the prompt is passed to the default chain, which fetches the reply from the original LLM (OpenAI). If so, why do we need to create chains at all? As I understand it, the LLM on its own can provide answers to all the queries.
Hi @vsrinivas
It sounds like you’re talking about the router chain example.
In this example they are using what I believe is called “Expert Prompting”.
Expert Prompting is when you explicitly prompt the LLM to act as an expert in a subject. By giving this prompt to the LLM, you are hoping to steer the generated output towards that of an expert rather than the average text found in its training data, and as a consequence get a better answer to your query.
In the router example the LLM has a few experts to choose from, plus a Default (which just passes the original input text through unchanged).
So that’s one call to the LLM to choose an expert and a second call to produce the final answer.
Do you need a chain to do this? No, you can always call the API directly, but the Chains abstraction makes building these interactions with the LLM easier.
Also, you don’t have to stick to the same LLM throughout a chain sequence. Chains make it easier to combine your prompts, models (LLMs), and outputs, and they give people a common structure to follow.
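To make the two-call flow concrete, here is a minimal sketch of the router pattern in plain Python, without LangChain. Everything in it is illustrative: the expert templates, the keyword check (standing in for the routing call the chain would actually make to the LLM), and `fake_llm` (standing in for a real OpenAI call).

```python
# Illustrative stand-ins: the expert prompts, keyword routing, and
# fake_llm are all hypothetical, not the course's actual code.

EXPERT_PROMPTS = {
    "physics": "You are a physics professor. Answer concisely:\n{query}",
    "math": "You are a mathematician. Show your reasoning:\n{query}",
}
DEFAULT_PROMPT = "{query}"  # default chain: pass the raw input through


def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. the OpenAI API).
    return f"LLM reply to: {prompt[:40]}..."


def route(query: str) -> str:
    # Call 1: choose an expert. A real router chain would ask the LLM
    # to pick; a keyword check stands in for that here.
    if "force" in query or "gravity" in query:
        name = "physics"
    elif "integral" in query or "prime" in query:
        name = "math"
    else:
        name = None  # no match -> fall through to the default chain
    template = EXPERT_PROMPTS.get(name, DEFAULT_PROMPT)
    # Call 2: produce the final answer with the chosen prompt.
    return fake_llm(template.format(query=query))


print(route("Why do primes matter?"))        # routed to the math expert
print(route("What's a good pasta recipe?"))  # falls through to default
```

The point of the abstraction is that the routing logic, the prompt templates, and the model calls each stay swappable without rewriting the glue code.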
Hope this helps!
Sam
Thanks for the clarification. Generally, I got the point. I guess only after extensively using and experimenting with such models can one really appreciate the value of these expert prompts, and only after that is it appropriate to compare or comment. Nevertheless, the course is simple and easy to follow. Thanks a lot.