LLM Switching Issue

Why is it so hectic to switch from one LLM to another in a RAG pipeline, for example from Mistral's 7B chat v1 to LLaMA 2 or some other model? Why does it take so much work to modify all the model-specific adaptations in the framework, in both LangChain and LlamaIndex? Isn't there anything like plug and play? If there is no easy way to try out different models for a task, and everything stays tightly coupled to a specific LLM, how is anyone supposed to explore the strengths of other models? Or, if there is an efficient way to do this, kindly drop it below! Suggestions are welcome.
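
For concreteness, the kind of plug-and-play I'm after looks roughly like the sketch below: a tiny adapter interface that the rest of the RAG code depends on, with the concrete model chosen by a single config string. This is only a rough sketch under my own assumptions: `LLMAdapter`, `HFTextGenLLM`, `build_llm`, and `answer` are names I made up (not a LangChain or LlamaIndex API), the Hugging Face model IDs are illustrative, and actually running the 7B checkpoints needs the downloaded weights and a suitable GPU.

```python
"""Sketch of a model-agnostic LLM adapter for a RAG pipeline.

Only transformers.pipeline is a real library call; everything else
here is an illustrative name, not an existing framework API.
"""
from typing import Protocol

from transformers import pipeline  # pip install transformers


class LLMAdapter(Protocol):
    """The only surface the rest of the RAG pipeline touches."""

    def complete(self, prompt: str) -> str: ...


class HFTextGenLLM:
    """Wraps any Hugging Face text-generation checkpoint behind LLMAdapter."""

    def __init__(self, model_id: str, max_new_tokens: int = 256):
        self._pipe = pipeline("text-generation", model=model_id)
        self._max_new_tokens = max_new_tokens

    def complete(self, prompt: str) -> str:
        out = self._pipe(
            prompt,
            max_new_tokens=self._max_new_tokens,
            return_full_text=False,  # strip the echoed prompt
        )
        return out[0]["generated_text"]


def build_llm(name: str) -> LLMAdapter:
    """Swap models by changing one config string, not the pipeline code."""
    registry = {
        "mistral-7b-instruct": "mistralai/Mistral-7B-Instruct-v0.1",
        "llama-2-7b-chat": "meta-llama/Llama-2-7b-chat-hf",
    }
    return HFTextGenLLM(registry[name])


def answer(question: str, retrieved_chunks: list[str], llm: LLMAdapter) -> str:
    """The generation step only sees LLMAdapter, so any backend plugs in."""
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        f"Use the context to answer.\n\nContext:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm.complete(prompt)


if __name__ == "__main__":
    llm = build_llm("mistral-7b-instruct")  # the one line to change for LLaMA 2
    print(answer("What does the doc say about latency?", ["<retrieved text>"], llm))
```

With something like this, swapping Mistral for LLaMA 2 would just be a different key passed to `build_llm`, instead of edits scattered through prompt templates, loaders, and chain setup. Is there anything standardized along these lines?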

Not yet. The industry is way too immature for standards to have developed.


So shifting from one LLM to another will still remain a nightmare!

Yes, until the rate of change slows down.