Invoke custom LLMs using the VectorStoreIndex API of LlamaIndex

How do I invoke customized OpenAI or non-OpenAI LLMs as part of the VectorStoreIndex?

To use a customized language model (LLM) with the VectorStoreIndex, you need to set it explicitly in a ServiceContext object. You can find some examples in the LlamaIndex documentation here: Large Language Models (LLMs) | LlamaIndex.TS.
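A minimal sketch of that setup, assuming the ServiceContext-era LlamaIndex.TS API (`serviceContextFromDefaults`) and a hypothetical model choice — check the linked docs for the version you have installed, since newer releases replaced ServiceContext with a global `Settings` object:

```typescript
import {
  Document,
  OpenAI,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

async function main() {
  // Configure whichever LLM you want the index to use.
  // Model name and temperature here are illustrative assumptions.
  const llm = new OpenAI({ model: "gpt-4", temperature: 0 });

  // Bundle the custom LLM into a ServiceContext; any component
  // you don't override falls back to the defaults.
  const serviceContext = serviceContextFromDefaults({ llm });

  const documents = [new Document({ text: "Some example text." })];

  // Pass the ServiceContext when building the index so queries
  // against it are answered by your custom LLM.
  const index = await VectorStoreIndex.fromDocuments(documents, {
    serviceContext,
  });

  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "What does the text say?" });
  console.log(response.toString());
}

main();
```

The same pattern applies to non-OpenAI LLMs: any class implementing the library's LLM interface can be passed as `llm` when building the ServiceContext.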

How do I use an OpenAI model that is hosted behind a proxy? In LangChain's OpenAI class I can pass base_url and headers, but I don't see an equivalent in llamaindex.ts's OpenAI().
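One possible approach, assuming the LlamaIndex.TS `OpenAI` class forwards an `additionalSessionOptions` object to the underlying openai-node client (whose constructor does accept `baseURL` and `defaultHeaders`) — the option name and its availability should be verified against your installed version, and the proxy URL and header below are placeholders:

```typescript
import { OpenAI } from "llamaindex";

// Assumption: additionalSessionOptions is passed through to the
// openai-node client, which supports baseURL and defaultHeaders.
const llm = new OpenAI({
  model: "gpt-3.5-turbo",
  additionalSessionOptions: {
    baseURL: "https://my-proxy.example.com/v1", // hypothetical proxy endpoint
    defaultHeaders: {
      "X-Custom-Header": "some-value", // hypothetical header
    },
  },
});
```

If your version does not expose such a pass-through option, an alternative is to construct your own openai-node client with the proxy settings and wrap it in a custom LLM class implementing the library's LLM interface.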