All the examples in this course appear to assume you’re willing to use OpenAI, but in many situations (e.g. corporate settings with proprietary data), this option is a non-starter. Is there a way to apply the trulens-eval library to, say, locally hosted Huggingface models, along with prompts for performing the triad metrics? Or do we have to cobble it together directly from any available source code?
Good question; I had the same one. A quick check of the trulens source code suggests they do support LLMs from HuggingFace. For example, see the code regarding groundedness:
“The groundedness_provider can either be an LLM provider (such as OpenAI) or NLI with huggingface.”
I haven't tested it myself, but it might be possible without any customization.
I am trying to get trulens_eval feedback working without OpenAI (using a HuggingFace model), off-platform and with Llama-2.
When executing the following code:
with tru_recorder as recording:
    llm_response = chain(prompt_input)
it still looks for the OpenAI key, as follows:
OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
Does anyone have a procedure for getting trulens_eval to work with Llama-2 models?
@msaravanan, can you please share your code for using a HuggingFace model for trulens evaluation?