Running the Advanced RAG Applications lab locally

This is a great short course - thanks for organizing!

I’m noticing that a few people who want to run the lab locally have questions. Would you be able to post a short guide outlining requirements (e.g. modules that need to be installed) and other considerations (e.g. cost)?


Does this help?

Thanks, @balaji.ambresh! Yes, this is a step in the right direction, but the thread is old, so many modules etc. have changed or are no longer relevant.

It would be nice to have a “how to run this course on your local machine” guide specific to each course, so that folks who like to code along on their own computers can do so easily.

1 Like

Thanks for confirming. I’ve already asked the staff about this, and for some reason a requirements.txt hasn’t been shared for any course.

Follow the imports from the start of the assignment and there’ll be only a few that need digging. It’d be great if you and a bunch of other learners could comment on this topic to give the staff a better picture of your needs.


I agree with @GC23 - a requirements.txt would be nice.

Also, I wonder: since Llama is mentioned frequently (e.g. llama_index, TruLlama), do we need an instance of Llama running locally, or not? Some clarification would help.

Thanks for the course and any comments!

1 Like

Please provide the link to the lecture / lab that’s causing you this confusion.

1 Like

Your implementation needs access to the following:

  1. Vector store
  2. LLM models used.

What do you mean by:

What’s the definition of Llama in code?

Running the file (Lesson 1) locally throws the errors below. By the way, I have the latest llama_index and trulens_eval installed in my environment. I think the error is coming from trulens_eval. Can you please help me sort this out? I appreciate your support.

----> 9 from trulens_eval import (
     10     Feedback,
     11     TruLlama,
     12     OpenAI
     13 )
     15 from import Groundedness
     16 import nest_asyncio

File ~/.local/lib/python3.10/site-packages/trulens_eval/
     95 from trulens_eval.tru_custom_app import instrument
     96 from trulens_eval.tru_custom_app import TruCustomApp
---> 97 from trulens_eval.tru_llama import TruLlama
     98 from trulens_eval.utils.threading import TP
    100 __all__ = [
    101     'Tru',
    102     'TruBasicApp',
    ...
    115     'TP'
    116 ]

File ~/.local/lib/python3.10/site-packages/trulens_eval/
     41 from llama_index.schema import BaseComponent
     43 # LLMs
---> 44 from llama_index.llms.base import LLM  # subtype of BaseComponent
     46 # misc
     47 from llama_index.indices.query.base import BaseQueryEngine

ImportError: cannot import name 'LLM' from 'llama_index.llms.base' (/home/jovyan/.local/lib/python3.10/site-packages/llama_index/llms/

1 Like


pip install trulens-eval==0.18.1 llama-index==0.9.8

You can get package versions from inside the notebook like this:
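For example, with the standard library’s importlib.metadata (the package names below are the distribution names the course installs; the helper function is just an illustration):

```python
from importlib.metadata import version, PackageNotFoundError

def pkg_version(name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# Print the versions the notebook is actually running against.
for pkg in ("trulens-eval", "llama-index"):
    print(pkg, "->", pkg_version(pkg) or "not installed")
```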

After install:

>>> from trulens_eval import (
...     Feedback,
...     TruLlama,
...     OpenAI
... )


@balaji.ambresh Thank you very much. This worked for me.

1 Like


When I run the code below (code cell #13 in the Jupyter notebook) locally:

with tru_recorder as recording:
    for question in eval_questions:
        response = query_engine.query(question)

I get the following error:
openai request failed <class 'openai.AuthenticationError'>=Error code: 401 - {'error': {'message': 'Incorrect API key provided: ********************************. You can find your API key at', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}. Retries remaining=3.

I am stumped because my API key works fine when I query without trulens_eval (code cell 7 in the Jupyter notebook):

response = query_engine.query("What are steps to take when finding projects to build your experience?")

I only get the error when I wrap this in "with tru_recorder as recording:".

Do you know why this is? Do I need a specific OpenAI API key to use trulens_eval with OpenAI?

1 Like

Is your OPENAI_API_KEY set in the environment as shown in ?
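The TruLens feedback functions make their own OpenAI calls, so the key has to be visible in the process environment rather than only passed to the query engine. A minimal sketch ("sk-..." is a placeholder, not a real key):

```python
import os

# Export the key BEFORE constructing any trulens_eval / llama_index objects.
# Replace the placeholder with your real key, or load it from a .env file.
os.environ["OPENAI_API_KEY"] = "sk-..."

# The feedback provider reads this variable at call time.
assert os.environ.get("OPENAI_API_KEY"), "key must be visible in the environment"
```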

1 Like

Hello @balaji.ambresh,

I see that in the file you are importing
from trulens_eval import OpenAI
and in the main file you are also using
from trulens_eval import OpenAI as fOpenAI
and calling openai = OpenAI(), with no model_name or endpoint passed as arguments.

However, in my local environment I am using AzureOpenAI from llama_index for both the LLM and the embeddings model, so I tried using openai = AzureOpenAI() instead to pass my model.

I am guessing this is the cause of the 'openai request failed <class 'openai.AuthenticationError'>=Error code: 401' error. Is there a way for trulens_eval to accommodate embeddings / LLMs from llama_index and AzureOpenAI instead?

1 Like

I have no idea about Azure. Please refer to their docs / forums on how to set up the OpenAI access key. If expert advice is required for your project, please hire a consultant.

I had to change to this version of llama-index:

1 Like

Thanks for sharing, @Adam_Hjerpe
Please use the notebook’s versions of the libraries when setting up your environment, especially if you’re new to programming in Python.

Semantic versioning says the major version should be incremented for incompatible API changes. If the functionality under the hood changes a lot without any change to the API signature, some developer teams might just bump the patch / minor version. For the most part, you don’t need to worry about this. Do be aware of this situation when using a more recent version of a library than the one used in the exercise notebooks.
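To illustrate the major/minor/patch ordering: comparing versions as tuples of integers makes it explicit (a toy parser for "X.Y.Z" strings, not a full semver implementation):

```python
def parse_version(v):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

# Tuples compare left to right, so a major bump outranks any minor/patch bump.
assert parse_version("0.18.1") < parse_version("0.19.0")
assert parse_version("1.0.0") > parse_version("0.99.99")
```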