Langchain: Evaluation: when I run `qa.run(examples[0]["query"])` it throws an error

qa.run(examples[0]["query"]) throws a validation error:

ValidationError: 2 validation errors for DocArrayDoc
text
  Field required [type=missing, input_value={'embedding': [-0.0064034... -0.028614587546074315]}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
  Field required [type=missing, input_value={'embedding': [-0.0064034... -0.028614587546074315]}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing

I downloaded the course notebook to my local machine and ran it.
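
For context, the failing cell is the evaluation setup. Roughly, the kind of code that triggers this looks like the sketch below (loader, llm_model, and examples are defined in earlier cells of the notebook, so treat the exact names as approximate):

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import DocArrayInMemorySearch

# Build an in-memory DocArray index over the documents from the CSV loader
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

llm = ChatOpenAI(temperature=0.0, model=llm_model)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(),
    verbose=True,
    chain_type_kwargs={"document_separator": "<<<<>>>>>"},
)

# This is the call that raises the DocArrayDoc validation error
qa.run(examples[0]["query"])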

I recommend you use the “pencil” icon in the thread title to move your message to the correct forum for the course you are attending.

It is more likely to be noticed by someone who can help you if you put it where they are going to look for it.

Hi @jessica_h

Welcome to the community.

What course are you referring to?

best regards

Hi Elirod, thanks for reaching out. I am referring to the course “LangChain for LLM Application Development”.
Course link: https://learn.deeplearning.ai/langchain/lesson/6/evaluation
Below is a screenshot of my notebook.

Here is the last part of the error message:

Hi @jessica_h

It looks like this error is related to some missing files. Don’t forget to download all the required files.

You can do that as follows:

  1. Click on the Jupyter logo.

  2. Click and download the files one by one.

Give it a try and let me know the results

best regards

Thanks, Tom, will do

Hi Rodrigo,
It runs in the downloaded notebook, but not in my local environment, where I copied the code from the downloaded notebook and downloaded the CSV file.
As you can see from the screenshot of my local notebook, data[10], which contains the answer, has a name and metadata. So it should be something in my own conda environment not working well with DocArrayDoc.
I will look for the cause of this issue and post the solution here if I find anything useful.
Thank you very much for your help!
Jessica

Oh, OK.

This is a good point. Maybe it is some kind of conflict in the package versions.

I know what the issue is.
The langchain version I installed in my notebook was 0.0.348 (the newest version, I believe?), while your notebook uses langchain version 0.0.179. After I downgraded my langchain version to 0.0.179, it works.
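
For anyone else hitting this, the downgrade itself is a single pinned install (assuming a standard pip-based environment; prefix with ! to run it in a notebook cell):

pip install langchain==0.0.179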


Would it be considered a bug in langchain version 0.0.348?

I’d say it’s not a bug. It’s a design decision.

Folks who develop software packages rarely worry about backward compatibility with previous versions.

Yeah, agreed. So I assume some functionality from the old version might be deprecated in the newest version. I need to rewrite some code in order to have it work with the newest version of langchain.

Well done @jessica_h!

Thanks for sharing your solution with us.

It looks like this behavior is related to the recent Pydantic version.
To make it work, I had to downgrade Pydantic to 1.10.9, which in turn downgraded langchain to 0.0.347.
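
In concrete terms, that downgrade is just a pinned install (assuming a pip-based environment; in my case langchain ended up at 0.0.347 afterwards):

pip install pydantic==1.10.9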

I would love to see the notebook updated; it is part of the programming life cycle. At the very least, point to a GitHub repo with the newest code, even if the videos are not updated.
Thanks

How did you figure out which versions to downgrade to?

Hello, I am going through the course just now and managed to fix this for LangChain v0.1.8:

!pip install chromadb

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Load the CSV documents (loader is defined earlier in the notebook)
docs = loader.load()

# Use Chroma instead of DocArrayInMemorySearch to avoid the DocArrayDoc validation error
vectorstore = Chroma.from_documents(documents=docs, embedding=OpenAIEmbeddings())

llm = ChatOpenAI(temperature=0.0, model=llm_model)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    verbose=True,
    chain_type_kwargs={"document_separator": "<<<<>>>>>"},
)
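
With this in place, the originally failing call from the lesson should run again:

qa.run(examples[0]["query"])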

Sorry, it has been a while, but I believe you can run a command to check the version from within the course notebook.
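
For example, a cell like this shows the installed versions (this is just the standard pip check, nothing specific to the course notebook):

!pip show langchain pydantic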