Passing JSON as input to LLM

Hi all!

I’m trying to build a RAG pipeline using LlamaIndex. I created two indices (one for the JSON files, another for the HTMLs and PDFs), then I create two query engines and pass them to an agent as tools. The model is not able to query the JSON and provide a response. Most of the time the model uses implicit information to answer, and it is always wrong; sometimes it says the shared documents don’t have the required data.

I am planning to pass the JSONs to the LLM directly, without creating a vector index. Will that work?
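For small files this can work: serialize the JSON and place it in the prompt as explicit context. A minimal, stdlib-only sketch of the idea (the record and question are made-up placeholders, not from your data):

```python
import json

# Hypothetical record standing in for one of the JSON files.
record = {
    "product": "Widget A",
    "price_usd": 19.99,
    "in_stock": True,
}

question = "What is the price of Widget A?"

# Serialize the JSON and embed it in the prompt as explicit context,
# instructing the model not to answer from prior knowledge.
prompt = (
    "Answer using ONLY the JSON below. If the answer is not present, "
    "say so.\n\n"
    f"JSON:\n{json.dumps(record, indent=2)}\n\n"
    f"Question: {question}"
)

print(prompt)
```

The obvious limit is the context window: once the JSONs no longer fit in one prompt, you need retrieval (or something structured like JSONalyze) again.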

Any suggestions on how the model can understand the query and retrieve the data from the JSON will be helpful.

Thanks!

Hi.
I have worked with vectorized JSON files and LLMs, and it works fine. I don’t know how you are vectorizing the JSONs, but what I do is vectorize the content of the JSON, not the JSON directly: I extract the info I want to vectorize from the JSON, then I vectorize that info.
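Concretely, the extraction step looks roughly like this (stdlib-only sketch; the field names here are invented and will differ from yours):

```python
import json

# A made-up JSON document, standing in for one of your files.
raw = '''
{
  "title": "Refund policy",
  "body": "Customers may request a refund within 30 days of purchase.",
  "internal_id": "doc-42"
}
'''

data = json.loads(raw)

# Embed only the human-readable content; keep identifiers as metadata
# instead of letting JSON syntax and keys pollute the embedding.
text_to_embed = f"{data['title']}\n{data['body']}"
metadata = {"internal_id": data["internal_id"]}

print(text_to_embed)
```

In LlamaIndex you would then wrap each pair in a `Document(text=text_to_embed, metadata=metadata)` before building the index, rather than feeding the raw JSON string to the reader.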

Let me know if that is where your problem is.

Hi @haroldc, I passed the JSON files to SimpleDirectoryReader and converted the docs to a vector index. I am using an open-source LLM, zephyr-7b-beta.

Instead of vectorizing, I tried using the JSONalyze query engine and it works better. But the model still falls back on its prior knowledge, even after prompting it NOT to use it.
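For anyone following along: as I understand it, JSONalyze loads the JSON records into an in-memory SQLite table and has the LLM generate SQL against it, which keeps answers grounded in the data rather than in prior knowledge. The underlying idea, minus the LLM, looks roughly like this (table name and records are made up):

```python
import json
import sqlite3

# Made-up records standing in for the JSON files.
records = json.loads('''[
    {"name": "alpha", "score": 10},
    {"name": "beta",  "score": 25}
]''')

# Load the records into an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [(r["name"], r["score"]) for r in records],
)

# In JSONalyze, the LLM would generate this SQL from the user's question,
# e.g. "which item has the highest score?"
top = conn.execute(
    "SELECT name FROM items ORDER BY score DESC LIMIT 1"
).fetchone()[0]
print(top)
```

Because the answer is computed by SQL over the actual rows, there is nothing for the model to hallucinate at the retrieval step; prior-knowledge leakage can only creep back in when the final response is phrased.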

What do your JSON files look like?

Mine are like this:
{
  "page_content": "The text or data I want to embed",
  "other_things": "Here are the data I use for metadata"
}