Hi guys, I am working on a side project and I'm a bit stuck at one point, so some guidance would be really appreciated.

I have a JSON file (which could be really, really large) containing some key-value pairs. One of the keys is 'customers', whose value is a list of dictionaries, each dictionary holding the information for one customer. I want the LLM to answer questions and return some functions (which will be executed locally) based on what the user asks about a customer.

I was earlier planning to fit the whole file into context, but it could be really large. I then thought of storing embeddings in a vector DB, but that doesn't seem to work well here. I was wondering if I can use function calling to find the relevant dictionary and fit just that into context, but I am not really sure how I should query and retrieve it locally. The user might want to know about a customer, but we could have multiple customers with the same name, and there is also the possibility that the customer the user queries doesn't exist. Are there any convenient, reliable approaches to follow instead of hardcoding the query-and-retrieve function?
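To make the idea concrete, here is a minimal sketch of the kind of lookup function I imagine exposing to the model via function calling. The function name `find_customers` and the field names are just placeholders, and the tiny inline JSON stands in for my real (much larger) file:

```python
import json

def find_customers(data, name):
    # Return every customer dict whose name matches the query.
    # Several customers can share a name, so this is a list;
    # an empty list means the queried customer doesn't exist.
    return [c for c in data.get("customers", []) if c.get("name") == name]

# Tiny inline example standing in for the real file.
data = json.loads("""
{
  "customers": [
    {"name": "Alice", "city": "Pune"},
    {"name": "Alice", "city": "Delhi"},
    {"name": "Bob",   "city": "Mumbai"}
  ]
}
""")

print(find_customers(data, "Alice"))  # two entries share this name
print(find_customers(data, "Carol"))  # [] -> not found
```

The LLM would decide which name to pass, the function would run locally, and only the matching dictionaries (or an empty result) would be put back into context. But I'm unsure whether a hand-rolled function like this is the right pattern, or whether there is something more robust.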
