Seeking advice on incorporating knowledge into an LLM

I am building an AI assistant that helps analyze documents. The situation is:

  • I have a large common knowledge base (all in PDFs) that provides the fundamental knowledge of the domain.
  • I have a smaller, specific file (also a PDF) that contains the description of the problem we need to deal with.

To process the smaller file, a human first needs to read through the big PDF to acquire the domain knowledge, and then work through the small file.

How could I get the LLM to learn the big file first? I know we can fine-tune the LLM, but are there other ways, since fine-tuning is quite expensive?

Many thanks

Check the free short courses we have here. I think (but I'm not sure) some of them cover frameworks, probably LangChain, where you can feed large documents to the LLM and it can draw on them, along the lines of the sketch below.
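
If it helps, here is a rough retrieval-augmented generation (RAG) sketch with LangChain: index the big knowledge-base PDF once into a vector store, then pull the most relevant chunks into the prompt whenever you process the small problem PDF. The file names and model choice are placeholders, and the exact import paths depend on your LangChain version, so treat this as a starting point rather than a recipe.

```python
# Rough RAG sketch (import paths vary across LangChain versions;
# needs pypdf, faiss, and an OpenAI API key installed/configured).
# "knowledge_base.pdf" and "problem.pdf" are placeholder file names.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1) Index the big knowledge-base PDF once: load, chunk, embed, store.
kb_docs = PyPDFLoader("knowledge_base.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
kb_chunks = splitter.split_documents(kb_docs)
vectorstore = FAISS.from_documents(kb_chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 2) Load the small problem PDF and retrieve the most relevant
#    knowledge-base chunks for it.
problem_text = "\n".join(
    page.page_content for page in PyPDFLoader("problem.pdf").load()
)
context_docs = retriever.invoke(problem_text[:2000])  # query with (part of) the problem
context = "\n\n".join(doc.page_content for doc in context_docs)

# 3) Ask the LLM to analyze the problem, grounded in the retrieved context.
llm = ChatOpenAI(model="gpt-4o-mini")
answer = llm.invoke(
    "Use the following domain knowledge to analyze the problem.\n\n"
    f"Domain knowledge:\n{context}\n\nProblem:\n{problem_text}"
)
print(answer.content)
```

Retrieval like this is usually much cheaper than fine-tuning, and it keeps the knowledge base easy to update: you just re-index the PDFs instead of retraining the model.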
