Dealing with long context in machine reading comprehension

Hi everyone,
I am doing research in the NLP field, and my task is question answering over a large-scale collection of documents. These documents are very long (e.g. textbooks or novels). One way to handle this is truncation, but truncating the document obviously loses a lot of information. Is there a better way to deal with long context?

How about RAG (retrieval-augmented generation)?

You mean splitting the long document into an array of sentences, finding the k most relevant sentences, and feeding those as context into the encoder?

Correct.
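A minimal sketch of that retrieve-then-read idea, using plain bag-of-words cosine similarity as a stand-in retriever (in practice you'd use BM25 or a dense embedding model; the sentence splitter, scoring function, and example document here are all illustrative assumptions, not a specific library's API):

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real system would use a proper tokenizer.
    return re.findall(r"\w+", text.lower())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_sentences(document, question, k=3):
    # 1) Split the long document into sentences (naive punctuation split).
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    # 2) Score every sentence against the question and keep the top k.
    q = Counter(tokenize(question))
    ranked = sorted(sentences,
                    key=lambda s: cosine(Counter(tokenize(s)), q),
                    reverse=True)
    return ranked[:k]

# Toy document standing in for a long textbook/novel.
doc = ("The Nile is a river in Africa. It is about 6650 km long. "
       "Many people think it is the longest river in the world. "
       "Pizza was invented in Naples.")

# The retrieved sentences would then be concatenated and fed to the
# reader/encoder instead of the full (truncated) document.
print(top_k_sentences(doc, "About how many km long is the Nile?", k=2))
```

The point is that the reader model only ever sees the k retrieved sentences, so its context window stays small no matter how long the source document is; swapping the scorer for a dense retriever changes nothing about this overall pipeline.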