MEMORY FINE-TUNING: Data preparation for chat. I only have long chunks of proprietary text data

I’m planning to do a memory fine-tune on an instruction-tuned model to replace a RAG pipeline and make use of proprietary data.

The language model will primarily serve chat and other instruction-following tasks. I have long chunks of proprietary text, but no question–answer pairs. How should I proceed with the fine-tuning? Should I fine-tune on text completion?
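For context on what I mean by text completion: as I understand it, the usual data prep for that route is just concatenating the raw chunks and splitting them into fixed-length blocks for causal-LM training. A minimal sketch of that packing step (pure Python on toy token ids; the `block_size` and the ids are placeholders, not real tokenizer output):

```python
def pack_examples(token_streams, block_size):
    """Concatenate token-id streams and split into fixed-size blocks,
    dropping the incomplete tail (standard causal-LM packing)."""
    flat = [tok for stream in token_streams for tok in stream]
    n_blocks = len(flat) // block_size
    return [flat[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

# Toy example: three "documents" packed into blocks of 4 tokens.
blocks = pack_examples([[1, 2, 3], [4, 5, 6, 7], [8, 9]], block_size=4)
print(blocks)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

In a real pipeline the streams would come from a tokenizer run over each proprietary chunk, and the block size would match the model's context length.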

Any guidance or tips would be greatly appreciated!