Help with model training strategies (PEFT/LoRA + RAG)

I am new to generative AI and am working on a project that relies on AI for recommendations. The problem domain is risk assessment, where a suitable dataset is not readily available, and I cannot find an already fine-tuned model on hubs like Hugging Face that I could use as a base foundation model.
The project will expose a REST interface; there is no natural-language interaction.

I am now thinking about fine-tuning a model myself. Here are my thoughts:
Starting with a large amount of customer-specific data, I would fine-tune a base model (which I still have to choose). What fine-tuning strategy is best? I don't want to update all of the pre-trained model's weights. I have read about PEFT/LoRA (as I understand it, LoRA keeps the pre-trained weights frozen and trains small low-rank matrices that learn the weight adjustments). How expensive is it to fine-tune a model from Hugging Face on a customer dataset for a specific task? And do most models' licenses allow such fine-tuning?
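To show what I mean by LoRA keeping the pre-trained weights frozen: my understanding is that the frozen weight matrix W gets an additive low-rank update B·A, and only the small A and B matrices are trained. A toy numpy sketch of that idea (illustrative only, not real training code; the sizes and scaling are made-up examples):

```python
import numpy as np

d, r = 1024, 8  # hidden size, LoRA rank (assumed values for illustration)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pre-trained weight (never updated)
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor, r x d
B = np.zeros((d, r))                    # trainable low-rank factor, init to zero
alpha = 16.0                            # LoRA scaling hyperparameter

def forward(x):
    # frozen base path plus scaled low-rank adapter path
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # what full fine-tuning would update
lora_params = A.size + B.size   # what LoRA actually trains
print(f"trainable fraction: {lora_params / full_params:.4%}")
# prints: trainable fraction: 1.5625%
```

Because B starts at zero, the model initially behaves exactly like the frozen base, and training only ever touches A and B, which is why the memory and compute cost is so much lower than full fine-tuning.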

Secondly, as we use the model, we will collect more data that also needs to be incorporated into the model's predictions. Should I use RAG on top of the fine-tuned (PEFT/LoRA) model?
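To make the RAG question concrete, this is the kind of retrieval step I have in mind: newly collected records are searched at request time and the top matches are injected into the prompt, with no retraining. A toy sketch with made-up documents and a bag-of-words similarity (a real system would use a proper embedding model and vector store):

```python
from collections import Counter
from math import sqrt

def embed(text):
    # toy bag-of-words "embedding"; stand-in for a real sentence-embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical newly collected customer records
docs = [
    "customer defaulted on loan after missed payments",
    "customer paid on time with stable income",
    "high credit utilization and recent delinquencies",
]

def retrieve(query, k=2):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("missed loan payments risk")
# the retrieved records are prepended to the model input, no weights change
prompt = "Assess risk given:\n" + "\n".join(context)
```

The appeal for my use case is that fresh data becomes usable immediately after it lands in the store, instead of waiting for the next fine-tuning run.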

How effective (and expensive) is it to refine/retrain with the PEFT technique on a regular basis, e.g. every month, after the initial PEFT fine-tuning? I expect that we will collect a large amount of customer data over the course of a month, and rather than serving it via RAG, I could use this data to further fine-tune the model.
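Part of why periodic retraining seems plausible to me: if I understand LoRA correctly, each monthly run only produces a small adapter, while the frozen base model is loaded once and reused, so serving can swap adapters cheaply. A toy numpy illustration of that idea (made-up sizes, random stand-ins for trained adapters, not real training):

```python
import numpy as np

d, r = 512, 4  # hidden size, LoRA rank (assumed values)
rng = np.random.default_rng(1)
W = rng.standard_normal((d, d))  # frozen base, loaded once and shared

def apply_adapter(x, A, B, alpha=8.0):
    return W @ x + (alpha / r) * (B @ (A @ x))

# stand-ins for adapters produced by two hypothetical monthly fine-tuning runs
adapter_jan = (rng.standard_normal((r, d)) * 0.01, rng.standard_normal((d, r)) * 0.01)
adapter_feb = (rng.standard_normal((r, d)) * 0.01, rng.standard_normal((d, r)) * 0.01)

x = np.ones(d)
y_jan = apply_adapter(x, *adapter_jan)  # serve January's adapter
y_feb = apply_adapter(x, *adapter_feb)  # swap in February's; W is untouched

# per-month artifact is just the adapter, not a full copy of the model
per_month_params = 2 * r * d   # vs d * d for a full weight matrix
```

So the recurring cost would be one small training run plus storing a tiny adapter per month, which is what I am hoping makes this cadence affordable; I would appreciate confirmation or correction on that.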

Thanks