Does sending new relevant info to an LLM permanently or temporarily update the LLM's model?

RAG is supposed to send the LLM relevant info from its document base, info that is otherwise not available to the LLM.

RAG narrows the context down to the likely relevant information, which helps the LLM answer efficiently and quickly.
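To make the flow I'm describing concrete, here is a toy sketch of it: retrieve the most relevant document for a query, then prepend it to the prompt. The documents, the word-overlap scoring, and the function names are all my own stand-ins (real systems use embedding models and vector stores); the point is only that the retrieved info lives in the prompt, not in the model's weights.

```python
# Toy document base standing in for a real vector store (assumption).
DOCUMENTS = [
    "The office closes at 6 pm on weekdays.",
    "Refunds are processed within 14 business days.",
    "Support tickets are answered within 24 hours.",
]

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str) -> str:
    """Return the single most relevant document for the query."""
    return max(DOCUMENTS, key=lambda d: score(query, d))

def build_prompt(query: str) -> str:
    """Augment the prompt with retrieved context. Note the LLM's
    weights are never touched; the info exists only in this prompt."""
    context = retrieve(query)
    return f"Context: {context}\nQuestion: {query}"

print(build_prompt("When are refunds processed?"))
```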

So if the same query is asked again, does the RAG system or the LLM have a temporary space that stores the latest retrieved updates, in a model or database format?

Or does the RAG pipeline run again for the same prompt every time? That would waste resources and be less optimized.

I'm not sure exactly how RAG works, but if the model is not further trained on the data that RAG brings in during use, then is that information saved somewhere, or retrieved fresh each time the RAG pipeline operates?