Has anyone successfully fine-tuned an LLM to incorporate company-specific knowledge? In my experience, fine-tuning mostly helps models learn the preferred output format for information. I'm particularly interested in approaches for full fine-tuning, i.e. retraining on proprietary data. Additionally, is full fine-tuning feasible for this purpose?
@Manisha_Singh I haven’t had a chance to play with this myself yet, so can’t claim to be an expert. But I’ve been directing everyone to Sharon’s short course to start.
She is very good and knows what she is doing.
Of course this is feasible, subject to your stakeholders' privacy concerns; any LLM can be fine-tuned on the kind of information you want it to learn.
Keep in mind that adapting an LLM to your data spans a range of approaches, from RAG to RLHF, down to simply tuning inference parameters such as temperature, with full fine-tuning (retraining the model's weights) being just one of them. The most important criterion as a practitioner is to choose the method that gives you the best output for the data you have available. So fine-tuning isn't just about the model, it's also about the data you'll use to train or retrain it.
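For concreteness, here's a minimal full fine-tuning sketch using the Hugging Face transformers Trainer on a small causal LM. The library, base model, and data path are assumptions for illustration, not something prescribed in this thread:

```python
# Minimal full fine-tuning sketch: all weights are updated (no adapters).
# Assumes transformers + datasets are installed, and a JSONL file of
# proprietary documents at data/company_docs.jsonl with a "text" field.
# Model name and path are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in; swap for the base model you are allowed to retrain
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the proprietary text and tokenize it for causal language modeling.
raw = load_dataset("json", data_files="data/company_docs.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

# mlm=False gives next-token (causal LM) labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ft-company-model",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=2e-5,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("ft-company-model")
```

Note that full fine-tuning at this level updates every weight, so memory and compute costs grow quickly with model size; parameter-efficient methods such as LoRA are a common fallback when that becomes impractical.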
I hope you got the gist!
Regards
DP