ChatGPT model tuning

Can someone point me to resources/links/courses that describe how I can tune already built LLMs for my own domain, say bioinformatics or networking? I have tried using ChatGPT apis but the answers it provides are very general.


The best place to start is OpenAI’s documentation site:


Now, from what I’ve seen in multiple forums and groups, fine-tuning tends to be misunderstood. People often expect that after a fine-tuning process the model will respond only within the boundaries of the fine-tuned data, answering exactly from that corpus. That is not quite the case.

If you can, please share a bit about your project and I might be able to provide some more guidance on how to go about it.

Thanks, Juan.

Thanks for the link. I see your point that the model will not be confined to the corpus on which it has been tuned, but I want it to answer at least in that context. For example, the answer to the question ‘How is my service?’ should differ depending on the domain the model has been tuned on. For a hospitality business it should talk about the overall stay and how the guest was treated (reception, room service, etc.), but in the networking domain it should give a different kind of answer, such as whether ping/traceroute is slow or fast, or what the download/upload speeds are. Moreover, the meaning of terms can differ: ‘heartbeat’ has a totally different connotation in medicine than in software services. I hope this is enough context.
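One way to nudge the model toward that behaviour is to prepare domain-specific training examples so the same question maps to a different answer per domain. Here is a minimal sketch, assuming the chat-style JSONL format that OpenAI’s fine-tuning endpoint documents; the questions and answers themselves are made up for illustration:

```python
import json

# Hypothetical training pairs: the same user question gets a different
# assistant answer depending on the domain named in the system message.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a hospitality service assistant."},
            {"role": "user", "content": "How is my service?"},
            {"role": "assistant", "content": "Your stay is going well: check-in at "
                                            "reception was smooth and room service "
                                            "has no open requests."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a network monitoring assistant."},
            {"role": "user", "content": "How is my service?"},
            {"role": "assistant", "content": "Ping and traceroute look healthy; "
                                            "download and upload speeds are within "
                                            "the expected range."},
        ]
    },
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
with open("training_data.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# Sanity check: every line parses and carries the expected message roles.
with open("training_data.jsonl") as f:
    for line in f:
        msgs = json.loads(line)["messages"]
        assert [m["role"] for m in msgs] == ["system", "user", "assistant"]
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job. Note this teaches the pattern (style and framing of the answer per domain), not a guaranteed knowledge base, which is the caveat raised above.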

I don’t think you’ll get exactly that. With fine-tuning you can teach your instance the tone, and maybe the format, but the content the model produces can still be far from your corpus. At least that has been my experience with the models I have fine-tuned. Probably the best thing to do is to test and see how it comes out.