Fine-tuning vs. using embeddings

Hi all,

I hope you are well.

I am Zakia Salod from South Africa.

Please help with advice on the following:
If I have, say 210 positive records, and 210 negative records in a dataset, how do I decide on whether I should fine-tune a model or use the embeddings of a model? And also, which LLM to try for the task? Any advice would be most appreciated.

Kind Regards,
Zakia Salod


The Deep Learning Specialization goes into the details of transfer learning (including fine-tuning), which is too much information to repeat in a single post.

Should you choose to use embeddings alone for a problem, the rest of the model architecture needs to be defined by you and trained from scratch. Given that you have ~420 examples, I'd recommend also trying a little fine-tuning for your problem and comparing results across both approaches before moving forward.
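To make the embeddings-only route concrete, here is a minimal sketch of training a small classifier on top of frozen embeddings. The random vectors below are placeholders for the embeddings a real model would produce (the 384-dimension size is just an assumption); the structure — ~420 labeled examples, a linear classifier, cross-validation to compare approaches — matches the situation described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: in practice, X would hold embeddings produced by a
# pretrained model for each of the 210 positive + 210 negative records.
rng = np.random.default_rng(0)
X = rng.normal(size=(420, 384))      # hypothetical 384-dim embeddings
y = np.array([1] * 210 + [0] * 210)  # 1 = positive, 0 = negative

# A lightweight linear classifier on frozen embeddings: cheap to train,
# and a reasonable baseline with only a few hundred examples.
clf = LogisticRegression(max_iter=1000)

# 5-fold cross-validation gives a more stable estimate on a small dataset
# than a single train/test split, and the same protocol can be reused to
# score a fine-tuned model for a fair comparison.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

With random placeholder features the accuracy will hover near chance; the point is the workflow, not the number. Swapping in real embeddings and re-running the same cross-validation against a fine-tuned model is one straightforward way to compare the two approaches before committing to either.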

As far as selecting a model is concerned, look at sites that list LLMs by the tasks they are suited for.