Prompt-based task training


After the first week of the course, it looks like you can train an LLM to perform multiple tasks by adding "sets of prompts," extending the capabilities of the SAME model. So you can add prompts for sequence-to-sequence, instance detection, sentiment analysis, and so on. Of course, this comes at the price of preparing new datasets for the new prompts. In addition, we can fine-tune a model to "customize" it for a specific context (legal, medical, etc.) by passing in a new set of example solutions for a given "prompt" (with the risk of suffering from "catastrophic forgetting").
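If I understood correctly, the "sets of prompts" idea can be sketched roughly like this: each training example gets a task prefix prepended to its input, so a single seq2seq model learns all the tasks at once (T5-style). The prefix strings and examples below are my own made-up illustration, not from the course:

```python
# Rough sketch: building multi-task training data via task prefixes.
# One model, many tasks -- each example is tagged with a prompt prefix.
# The prefixes and records here are illustrative assumptions.

def build_multitask_examples(records):
    """Prepend each record's task prompt to its input text,
    yielding (input, target) pairs for a single seq2seq model."""
    prefixes = {
        "translation": "translate English to German: ",
        "sentiment": "classify sentiment: ",
        "summarization": "summarize: ",
    }
    return [
        (prefixes[task] + text, target)
        for task, text, target in records
    ]

examples = build_multitask_examples([
    ("translation", "Hello, world!", "Hallo, Welt!"),
    ("sentiment", "I loved this movie.", "positive"),
])
```

Adding a new task would then just mean adding a new prefix plus a dataset of (input, target) pairs for it, which is where the dataset-preparation cost comes in.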

Did I understand it correctly?

Thank you!

You posted this in the General Discussions category. Please move it to the relevant course category, as described here, and our mentors will be happy to assist you.

Sorry, rookie mistake.

Thanks for providing a link with the instructions.