Prompt Tuning for Large Language Models

Hi All, most of us are familiar with prompt engineering by now, thanks to the all-new short course ChatGPT Prompt Engineering for Developers for teaching us. However, I am more interested in prompt tuning, which doesn't require us to write prompts every time; instead, it can select the prompt best suited to the task from a vector space.

Is there any course that can teach us prompt tuning? Any blogs or videos that walk us through implementing it from scratch?

Interesting point. I see no such course on Coursera. There are quite a few YouTube videos that recommend prompts, although they are based on the authors' experience rather than on theory. The prompt engineering course gives us hints about words (e.g. "expand") that make good use of how the GPT model was trained. The other general guidance is to give the LLM good and sufficient information to work with in answering you.

Prompt Tuning: there is a short description of it in the Hugging Face repos. I hope AN creates a course on these too: Parameter-Efficient Fine-Tuning using 🤗 PEFT
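Until a dedicated course appears, the core idea can be sketched in a few lines of numpy. This is a toy illustration, not the PEFT API: everything here (the dimensions, the fixed linear readout standing in for a frozen LLM, the scalar target) is an assumption made up for the example. The point it shows is the one above: the base model's weights stay frozen, and only a small set of soft-prompt vectors, prepended to the input embeddings, is optimized in vector space.

```python
import numpy as np

# Toy sketch of prompt tuning: the base "model" is frozen, and only a
# small set of soft-prompt embeddings prepended to the input is trained.
rng = np.random.default_rng(0)

d = 4                              # embedding dimension
n_prompt = 2                       # number of trainable soft-prompt vectors
W = rng.normal(size=(d, 1))        # frozen "model": a fixed linear readout
x = rng.normal(size=(3, d))        # frozen embeddings of the input tokens
prompt = np.zeros((n_prompt, d))   # trainable soft prompt (continuous, not words)
y_target = 1.0                     # toy training target
N = n_prompt + len(x)

lr = 0.5
for step in range(500):
    # Prepend the soft prompt, mean-pool, apply the frozen readout.
    y = float(np.vstack([prompt, x]).mean(axis=0) @ W)
    err = y - y_target             # gradient of 0.5 * err**2 w.r.t. y
    # Gradient flows only into the prompt rows; W and x never change.
    grad_row = err * W.T / N       # d(loss)/d(prompt_i), same for every row
    prompt -= lr * grad_row        # broadcasts over all prompt rows

y_final = float(np.vstack([prompt, x]).mean(axis=0) @ W)
print(y_final)                     # close to y_target after training
```

In the PEFT library the same idea is wrapped up for real LLMs via `PromptTuningConfig` and `get_peft_model`, which prepend learnable virtual tokens to the model's input embeddings while keeping the base model frozen.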