I loved the course very much but I have one question which I think could have been answered is that how can a person stop the chatbot from deviating to different personality. for example:- suppose I built a professor chatbot and the user told him to behave like a normal bot… it will start behaving like chatgpt in few attempts. Is there any way to solve this issue. Please help me I am facing this issue a lot.
Not really… chatGPT is brittle to adversarial prompting and the “further” back in time the base prompt is, the harder it is for the chatbot to grasp.
One common tactic is to have the chatbot reemphasize it’s purpose so that it always stays primed. But this can be annoying when it keeps repeating the same thing every prompt.
The new ChatML system, currently available in the playground and API only, will provide system role messages that help improve this issue. OpenAI has finetuned their models to pay more attention to the system roles than user messages so it’s better at maintaining its original purpose.
Look up ChatXML which is a similar approach that can be used in chatGPT as a user message. It’s better as a system role but work quite well in chat too.
I would think about structuring the bot response in two different layers. One layer that keeps track of the objectives of the conversation, and the second layer as the chatbot. This way the chatbot will be acting as an interface with the user, but following the guidelines of the bottom layer. That way, there shouldn’t be much option to deviate from the main objective. I would appreciate if someone has some feedback about this, idea or perhaps has some experience developing a similar (or totally different) solutions. Thanks