A suggestion for OrderBot

I noticed a minor bug in the chatbot. The OrderBot still has access to all of its general world knowledge, so I can ping it with random topics and chat with it, which should not be the case. It is something of an information leak and a waste of resources. I would add instructions to the prompt such as “Avoid any other topic of discussion and come back to taking the order from the customer” and “You don’t have any other information other than this”.
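Something like this, as a rough sketch based on the course-style setup (a running `context` list sent to the chat completions API); the model name and the helper `collect_messages` are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# System message with the two suggested guard-rail lines appended.
system_message = """\
You are OrderBot, an automated service to collect orders for a pizza restaurant.
(menu and ordering instructions from the course notebook go here)
Avoid any other topic of discussion and come back to taking the order from the customer.
You don't have any other information other than this.
"""

context = [{"role": "system", "content": system_message}]

def collect_messages(user_input):
    """Append the user turn, call the model, and record the assistant reply."""
    context.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=context,
        temperature=0,
    )
    reply = response.choices[0].message.content
    context.append({"role": "assistant", "content": reply})
    return reply
```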

Yes, that’s an interesting observation.

The bot created in the course is just a basic example of how this technology can produce such amazing results with so little code. For a production-grade deployment, there are many other considerations, including the one you mention.

Have you attempted to improve it to cover the case you mention? If so, I’d love to see it! Would you share your improved code here? Many other learners will benefit from this additional sophistication!

Thanks,

Juan

I’m still really new to this, but it seems wrong to me that the developer, sorry, prompt engineer, needs to think of all the things to tell the system it can’t do. I spent 20 years learning how to tell a computer exactly what it can do. Now it seems like the default is that the system is full of surprises, even when it has been provided with guard rails. Is it just me?

My roots are also in the traditional software development paradigm. This is a new paradigm in which data is the main subject and the code itself is just a few lines. Under this new paradigm we have to approach development with a completely new mindset. In the case of LLMs, we need to learn how to guide the model toward what it can or should do. One classic example right now is RLHF (reinforcement learning from human feedback), used to teach ChatGPT what to say and what not to say.


I just added those two lines mentioned above to the prompt, but sometimes even that is not enough. I used the gpt-3.5 model.

One thing that worked is adding the instruction in the system message as well as in the user message. The instruction in the user message acts as reinforcement.
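Roughly what I mean, as a sketch (the constant names and exact wording are placeholders, not the course code):

```python
# The same stay-on-topic instruction placed in the system message
# and repeated inside each user turn as reinforcement.
SYSTEM_PROMPT = "You are OrderBot, an automated service to collect pizza orders."  # abridged
GUARD_RAIL = (
    "Avoid any other topic of discussion and come back to taking the order "
    "from the customer. You don't have any other information other than this."
)

def build_messages(history, user_input):
    """Return the full message list with the guard rail in both places."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT + "\n" + GUARD_RAIL}]
    messages += history  # earlier user/assistant turns
    # Repeat the instruction inside the user message itself.
    messages.append({"role": "user", "content": f"{user_input}\n\n({GUARD_RAIL})"})
    return messages
```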

I did the same.
I made a chatbot customised to recommend DeepLearning.AI courses to a student, restricted it to only the 4 current courses, and if the student asks about any other subject or course, it guides them to search the web instead.
For example, when I asked the chatbot about learning Spanish, it directed me to search the web. Of course, in a humble, witty way.
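In case it helps others, this is roughly how the system message can be set up (the course list below is only illustrative; swap in whatever the current four courses are):

```python
# Sketch of a restricted course-recommendation bot; the course names are
# placeholders, not an official list.
COURSES = [
    "ChatGPT Prompt Engineering for Developers",
    "Building Systems with the ChatGPT API",
    "LangChain for LLM Application Development",
    "How Diffusion Models Work",
]

system_message = (
    "You are a friendly assistant that recommends DeepLearning.AI short courses.\n"
    f"You may only recommend these courses: {', '.join(COURSES)}.\n"
    "If the student asks about any other subject (for example, learning Spanish),\n"
    "politely, and with a touch of humour, suggest they search the web instead."
)
```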


I think this type of issue is inherent to the “black box” approach of most LLMs: we don’t really tame the model, we let it do its thing, which means it will find ways to keep conversing about off-topic items even when instructed not to. This is an area of active research in generative AI, and hopefully new models will find ways to address it. :)