Hi,
Greetings!!!
I was playing around with a ChatGPT bot and suddenly got a "token limit exceeded" error. I want to know how to handle such scenarios without losing the entire context of the chat in progress.
I would be very thankful if you could suggest some workarounds.
Regards,
Hi @TheConsultant,
The context limit for GPT-3 models is 2,048 tokens, and for GPT-4 it is 8,192 tokens. There's no way around the hard limit itself.
Try to reduce the size of your input.
Keep in mind that the context window covers the sum of input and output: your prompt tokens plus the completion tokens must fit within the limit.
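If it helps, here's a minimal sketch of counting tokens before you call the API, assuming Python and the tiktoken library (the limits below are placeholders; adjust for your model):

```python
import tiktoken

# cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(messages):
    # Counts only the message text; the chat format adds a few tokens
    # of overhead per message, so treat this as a lower bound.
    return sum(len(enc.encode(m["content"])) for m in messages)

MODEL_LIMIT = 2048        # e.g. a GPT-3 model
RESERVED_FOR_REPLY = 500  # leave room for the completion

messages = [{"role": "user", "content": "I'd like a large pepperoni pizza."}]
if count_tokens(messages) > MODEL_LIMIT - RESERVED_FOR_REPLY:
    print("Trim the conversation history before calling the API.")
```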
Can you share more about your use case? Maybe I can come up with some ideas on how to handle it.
Thanks,
Juan
Hi @Juan_Olano,
Thank you for your reply. For a more specific use case, let's take the example of the same pizza order bot. The bot's context grows with the length (number of messages) of the conversation. For any bot we build, in this case we want to give it the following capabilities:
- we want the bot to implement a shopping cart where a user can add multiple items, say up to 5 pizza configurations
- each time the user adds a new item with its configuration of toppings, the bot checks whether the same configuration is already in the cart
- if it is, the bot handles the scenario by having a conversation with the user
- the bot has the superpower to analyze the items in the cart and recommend other hot pizza configurations that are not in the cart but are "must try"
- and so on.
In the end, the bot should remember all the items in the cart, calculate the bill, take the delivery address, place the order, and share the ETA with the customer. A rough sketch of the cart logic I have in mind is below.
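To make the cart idea concrete, here is a minimal sketch in Python; the class and field names are purely illustrative assumptions, not an actual implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PizzaConfig:
    """One pizza configuration; frozen so instances can be compared."""
    size: str
    toppings: tuple  # e.g. ("pepperoni", "olives"), kept sorted

@dataclass
class Cart:
    max_items: int = 5
    items: list = field(default_factory=list)

    def add(self, pizza: PizzaConfig) -> str:
        if pizza in self.items:
            # Same configuration already in the cart: the bot should
            # ask the user how to proceed instead of adding it again.
            return "duplicate"
        if len(self.items) >= self.max_items:
            return "cart_full"
        self.items.append(pizza)
        return "added"

cart = Cart()
cart.add(PizzaConfig("large", ("pepperoni",)))   # -> "added"
cart.add(PizzaConfig("large", ("pepperoni",)))   # -> "duplicate"
```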
So we can say that if a large number of messages is exchanged, the 2,048-token context limit will be exhausted quickly, right?
I hope I was able to make my idea clear. Please let me know if you need any further input from my side.
It's good to discuss this with you.
Thanks.
Hi @TheConsultant, I see what you are trying to achieve. It is definitely a bit more complex.
My first reaction is that this may need a bit more than a call to the GPT model's API; we may need additional components or more robust logic in the loop to keep track of state. For instance, after each answer, we could make a follow-up call to the model to request the current order in JSON format, and the script keeps that JSON as a 'memory' repository. This is just an initial, rudimentary idea that needs more exploration.
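Something like this rough sketch, assuming the Python openai client (the model name and prompt wording are placeholders):

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACT_PROMPT = (
    "Summarize the current pizza order as JSON with keys "
    "'items' (a list of {size, toppings}) and 'delivery_address'. "
    "Return only the JSON."
)

def extract_order_state(conversation):
    # Follow-up call that asks the model to dump the order as JSON.
    # The returned dict becomes our external 'memory'; the next turn
    # can start from it instead of replaying the full history.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=conversation + [{"role": "user", "content": EXTRACT_PROMPT}],
    )
    return json.loads(resp.choices[0].message.content)
```

Each new turn could then start from a short system message containing that JSON plus the latest user message, so the prompt stays small no matter how long the conversation runs.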
The other idea, and this is probably the first one to test, is to write a more robust prompt that covers all the options you mention. It might work.
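For instance, a system prompt along these lines (just an illustration, untested):

```text
You are a pizza ordering assistant. Rules:
1. Maintain a cart of at most 5 pizza configurations.
2. Before adding an item, check whether the same configuration is
   already in the cart; if it is, ask the user how to proceed.
3. Occasionally recommend a popular configuration not in the cart.
4. At the end, list all items, calculate the bill, collect the
   delivery address, place the order, and share the ETA.
```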
As far as the context goes, I would also test how far you can get within it. I'm thinking 2,048 tokens could support a rather complex conversation; I'm not sure how complex, but it should go fairly far.
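And if you do hit the limit mid-conversation, a simple fallback is to drop the oldest turns while keeping the system prompt. A sketch, again assuming tiktoken (the limits are placeholders):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_history(messages, limit=2048, reserve=500):
    # Drops the oldest non-system messages until the prompt fits.
    # messages[0] is assumed to be the system prompt, so the bot's
    # instructions survive even when older turns are discarded.
    def total():
        return sum(len(enc.encode(m["content"])) for m in messages)

    while total() > limit - reserve and len(messages) > 2:
        del messages[1]  # remove the oldest non-system message
    return messages
```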
I truly look forward to your comments on these experiments!