Chatbot chapter in ChatGPT Prompt Engineering for Developers

Hi, the chatbot chapter code in the course needs updating for version 1.0.0 of the OpenAI Python library, to avoid the error saying the old chat completions call is no longer supported, I believe. I tried applying the same fix as in the Introduction chapter, and I also installed the panel module, which is required. But when I run the code, I get the following error and I'm not sure what it means. Is there updated code for the chatbot chapter posted anywhere?

TypeError: 'ChatCompletionMessage' object is not subscriptable

I have read some other chatbot-related posts here. By now, the available versions of bokeh are far beyond the 2.4 that somebody mentioned; mine is 3.1.2, if that matters. Dependencies prevent me from going back to version 2.4.

Hi @AkinArikan

Are you trying to run it locally?

Hi, Thank you for checking. Yes, running locally on a Mac.

The proposed solution is below. In openai>=1.0.0 the response is a Pydantic object, so its fields are accessed as attributes (response.choices[0].message.content) rather than with dictionary subscripts; that is what the TypeError is telling you.


from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content


def get_completion_from_messages(messages,
                                 model="gpt-3.5-turbo",
                                 temperature=0,
                                 max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,  # the degree of randomness of the model's output
        max_tokens=max_tokens,    # the maximum number of tokens the model can output
    )
    return response.choices[0].message.content
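For a quick sanity check that the helper works (assuming the client above is configured with a valid API key), something like:

reply = get_completion_from_messages([
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Say hello in five words.'},
])
print(reply)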

This should work


def get_completion_and_token_count(messages,
                                   model="gpt-3.5-turbo",
                                   temperature=0,
                                   max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    content = response.choices[0].message.content

    # The old dictionary-style access (response['usage']['prompt_tokens'])
    # no longer works in openai>=1.0.0; use attribute access instead.
    token_dict = {
        'prompt_tokens': response.usage.prompt_tokens,
        'completion_tokens': response.usage.completion_tokens,
        'total_tokens': response.usage.total_tokens,
    }
    return content, token_dict


messages = [
    {'role': 'system',
     'content': "You are an assistant who responds in the style of Dr Seuss."},
    {'role': 'user',
     'content': "write me a very short poem about a happy carrot"},
]
response, token_dict = get_completion_and_token_count(messages)
print(response)
print(token_dict)
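Once that helper works, the chapter's panel-based chat loop needs no other API changes; it just keeps appending turns to a context list. Here is a minimal sketch of that loop (the widget names and layout follow the course notebook from memory, so treat them as assumptions rather than verbatim course code):

import panel as pn
pn.extension()

panels = []  # accumulates the rendered conversation
context = [{'role': 'system', 'content': "You are a friendly chatbot."}]

def collect_messages(_):
    prompt = inp.value_input
    inp.value = ''
    context.append({'role': 'user', 'content': prompt})
    response = get_completion_from_messages(context)
    context.append({'role': 'assistant', 'content': response})
    panels.append(pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(pn.Row('Assistant:', pn.pane.Markdown(response, width=600)))
    return pn.Column(*panels)

inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here...')
button_conversation = pn.widgets.Button(name="Chat!")
interactive_conversation = pn.bind(collect_messages, button_conversation)

dashboard = pn.Column(
    inp,
    pn.Row(button_conversation),
    pn.panel(interactive_conversation, loading_indicator=True),
)
dashboard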

The “Chatbot” chapter in ChatGPT Prompt Engineering for Developers is a key section that focuses on how to design and implement conversational experiences using prompt engineering techniques. It goes beyond just writing basic prompts and dives into structuring multi-turn conversations, maintaining context, and simulating state within stateless API calls — all of which are critical for effective chatbot app development.
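In other words, the API itself keeps no memory between calls; the notebook simulates state by resending the full message history on every request. A minimal sketch of that pattern, reusing the get_completion_from_messages helper defined above:

context = [{'role': 'system', 'content': 'You are a helpful tutor.'}]

def chat_turn(user_text):
    # The API is stateless: all "memory" lives in this list,
    # which is resent in full on every call.
    context.append({'role': 'user', 'content': user_text})
    reply = get_completion_from_messages(context)
    context.append({'role': 'assistant', 'content': reply})
    return reply

print(chat_turn("My name is Ada."))
print(chat_turn("What is my name?"))  # answerable only because the history was resent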

One of the most valuable parts of the chapter is how it demonstrates prompt chaining and structured memory emulation, which are essential when building scalable and intelligent chatbot apps without persistent memory. It also covers best practices for handling user intent, crafting system messages, and using role-based prompting (e.g., system/user/assistant roles) to control the behavior of your chatbot.
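As a concrete illustration of chaining, the output of one call can steer the system message of the next, e.g. classifying intent first and then answering in an intent-specific persona. The prompt wording below is invented for illustration, not taken from the course:

def chained_reply(user_text):
    # Step 1: a cheap classification call.
    intent = get_completion(
        "Classify the intent of this message as 'question', 'complaint', "
        f"or 'other'. Reply with one word only.\n\nMessage: {user_text}"
    ).strip().lower()

    # Step 2: the classification steers the system message of the main call.
    messages = [
        {'role': 'system',
         'content': f"You are a support bot. The user's intent is '{intent}'. "
                    "Respond briefly and politely, matching that intent."},
        {'role': 'user', 'content': user_text},
    ]
    return get_completion_from_messages(messages)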

If you’re involved in chatbot app development, this chapter offers practical examples and guidance that can help you create more responsive and context-aware bots, whether you’re working in customer support, education, e-commerce, or any domain where conversational AI is useful.