Chatbot chapter in ChatGPT Prompt Engineering for Developers

Hi, the chatbot chapter code in the course needs updating for the 1.0.0 API, I believe, to avoid the error that the old chat completions call is no longer supported. I tried that by applying the same code as in the Introductions chapter, and I also installed the panel module, which is required. But when I try to run the code I get the following error, and I am not sure what it means. Is updated code for the chatbot chapter posted anywhere?

TypeError: 'ChatCompletionMessage' object is not subscriptable

I have read some of the other chatbot-related posts here. By now, the bokeh versions are far beyond the 2.4 that somebody mentioned; mine is 3.1.2, if that matters. There are dependencies which prevent me from going back to version 2.4.

Hi @AkinArikan

Are you trying to run it locally?

Hi, Thank you for checking. Yes, running locally on a Mac.
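That TypeError usually means a cell is still indexing the response like a dictionary, which worked before 1.0.0 but not with the new client. In openai>=1.0 the chat completion comes back as a pydantic object, so you read fields as attributes instead of keys. A minimal sketch of the difference, assuming an openai>=1.0 install and an OPENAI_API_KEY in the environment:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

# Pre-1.0 style raises the TypeError, because the message is an object, not a dict:
# content = response.choices[0].message["content"]

# 1.x style uses attribute access:
content = response.choices[0].message.content
print(content)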

The proposed solution is below:


from openai import OpenAI

client = OpenAI()  # the 1.x client; reads OPENAI_API_KEY from the environment

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content
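As a quick check that the helper works, calling it with any prompt (this one is just made up) should print a completion:

print(get_completion("Write one sentence about a happy carrot."))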


def get_completion_from_messages(messages,
                                 model="gpt-3.5-turbo",
                                 temperature=0,
                                 max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,  # the degree of randomness of the model's output
        max_tokens=max_tokens,    # the maximum number of tokens the model can output
    )
    return response.choices[0].message.content

This should work
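Since the original question was about the chatbot cell itself, here is a minimal sketch of the panel part updated the same way. It assumes Panel 1.x (the version that goes with bokeh 3.x, where the old style= keyword on Markdown panes is now styles=), and the system prompt is just a placeholder rather than the course's actual prompt:

import panel as pn  # GUI
pn.extension()

panels = []  # collected conversation rows for display

context = [{'role': 'system',
            'content': "You are a friendly chatbot."}]  # placeholder system prompt

def collect_messages(_):
    prompt = inp.value_input
    inp.value = ''
    context.append({'role': 'user', 'content': prompt})
    response = get_completion_from_messages(context)
    context.append({'role': 'assistant', 'content': response})
    panels.append(pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(pn.Row('Assistant:', pn.pane.Markdown(
        response, width=600, styles={'background-color': '#F6F6F6'})))  # styles= replaces style=
    return pn.Column(*panels)

inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here…')
button_conversation = pn.widgets.Button(name="Chat!")
interactive_conversation = pn.bind(collect_messages, button_conversation)

dashboard = pn.Column(
    inp,
    pn.Row(button_conversation),
    pn.panel(interactive_conversation, loading_indicator=True, height=300),
)
dashboard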


def get_completion_and_token_count(messages,
                                   model="gpt-3.5-turbo",
                                   temperature=0,
                                   max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    content = response.choices[0].message.content

    # In openai>=1.0 the usage field is an object with attributes, not a dict,
    # so response['usage']['prompt_tokens'] no longer works.
    token_dict = {
        'prompt_tokens': response.usage.prompt_tokens,
        'completion_tokens': response.usage.completion_tokens,
        'total_tokens': response.usage.total_tokens,
    }
    return content, token_dict


messages = [
    {'role': 'system',
     'content': """You are an assistant who responds in the style of Dr Seuss."""},
    {'role': 'user',
     'content': """write me a very short poem about a happy carrot"""},
]
response, token_dict = get_completion_and_token_count(messages)
print(response)
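To confirm the token accounting as well, you can also print the dictionary that comes back:

print(token_dict)  # {'prompt_tokens': ..., 'completion_tokens': ..., 'total_tokens': ...}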