Prompt Engineering, Expanding section exercises

Greetings, Learners:
I’m working on the Expanding section of the Prompt Engineering course, and I’m having an issue with the results of its exercises. When using the two sentiment options, “negative” and “positive”, at the end of a review, I get the same “positive” answer both times:
“Thank you for taking the time to share your review with us. We greatly appreciate your feedback and are thrilled to hear about your experience with our [product/service]…”
An identical answer, with the same “positive” result, is also returned for both temperature settings, temperature=0 and temperature=0.7.
I’m using the following configuration for the prompt:
import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read local .env file
openai.api_key = os.getenv('OPENAI_API_KEY')

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    # Andrew mentioned that the prompt/completion paradigm is preferable for this class
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

prompt = f""" … """
review = f""" … """
sentiment = "negative"
response = get_completion(prompt, temperature=0)
print(response)
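One thing worth checking (only a guess, since the prompt body is elided above): an f-string captures variable values at the moment the line runs, so if the prompt template references {sentiment} or {review}, those variables must be assigned before the prompt line, not after it. Otherwise the model never sees the sentiment you intended, and a generic thank-you reply at any temperature is exactly what you would get. A minimal sketch with a placeholder review and prompt:

```python
# Hypothetical ordering check: variables interpolated by an f-string
# must already exist when the f-string is built.
review = "The headphones broke after two days."
sentiment = "negative"  # assigned BEFORE the prompt is built

prompt = f"""
Given the customer review below, whose sentiment is {sentiment},
write a short reply thanking the customer for their review.
Review: {review}
"""

# The sentiment is now baked into the prompt text the model receives.
print(prompt)
```

If sentiment is assigned after the prompt f-string, the prompt either raises a NameError or, if an earlier value of sentiment exists, silently keeps that stale value. Also note that the snippet calls get_completion(prompt, temperature=0) explicitly, so changing the default in the function signature to 0.7 has no effect unless the argument at the call site is changed as well.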

Has anyone run into this problem? Any helpful suggestion would be much appreciated.

Please use the “pencil” icon in the thread title, and move your thread to the correct forum area for your course.