Error Example Guidelines / Tactic 1: Error: Windows is not supported yet in the migration CLI

Hi all,
when running the example for Guideline / Tactic 1 I receive this message:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742 (v1.0.0 Migration Guide)

When running “!openai migrate” I receive this message: “Error: Windows is not supported yet in the migration CLI”

Can anybody help?

Have you tried pinning your installation to the 0.28 version?

I just discovered that if I run the examples on the website, with the video on the left-hand side, everything seems to work. So everything works inside the notebook environment provided by DeepLearning.AI.
I only get the issues when copying the code into a freshly installed Jupyter.

Yes, I have tried to pin my installation to the 0.28 version by running this command in Jupyter:
!pip install openai==0.28
but I’m receiving this Error Message:

Building wheels for collected packages: aiohttp, frozenlist, multidict, yarl
Building wheel for aiohttp (pyproject.toml): started
Building wheel for aiohttp (pyproject.toml): finished with status 'error'
Building wheel for frozenlist (pyproject.toml): started
Building wheel for frozenlist (pyproject.toml): finished with status 'error'
Building wheel for multidict (pyproject.toml): started
Building wheel for multidict (pyproject.toml): finished with status 'error'
Building wheel for yarl (pyproject.toml): started
Building wheel for yarl (pyproject.toml): finished with status 'error'
Failed to build aiohttp frozenlist multidict yarl

Then I copied all the commands from the website into my Jupyter notebook, and when I run the prompt I still receive this error:

That indicates problems with having the correct versions of all of the tools and packages in your local installation.

Solving this issue is left to the student. The mentors have no ability to debug your local installation.

Note: I’m not a mentor for this course, but perhaps someone who is will be able to provide more guidance.
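If it helps with the comparison, a quick sanity check you can run from inside the notebook to confirm which Python and openai versions the kernel is actually using (just a minimal sketch):

import sys
import openai

print(sys.version)          # Python version of the running kernel
print(openai.__version__)   # installed openai package version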

The solution is:

In the latest OpenAI package the type of the response.choices object has changed, and this is how you must read the response:

  print(response.choices[0].message.content)

Changes needed in the code of the course:

""" Change this:
import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key = os.getenv('OPENAI_API_KEY')
"""

To this:

from openai import OpenAI
client = OpenAI()
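
Note that the new client reads the API key from the OPENAI_API_KEY environment variable by default, so if you keep the key in a .env file as in the course notebooks you can still load it first. A minimal sketch of that variant:

import os
from dotenv import load_dotenv, find_dotenv
from openai import OpenAI

_ = load_dotenv(find_dotenv())  # loads OPENAI_API_KEY from a local .env file

# The client picks up OPENAI_API_KEY automatically; passing it explicitly also works.
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])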

""" Change this:
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
"""

To this:

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message.content
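
For a quick check that the updated helper works, you can call it with any prompt (the prompt text here is just an example):

prompt = "Summarize the benefits of writing clear and specific instructions."
response = get_completion(prompt)
print(response)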