In the second chapter of the LLM course, "Chat Format and Tokens", the code shown when explaining how to use the LLM appears to already target the openai library 1.0.0 style:
import openai

client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message.content
But when I translate/migrate it using "Apply a GritQL pattern", I get:
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
So is the chapter wrong? Is there any difference between client = OpenAI(api_key=os.environ['OPENAI_API_KEY']) and client = OpenAI()?
One more thing: what would the code look like with async/await?
Thanks