In Guidelines\Tactics I received a RateLimitError

I'm in the Guidelines\Tactics section of the coding exercises and got the RateLimitError below. I'm using a new API key and I've barely used GPT-3.5.

How do I fix this?

Thanks,
Reza.


RateLimitError Traceback (most recent call last)
Cell In[7], line 19
2 text = f"""
3 You should express what you want a model to do by \
4 providing instructions that are as clear and \
(...)
12 more detailed and relevant outputs.
13 """
14 prompt = f"""
15 Summarize the text delimited by triple backticks \
16 into a single sentence.
17 ```{text}```
18 """
---> 19 response = get_completion(prompt)
20 print(response)

Cell In[5], line 3, in get_completion(prompt, model)
1 def get_completion(prompt, model="gpt-3.5-turbo"):
2 messages = [{"role": "user", "content": prompt}]
----> 3 response = openai.ChatCompletion.create(
4 model=model,
5 messages=messages,
6 temperature=0, # this is the degree of randomness of the model's output
7 )
8 return response.choices[0].message["content"]

File /usr/local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:

File /usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)

File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:226, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
205 def request(
206 self,
207 method,
(...)
214 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
215 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
216 result = self.request_raw(
217 method.lower(),
218 url,
(...)
224 request_timeout=request_timeout,
225 )
--> 226 resp, got_stream = self._interpret_response(result, stream)
227 return resp, got_stream, self.api_key

File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:620, in APIRequestor._interpret_response(self, result, stream)
612 return (
613 self._interpret_response_line(
614 line, result.status_code, result.headers, stream=True
615 )
616 for line in parse_stream(result.iter_lines())
617 ), True
618 else:
619 return (
--> 620 self._interpret_response_line(
621 result.content.decode("utf-8"),
622 result.status_code,
623 result.headers,
624 stream=False,
625 ),
626 False,
627 )

File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:683, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
681 stream_error = stream and "error" in resp.data
682 if stream_error or not 200 <= rcode < 300:
--> 683 raise self.handle_error_response(
684 rbody, rcode, resp.data, rheaders, stream_error=stream_error
685 )
686 return resp

RateLimitError: You exceeded your current quota, please check your plan and billing details.

Other people have reported seeing this error and message recently, both in this forum and on the OpenAI forum. (It's always worth searching first on a keyword such as RateLimitError.)

You might take a look at your own account's usage and billing status in the OpenAI platform dashboard.
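
Since you mention using a brand-new API key, it's also worth confirming the notebook is actually picking up that key rather than an old or empty one. Here is a minimal sanity check, assuming the key lives in an OPENAI_API_KEY environment variable or a local .env file (the python-dotenv usage is an assumption on my part, not something shown in your traceback):

```python
import os
import openai
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads OPENAI_API_KEY from a local .env file, if one exists
openai.api_key = os.getenv("OPENAI_API_KEY")

# Show only a short prefix so the full key is never printed
print("Key prefix:", (openai.api_key or "<not set>")[:8])
```

If the prefix doesn't match the new key you created, the client is still using stale credentials.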

Also review the API rate limits in the OpenAI API documentation.
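
Note that the specific message here, "You exceeded your current quota", usually points at billing (no remaining credit or no payment method on the account) rather than a transient per-minute rate limit, so retrying alone won't fix it. For genuine rate-limit errors, though, the usual pattern is to retry with exponential backoff. Here is a minimal sketch against the 0.x openai SDK shown in your traceback; the retry count and wait times are arbitrary choices, not values from the course:

```python
import time
import openai

def get_completion_with_backoff(prompt, model="gpt-3.5-turbo", max_retries=5):
    """Call the chat API, retrying with exponential backoff on RateLimitError."""
    messages = [{"role": "user", "content": prompt}]
    delay = 1.0  # initial wait in seconds (arbitrary)
    for attempt in range(max_retries):
        try:
            response = openai.ChatCompletion.create(
                model=model,
                messages=messages,
                temperature=0,
            )
            return response.choices[0].message["content"]
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(delay)
            delay *= 2  # double the wait before the next attempt
```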
