In the first lesson of "AI Python for Beginners: Basics of AI Python Coding", I ran into a difficulty following along: my Jupyter notebook doesn't have helper_functions, so I couldn't import "print_llm_response" to continue the lesson. Can anyone explain what is happening and how to solve the problem? Thanks!
See this link on the DeepLearning.AI website: Jupyter Notebook
The helper_functions.py file runs in the background of the notebook on the website, but because that file is not on your computer, you are not able to run the code in your own notebook. You need to create the following helper_functions.py file yourself, putting your OpenAI API key in the place indicated below:
import os
from openai import OpenAI
from dotenv import load_dotenv
import csv

# Get the OpenAI API key from the .env file
load_dotenv('.env', override=True)
openai_api_key = os.getenv('OPENAI_API_KEY')

# Set up the OpenAI client
client = OpenAI(api_key=openai_api_key)


def print_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT-3.5 model. The function then prints the response of the model.
    """
    try:
        if not isinstance(prompt, str):
            raise ValueError("Input must be a string enclosed in quotes.")
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful but terse AI assistant who gets straight to the point.",
                },
                {"role": "user", "content": prompt},
            ],
            temperature=0.0,
        )
        response = completion.choices[0].message.content
        print("*" * 100)
        print(response)
        print("*" * 100)
        print("\n")
    # Catch ValueError too, so the type check above prints a message
    # instead of raising an uncaught exception.
    except (TypeError, ValueError) as e:
        print("Error:", str(e))


def get_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT-3.5 model. The function then returns the response of the model as
    a string.
    """
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful but terse AI assistant who gets straight to the point.",
            },
            {"role": "user", "content": prompt},
        ],
        temperature=0.0,
    )
    response = completion.choices[0].message.content
    return response


def get_chat_completion(prompt, history):
    # Flatten the conversation history into a single string and prepend it to the prompt
    history_string = "\n\n".join(["\n".join(turn) for turn in history])
    prompt_with_history = f"{history_string}\n\n{prompt}"
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful but terse AI assistant who gets straight to the point.",
            },
            {"role": "user", "content": prompt_with_history},
        ],
        temperature=0.0,
    )
    response = completion.choices[0].message.content
    return response
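Once this is saved as helper_functions.py in the same folder as your notebook, a quick sanity check could look like the sketch below. The prompts and the history format are illustrative only; get_chat_completion simply joins whatever strings you pass in as history and prepends them to the prompt.

    # Illustrative usage; run from the same folder as helper_functions.py
    from helper_functions import print_llm_response, get_llm_response, get_chat_completion

    print_llm_response("What is the capital of France?")

    answer = get_llm_response("Name one common use of Python.")
    print(answer)

    # history is a list of turns; each turn is a list of strings joined with newlines
    history = [["User: My name is Ada.", "Assistant: Nice to meet you, Ada."]]
    print(get_chat_completion("What is my name?", history))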
Thanks for your detailed reply, Rajan! I appreciate it!
You are welcome!
The link to the Python file gives a 502 error ("There are currently no service instances available to serve your request"); I understand this is because it is hosted on your server.
Could you attach a sample of the file so I can edit it with my API key? Should I place this file in the same folder as the examples? I am running VS Code with the Jupyter Notebook extension. Thank you very much.
Hi all,
The helper_functions.py file can be found in the workspace of your lab. You can access the workspace of your lab by doing (Menu -->) File --> Open....
You'd need to place the .py file in the same folder as your notebook.
Best,
Mubsi
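To illustrate the layout Mubsi describes (the folder and notebook names here are just examples), your local setup should end up looking like:

    my_lesson_folder/
        helper_functions.py
        lesson_1.ipynb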
Thank you Mubsi
Hi Mubsi,
Thanks for the useful information!
Best,
Leyu
Hi all,
I have my OpenAI API key in my OpenAI account, but I got this error: "OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable" when using the downloaded helper_functions.py in my project folder.
What could be the problem? Do I need to modify helper_functions.py?
Thanks,
Leyu
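For anyone who hits this same error: load_dotenv('.env', override=True) only picks up the key if a file named .env exists in the same folder as helper_functions.py. A minimal sketch of that file, assuming you paste in your own key (the value below is just a placeholder):

    OPENAI_API_KEY=sk-...

Alternatively, setting the OPENAI_API_KEY environment variable in your shell before launching Jupyter should work too.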
RateLimitError                            Traceback (most recent call last)
Cell In[1], line 4
      1 from helper_functions import print_llm_response, get_llm_response, get_chat_completion
      3 # Example usage
----> 4 print_llm_response("What is the weather like today?")

File ~\helper_functions.py:19, in print_llm_response(prompt)
     17 if not isinstance(prompt, str):
     18     raise ValueError("Input must be a string enclosed in quotes.")
---> 19 completion = client.chat.completions.create(
     20     model="gpt-3.5-turbo-0125",
     21     messages=[
     ...

[... intermediate frames inside the openai client library (~\anaconda3\Lib\site-packages\openai\...) trimmed for readability: the client retried the request several times and then re-raised the status error ...]

RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
This is the error I'm getting. Can anyone explain how to resolve it?
Can anyone help me resolve the error mentioned above?
Based on the error message, you seem to have made too many calls to the OpenAI API and no longer have enough quota for further calls. You can check in this community how other students have dealt with this issue.
But when I checked the limits page in my OpenAI web account, it did not show that I had exceeded them.
@SakshamBansal, this thread is about “How to have helper_functions in python”.
Is your reply here on-topic for that?
But sir, this error is related to helper_functions, which is why I posted it here.
Your problem does not seem to be with the helper functions.
It appears that you have used up your API quota: the insufficient_quota code in the error refers to your plan's billing credits, not to the rate limits shown on the limits page.
Sorry for the mistake, sir.
I have created a new topic for my problem.
Really helpful!