As I go through the content of the course, I am trying to figure out how it applies to real-world projects. To do that, I am using VS Code and Cursor to implement some basic Python functionality in a local project. It occurred to me while doing this that helper_functions is doing a lot for us under the covers, like setting up the AI client and the scope of the request to the AI chatbot. My question is: where, if anywhere, is this setup explained to us, so that we might learn how to do it all outside of a Jupyter notebook? Thank you kindly for any helpful responses.
Some of the courses have a reading page on how to set up your own environment.
Most of them do not, because there is a huge number of platforms and operating systems that any given student might want to use. It’s not straightforward or simple, and that leads to endless support questions that DLAI would have to answer.
Deploying information technology is not really their domain.
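For what it’s worth, a minimal local setup on macOS/Linux usually boils down to a virtual environment plus the SDK. This is only a sketch of one common approach, not anything DLAI documents; the exact commands differ by platform, which is exactly why they avoid spelling them all out:

```shell
# Create and activate an isolated Python environment (macOS/Linux syntax).
python3 -m venv .venv
source .venv/bin/activate

# Inside the venv, install what the notebooks typically rely on, e.g.:
#   pip install openai python-dotenv jupyter

# Keep the API key out of source control by exporting it per session
# (the value below is a placeholder, not a real key).
export OPENAI_API_KEY="sk-..."
```

On Windows the activation step is `.venv\Scripts\activate` instead, and the key is set with `set` or `$env:` depending on the shell.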
If you want to read the code for helper_functions, you can use the File → Open menu, and download the helper_functions.py file.
Thank you for the speedy reply; I appreciate it. I did open the helper_functions.py file, and that is what made me realize just how much was being done under the covers. I had guessed there was some, but there was more than I expected. I dug a bit deeper and looked for the environment variables and whatnot, but I did not want anyone to think I was trying to hack into anything, so I went no further than finding the dockerenv file, and did not open it. I was just curious whether there were instructions on it somewhere, so I could take a look.
Thank you
I had the exact same question when I was going through that course. The helper functions abstraction is really useful for learning concepts, but yeah, it hides all the actual setup work.
Here’s what helped me: I grabbed the raw SDK documentation (OpenAI, Anthropic, whoever) and just started building the same thing outside of the notebook. It took me a couple of hours to understand the flow, but once I got the API client working, error handling, and response parsing sorted out… it clicked.
The docs show you step by step how to initialize clients, manage API keys (use environment variables, seriously), and handle requests properly. It’s not magic once you see it.
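To demystify it a bit: under the hood, the SDK is just assembling an authenticated HTTPS POST to the chat completions endpoint. Here’s a rough stdlib-only sketch of what that request looks like (the URL and payload shape follow OpenAI’s public API; the function name and structure are my own, not the SDK’s):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble roughly the same HTTP request the OpenAI SDK sends
    for a chat completion: JSON body, bearer-token auth header."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # key comes from the environment
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Read the key from an environment variable rather than hardcoding it.
key = os.environ.get("OPENAI_API_KEY", "missing-key")
req = build_chat_request("Say hello.", key)
print(req.full_url)      # → https://api.openai.com/v1/chat/completions
print(req.get_method())  # → POST
```

Sending it is one `urllib.request.urlopen(req)` call; everything the SDK adds on top of this (retries, typed responses, streaming) is convenience.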
Real talk, though: when you move beyond toy projects and want to connect this to actual databases or company data, the setup gets complicated fast. That’s when I started looking at CustomGPT. Instead of managing client initialization, authentication, and knowledge-base scoping myself, it just works. It saves a ton of time if you’re building something people actually use.
But yeah, for learning? Definitely build it raw first. You need that foundation.
Hi @mwasmer,
As you progress to module 4, all of this will be discussed there, and hopefully some of your questions will be answered by those lectures.
Additionally, here’s something to get you started on connecting your OpenAI API key to the helper_utils functions and getting them working:
from openai import AzureOpenAI, DefaultHttpxClient

client = AzureOpenAI(
    api_key="abcdefg",
    api_version="2024-02-01",
    azure_endpoint="https://cour-external-playground.openai.azure.com/",
    http_client=DefaultHttpxClient(verify=False),
)
# ### If you want to use your own OpenAI key, uncomment these cells below and comment out the other get_llm_response function cells:
# from openai import OpenAI
#
# ### Add your key as a string
# openai_api_key = "Add your key in here"
#
# # Set up the OpenAI client
# client = OpenAI(api_key=openai_api_key)
#
# def get_llm_response(prompt):
#     """This function takes as input a prompt, which must be a string enclosed in quotation marks,
#     and passes it to OpenAI's GPT-3.5 model. The function then returns the response of the model as
#     a string.
#     """
#     try:
#         if not isinstance(prompt, str):
#             raise ValueError("Input must be a string enclosed in quotes.")
#         completion = client.chat.completions.create(
#             model="gpt-3.5-turbo-0125",
#             messages=[
#                 {
#                     "role": "system",
#                     "content": "You are a helpful but terse AI assistant who gets straight to the point.",
#                 },
#                 {"role": "user", "content": prompt},
#             ],
#             temperature=0.0,
#         )
#         response = completion.choices[0].message.content
#         return response
#     # Catch ValueError too, so the validation error above doesn't escape uncaught
#     except (TypeError, ValueError) as e:
#         print("Error:", str(e))
def get_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT-3.5 model. The function then returns the response of the model as
    a string.
    """
    try:
        if not isinstance(prompt, str):
            raise ValueError("Input must be a string enclosed in quotes.")
        completion = client.chat.completions.create(
            model="gpt-35-turbo",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful but terse AI assistant who gets straight to the point.",
                },
                {"role": "user", "content": prompt},
            ],
            temperature=0.0,
        )
        response = completion.choices[0].message.content
        return response
    # Catch ValueError too, so the validation error above doesn't escape uncaught
    except (TypeError, ValueError) as e:
        print("Error:", str(e))
def print_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT-3.5 model. The function then prints the response of the model.
    """
    llm_response = get_llm_response(prompt)
    print(llm_response)
The area you need to focus on is the code that has been commented out. Once you get your API key and follow the steps, the get_llm_response function will work for you.
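One small improvement over pasting the key in as a string (purely a suggestion, not part of the course code): read it from an environment variable, so the key never lands in the notebook itself. A minimal helper might look like this; the function name is my own:

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Fetch the API key from the environment; fail loudly if it's missing."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(
            f"{env_var} is not set; export it in your shell before launching Jupyter."
        )
    return key

# Then, instead of the hardcoded string, the commented cells above would use:
#   openai_api_key = load_api_key()
#   client = OpenAI(api_key=openai_api_key)
```

Set the variable once in your shell (`export OPENAI_API_KEY=...` on macOS/Linux) and everything else works unchanged.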