How Do I Use the Jupyter Notebooks the ChatGPT Prompt Engineering Course Demos Are Using?

As I watch the tutorials, the code is being typed into what I think is a Jupyter notebook. Is Jupyter something I should install on my PC, or is it just for this course tutorial? If I should use it, how and where do I get it?

Hi, welcome to the forum! The Jupyter notebooks are built into the course environment, so there is no need to install Jupyter separately; you can run all the code you need in the browser. That said, if you want to use Jupyter notebooks locally, here is the official website where you can find the download/install instructions:

Project Jupyter | Home
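If you do go the local route, a typical pip-based install looks like this (a sketch, assuming a recent Python with pip already on your PATH):

```shell
# Install the classic Jupyter Notebook into the current Python environment
python -m pip install notebook

# Then launch the notebook dashboard in your browser with:
#   jupyter notebook
```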

Hope that helps!

Thank you for your response. If I understand correctly, I should be able to run the snippets of code using the Jupyter notebook inside the course "ChatGPT - Prompt Engineering for Developers". However, when I select Run in the Jupyter notebook for the code the instructor uses in "Tactic 1: Use delimiters to clearly indicate distinct parts of the input", I get the following response:
NameError                                 Traceback (most recent call last)
Cell In[3], line 18
      1 text = f"""
      2 You should express what you want a model to do by \
      3 providing instructions that are as clear and \
   (...)
     11 more detailed and relevant outputs.
     12 """
     13 prompt = f"""
     14 Summarize the text delimited by triple backticks \
     15 into a single sentence.
     16 {text}
     17 """
---> 18 response = get_completion(prompt)
     19 print(response)

Cell In[1], line 3, in get_completion(prompt, model)
      1 def get_completion(prompt, model="gpt-3.5-turbo"):
      2     messages = [{"role": "user", "content": prompt}]
----> 3     response = openai.ChatCompletion.create(
      4         model=model,
      5         messages=messages,
      6         temperature=0,  # this is the degree of randomness of the model's output
      7     )
      8     return response.choices[0].message["content"]

NameError: name 'openai' is not defined

Hi canclinijg,

Thanks for the question. If you're running the notebooks online, check that you first ran the first cell, which has this line:

import openai

Hope that helps, but let me know if you’re still having issues.

Thank you for your patient assistance. I am a neophyte to this field. I am using a local instance of Jupyter Notebook, and I do start with import openai. I am also using the version 1.0.0 get_completion code from the course, shown below:

client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message.content

I run the get_completion code above first with no errors. But if I then run the Tactic 1 code from the course's Guidelines lesson (shown below), I get the long list of errors shown below. It appears it doesn't like the backslash '\' syntax, it's unhappy with one of the files for multiple reasons, and it ends by telling me I "have exceeded my quota." Thanks in advance.

I run this:

text = f"""
You should express what you want a model to do by \ 
providing instructions that are as clear and \ 
specific as you can possibly make them. \ 
This will guide the model towards the desired output, \ 
and reduce the chances of receiving irrelevant \ 
or incorrect responses. Don't confuse writing a \ 
clear prompt with writing a short prompt. \ 
In many cases, longer prompts provide more clarity \ 
and context for the model, which can lead to \ 
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks \ 
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)

I get these errors:

<>:12: SyntaxWarning: invalid escape sequence '\ '
<>:17: SyntaxWarning: invalid escape sequence '\ '
<>:12: SyntaxWarning: invalid escape sequence '\ '
<>:17: SyntaxWarning: invalid escape sequence '\ '
C:\Users\jeffc\AppData\Local\Temp\ipykernel_14476\2496532566.py:12: SyntaxWarning: invalid escape sequence '\ '
  """
C:\Users\jeffc\AppData\Local\Temp\ipykernel_14476\2496532566.py:17: SyntaxWarning: invalid escape sequence '\ '
  """
---------------------------------------------------------------------------
RateLimitError                            Traceback (most recent call last)
Cell In[9], line 18
      1 text = f"""
      2 You should express what you want a model to do by \ 
      3 providing instructions that are as clear and \ 
   (...)
     11 more detailed and relevant outputs.
     12 """
     13 prompt = f"""
     14 Summarize the text delimited by triple backticks \ 
     15 into a single sentence.
     16 ```{text}```
     17 """
---> 18 response = get_completion(prompt)
     19 print(response)

Cell In[5], line 5, in get_completion(prompt, model)
      3 def get_completion(prompt, model="gpt-3.5-turbo"):
      4     messages = [{"role": "user", "content": prompt}]
----> 5     response = client.chat.completions.create(
      6         model=model,
      7         messages=messages,
      8         temperature=0
      9     )
     10     return response.choices[0].message.content

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_utils\_utils.py:271, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    269             msg = f"Missing required argument: {quote(missing[0])}"
    270     raise TypeError(msg)
--> 271 return func(*args, **kwargs)

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\resources\chat\completions.py:648, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    599 @required_args(["messages", "model"], ["messages", "model", "stream"])
    600 def create(
    601     self,
   (...)
    646     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    647 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 648     return self._post(
    649         "/chat/completions",
    650         body=maybe_transform(
    651             {
    652                 "messages": messages,
    653                 "model": model,
    654                 "frequency_penalty": frequency_penalty,
    655                 "function_call": function_call,
    656                 "functions": functions,
    657                 "logit_bias": logit_bias,
    658                 "logprobs": logprobs,
    659                 "max_tokens": max_tokens,
    660                 "n": n,
    661                 "presence_penalty": presence_penalty,
    662                 "response_format": response_format,
    663                 "seed": seed,
    664                 "stop": stop,
    665                 "stream": stream,
    666                 "temperature": temperature,
    667                 "tool_choice": tool_choice,
    668                 "tools": tools,
    669                 "top_logprobs": top_logprobs,
    670                 "top_p": top_p,
    671                 "user": user,
    672             },
    673             completion_create_params.CompletionCreateParams,
    674         ),
    675         options=make_request_options(
    676             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    677         ),
    678         cast_to=ChatCompletion,
    679         stream=stream or False,
    680         stream_cls=Stream[ChatCompletionChunk],
    681     )

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:1167, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1153 def post(
   1154     self,
   1155     path: str,
   (...)
   1162     stream_cls: type[_StreamT] | None = None,
   1163 ) -> ResponseT | _StreamT:
   1164     opts = FinalRequestOptions.construct(
   1165         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1166     )
-> 1167     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:856, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    847 def request(
    848     self,
    849     cast_to: Type[ResponseT],
   (...)
    854     stream_cls: type[_StreamT] | None = None,
    855 ) -> ResponseT | _StreamT:
--> 856     return self._request(
    857         cast_to=cast_to,
    858         options=options,
    859         stream=stream,
    860         stream_cls=stream_cls,
    861         remaining_retries=remaining_retries,
    862     )

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:932, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    930 if retries > 0 and self._should_retry(err.response):
    931     err.response.close()
--> 932     return self._retry_request(
    933         options,
    934         cast_to,
    935         retries,
    936         err.response.headers,
    937         stream=stream,
    938         stream_cls=stream_cls,
    939     )
    941 # If the response is streamed then we need to explicitly read the response
    942 # to completion before attempting to access the response text.
    943 if not err.response.is_closed:

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:980, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
    976 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
    977 # different thread if necessary.
    978 time.sleep(timeout)
--> 980 return self._request(
    981     options=options,
    982     cast_to=cast_to,
    983     remaining_retries=remaining,
    984     stream=stream,
    985     stream_cls=stream_cls,
    986 )

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:932, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    930 if retries > 0 and self._should_retry(err.response):
    931     err.response.close()
--> 932     return self._retry_request(
    933         options,
    934         cast_to,
    935         retries,
    936         err.response.headers,
    937         stream=stream,
    938         stream_cls=stream_cls,
    939     )
    941 # If the response is streamed then we need to explicitly read the response
    942 # to completion before attempting to access the response text.
    943 if not err.response.is_closed:

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:980, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
    976 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
    977 # different thread if necessary.
    978 time.sleep(timeout)
--> 980 return self._request(
    981     options=options,
    982     cast_to=cast_to,
    983     remaining_retries=remaining,
    984     stream=stream,
    985     stream_cls=stream_cls,
    986 )

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\_base_client.py:947, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    944         err.response.read()
    946     log.debug("Re-raising status error")
--> 947     raise self._make_status_error_from_response(err.response) from None
    949 return self._process_response(
    950     cast_to=cast_to,
    951     options=options,
   (...)
    954     stream_cls=stream_cls,
    955 )

RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

Hello @canclinijg, assuming that you have your own OpenAI account so that you can run the notebooks locally, it appears that you're hitting your limit of calls to OpenAI.

I went back to my local setup for this course and realized that I’m getting the same error about ‘quota,’ so I’m researching on the OpenAI site (API section), specifically these two pages:

If I figure out my issue and think that it can shed light for you and others, I’ll post back here. Good luck to you.

Update: My problem did indeed turn out to be $. I’ve evidently been using up all the ‘free’ allotments over the past few months. Adding $10 today cleared up the issue.

Thanks for running that down. I'll add some $ and hopefully it will work.

I added $ and tried again, and the code ran! But only after deleting two lines of the code copied from the course (see below). And I still received a syntax warning.

For a full understanding, I would appreciate your help with these two questions. Thanks again.

Question 1 - Why did I need to delete the two lines for the dotenv module to get the code snippet below to run? (Otherwise I received a ModuleNotFoundError.)

import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key = os.getenv('OPENAI_API_KEY')

Error below

ModuleNotFoundError                       Traceback (most recent call last)
Cell In[6], line 4
      1 import openai
      2 import os
----> 4 from dotenv import load_dotenv, find_dotenv
      5 _ = load_dotenv(find_dotenv())
      7 openai.api_key = os.getenv('OPENAI_API_KEY')

ModuleNotFoundError: No module named 'dotenv'

Question 2 - I successfully ran the prompt below from the course, but it gave me the syntax warning below.

Course prompt
prompt = f"""
Generate a list of three made-up book titles along \
with their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)

GPT response

[
  {
    "book_id": 1,
    "title": "The Midnight Garden",
    "author": "Elena Rivers",
    "genre": "Fantasy"
  },
  {
    "book_id": 2,
    "title": "Echoes of the Past",
    "author": "Nathan Black",
    "genre": "Mystery"
  },
  {
    "book_id": 3,
    "title": "Whispers in the Wind",
    "author": "Samantha Reed",
    "genre": "Romance"
  }
]

<>:6: SyntaxWarning: invalid escape sequence '\ '
<>:6: SyntaxWarning: invalid escape sequence '\ '
C:\Users\jeffc\AppData\Local\Temp\ipykernel_12380\1713482535.py:6: SyntaxWarning: invalid escape sequence '\ '
  """

Sorry, but your questions are beyond my knowledge. I don't see that particular prompt in this course's ("ChatGPT Prompt Engineering") notebooks. Which lesson in the series is this notebook based on?

The syntax warning might be easy to get rid of by simply deleting the backslash (\) from the end of this line:
Generate a list of three made-up book titles along \

But that’s only a guess.
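A quick way to check the difference locally (a sketch): Python treats a backslash that is the very last character of a line as a line continuation, but a backslash followed by a space as the unknown escape sequence "\ ", which newer Python versions flag with a SyntaxWarning.

```python
# A backslash as the LAST character of a line inside a string literal
# continues the line, so no newline (and no warning) is produced:
prompt = "Summarize the text delimited by triple backticks \
into a single sentence."

# The pasted course code had a space AFTER each backslash, turning it
# into the invalid escape sequence "\ " that triggers the warning.
print(prompt)
```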


Yes, all the code I showed above was copied from the "Iterative" lesson examples. And yes, this issue is about trying to run the code locally, outside the course; it runs fine inside the course's Jupyter notebook.

Getting rid of the \ eliminated the syntax warning! Thanks. The dotenv module must be something specific to the course setup.

I learned, "Python-dotenv reads key-value pairs from a .env file and can set them as environment variables. It helps in the development of applications following the 12-factor principles." I guess I didn't need to do that, since I added my API key to my environment variables per the course's intro instructions.
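If I had wanted to keep those two dotenv lines instead, it looks like the fix for the ModuleNotFoundError would simply have been installing the missing package first (untested on my side):

```shell
# python-dotenv is a third-party package; the course environment ships
# with it preinstalled, but a local install has to add it explicitly
python -m pip install python-dotenv
```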


Oh, yes, of course… adding API keys to the environment is best. Glad you got it sorted. 🙂
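A minimal sketch of that environment-variable approach (assuming OPENAI_API_KEY has been set at the OS level before the notebook kernel starts):

```python
import os

# Read the key from the environment; the empty-string fallback keeps this
# sketch runnable even when the variable is not set, but real code would
# instead fail loudly when the key is missing
api_key = os.getenv("OPENAI_API_KEY", "")
```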

How do you load your API key if it is stored locally in a text file in the same directory as the .ipynb file?