AI Python for Beginners: Error when building LLM prompts with variables

Hi All,
I've started AI Python for Beginners, which is a great course.
The only problem is an error message when executing the cells in Jupyter Notebook.

ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 1
----> 1 from helper_functions import print_llm_response
      2 print_llm_response("What is the capital of France?")

ModuleNotFoundError: No module named 'helper_functions'

Has anybody run into a similar error message?

Thank you all.

Best Regards

Ottavio

Are you running this notebook locally?

Thanks for your response. That's correct. I'm running it locally on my machine.

You need the helper files, which you can find via File -> Open, and download them! Try to read them first to understand them.
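If you're running locally, a quick way to confirm the helper files actually landed next to your notebook is to list the working directory. This is a stdlib-only sketch; the file name `helper_functions.py` is the one this course's lab uses:

```python
import os

# Print the notebook's current working directory and any .py files in it.
# helper_functions.py must appear here for "import helper_functions" to work.
print(os.getcwd())
py_files = [f for f in os.listdir() if f.endswith(".py")]
print(py_files)
print("helper_functions.py" in py_files)
```

If the last line prints `False`, the download went to a different folder than the one the notebook is running from.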

2 Likes

Thanks for your response.
Could you please clarify which helper files you mean? Is there more than one? Where do I download them from, and what are the file names?

Your help is much appreciated. Thank you

Use the File tab and then the Open command.
This will give you a listing of all of the files.
You need to download all of them.

Thanks

Maybe try "Cell -> Run All" from the notebook menu.
Please report back your results.

I think we have to download the files in the folder along with helper_functions.py and install all the libraries mentioned.

Are you doing it locally on your desktop, or in the notebook on the website?

To run the notebooks on the Short Course platform, you do not need to download any files.

1 Like

It is still not proceeding to the completed status.

Hopefully a mentor for that course will be able to assist you soon.

1 Like

Another possible tip:

Be sure you’ve watched all of the video segments all the way through, and then advanced to the next video.

Hi, is anyone up for helping me out with helper_functions and print_llm_response?

Below are the steps I followed:

Step- 1. First I tried to run "from helper_functions import print_llm_responses". I got the error message "No module named 'helper_functions'".
Step- 2. Then I asked Copilot how to get out of this problem. Copilot suggested trying "pip install helper_functions", so I tried that as well.
Step- 3. Then this error message popped up:
" error: subprocess-exited-with-error

python setup.py egg_info did not run successfully.
exit code: 1

[15 lines of output]
The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
rather than 'sklearn' for pip commands.

Here is how to fix this error in the main use cases:

  • use 'pip install scikit-learn' rather than 'pip install sklearn'
  • replace 'sklearn' by 'scikit-learn' in your pip requirements files
    (requirements.txt, setup.py, setup.cfg, Pipfile, etc …)
  • if the 'sklearn' package is used by one of your dependencies,
    it would be great if you take some time to track which package uses
    'sklearn' instead of 'scikit-learn' and report it to their issue tracker
  • as a last resort, set the environment variable
    SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error

More information is available at
GitHub - scikit-learn/sklearn-pypi-package
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Encountered error while generating package metadata.

See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details."

Step- 4. Again, I had no idea what this error message meant. I tried Copilot again, but it never returned a promising reply.

Step- 5. Then, following the error message, I tried "pip install scikit-learn" as well. This completed successfully. However, when I then tried "pip install helper_functions" again, the same error message popped up.
Step- 6. Seeing no way to a solution, I hopped into the community discussions for this course, starting with "How to have helper_functions in python" (which is locked, by the way). There I read the discussion between Leyu_Shi, Rajankrsna, and Mubsi.

Step- 7. As per their discussion, there is already a helper_functions.py file in the lab workspace. I found it, downloaded it, and kept it in the same directory where all my other Jupyter notebook files are on my system.
Step- 8. Then, when I tried to run the script "from helper_functions…" again, I got this error:

ModuleNotFoundError Traceback (most recent call last)
Cell In[4], line 1
----> 1 from helper_functions import print_llm_reponses

File ~\helper_functions.py:5
      2 import os
      4 from openai import OpenAI
----> 5 from dotenv import load_dotenv
      7 import random
      9 # Get the OpenAI API key from the .env file

ModuleNotFoundError: No module named 'dotenv'

Step- 9. Then I fixed that problem by using this command:

Install the dotenv module

pip install python-dotenv

Step- 10. I ran the same script again, "from helper_functions import print_llm_reponses". Now this error is generated: " ---------------------------------------------------------------------------
OpenAIError Traceback (most recent call last)
Cell In[9], line 1
----> 1 from helper_functions import print_llm_reponses

File ~\helper_functions.py:14
     11 openai_api_key = os.getenv('sk-proj-…[API key redacted]')
     13 # Set up the OpenAI client
---> 14 client = OpenAI(api_key=openai_api_key)
     17 def print_llm_response(prompt):
     18     """This function takes as input a prompt, which must be a string enclosed in quotation marks,
     19     and passes it to OpenAI's GPT3.5 model. The function then prints the response of the model.
     20     """

File ~\anaconda3\lib\site-packages\openai\_client.py:105, in OpenAI.__init__(self, api_key, organization, project, base_url, timeout, max_retries, default_headers, default_query, http_client, _strict_response_validation)
    103 api_key = os.environ.get("OPENAI_API_KEY")
    104 if api_key is None:
--> 105     raise OpenAIError(
    106         "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
    107     )
    108 self.api_key = api_key
    110 if organization is None:

OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
"

Step- 11. Now I got an OpenAI API key as well and inserted it into the following script, which is present in the helper_functions.py file:

import gradio as gr
import os
from openai import OpenAI
from dotenv import load_dotenv
import random

# Get the OpenAI API key from the .env file
load_dotenv('.env', override=True)
openai_api_key = os.getenv("<here I inserted the OpenAI API key>")
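For what it's worth, one likely culprit in the snippet above (my reading, not an official answer): `os.getenv` expects the *name* of an environment variable, not the key value itself. A minimal sketch of the intended pattern, using a placeholder value rather than a real key:

```python
import os

# os.getenv looks up an environment variable by NAME; passing the raw
# "sk-proj-..." key string returns None, which later makes OpenAI()
# complain that no api_key was set.
os.environ["OPENAI_API_KEY"] = "sk-proj-placeholder-not-a-real-key"  # normally done by load_dotenv('.env')
openai_api_key = os.getenv("OPENAI_API_KEY")  # correct: look up by variable name
wrong = os.getenv("sk-proj-placeholder-not-a-real-key")  # wrong: the key itself is not a variable name
print(openai_api_key)  # the placeholder value
print(wrong)           # None
```

With a real setup, the key lives in the `.env` file and the code only ever refers to the name `OPENAI_API_KEY`.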

Step- 12. Still not getting the appropriate result.

Step- 13. So that is my course of action so far.

Step- 14. Now I need the help of a mentor to get me out of this problem.

Thank you for reading this message

I am posting again: due to this same issue, I am not able to complete Lesson 10 either.

In Lesson 10, I wrote "from helper_functions import *".
This error message is generated in return, leaving my learning only partially complete.

Error message— " OpenAIError Traceback (most recent call last)
Cell In[14], line 1
----> 1 from helper_functions import *

File ~\helper_functions.py:14
11 openai_api_key = os.getenv(‘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’)
13 # Set up the OpenAI client
—> 14 client = OpenAI(api_key=openai_api_key)
17 def print_llm_response(prompt):
18 “”“This function takes as input a prompt, which must be a string enclosed in quotation marks,
19 and passes it to OpenAI’s GPT3.5 model. The function then prints the response of the model.
20 “””

File ~\anaconda3\lib\site-packages\openai_client.py:105, in OpenAI.init(self, api_key, organization, project, base_url, timeout, max_retries, default_headers, default_query, http_client, _strict_response_validation)
103 api_key = os.environ.get(“OPENAI_API_KEY”)
104 if api_key is None:
→ 105 raise OpenAIError(
106 “The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable”
107 )
108 self.api_key = api_key
110 if organization is None:

OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable "

I also mentioned this same message in my previous post.
Need help, please.

"I have tried more than 20 times to install and import OpenAI. Although the installation and import succeed, when I attempt to write code and call OpenAI, I receive a 'ModuleNotFoundError' or sometimes a 'Rate Limited' error. Could you please help me fix this? Also, how can I create a helper function to handle these errors?" Same problem, bro.

Don't know, brother… I think only a mentor from this course can help us.

You have to do the following steps to run the lesson locally:

1. Install the openai and dotenv modules:

!pip install openai
!pip install python-dotenv

2. Download helper_functions.py into the directory you are running the local Jupyter notebook from.

3. Create a .env file in that same directory, containing the following line:

export OPENAI_API_KEY=<your own key that you have to obtain from OpenAI>

4. Check that OPENAI_API_KEY is set properly in the Jupyter notebook:

import os
from dotenv import load_dotenv

load_dotenv('.env', override=True)
openai_api_key = os.getenv('OPENAI_API_KEY')
print(openai_api_key)
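As a sanity check, this is roughly what load_dotenv('.env') does under the hood. A stdlib-only sketch — the variable name EXAMPLE_OPENAI_API_KEY is a placeholder, and the parsing rules here are a simplification of python-dotenv's full behaviour:

```python
import os
import tempfile

def load_env_file(path):
    # Read KEY=VALUE lines (optionally prefixed with "export ") into
    # os.environ, skipping blanks and comments.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("export "):
                line = line[len("export "):]
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a temporary .env file and a placeholder key (not a real credential):
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("export EXAMPLE_OPENAI_API_KEY=sk-proj-placeholder\n")
load_env_file(f.name)
print(os.getenv("EXAMPLE_OPENAI_API_KEY"))  # sk-proj-placeholder
```

If `print(openai_api_key)` in step 4 shows `None`, the .env file is either in the wrong directory or its line is not in `KEY=VALUE` form.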

You are dealing with the error "Rate limit reached for requests". You can find the error documentation for OpenAI's Python API in the official OpenAI API documentation at:

Error codes - OpenAI API

Hoping this could help you.
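Regarding the "Rate Limited" errors mentioned earlier: a common pattern is to retry the call with exponential backoff. A hedged sketch — the `RateLimitError` class below is a stand-in for illustration; with the openai package you would catch `openai.RateLimitError` instead:

```python
import random
import time

# Stand-in exception so the sketch is self-contained; in real code,
# catch openai.RateLimitError from the openai package instead.
class RateLimitError(Exception):
    pass

def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on rate-limit errors."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # Back off base_delay * 1, 2, 4, ... seconds, plus a little jitter.
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

You could then wrap the model call, e.g. `with_retries(lambda: print_llm_response("Hello"))`, so transient rate limits do not abort the notebook.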

1 Like