requirements.txt file for running the code on my own PC

Could anyone please help me find the requirements.txt needed to run the code on my own PC?

Is this related to the assignment week you have selected? Are you asking for all the files related to the Week 3 assignment?

Back when I took this Specialization, DeepLearning.AI didn’t explicitly support learners attempting to run the exercises locally - too much possible variation. Those who wanted to do so were mostly on their own, though if you search these fora you can find some helpful, if somewhat stale, threads discussing it. In any case, it’s easy enough to create a dependency listing by running the following in the course environment, either from a terminal or from a cell in one of the exercise notebooks…

$ pip freeze > requirements.txt
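
If you run it from inside a notebook cell on the Coursera-hosted environment instead of a terminal, prefix it with an exclamation mark (the output filename here is just an example); you can then download the generated file from the notebook’s file browser:

!pip freeze > requirements.txt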

You should have an understanding of how to create and manage tailored environments using tools such as conda before heading down this path.
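
As a rough sketch of what setting that up might look like - the environment name here is hypothetical and the Python 3.7 pin is an assumption, not something the course publishes:

conda create -n ai4med-w3 python=3.7    # hypothetical environment name; Python version is an assumption
conda activate ai4med-w3
pip install -r requirements.txt         # the file captured above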

ps: here’s a related thread that contains a link to another…

absl-py==1.0.0
alembic==0.9.9
appdirs==1.4.3
asn1crypto==0.24.0
astor==0.8.1
async-generator==1.10
attrs==18.1.0
backcall==0.1.0
backports.weakref==1.0.post1
beautifulsoup4==4.6.3
bleach==1.5.0
bokeh==0.12.16
branca==0.3.1
certifi==2018.8.24
cffi==1.11.5
chardet==3.0.4
cloudpickle==0.5.6
colorcet==3.0.0
constantly==15.1.0
cryptography==2.2.1
cycler==0.10.0
Cython==0.28.5
dask==0.19.2
decorator==4.3.0
dill==0.2.8.2
entrypoints==0.2.3
fastcache==1.0.2
gast==0.2.2
gmpy2==2.0.8
google-pasta==0.2.0
grpcio==1.44.0
h5py==2.10.0
html5lib==0.9999999
hyperlink==17.3.1
idna==2.7
imageio==2.6.1
incremental==17.5.0
ipydatawidgets==4.2.0
ipykernel==4.8.2
ipyleaflet==0.12.3
ipympl==0.8.8
ipython==6.5.0
ipython-genutils==0.2.0
ipywidgets==7.7.0
itk==5.0.1
itk-core==5.0.1
itk-filtering==5.0.1
itk-io==5.0.1
itk-meshtopolydata==0.5.1
itk-numerics==5.0.1
itk-registration==5.0.1
itk-segmentation==5.0.1
itkwidgets==0.26.1
jedi==0.12.1
Jinja2==2.10
joblib==1.1.0
jsonschema==2.6.0
jupyter-client==5.2.3
jupyter-core==4.4.0
jupyterhub==0.9.2
jupyterlab==0.34.0
jupyterlab-launcher==0.13.1
jupyterlab-widgets==1.1.0
Keras==2.3.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver==1.0.1
llvmlite==0.23.0
Mako==1.0.7
Markdown==2.6.11
matplotlib==3.1.2
mistune==0.8.3
nbconvert==5.3.1
nbformat==4.4.0
networkx==2.2
nibabel==2.5.0
notebook==5.6.0
numba==0.38.1
numexpr==2.6.6
numpy==1.18.1
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.3.0
packaging==18.0
pamela==0.3.0
pandas==0.25.3
pandocfilters==1.4.2
param==1.12.1
parso==0.3.1
patsy==0.5.0
pexpect==4.6.0
pickleshare==0.7.4
Pillow==5.3.0
prometheus-client==0.3.0
prompt-toolkit==1.0.15
protobuf==3.19.4
ptyprocess==0.6.0
pyasn1==0.4.4
pyasn1-modules==0.2.1
pycosat==0.6.3
pycparser==2.18
pyct==0.4.8
Pygments==2.2.0
pyOpenSSL==18.0.0
pyparsing==2.2.2
PySocks==1.6.8
python-dateutil==2.7.3
python-editor==1.0.3
python-oauth2==1.0.1
pytz==2018.5
PyWavelets==1.0.1
PyYAML==3.13
pyzmq==17.1.2
requests==2.19.1
ruamel-yaml==0.15.44
scikit-image==0.14.1
scikit-learn==0.22.1
scipy==1.1.0
seaborn==0.9.0
Send2Trash==1.5.0
service-identity==17.0.0
simplegeneric==0.8.1
six==1.11.0
SQLAlchemy==1.2.12
statsmodels==0.9.0
sympy==1.1.1
tensorboard==1.15.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
tensorflow-tensorboard==1.5.1
termcolor==1.1.0
terminado==0.8.1
testpath==0.3.1
Theano==1.0.3
toolz==0.9.0
tornado==5.1
traitlets==4.3.2
traittypes==0.2.1
Twisted==18.7.0
urllib3==1.23
vincent==0.4.4
wcwidth==0.1.7
webencodings==0.5
Werkzeug==0.14.1
widgetsnbextension==3.6.0
wrapt==1.14.0
xarray==0.16.2
xlrd==1.1.0
zstandard==0.17.0

I tried pip freeze earlier. It has dozens of packages, and I suppose most of them are not required for the Week 3 project. Some of the packages are also difficult to find. I guess most of them work with Python 3.7, but quite a few of them still have issues building their wheels.

Hello Bemnet,

To run this week’s assignment in your local Jupyter notebook, you need to download all the files related to the assignment.

Click Lab Files, then Download all files, and place that folder in the same directory your Jupyter notebook reads from, so the notebook can run all the cells correctly.

But I have a question: why are you trying to run Coursera assignments in your local Jupyter notebook?

Regards
DP

So start with the most obvious, such as ensuring your tensorflow, python, numpy, etc. are present at compatible versions. Then look at the explicit imports in the notebook, see what still breaks, and add those packages as you go; a hand-trimmed requirements file built that way is sketched below.
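
As an illustration only - which packages the Week 3 notebook actually imports is an assumption you would confirm against its import cells - the versions here are simply copied from the freeze listing earlier in the thread:

tensorflow==1.15.0
Keras==2.3.1
h5py==2.10.0
numpy==1.18.1
pandas==0.25.3
nibabel==2.5.0
matplotlib==3.1.2
seaborn==0.9.0
scikit-learn==0.22.1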

Also, @Deepti_Prasad has it right that you want any helper .py files and data replicated locally in the correct directory structure, along the lines sketched below.
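
Something like this, where every file name is only illustrative - the actual notebook, helper, and data names come from the Lab Files download:

week3-local/
    C3W3_Assignment.ipynb     # hypothetical notebook name
    util.py                   # helper module(s) from the Lab Files download
    data/                     # dataset placed on the path the notebook expects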

Many of the exercises run well locally, and it can be useful to have them running there: for example, if you want to experiment with network architectures or hyperparameters, or to keep access to a running version after you complete the Specialization(s). It also forces one to learn how to deal with the flux of the distributed open-source composite application model, where version dependencies and (in)compatibilities always creep in sooner or later.

Some of the exercises, especially in the Medicine Specialization, have some rather large data sets that I found problematic to download and host locally. Maybe another network-based solution that offers virtual storage would be useful.

Hello, I have already completed the assignment and downloaded the necessary files to my local machine. I was hoping to train the model on my machine with a large dataset, as suggested in week 3, assignment point 4.1. However, setting up the environment was difficult.

I also want to experiment with the code a lot on my PC because that helps me learn.

Thanks for the suggestion. I am trying to do that, but it is taking a lot of time just to set up the environment. I have the dataset on my PC and all the helper files in the same folder.

In that case, are you telling me you are using a dataset other than the one this notebook assignment used? Can I know where you are getting stuck, and whether your Jupyter notebook’s root-directory path matches the location where your notebook and files are?

Thanks for asking. I’m using a larger Decathlon brain tumor dataset and want to test the Week 3 method with the BraTS 2020, 2019, and 2018 datasets. But I’m having trouble installing some packages, despite using the requirements.txt from pip freeze.

This is the screenshot for MarkupSafe==1.0. Other packages also had the same issue.

I also tried pip install --use-pep517 MarkupSafe==1.0
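
In case it helps anyone hitting the same wall, the two things I am considering trying next - neither verified on my setup yet - are pinning an older setuptools before the build, or moving to a slightly newer MarkupSafe that Jinja2 2.10 should still accept:

# option 1: build the old release with an older setuptools (old MarkupSafe imports setuptools.Feature, which later setuptools removed)
pip install "setuptools<46"
pip install --no-build-isolation MarkupSafe==1.0
# option 2 (assumption): a newer MarkupSafe with prebuilt wheels that still satisfies Jinja2==2.10
pip install MarkupSafe==1.1.1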

It looks like you are using Anaconda. I ran into trouble mixing packages installed by Anaconda or conda with other packages installed by pip. Have you tried letting Anaconda help you find and install compatible versions of the components you want or need?

From the Anaconda pages …

“You can download other packages using the pip install command that is installed with Anaconda. Pip packages provide many of the features of conda packages and in some cases they can work together. However, the preference should be to install the conda package if it is available.”

My emphasis added above
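
For example, you could let conda tell you which versions it can actually provide and let it resolve a compatible pair; the pins below are just taken from the freeze listing earlier in the thread:

conda search markupsafe                  # see which versions your channels offer
conda install markupsafe jinja2=2.10     # let conda pick a MarkupSafe that works with Jinja2 2.10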

This screenshot clearly shows that the issue is not with pip but with the package metadata; installing with pip install will not resolve it. One also needs to know whether your added, larger dataset is compatible with the test files of the notebook assignment; otherwise you will keep getting the metadata error.

Yes, you are right, the issue is not with pip here, but somewhere before this it recommended that I use "pip install --use-pep517".

Thank you for the suggestion; I will try to create a new environment with conda. But most of the time, older packages are difficult to find using conda.

Can I know some details of the added dataset? Does it follow the same format as the assignment’s test and utils data files? Did you verify this part?

For instance, the metadata describes the assignment notebook that we work with, so adding your own data to the same notebook alone will not be enough; you will also need to make changes in the metadata files.

Sure. I’m using a larger Decathlon brain tumor dataset, which is the same dataset the assignment uses, and I want to test the Week 3 method with the BraTS 2020, 2019, and 2018 datasets. The MICCAI BraTS 2018, 2019, and 2020 datasets are multimodal MRI datasets containing a) native T1, b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid-Attenuated Inversion Recovery (FLAIR) volumes. I think the Decathlon dataset only has the native T1 MRI volumes. I must edit some things before I use the code directly.
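
For reference, a single BraTS 2020 training case typically looks something like this on disk; the exact folder and file names vary by release year, so treat this layout as an assumption to verify against your own download:

BraTS20_Training_001/
    BraTS20_Training_001_t1.nii.gz       # native T1
    BraTS20_Training_001_t1ce.nii.gz     # post-contrast T1 (T1Gd)
    BraTS20_Training_001_t2.nii.gz       # T2
    BraTS20_Training_001_flair.nii.gz    # FLAIR
    BraTS20_Training_001_seg.nii.gz      # ground-truth segmentation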

Yes, now you’ve got the gist.
I will also have a look at the metadata, but I cannot promise when I can get back to you, as I am busy with some work.