Program Assignments in Local Environments

Once we have completed the course, we need to download the programming assignments to our local environments so that we can continue to review them later. I would appreciate help with the following:

(i) Some of the programming assignments have been updated, and my previous programs no longer run in the current course environment. How can I quickly find out where the updates are and make the necessary changes?

(ii) I have Python 3.9.12 and the latest version of TensorFlow (2.9.1) in my local environment. Can all the programming assignments for this course run in this environment? I realize that the TensorFlow version used in the course is an earlier one. If not, what changes do I need to make so that they will run locally?

Thank you.

Cindy

There is no quick way to find out which assignments have been updated. You will need to open each one and that’s the point at which you find out about the updates. When you find one that’s updated, you need to handle each case appropriately. There are two kinds of updates: “forced” updates and “non-forced”. In the forced case, your current notebook is moved aside and given a name that interpolates the date and time and the new notebook is the one opened by the “Work in Browser” link. In the non-forced case, you have to do the “Get a Fresh Copy” procedure documented on the DLS FAQ Thread to get the new version. See the other topic on the FAQ Thread titled “Help! All my work disappeared!” to find out how to deal with the forced update case.

You’re correct that the versions of TF, Python and all the various packages and libraries date from the last major update in April of 2021. There are no official instructions for creating your own environment with matching versions, but here is a thread that will get you started down that path.

Some of the assignments may work in a more current environment, but there is no guarantee. The world of TF and python packages is pretty dynamic and APIs do change their behavior in incompatible ways sometimes. If you don’t want to do the Anaconda method documented on the thread that I linked, then your alternative is just to debug every case in which things don’t work in the current environment. But then the next time you pull new versions of packages, you may well have to do that all over again.
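To make the Anaconda route concrete, here is a minimal sketch (the version numbers are the ones Cindy reports later in this thread for the course environment; the env name “dls” is just a placeholder):

```shell
# Create and activate an environment pinned to the course's versions
# (Python 3.7, TF 2.3.0, numpy 1.18.4, scipy 1.4.1 -- adjust if the
# versions you extract from the course notebooks differ).
conda create -n dls python=3.7
conda activate dls
python -m pip install tensorflow==2.3.0 numpy==1.18.4 scipy==1.4.1
```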

Paul,

Thank you for the fast response. I did look into the thread you provided. I do use Anaconda at the moment.

I will let you know if I see any “dragons” in the deep water. Will need to learn how to slay the dragons. :slight_smile:

If you already know how to use Anaconda, you should be in good shape! Then it’s just a question of following the instructions for extracting the version number information.
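If it helps, here’s the kind of cell I mean for extracting the versions: run something like this in the Coursera notebook to dump the versions you need to match locally (the package list is just a guess; add whatever else the assignments import):

```python
import importlib
import sys

def report_versions(packages=("tensorflow", "numpy", "scipy")):
    """Print interpreter and package versions in the current environment."""
    versions = {"python": sys.version.split()[0]}
    for name in packages:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = "not installed"
    for name, version in versions.items():
        print(f"{name}: {version}")
    return versions

report_versions()
```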

I have not personally tried to run the notebooks locally, so I can’t directly help. If you learn any good tips for how to do this, please let us know. You could add your info to that other thread or create a new one to help anyone else who wants to go this route. Thanks!

Cheers,
Paul

Paul,

Thank you for guiding me to the great resources on how to run the programming assignments in a local environment.

It seems there will be challenges. I may need to come back and ask for more help.

Will be happy to share whatever I learn in this process.

Regards,

Cindy

Help appreciated.
Course 4 Week 2 Assignment 2
Coursera Environment: Python 3.7.0 TF 2.3.0 Numpy 1.18.4 Scipy 1.4.1. Programs run.
My Anaconda Environment: Python 3.9.12 TF 2.9.1 Numpy 1.23.5 Scipy 1.7.3. Two tests fail.

Error 1:
from test_utils import summary, comparator

alpaca_summary = [['InputLayer', [(None, 160, 160, 3)], 0],
                  ['Sequential', (None, 160, 160, 3), 0],
                  ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
                  ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
                  ['Functional', (None, 5, 5, 1280), 2257984],
                  ['GlobalAveragePooling2D', (None, 1280), 0],
                  ['Dropout', (None, 1280), 0, 0.2],
                  ['Dense', (None, 1), 1281, 'linear']]  # linear is the default activation

comparator(summary(model2), alpaca_summary)

for layer in summary(model2):
    print(layer)

Test failed
Expected value

['TensorFlowOpLayer', [(None, 160, 160, 3)], 0]

does not match the input value:

['TFOpLambda', (None, 160, 160, 3), 0]

AssertionError                            Traceback (most recent call last)
Input In [17], in <cell line: 12>()
      1 from test_utils import summary, comparator
      3 alpaca_summary = [['InputLayer', [(None, 160, 160, 3)], 0],
      4                   ['Sequential', (None, 160, 160, 3), 0],
      5                   ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
    (...)
      9                   ['Dropout', (None, 1280), 0, 0.2],
     10                   ['Dense', (None, 1), 1281, 'linear']]  # linear is the default activation
---> 12 comparator(summary(model2), alpaca_summary)
     14 for layer in summary(model2):
     15     print(layer)

File ~\DeepLearningAI\Course4\Week2\W2A2\test_utils.py:23, in comparator(learner, instructor)
     18 if tuple(a) != tuple(b):
     19     print(colored("Test failed", attrs=['bold']),
     20           "\n Expected value \n\n", colored(f"{b}", "green"),
     21           "\n\n does not match the input value: \n\n",
     22           colored(f"{a}", "red"))
---> 23 raise AssertionError("Error in test")
     24 print(colored("All tests passed!", "green"))

Help appreciated.
Course 4 Week 2 Assignment 2 (same environments as in the previous post).

Error 2:

assert type(loss_function) == tf.python.keras.losses.BinaryCrossentropy, "Not the correct layer"


AttributeError                            Traceback (most recent call last)
Input In [26], in <cell line: 1>()
----> 1 assert type(loss_function) == tf.python.keras.losses.BinaryCrossentropy, "Not the correct layer"
      2 assert loss_function.from_logits, "Use from_logits=True"
      3 assert type(optimizer) == tf.keras.optimizers.Adam, "This is not an Adam optimizer"

AttributeError: module 'tensorflow' has no attribute 'python'

It looks like they changed the way the classes inherit in TF 2.9.1 versus 2.3.0. Did you try just googling “tf binarycrossentropy” to get the current docs? Here’s what I find. It looks like the API is:

tf.keras.losses.BinaryCrossentropy

So that assertion is based on the old way it worked in 2.3.0 apparently. I checked and that test cell is not immutable, so you can just change that assertion to match the current API.
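To illustrate the kind of edit involved (with toy stand-in classes, so this runs without TensorFlow installed): the assertion just needs to reference the public tf.keras.losses.BinaryCrossentropy path, and an isinstance check is generally more robust to this sort of internal reshuffling than an exact type comparison:

```python
# Toy stand-ins for the real classes (assumption: the public class sits in
# the same hierarchy as whatever internal class the old path pointed at).
class InternalLoss:                            # plays tf.python.keras.losses.BinaryCrossentropy
    pass

class PublicBinaryCrossentropy(InternalLoss):  # plays tf.keras.losses.BinaryCrossentropy
    pass

loss_function = PublicBinaryCrossentropy()

# An exact-type check pinned to one import path breaks when the path moves...
assert type(loss_function) is not InternalLoss
# ...while isinstance keeps passing as long as the class hierarchy survives.
assert isinstance(loss_function, InternalLoss)
assert isinstance(loss_function, PublicBinaryCrossentropy)
```

In the notebook itself, the one-line fix is to compare against tf.keras.losses.BinaryCrossentropy (or, more defensively, use isinstance with that class).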

But stepping back to a higher level here, what is the point of using Anaconda if you’re just going to end up using the current version of everything? I thought the point of Anaconda was that you could create multiple environments, each of which has precisely the right versions of everything? So that you don’t have to debug each case like this.

Of course the other alternative is to debug each incompatible case you find, which is the path you seem to be taking.

If you look at the “expected” value, that TFOpLayer comes right after the Sequential layer that is the data augmentation step. So it must be preprocess_input. That is a Keras function that was imported earlier in the notebook. So they must have changed the way that function works: it used to be implemented with Op layers and now they use Lambda functions. So you have two choices, as I alluded to in my previous reply:

You can either use the same TF 2.3.0 version that the course website uses, or you’ll have to change the way this “comparator” test works to match the new way the imported function is defined in TF 2.9.1. Note that this is just the unit test for that cell: maybe your version actually works fine in terms of training and running the model, but the test is written to be compatible with the old TF.

So what is your real goal here? You can’t submit to the grader from your local environment in any case. What is it that you really want in the longer term? To be able to refer back to the course code and learn from it without continuing to pay the monthly fee? If that’s the goal, then you have the two choices I enumerated earlier:

  1. Figure out how to actually reproduce the course environment so that you can run the code “as is”.
  2. Fix every compatibility issue that you find, even if it means changing the given code, e.g. the unit tests that are failing here.

Of course in the case of option 2), there are some notebooks in which they purposely made the test cells immutable, so that students couldn’t make the mistake of thinking they needed to change the tests instead of fixing their code. That doesn’t seem to be the issue in this particular instance, but something to watch out for. Of course you can find a way to modify that “immutable” attribute of a cell, if the need really arises.
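For completeness, a hedged sketch of flipping that flag: Jupyter marks read-only cells with standard cell metadata ("editable": false, sometimes "deletable": false), so editing the .ipynb JSON directly is one way to do it, assuming that is the mechanism in use. Keep a backup of the notebook first.

```python
import json

def unlock_cells(notebook_path):
    """Remove the read-only metadata flags from every cell of a notebook.

    Assumes the standard Jupyter mechanism ('editable'/'deletable' keys in
    each cell's metadata); edits the .ipynb file in place.
    """
    with open(notebook_path, encoding="utf-8") as f:
        notebook = json.load(f)
    for cell in notebook.get("cells", []):
        metadata = cell.setdefault("metadata", {})
        metadata.pop("editable", None)
        metadata.pop("deletable", None)
    with open(notebook_path, "w", encoding="utf-8") as f:
        json.dump(notebook, f, indent=1)
```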

Paul,

Thank you so much!! This works!! :slight_smile:

Best,

Cindy

Paul,

Thanks for getting back.

This Deep Learning course is comprehensive and a great learning experience. Since I spent so much effort on it, I would like to go back and review the materials, especially the programming assignments I have completed.

I would like to run it in my existing Anaconda environment, making it work with the newer versions of the software I have already installed, without having to keep track of multiple virtual environments.

I will also try setting up a virtual environment with the versions of the software that are currently used in this Coursera course.

I will also try to run it on Google Colab, where I can actually run the programs with GPU.

Will have to figure out these comparator issues. The program runs when I am not using these assertions; however, the result is different.

Model 2 Summary
Model: "model_2"
_________________________________________________________________
Layer (type)                                         Output Shape           Param #
=================================================================
input_10 (InputLayer)                                [(None, 160, 160, 3)]  0
sequential_11 (Sequential)                           (None, 160, 160, 3)    0
tf.math.truediv_3 (TFOpLambda)                       (None, 160, 160, 3)    0
tf.math.subtract_3 (TFOpLambda)                      (None, 160, 160, 3)    0
mobilenetv2_1.00_160 (Functional)                    (None, 5, 5, 1280)     2257984
global_average_pooling2d_5 (GlobalAveragePooling2D)  (None, 1280)           0
dropout_3 (Dropout)                                  (None, 1280)           0
dense_2 (Dense)                                      (None, 1)              1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984

alpaca_summary
[['InputLayer', [(None, 160, 160, 3)], 0],
 ['Sequential', (None, 160, 160, 3), 0],
 ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
 ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
 ['Functional', (None, 5, 5, 1280), 2257984],
 ['GlobalAveragePooling2D', (None, 1280), 0],
 ['Dropout', (None, 1280), 0, 0.2],
 ['Dense', (None, 1), 1281, 'linear']]

Regards,

Cindy

Hi, Cindy.

That’s a great idea to have access to an online platform like Colab so you can bring more hardware power to bear. I don’t have much experience with Colab, but they do support Jupyter Notebooks as their primary UI. Instead of having a local file system in AWS (as on Coursera), you have to put any ancillary files in your Google Drive and then may have to change some pathnames and the like. I’m also not sure that they support Anaconda: you may well be stuck with the current versions of everything, so it probably is a good exercise to solve all the “versionitis” problems, as you are doing.

Note that I don’t think the output you show there is a problem. It’s just showing how the preprocess_input function works: a division followed by a subtraction.
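In other words (a sketch from memory of what MobileNetV2’s preprocess_input does; worth double-checking against the TF docs), those two ops just rescale pixel values from [0, 255] to [-1, 1]:

```python
def preprocess_input_sketch(pixels):
    # The tf.math.truediv followed by tf.math.subtract in the summary:
    # x / 127.5 - 1 maps [0, 255] onto [-1, 1].
    return [value / 127.5 - 1.0 for value in pixels]

print(preprocess_input_sketch([0, 127.5, 255]))  # -> [-1.0, 0.0, 1.0]
```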

Cheers,
Paul

There are costs and benefits to both approaches. If you decide to port all the programming assignments to current software releases, you end up tinkering with code that doesn’t materially advance your understanding of machine learning just to get it to run in the new env. And the results may not match the ‘expected output’, which means you can’t always validate your changes.

I ended up using conda to build several virtual environments tailored for the NLP specialization, the AI for Medicine specialization, and the TensorFlow Advanced Techniques because they used different packages and some had specific levels they had to run at. Personally I found it much easier to build an env than to go through code I didn’t write and don’t want to maintain and migrate it.

ai_curious,

Thank you for the feedback. Those are good points.

Is it troublesome to maintain multiple virtual environments?
Does the code run well within a virtual environment, for example if Coursera runs it on Google Colab or AWS and we are running it on a local laptop in an Anaconda environment?

Thank you.

Cindy

Help appreciated.
Course 4 Week 4 Assignment 1
Coursera Environment: Python 3.7.0 TF 2.3.0 Numpy 1.18.4 Scipy 1.4.1. Programs run.
My Anaconda Environment: Python 3.9.12 TF 2.9.1 Numpy 1.23.5 Scipy 1.7.3.

from tensorflow.keras.models import model_from_json

json_file = open('keras-facenet-h5/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
model.load_weights('keras-facenet-h5/model.h5')

Error:

ValueError                                Traceback (most recent call last)
Input In [3], in <cell line: 6>()
      4 loaded_model_json = json_file.read()
      5 json_file.close()
----> 6 model = model_from_json(loaded_model_json)
      7 model.load_weights('keras-facenet-h5/model.h5')

File ~\anaconda3\lib\site-packages\keras\saving\model_config.py:102, in model_from_json(json_string, custom_objects)
     82 """Parses a JSON model configuration string and returns a model instance.
     83
     84 Usage:
    (...)
     99   A Keras model instance (uncompiled).
    100 """
    101 from keras.layers import deserialize_from_json  # pylint: disable=g-import-not-at-top
--> 102 return deserialize_from_json(json_string, custom_objects=custom_objects)

File ~\anaconda3\lib\site-packages\keras\layers\serialization.py:226, in deserialize_from_json(json_string, custom_objects)
    221 populate_deserializable_objects()
    222 config = json_utils.decode_and_deserialize(
    223     json_string,
    224     module_objects=LOCAL.ALL_OBJECTS,
    225     custom_objects=custom_objects)
--> 226 return deserialize(config, custom_objects)

File ~\anaconda3\lib\site-packages\keras\layers\serialization.py:205, in deserialize(config, custom_objects)
    168 """Instantiates a layer from a config dictionary.
    169
    170 Args:
    (...)
    203 """
    204 populate_deserializable_objects()
--> 205 return generic_utils.deserialize_keras_object(
    206     config,
    207     module_objects=LOCAL.ALL_OBJECTS,
    208     custom_objects=custom_objects,
    209     printable_module_name='layer')

File ~\anaconda3\lib\site-packages\keras\utils\generic_utils.py:679, in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    676 custom_objects = custom_objects or {}
    678 if 'custom_objects' in arg_spec.args:
--> 679     deserialized_obj = cls.from_config(
    680         cls_config,
    681         custom_objects=dict(
    682             list(_GLOBAL_CUSTOM_OBJECTS.items()) +
    683             list(custom_objects.items())))
    684 else:
    685     with CustomObjectScope(custom_objects):

File ~\anaconda3\lib\site-packages\keras\engine\training.py:2720, in Model.from_config(cls, config, custom_objects)
   2716 functional_model_keys = [
   2717     'name', 'layers', 'input_layers', 'output_layers'
   2718 ]
   2719 if all(key in config for key in functional_model_keys):
-> 2720     inputs, outputs, layers = functional.reconstruct_from_config(
   2721         config, custom_objects)
   2722     model = cls(inputs=inputs, outputs=outputs, name=config.get('name'))
   2723     functional.connect_ancillary_layers(model, layers)

File ~\anaconda3\lib\site-packages\keras\engine\functional.py:1300, in reconstruct_from_config(config, custom_objects, created_layers)
   1298 # First, we create all layers and enqueue nodes to be processed
   1299 for layer_data in config['layers']:
-> 1300     process_layer(layer_data)
   1301 # Then we process nodes in order of layer depth.
   1302 # Nodes that cannot yet be processed (if the inbound node
   1303 # does not yet exist) are re-enqueued, and the process
   1304 # is repeated until all nodes are processed.
   1305 while unprocessed_nodes:

File ~\anaconda3\lib\site-packages\keras\engine\functional.py:1282, in reconstruct_from_config.<locals>.process_layer(layer_data)
   1278 else:
   1279     # Instantiate layer.
   1280     from keras.layers import deserialize as deserialize_layer  # pylint: disable=g-import-not-at-top
-> 1282     layer = deserialize_layer(layer_data, custom_objects=custom_objects)
   1283 created_layers[layer_name] = layer
   1285 node_count_by_layer[layer] = int(_should_skip_first_node(layer))

File ~\anaconda3\lib\site-packages\keras\layers\serialization.py:205, in deserialize(config, custom_objects)
    168 """Instantiates a layer from a config dictionary.
    169
    170 Args:
    (...)
    203 """
    204 populate_deserializable_objects()
--> 205 return generic_utils.deserialize_keras_object(
    206     config,
    207     module_objects=LOCAL.ALL_OBJECTS,
    208     custom_objects=custom_objects,
    209     printable_module_name='layer')

File ~\anaconda3\lib\site-packages\keras\utils\generic_utils.py:679, in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    676 custom_objects = custom_objects or {}
    678 if 'custom_objects' in arg_spec.args:
--> 679     deserialized_obj = cls.from_config(
    680         cls_config,
    681         custom_objects=dict(
    682             list(_GLOBAL_CUSTOM_OBJECTS.items()) +
    683             list(custom_objects.items())))
    684 else:
    685     with CustomObjectScope(custom_objects):

File ~\anaconda3\lib\site-packages\keras\layers\core\lambda_layer.py:303, in Lambda.from_config(cls, config, custom_objects)
    300 @classmethod
    301 def from_config(cls, config, custom_objects=None):
    302     config = config.copy()
--> 303     function = cls._parse_function_from_config(config, custom_objects,
    304                                                'function', 'module',
    305                                                'function_type')
    307     output_shape = cls._parse_function_from_config(config, custom_objects,
    308                                                    'output_shape',
    309                                                    'output_shape_module',
    310                                                    'output_shape_type')
    311     if 'mask' in config:

File ~\anaconda3\lib\site-packages\keras\layers\core\lambda_layer.py:358, in Lambda._parse_function_from_config(cls, config, custom_objects, func_attr_name, module_attr_name, func_type_attr_name)
    352     function = generic_utils.deserialize_keras_object(
    353         config[func_attr_name],
    354         custom_objects=custom_objects,
    355         printable_module_name='function in Lambda layer')
    356 elif function_type == 'lambda':
    357     # Unsafe deserialization from bytecode
--> 358     function = generic_utils.func_load(config[func_attr_name], globs=globs)
    359 elif function_type == 'raw':
    360     function = config[func_attr_name]

File ~\anaconda3\lib\site-packages\keras\utils\generic_utils.py:793, in func_load(code, defaults, closure, globs)
    791 except (UnicodeEncodeError, binascii.Error):
    792     raw_code = code.encode('raw_unicode_escape')
--> 793 code = marshal.loads(raw_code)
    794 if globs is None:
    795     globs = globals()

ValueError: bad marshal data (unknown type code)
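From some digging into that last frame, a possible cause: the Lambda layer in model.json stores its function as marshaled Python bytecode, and the marshal format is specific to the Python version that wrote it, so bytecode saved under the course's Python 3.7 can fail to load under Python 3.9 with exactly this error. A quick stdlib demonstration of the exception class (the bytes here are deliberately invalid):

```python
import marshal

# marshal data is version-dependent; feeding it bytes it does not
# recognize raises the same ValueError seen in the traceback above.
try:
    marshal.loads(b"\xff\x00\x00\x00")
except ValueError as err:
    print(err)  # e.g. "bad marshal data (unknown type code)"
```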

Paul & ai_curious,

Thank you both so much for pointing me to tips on creating a virtual environment for the programming assignments.

I have successfully created a conda virtual environment where most programs run from Course 1 to Course 4.

For Course 5, sequence models and transformers, I plan to create a separate virtual environment for each. The sequence models course uses modules for sound and music, such as pyaudio, pydub, pygame, mido, etc. Conda was not able to install some of the required modules, so I have not been able to get most of these programs running in the conda virtual environment yet.

I would appreciate any tips on what specific ‘sound’ or ‘language’ packages might be needed to set up a conda virtual environment in which these programs run correctly.

Thank you.

Cindy


What OS are you running locally? What errors is conda giving when you try to find / install the required packages?

ai_curious,

Thank you for getting back.

The programs are run on Windows 11. I will write down the errors and post them.

Thank you.

Cindy

ai_curious

For some modules, when I try to install the version that matches the programming assignments, conda reports that the package is not found, so I just installed whatever version conda has:

conda config --add channels conda-forge
conda install emoji=1.2.0   (package not found)
conda install emoji         (2.2.0 installed)

  • emoji 1.2.0 required (2.2.0 installed)
  • pyaudio 0.2.12 required (0.2.11 installed)
  • mido 1.2.9 required (1.2.10 installed)
  • music21 6.5.0 required (7.3.3 installed)
  • pydub 0.24.0 required (0.25.1 installed)

Thank you.

Cindy

ai_curious

For pygame:

conda install pygame=2.1.2

(Package not found)

conda install -c cogsci pygame=2.1.2

Error message (in red) below:

Found conflicts! Looking for incompatible packages.

UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment:

Specifications:

- pygame -> python[version='3.10.*|3.9.*|3.11.*|3.8.*']
- pygame -> python[version='>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.11,<3.12.0a0|>=3.8,<3.9.0a0']

Your python: python=3.7.6

If python is on the left-most side of the chain, that’s the version you’ve asked for.

When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Note that conda will not change your python version to a different minor version unless you explicitly specify that.

So, I finally used

pip install pygame==2.1.2

which installed successfully. However, this may not have installed it within the virtual environment.

So, I am looking into using pip install within my virtual environment.
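One way to do that (a sketch; “dls” is a placeholder env name): invoke pip through the environment’s own Python, so the package lands inside that environment rather than the base install:

```shell
conda activate dls
python -m pip install pygame==2.1.2
# Verify where it landed and which version was installed:
python -c "import pygame, sys; print(sys.prefix, pygame.__version__)"
```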

Thank you

Cindy