Tips to work on a local environment

While volunteering as a mentor, I have seen many learners who want to use their local environment for the assignments. As I’m leaving this community, I’m summarizing some tips for running them locally, as a reference.

  1. Introduction

Basically, you are better off using the same package versions as the Coursera platform (check with “pip list”) if you do not want to spend extra time on problem determination. The majority of assignments use TensorFlow 2.3; one assignment uses TensorFlow 2.4, namely W4A1 Transformer in Course 5, Sequence Models, simply because MultiHeadAttention is not supported by TensorFlow 2.3.

So, if you want to create a single conda environment for all assignments, use TensorFlow 2.4 and Python 3.7. (One assignment uses model_from_json, which has a dependency on the Python version; it does not work on Python 3.8 or newer.)
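
As a quick sanity check before starting that assignment, you can verify the interpreter version programmatically. This is just a minimal sketch; the helper name is my own, not part of any assignment.

```python
import sys

def python_ok_for_model_from_json(version_info=sys.version_info):
    """model_from_json reportedly loads the saved model only on Python < 3.8."""
    return tuple(version_info[:2]) < (3, 8)

# e.g. returns True on Python 3.7.x and False on 3.8 or newer
```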

There are many packages for which you need to specify an older version to install. But this is not a big challenge, since you just need to check the versions on the Coursera platform.
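
If you go the pinned-versions route, a small helper can compare your local installs against the versions you read off the platform’s “pip list”. A sketch only; the target versions you pass in must come from the platform yourself (the example packages below are just placeholders):

```python
from importlib.metadata import version, PackageNotFoundError

def check_versions(expected):
    """Return {package: (installed, wanted)} for each mismatched or missing package."""
    mismatches = {}
    for pkg, want in expected.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            have = None  # not installed locally
        if have != want:
            mismatches[pkg] = (have, want)
    return mismatches

# example: check_versions({"tensorflow": "2.4.0", "music21": "6.5.0"})
```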

But I believe what you really want is to use the latest TensorFlow, not a very old one. :slight_smile:

The real challenge is to make the assignments run on the latest version of TensorFlow, especially for Mac users, who can only work with TensorFlow 2.5 or newer. Here are my tips. My main version of TensorFlow is tensorflow-macos 2.9.2; I’m not on M1, but on an Intel machine with an AMD GPU in a Razer Core X.

  1. Neural Networks and Deep Learning
  2. Improving Deep Neural Networks
  3. Structuring Machine Learning Projects

All assignments work without any modifications.

  4. Convolutional Neural Networks

> W1A1 - Convolutional Neural Networks: Step by Step

This should work with no modifications.

> W1A2 - Convolutional Neural Networks: Application

You may need to install some additional packages like scipy and pandas, but that is a normal package installation process; there should be no challenges.

> W2A1 - Residual Networks

The behavior of “glorot_uniform” differs across TensorFlow versions, so you will most likely get an assertion error at Exercise 2. To run successfully, in the first cell that loads packages, remove glorot_uniform from the import list and add this instead.

from tensorflow.python.keras.initializers.initializers_v2 import GlorotUniform as glorot_uniform

> W2A2 - Transfer Learning with MobileNetV2

Mac users may see a GPU-related error when displaying the 9 variations of the cute animal. In that case, try this.

# build the augmentation layers on the CPU to avoid the GPU error
with tf.device('/CPU:0'):
    data_augmentation = data_augmenter()

for image, _ in train_dataset.take(1):
    plt.figure(figsize=(10, 10))

The annoying thing is that layer names differ across TensorFlow versions. In this exercise, the name appears in the test cell, so it is easy to modify, like this:

TF 2.3 (original):
`        ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],`
TF 2.9:
`        ['TFOpLambda', (None, 160, 160, 3), 0],`
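
If you prefer not to edit every affected test cell, one hedged workaround is to normalize the renamed wrapper-layer classes before comparing. The alias table below is my own assumption and only covers the rename mentioned in this post:

```python
# map the old TF 2.3 layer class name to its TF 2.9 equivalent (assumption,
# based only on the rename observed in these assignments)
ALIASES = {"TensorFlowOpLayer": "TFOpLambda"}

def normalize(name):
    return ALIASES.get(name, name)

def layer_names_match(expected, actual):
    """Compare two lists of layer class names, ignoring the version rename."""
    return [normalize(n) for n in expected] == [normalize(n) for n in actual]
```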

> W3A1 - Autonomous Driving - Car Detection
> W3A2 - Image Segmentation with U-Net

Both should run with no modifications.

> W4A1 - Face Recognition

loaded_model_json =

If you get “bad marshal data (unknown type code)” with this, you need to use a different version of Python; Python 3.7 should work. (I’m actually using 3.8, but with a re-saved version of the weights that I created in a 3.7 environment. Using Python 3.7 is more straightforward.)

You may also see the following error at the first unit test.

AttributeError: module 'tensorflow' has no attribute 'python'

This is caused by TensorFlow itself: newer versions cut off the `tensorflow.python` attribute reference once TensorFlow has started. So we need to import the class directly beforehand, and also change the unit test.

In the package-import cell, add one line of code.

from tensorflow.python.framework.ops import EagerTensor

And, a unit test needs to be changed as follows.

#assert type(loss) == tf.python.framework.ops.EagerTensor, "Use tensorflow functions"
assert type(loss) == EagerTensor, "Use tensorflow functions"

You may also encounter assertion errors at verify(). This is caused by a difference in the Pillow (PIL) version; please downgrade “pillow” to 7.1.2 (pip install pillow==7.1.2).

> W4A2 - Deep Learning & Art: Neural Style Transfer

This should be OK with no modifications.

  5. Sequence Models

> W1A1 - Building your Recurrent Neural Network - Step by Step
> W1A2 - Character level language model - Dinosaurus Island

Those should work as is.

> W1A3 - Improvise a Jazz Solo with an LSTM Network

First, you need to use an older version of “music21”, i.e., 6.5: pip install music21==6.5.
The annoying thing is, again, a layer-name change: ['TensorFlowOpLayer', [(None, 90)], 0] is now ['SlicingOpLambda', (None, 90), 0]. (The parameter format is also slightly different…)
I modified the test for 2.9, but just comparing the summaries by eye may be enough. :slight_smile:

> W2A1 - Operations on Word Vectors
> W2A2 - Emojify
> W3A1 - Neural Machine Translation

Those should work with no modifications.

> W3A2 - Trigger Word Detection

This assignment loads “wav” files for activates, negatives, and backgrounds. It gets the list of files with os.listdir, e.g. os.listdir('./raw_data/backgrounds'), but the order of that list is not guaranteed.
On the other hand, some unit tests hard-code indices based on the order seen on the Coursera platform, so some tests may fail because of this.
In that case, the following small piece of code may help. It recreates the exact same list order as the Coursera platform. (Of course, this may change in the future with upgrades on the platform side.)

# re-order activates and negatives to match the Coursera Platform
# (the lists below are the clip lengths in the platform's order)
activate_list = [916,1579,731,909,2392,655,721,1741,725,668]
negative_list = [552,358,579,407,541,355,600,360,655,1337]

# place each clip at the index where its length appears in the reference list
activates_reord = [0]*len(activates)
for activate in activates:
    activates_reord[activate_list.index(len(activate))] = activate
activates = activates_reord
print("activate[0] len may be around 1000, since an `activate` audio clip is usually around 1 second (but varies a lot) \n" + str(len(activates[0])),"\n")

negatives_reord = [0]*len(negatives)
for negative in negatives:
    negatives_reord[negative_list.index(len(negative))] = negative
negatives = negatives_reord

# the two background clips are simply swapped
backgrounds[0],backgrounds[1] = backgrounds[1],backgrounds[0]
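
The two reorder loops above can also be written as a single sorted() call. This is equivalent as long as the clip lengths are unique, which they are in the lists above:

```python
def reorder_by_length(clips, length_order):
    """Reorder clips so that len(clips[i]) == length_order[i] (lengths must be unique)."""
    return sorted(clips, key=lambda clip: length_order.index(len(clip)))

# e.g. activates = reorder_by_length(activates, activate_list)
```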

> W4A1 - Transformer

The output of MultiHeadAttention is slightly different from v2.4, which causes an assertion error.
If you see this error, it may be fine as long as the output looks similar. Or you may want to loosen the tolerances slightly, from the default (rtol=1e-5, atol=1e-8) to (rtol=1e-5, atol=1e-7); the difference is at that level.
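
Concretely, the comparison boils down to something like this numpy sketch (not the grader’s actual code; the function name is mine):

```python
import numpy as np

def close_enough(expected, actual, rtol=1e-5, atol=1e-7):
    """np.allclose with a slightly looser atol than numpy's 1e-8 default."""
    return np.allclose(expected, actual, rtol=rtol, atol=atol)

# a 5e-8 absolute difference near zero fails atol=1e-8 but passes atol=1e-7
```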


> W4A3_UGL - Transformer Network Application: Question Answering

2.1 - TensorFlow implementation

train_ds.set_format(type='tf', columns=columns_to_return)
train_features = {x: train_ds[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ['input_ids', 'attention_mask']}

When you run this cell, you may receive an error message like:
'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'

The code itself is slightly odd, since the data set is already a Tensor after the previous line.
I think this code was originally written for a RaggedTensor, whose rows have different lengths and need to be converted to the desired “fixed” length. In that sense, the remaining role of this code is to pad the rows of the data set to length tokenizer.model_max_length.

Then, if we look at both ‘input_ids’ and ‘attention_mask’, they are as follows.

<tf.Tensor: shape=(1000, 26), dtype=int64, numpy=
array([[ 101, 1996, 2436, ..., 3829, 1029,  102],
       [ 101, 1996, 3829, ..., 1997, 1029,  102],
       [ 101, 1996, 3871, ..., 3871, 1029,  102],
       [ 101, 1996, 6797, ..., 5010, 1029,  102],
       [ 101, 1996, 6797, ..., 5723, 1029,  102],
       [ 101, 1996, 3829, ..., 1997, 1029,  102]])>
<tf.Tensor: shape=(1000, 26), dtype=int64, numpy=
array([[1, 1, 1, ..., 1, 1, 1],
       [1, 1, 1, ..., 1, 1, 1],
       [1, 1, 1, ..., 1, 1, 1],
       [1, 1, 1, ..., 1, 1, 1],
       [1, 1, 1, ..., 1, 1, 1],
       [1, 1, 1, ..., 1, 1, 1]])>

Both are fixed-length data sets with length 26. (And you can also confirm that both are already Tensors.)
What we should do is simply pad with zeros up to tokenizer.model_max_length.

Here is an example.

#train_features = {x: train_ds[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ['input_ids', 'attention_mask']}
# pad each row on the right with zeros up to tokenizer.model_max_length
padding = tf.constant([[0,0],[0,tokenizer.model_max_length-train_ds['input_ids'].shape[1]]])
train_features = {x: tf.pad(train_ds[x], padding) for x in ['input_ids', 'attention_mask']}
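
To see what tf.pad is doing here without needing TensorFlow installed, the same right-padding can be sketched in plain numpy. The (1000, 26) shape is from the printout above; the target length depends on the tokenizer, so treat it as a parameter:

```python
import numpy as np

def pad_rows(batch, max_len):
    """Right-pad each row of a 2-D batch with zeros until it has max_len columns."""
    batch = np.asarray(batch)
    extra = max_len - batch.shape[1]
    return np.pad(batch, [(0, 0), (0, extra)])

# e.g. a (1000, 26) batch becomes (1000, max_len), zeros appended on the right
```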

With this, we can create the same train_features as the original code does on the Coursera platform.

On the other hand, there is a big question: why do we need padding at all? Usually, padding is handled inside the Transformer, so a simple transformation without additional padding may work.

train_features = {x: train_ds[x] for x in ['input_ids', 'attention_mask']}

In a quick test, this seems to be OK. But as that was not a full test, it is safer to use the first approach for further testing.

That’s all. Of course, if there are any updates to the components or assignments, the required actions may differ. But the above is what I have as of August 2022.

Hope this helps.

Goodbye, community!


Hi anon57530071,

A belated thank you for this useful post!

Interesting pointers, which I came across while searching for a solution to my error. Thank you for your post.