Corrections needed for lab C3_W4_Lab_4_GradCam

The fourth lab in week 4 of the “Advanced Computer Vision with TensorFlow” course, titled “GradCam (Lab #4)”, should be edited in the ways described below.

The first edit is minor: some import statements need to be deleted. The following lines contain unused imports that result in a “ModuleNotFoundError: No module named ‘imgaug’” error when the notebook is run:

import imgaug as ia
from imgaug import augmenters as iaa

The second minor edit is in the final documentation block, which states “If you scroll all the way down to see the outputs of the final conv layer, you’ll see that there are very few active features and these are mostly located in the face of the cat.” This sentence assumes that the image under examination is a cat. However, the activations = show_sample(idx=None) line that chooses the sample image in a preceding code block picks a random image by default, so the sample may just as well be a dog.
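One way to reconcile the text with the code, assuming show_sample accepts an explicit index into the test samples (as its idx argument suggests), would be to pin the sample instead of drawing it at random; the index below is only a placeholder and would need to point at a cat image. Alternatively, the markdown could simply be reworded so it does not assume the sample is a cat.

  # pass a fixed index instead of idx=None so the displayed sample matches
  # the markdown description; 0 is an arbitrary placeholder index
  activations = show_sample(idx=0)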

The third edit is important; without it, the lab does not work as intended. The following lines, which freeze the parameters of all VGG16 layers before the final VGG16 block, need to be changed:

  # freeze the earlier layers
  for layer in base_model.layers[:-4]:
      layer.trainable=False

They should be changed so that all VGG16 parameters are frozen and only the final custom dense layer is trained:

  # freeze the earlier layers
  for layer in base_model.layers:
      layer.trainable=False

If that change is not made, the 3 epochs of training performed later in the lab are not sufficient to yield an accurate model.
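For context, here is a minimal sketch of where the frozen base model would sit in the model-building code. It assumes the lab’s overall structure (a VGG16 base with a global-average-pooling layer and a 2-way softmax head); the input shape, optimizer settings and the build_model name are my own assumptions for illustration, not necessarily the lab’s exact code:

  import tensorflow as tf
  from tensorflow.keras import layers, Model
  from tensorflow.keras.applications import vgg16

  def build_model():
      # load VGG16 without its classifier head, using ImageNet weights
      base_model = vgg16.VGG16(input_shape=(224, 224, 3),
                               weights='imagenet',
                               include_top=False)

      # freeze ALL VGG16 layers so only the new head is trained
      for layer in base_model.layers:
          layer.trainable = False

      # small custom head: global average pooling + softmax over the 2 classes
      x = layers.GlobalAveragePooling2D()(base_model.output)
      output = layers.Dense(2, activation='softmax')(x)

      model = Model(inputs=base_model.input, outputs=output)
      model.compile(loss='categorical_crossentropy',
                    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9),
                    metrics=['accuracy'])
      return model

With the pretrained convolutional features left intact, 3 epochs are enough for the single dense layer to converge.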

For example, I got the following results from training with the unedited code:

Epoch 1/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 175s 266ms/step - accuracy: 0.5108 - loss: 0.9373 - val_accuracy: 0.5185 - val_loss: 0.6930
Epoch 2/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 125s 204ms/step - accuracy: 0.5073 - loss: 0.6931 - val_accuracy: 0.5185 - val_loss: 0.6931
Epoch 3/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 122s 203ms/step - accuracy: 0.4816 - loss: 0.6932 - val_accuracy: 0.4815 - val_loss: 0.6936
<keras.src.callbacks.history.History at 0x7f42e1117b10>

Also, the class activation map generated with get_CAM() was blank, the intermediate activations didn’t seem to make much sense, and the intermediate activation for the final conv layer was blank.
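For reference, my understanding of how the CAM is computed, which would explain why degenerate final-conv features produce a blank map: the map is a weighted sum of the final conv layer’s feature maps, using the dense-layer weights of the predicted class. The sketch below is my own reconstruction under that assumption; the helper name compute_cam, the layer name block5_conv3 and the upsampling step are illustrative, not the lab’s exact get_CAM code.

  import numpy as np
  import scipy.ndimage as ndi
  from tensorflow.keras.models import Model

  def compute_cam(model, image, conv_layer_name='block5_conv3'):
      # model that returns both the final conv feature maps and the class scores
      cam_model = Model(inputs=model.input,
                        outputs=[model.get_layer(conv_layer_name).output, model.output])
      features, predictions = cam_model.predict(image[np.newaxis, ...])
      features = features[0]                            # e.g. (7, 7, 512)

      # dense-layer weights of the predicted class, one weight per feature map
      class_idx = np.argmax(predictions[0])
      gap_weights = model.layers[-1].get_weights()[0]   # (512, num_classes)
      class_weights = gap_weights[:, class_idx]

      # weighted sum over the channel axis gives the raw class activation map
      cam = np.dot(features, class_weights)             # (7, 7)

      # upsample to the input resolution for overlaying on the image
      scale_h = image.shape[0] / cam.shape[0]
      scale_w = image.shape[1] / cam.shape[1]
      return ndi.zoom(cam, (scale_h, scale_w), order=2)

If the feature maps coming out of the final conv block are all near zero, this weighted sum is near zero everywhere and the rendered CAM looks blank, which is consistent with what I observed with the unedited code.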

In contrast, when only the final dense layer is trained with the edited code, training produces the following output, and the activation maps seem to make sense:

Epoch 1/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 152s 229ms/step - accuracy: 0.7846 - loss: 0.5134 - val_accuracy: 0.8740 - val_loss: 0.3118
Epoch 2/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 177s 209ms/step - accuracy: 0.8907 - loss: 0.2916 - val_accuracy: 0.8917 - val_loss: 0.2581
Epoch 3/3
582/582 ━━━━━━━━━━━━━━━━━━━━ 120s 197ms/step - accuracy: 0.9107 - loss: 0.2415 - val_accuracy: 0.9084 - val_loss: 0.2281
<keras.src.callbacks.history.History at 0x7ce7b0914f10>

@chris.favila

Could you check whether these edits are required for the lab assignment mentioned above?

Regards
DP


Hi, and thank you for reporting! We’ll look into this and update the notebook accordingly.