Transfer Learning and DA on my own set of images - Course 4, Week 2

So, I want to use Data Augmentation on my own dataset, and I thought I could use the code we wrote in the PA of Week 2. For some reason, it does not work on my dataset: I get 9 images, all identical.

But if I add training=True at the end, it works.

But the original code from the programming assignment didn’t use training=True, so why did it work there and not in my project?

My thought is that when data_augmenter() is called inside a .fit() call (i.e., during training), training defaults to True, and when it’s called outside .fit(), training defaults to False, so we don’t augment our data during evaluation or prediction.

My question still remains, though: why did it work in the programming assignment without training=True? Is my last thought correct?

This is slightly interesting behavior in Keras.

The simple rule is that image pre-processing layers change their behavior based on the training mode. As you are aware, if training=False, they do nothing, since augmenting inputs at inference time would hurt predictions.
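As a quick check of that rule, here is a minimal sketch using tf.keras.layers.RandomFlip (the shapes and seed are just for illustration, not from the assignment):

```python
import numpy as np
import tensorflow as tf

# One augmentation layer; in inference mode it acts as an identity op
flip = tf.keras.layers.RandomFlip("horizontal", seed=0)

img = np.arange(12, dtype="float32").reshape(1, 2, 2, 3)

out_infer = flip(img, training=False)   # unchanged: identity at inference
out_train = flip(img, training=True)    # possibly flipped
```

With training=False the output equals the input exactly; with training=True the layer may flip the image at random.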

Then, if nothing is specified, how does Keras interpret it?

Basically, those augmentation layers inherit from the Keras base Layer class, whose documentation says:

Training mode for Layer.call is set via (in order of priority):
(1) The training argument passed to this Layer.call, if it is not None
(2) The training mode of an outer Layer.call
(3) The default mode set by tf.keras.backend.set_learning_phase (if set)
(4) Any non-None default value for training specified in the call signature
(5) False (treating the layer as if it’s in inference)
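To illustrate criterion (2), here is a small sketch (the layer choice and shapes are my own, not from the assignment): when an augmentation layer is invoked inside an outer model call, it inherits that outer call’s training mode.

```python
import numpy as np
import tensorflow as tf

# Augmentation pipeline with no training argument of its own
aug = tf.keras.Sequential([tf.keras.layers.RandomFlip("horizontal")])

inputs = tf.keras.Input(shape=(2, 2, 3))
outputs = aug(inputs)                 # no training= passed here
model = tf.keras.Model(inputs, outputs)

img = np.arange(12, dtype="float32").reshape(1, 2, 2, 3)

# Criterion (2): the outer call's mode propagates to the inner layer,
# so augmentation is disabled when the whole model runs in inference mode
frozen = model(img, training=False)
```

Calling model(img, training=True) instead would re-enable the random flip, even though aug itself was never given a training argument.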

I think your case falls under the 2nd criterion.

Then, back to our assignment. First, we define a Sequential model, data_augmenter(). Then we call it to see how it works. (As you pointed out, no training parameter is set.)

data_augmentation = data_augmenter()
for image, _ in train_dataset.take(1):
    plt.figure(figsize=(10, 10))
    first_image = image[0]
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
        plt.imshow(augmented_image[0] / 255)

Then you get 9 different images, so the mode appears to be training mode.

Then, later, we incorporate MobileNetV2 like below.

base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=True,
                                               weights='imagenet')

After this, go back to the previous cell and check the augmenter again. This time, I believe you will see 9 identical images.
So, Keras’s determination of the mode seems to have changed.

The basic rule for determining the training mode is what I posted above, but it looks like Keras’ criteria may be slightly ambiguous. (That is not quite the right word for a computer, since a program itself can never be ambiguous. :slight_smile: ) In other words, the result depends on how Keras resolves the mode.

Since the rule may change in the future, and even the current behavior is slightly unpredictable, it is better for us to explicitly set the training mode, which is the top-priority criterion for Keras.
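Concretely, the explicit fix looks like this. Note that data_augmenter() below is my re-creation of the assignment’s pipeline, so treat the exact layers as an assumption:

```python
import numpy as np
import tensorflow as tf

# Hypothetical re-creation of the assignment's data_augmenter()
def data_augmenter():
    return tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.2),
    ])

data_augmentation = data_augmenter()
first_image = np.random.rand(160, 160, 3).astype("float32")

# Passing training=True is criterion (1), the highest priority,
# so augmentation runs regardless of any surrounding context
augmented = data_augmentation(tf.expand_dims(first_image, 0), training=True)
```

Because an explicit training argument outranks everything else, this call produces augmented images whether or not MobileNetV2 has been built elsewhere in the notebook.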

Hope this helps some.


Oh okay, I see. Thank you for the detailed response!

@anon57530071 One last question: I thought that Data Augmentation increased my dataset, but here I don’t see it getting any bigger.
I just see 9 variations of the same image, and I don’t actually see them being saved to disk and used later.
For example, if I don’t use DA, with a batch size of 16, the total number of batches per epoch is 1358. With DA on, I would expect more batches, since I have more images. What am I missing here?

I thought that Data Augmentation increased my Dataset, here I don’t see it getting any bigger.

It’s not. Our augmentation program does not increase the number of samples; it works on the fly. I think this is a good approach from a data-management viewpoint. If we generated fake images many times over, the data volume would grow and become hard to manage. And if there were unexpected results after training, we could not quickly tell whether they were caused by the fake images; we would need a data-cleansing pass for that.
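You can verify that on-the-fly augmentation leaves the number of batches unchanged. This sketch uses made-up shapes and applies the augmentation via dataset.map; the assignment instead puts the layers inside the model, but the batch-count behavior is the same:

```python
import numpy as np
import tensorflow as tf

aug = tf.keras.Sequential([tf.keras.layers.RandomFlip("horizontal")])

# 32 fake images, batch size 16 -> 2 batches per epoch
images = np.random.rand(32, 8, 8, 3).astype("float32")
ds = tf.data.Dataset.from_tensor_slices(images).batch(16)

# Augment on the fly; the dataset's cardinality is unchanged
aug_ds = ds.map(lambda x: aug(x, training=True))

n_batches = sum(1 for _ in aug_ds)    # still 2, not more
```

So with your dataset you would still see 1358 batches per epoch; each epoch simply sees a different random variant of each image.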

I just see 9 variations of the same image, but I don’t actually see them being saved on disk and used later.

Again, they are not saved. It’s a visualization only, to show how the augmentation works.

See the model summary. This output is from the latest Keras/TensorFlow, not our assignment’s version, but I’m using it because it provides good information.

“random_flip” and “random_rotation” are part of this network and transform images on the fly. They do not increase the number of images; they transform each image randomly as data passes through. Over an increasing number of epochs, you eventually feed several different variants of each single image into the network during training.
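A sketch of what that looks like (the layer names are chosen to match the summary; the head layers here are placeholders, not the assignment’s exact architecture):

```python
import tensorflow as tf

# Augmentation layers built directly into the network graph
inputs = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.layers.RandomFlip("horizontal", name="random_flip")(inputs)
x = tf.keras.layers.RandomRotation(0.2, name="random_rotation")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

# The augmentation layers show up as ordinary layers of the model
layer_names = [layer.name for layer in model.layers]
```

Because they are ordinary layers, they are active during model.fit() (training mode) and automatically become identity ops during model.evaluate() and model.predict().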

Hope this clarifies.


Of course, thank you for your time, Nobu. You’ve been more than helpful!