Data augmentation using tf.image.random_brightness

I’m unable to clearly understand whether data augmentation “increases” the total number of images fed into the network for training, or simply transforms each image and then passes the transformed images on to the network. For example, I have a batch of 64 images, and I apply tf.image.random_brightness as my augmenter function. Does it increase the total number of images to 128 (original images + transformed images) and then pass them on to the network? Or does it simply apply random_brightness to each image and pass this transformed batch of 64 images to the network?

Hi @Harshit1097 ,

Please follow this link to the reference menu for that function.

Hi @Harshit1097 ,

Data augmentation is a tool to increase your dataset. Deep learning algorithms need a lot of data to be trained well. Sometimes we don’t have a lot of data, so one way to add more is by augmenting what we have. In the case of images, if I have, say, 100 pictures of cats, I can grow that dataset to 500 pictures of cats by rotating them left, rotating them right, flipping them, and tilting them. Those 4 augmentation types will produce 400 more images, and now I have 500 images to feed my model.
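As a sketch of the idea, using TensorFlow’s `tf.image` ops (the exact four transforms here are illustrative stand-ins for the ones described above):

```python
import numpy as np
import tensorflow as tf

# One random tensor stands in for a single original "cat picture".
image = tf.constant(np.random.rand(28, 28, 3), dtype=tf.float32)

# Four illustrative augmentations: each original yields 4 extra images.
augmented = [
    tf.image.rot90(image, k=1),       # rotate left
    tf.image.rot90(image, k=3),       # rotate right
    tf.image.flip_left_right(image),  # mirror horizontally
    tf.image.flip_up_down(image),     # flip vertically
]

# 100 originals x 4 transforms = 400 extra -> 500 images in total.
print(len(augmented))  # 4 new images per original
```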

Does it make sense?


Thanks Juan. The confusion I have is regarding the use of tf.image.random_brightness. I am aware that using ImageDataGenerator for data augmentation actually increases the number of images by applying the transformations we specify; however, I’m unable to find out whether tf.image.random_brightness does the same thing.

Hello @Harshit1097,

If we look at this example in the doc page that Kic has shared,


The output has the same shape as the input (2, 2, 3), so it does NOT produce more samples in the output than there are in the input. However, over the course of training, your model DOES see more distinct images.
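You can verify the shape-preserving behaviour directly; this small check uses a tensor with the same (2, 2, 3) shape as the doc example:

```python
import tensorflow as tf

# A tiny image with the same shape as the doc example: (2, 2, 3).
x = tf.zeros([2, 2, 3])
y = tf.image.random_brightness(x, max_delta=0.2)

# The output has exactly the same shape as the input:
print(x.shape == y.shape)  # True
```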

For example, let’s say you have only 10 raw images; then you might apply tf.image.random_brightness 100 times to generate 1000 different images for an epoch of training. Do you know how to do that with tf.data?
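One way to do it is sketched below with tf.data: repeating the 10 raw images 100 times before mapping the augmentation, so that each repeated copy receives its own random brightness (the dataset construction here is an illustrative sketch, not the only option):

```python
import numpy as np
import tensorflow as tf

raw = np.random.rand(10, 28, 28, 1).astype("float32")  # 10 raw images

dataset = (
    tf.data.Dataset.from_tensor_slices(raw)
    .repeat(100)  # 10 raw images -> 1000 elements
    .map(lambda img: tf.image.random_brightness(img, max_delta=0.1))
)

# Count the augmented images the model would see in one epoch.
total = sum(1 for _ in dataset)
print(total)  # 1000
```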


1 Like

Thanks @rmwkwok. I understood this now.

You are welcome @Harshit1097!


Sorry to bring this up again, but I need further clarity. To apply tf.image.random_brightness 100 times to the 10 raw images that I have, I thought the .repeat() method of tf.data.Dataset would be used, but that doesn’t seem to work. Can you please guide me through how I can apply my augment function again and again to the raw images?

Hello @Harshit1097 , I would like to do it the other way around.

X = np.random.rand(10, 28, 28, 1)

Given the above X, which is 10 samples of one-channel 28x28 images, how did you use repeat and other methods to attempt to generate more samples, albeit unsuccessfully?

Please feel free to change X to suit your case.


I’ve done something like this: create a dataset object by calling from_tensor_slices on X. Then map the brightness_augment function over it, which applies random brightness to the images. Then I shuffle the images and create batches. Then I call repeat(), which repeats the dataset REPEAT times. Finally, prefetching.

def brightness_augment(image, labels):
  image = tf.image.random_brightness(image/255, 0.1)
  image = tf.clip_by_value(image, clip_value_min=0, clip_value_max=1)
  return (image, labels)

def create_dataset(X, labels, REPEAT, is_training):
  dataset = tf.data.Dataset.from_tensor_slices((X, labels))
  if is_training:
     dataset =, num_parallel_calls=AUTOTUNE)
     dataset = dataset.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
  dataset = dataset.batch(BATCH_SIZE)
  if is_training:
     dataset = dataset.repeat(REPEAT)
  dataset = dataset.prefetch(buffer_size=AUTOTUNE)

  return dataset

You only said it doesn’t work, so what exactly is the problem? Is it that the final dataset doesn’t have enough samples, or that some of the samples are repeated?

1 Like

The first issue is that the AUC on the validation set is almost 4 times lower than in the case where I didn’t do any data augmentation, which is counter-intuitive, as data augmentation should give similar performance if not better.
Secondly, I am not able to figure out some of the mathematics when I ran My training dataset has 10,000 images. I used a batch size of 32 and REPEAT = 2. While calling, I set steps_per_epoch = REPEAT * 10,000 // BATCH_SIZE.
So while training there should be a total of 625 batches in one epoch, but the output shows a total of 3123 batches.
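For reference, the arithmetic behind the expected steps_per_epoch:

```python
REPEAT = 2
NUM_SAMPLES = 10_000
BATCH_SIZE = 32

# 2 passes over 10,000 samples, 32 samples per batch.
steps_per_epoch = REPEAT * NUM_SAMPLES // BATCH_SIZE
print(steps_per_epoch)  # 625
```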

Hello @Harshit1097,

I won’t go into or model performance with you right now.

The only thing I am interested in here is how you generated the data. Let me ask you a few questions:

  1. How many samples do you have before augmentation?
  2. If you run a loop over the augmented dataset, take the number of samples in each batch, and accumulate that number, how many samples do you have in total at the end?
  3. You use repeat after random_brightness. What difference do you expect if you swap the order of them?
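Question 2 can be checked with a loop like the following sketch (the small dataset here is illustrative; substitute your own):

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(10, 28, 28, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices(X).repeat(3).batch(4)

# Accumulate the number of samples across all batches.
total = 0
for batch in dataset:
    total += int(batch.shape[0])
print(total)  # 30, i.e. 10 samples repeated 3 times
```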

Hello @rmwkwok.

  1. I had 10,000 samples before augmentation.
  2. By running through the augmented dataset (repeat = 2), I found that there were 625 batches in total (i.e., 20,000 samples, since batch size = 32), which is the expected number.
  3. I believe if I apply random_brightness after repeat, then the repeated images (20,000) will each go through their own random brightness change, which I think is exactly what I wished to have. I think this is the solution I was looking for :sweat_smile:
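That reordering can be sketched like this, reusing the brightness_augment function from above: repeat comes before map, so every repeated copy receives its own random brightness (the 100-sample X here is just a stand-in):

```python
import numpy as np
import tensorflow as tf

def brightness_augment(image, labels):
    image = tf.image.random_brightness(image / 255, 0.1)
    image = tf.clip_by_value(image, 0.0, 1.0)
    return image, labels

X = np.random.rand(100, 28, 28, 1).astype("float32")
labels = np.zeros(100, dtype="int32")

dataset = (
    tf.data.Dataset.from_tensor_slices((X, labels))
    .repeat(2)                # 100 samples -> 200 samples
    .map(brightness_augment)  # each copy gets its own brightness
    .batch(32)
)

# Count samples across all batches: repeat-then-map doubles the data.
total = sum(int(images.shape[0]) for images, _ in dataset)
print(total)  # 200
```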


Ok. From 2, it seems to me you have rethought something and done some work in your code to get the expected results. Great work!

I had to focus on the augmentation part so that we could confirm it is delivering the expected results. Otherwise, it doesn’t make sense to move on to anything that is based on it.

Even though sometimes we can’t control our passion to immediately see some training results, whether the results are good or not, it is always good practice to carefully and closely examine each small section of code and check that it is delivering what you want it to. It is also worth considering training for only one epoch with fewer steps, just to see if something is wrong.


Thanks @rmwkwok. I understand your point that we need to look at the small things first before jumping to the results.