That's right: the original 2000 images won't be modified at all, since image augmentation doesn't require you to edit your raw files. They are loaded into memory, and the augmentation transforms are applied to them on the fly during training.
As a result, the model effectively sees far more than 2000 distinct images over the course of training, without your dataset ever being changed. Data augmentation is a powerful technique for reducing overfitting, since it exposes your model to many structural variations of the same data.
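To make the idea concrete, here is a minimal sketch in plain NumPy. This is not TensorFlow's actual implementation, just the principle: each image is replaced on the fly by a randomly transformed copy, while the raw data stays untouched. The dataset and transforms here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 2000 raw training images (tiny grayscale here).
raw_images = rng.random((2000, 8, 8)).astype("float32")
raw_copy = raw_images.copy()

def augment(image, rng):
    """Return a randomly transformed copy; the input array is not modified."""
    out = np.rot90(image, rng.integers(0, 4))  # random 0/90/180/270 degree rotation
    if rng.random() < 0.5:                     # random horizontal flip, half the time
        out = np.fliplr(out)
    return out

# During training, each image is transformed on the fly as it is fed to the model...
augmented = np.stack([augment(img, rng) for img in raw_images])

# ...while the raw dataset in memory is left exactly as it was.
print(np.array_equal(raw_images, raw_copy))  # True
```

Because a fresh random transform is drawn every time an image is used, the same 2000 raw images yield different training examples from one epoch to the next.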
Thank you for the reply, it's very clear. Just to confirm: will the model learn from both the original pictures and the transformed ones?
If so, how can I control how many transformations are applied to a single image?
I mean, what tells TensorFlow to apply just a rotation to an image rather than N rotations plus some shears?
Thank you!
Same question: how many new examples does augmentation generate? The output of model_for_aug.fit(…) shows 100/100 at the end of each epoch, so the number of batches is apparently still 100, the same as without augmentation. Either TensorFlow adds extra batches behind the scenes that aren't reflected in the 100 count, or it increases the batch size so each batch includes more than 20 examples. Which is it?
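The batch arithmetic in that question can be checked with a small sketch. This is plain NumPy mimicking on-the-fly augmentation, not TensorFlow's internals, and the dataset is a hypothetical stand-in: with 2000 images and a batch size of 20 there are always 100 batches of 20, with or without augmentation; what changes is that new random transforms are drawn each epoch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in dataset: 2000 images, batch size 20 -> 100 batches.
images = rng.random((2000, 8, 8)).astype("float32")
BATCH_SIZE = 20

def augmented_epoch(images, rng):
    """One epoch of batches; every image gets a fresh random transform."""
    batches = []
    for start in range(0, len(images), BATCH_SIZE):
        batch = images[start:start + BATCH_SIZE]
        # Random 0/90/180/270 degree rotation per image, drawn anew each epoch.
        batches.append(np.stack([np.rot90(img, rng.integers(0, 4)) for img in batch]))
    return batches

epoch1 = augmented_epoch(images, rng)
epoch2 = augmented_epoch(images, rng)

print(len(epoch1), epoch1[0].shape[0])  # still 100 batches of 20 examples
```

Under this sketch, neither the batch count nor the batch size grows; the first batch of epoch 2 contains the same 20 underlying images as in epoch 1, but almost certainly with different random transforms applied.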