Data augmentation does not increase the number of training examples


A data generator with augmentation gives us exactly the same number of training and testing examples as the original sets before applying augmentation, so why does the course say:

you looked at the really useful tool that TensorFlow gives you with image augmentation. With it, you can effectively simulate a larger dataset from a smaller one with tools to move images around the frame, skew them, rotate them, and more. This can be an effective tool in fixing overfitting. ???

A data generator with image augmentation helps fix overfitting by increasing the diversity of your training set: it applies random (but realistic) transformations, such as image rotation, to each batch as it is fed to the model.
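A minimal sketch of that idea, using NumPy as a stand-in for TensorFlow's `ImageDataGenerator` (the `augment` function here is illustrative, not a TensorFlow API): the example count stays fixed, but each epoch the model typically sees different random variants of the same images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 4 tiny grayscale images (a hypothetical stand-in for real data).
images = rng.random((4, 8, 8))

def augment(batch, rng):
    # Random (but realistic) transform: rotate each image by a random
    # multiple of 90 degrees -- the same idea as rotation_range, applied
    # on the fly rather than by adding new files to the dataset.
    return np.stack([np.rot90(img, k=rng.integers(0, 4)) for img in batch])

# Two training "epochs": the example count never changes...
epoch1 = augment(images, np.random.default_rng(1))
epoch2 = augment(images, np.random.default_rng(2))
assert epoch1.shape == epoch2.shape == images.shape

# ...but the network typically sees different pixel data each epoch,
# which is what "simulating a larger dataset" means.
```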

Do you have some proof of this statement?

Sorry, it was a mistake on my side. When I saw that the number of training examples and validation examples did not change before and after the image generator (see the Colab result), I thought nothing had happened. In fact, the augmented images are generated directly in memory, on the fly, so we never see the exact amount of augmentation in the example counts.
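This is why the counts in the Colab output never change. A small sketch of a `flow`-style generator (a hypothetical NumPy imitation of `ImageDataGenerator.flow`, not the real API) makes it concrete: the augmented copies exist only in memory at yield time, and the number of steps and examples per epoch is unaffected.

```python
import numpy as np

def augmenting_flow(images, batch_size, rng):
    # Infinite generator in the spirit of ImageDataGenerator.flow:
    # yields batches of randomly flipped copies, never adding examples.
    n = len(images)
    while True:
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = images[idx[start:start + batch_size]]
            # Random horizontal flip, applied in memory at yield time.
            flips = rng.integers(0, 2, size=len(batch)).astype(bool)
            batch = np.where(flips[:, None, None], batch[:, :, ::-1], batch)
            yield batch

images = np.random.default_rng(0).random((10, 8, 8))
gen = augmenting_flow(images, batch_size=5, rng=np.random.default_rng(1))

steps_per_epoch = len(images) // 5  # still 2: augmentation adds no examples
epoch = [next(gen) for _ in range(steps_per_epoch)]
print(sum(len(b) for b in epoch))   # 10 examples per epoch, same as before
```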