Is week 1 of the course mostly outdated with newer TF versions?

Performance is one of the main reasons the tf.keras.preprocessing APIs (including ImageDataGenerator) were deprecated.

For instance, when using image_dataset_from_directory, augmentations are applied as layers inside the model, so they run on the GPU as part of the forward pass. ImageDataGenerator, by contrast, performs augmentations on the CPU before feeding the data into the model.
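As a minimal sketch of the layer-based approach, the snippet below builds an augmentation pipeline from Keras preprocessing layers and applies it to a random dummy batch (standing in for images loaded via image_dataset_from_directory); the image dimensions here are arbitrary examples:

```python
import tensorflow as tf

# Augmentation expressed as layers: it becomes part of the model graph,
# so it can execute on the GPU together with the rest of the forward pass.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# Dummy batch standing in for real images (batch, height, width, channels).
images = tf.random.uniform((4, 32, 32, 3))

# training=True enables the random transformations; shape is preserved.
augmented = data_augmentation(images, training=True)
print(augmented.shape)  # (4, 32, 32, 3)
```

These layers can also be placed directly at the top of the model (right after the Input), in which case augmentation is only active during training and is skipped automatically at inference time.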

I’ve observed a noticeable reduction in training time when using the newer recommended APIs.