tf.keras.layers.BatchNormalization()

Hi,
the ResNet architecture uses tf.keras.layers.BatchNormalization(). If we are already normalizing the input, why do we need to normalize again during training?
Also, we use the ReLU layer separately; can't we just use it as the activation in Conv2D or BatchNormalization?
Thanks,

Batch normalization also happens during training. Its purpose is to normalize the intermediate features (the outputs of the layers), not just the network input, so that training converges faster and more easily.

It is also possible to use an activation directly inside the conv layers.
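
For example, here is a minimal sketch (the filter count, kernel size, and input shape are illustrative, not the exact ResNet code from the assignment) of a Conv2D → BatchNormalization → ReLU block, where BatchNormalization normalizes the intermediate feature maps during training:

```python
import tensorflow as tf

def conv_bn_relu_block(x, filters, kernel_size=3):
    # BatchNormalization normalizes the feature maps produced by Conv2D
    # (the intermediate activations), not the network input.
    x = tf.keras.layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = tf.keras.layers.BatchNormalization(axis=3)(x)  # normalize along the channel axis
    x = tf.keras.layers.ReLU()(x)
    return x

inputs = tf.keras.Input(shape=(64, 64, 3))
outputs = conv_bn_relu_block(inputs, filters=32)
model = tf.keras.Model(inputs, outputs)
model.summary()
```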


The ReLU layer comes after the BatchNormalization layer. If you used it as the activation of the Conv2D layer instead of as a separate layer, it would be applied before the normalization.
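
To make the ordering concrete, here is a small sketch (the shapes and filter counts are just illustrative) comparing the two options:

```python
import tensorflow as tf

x = tf.keras.Input(shape=(32, 32, 3))

# Option A (ResNet-style): Conv -> BatchNorm -> ReLU,
# with ReLU as a separate layer applied after the normalization.
a = tf.keras.layers.Conv2D(16, 3, padding="same")(x)
a = tf.keras.layers.BatchNormalization(axis=3)(a)
a = tf.keras.layers.ReLU()(a)

# Option B: activation passed to Conv2D. The ReLU is then applied
# before BatchNormalization, i.e. Conv -> ReLU -> BatchNorm,
# which is a different ordering from Option A.
b = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
b = tf.keras.layers.BatchNormalization(axis=3)(b)
```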

Sometimes BatchNormalization is also used to reduce the risk of overfitting, and as gent says, it improves stability and performance.
