Have another look at the architecture of the network we are implementing here. Cascading two conv layers back to back is neither pointless nor a no-op, even when they share the same hyperparameters: the second conv layer is applied to the output of the first, so it computes something new.
After that comes an optional dropout layer and an optional max pooling layer.
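As a minimal sketch of the block described above (the filter count, kernel size, and dropout rate here are illustrative assumptions, not values from the original network):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Dropout, MaxPooling2D

inputs = Input(shape=(28, 28, 1))
# Two conv layers with identical hyperparameters, back to back:
# the second one convolves the output of the first, so it is not a no-op.
x = Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
x = Conv2D(32, (3, 3), padding="same", activation="relu")(x)
x = Dropout(0.25)(x)                    # optional
x = MaxPooling2D(pool_size=(2, 2))(x)   # optional
model = Model(inputs, x)
print(model.output_shape)  # (None, 14, 14, 32)
```

Note that the second Conv2D has its own, independently trained weights, even though it was configured identically.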
The point is that Keras layer classes (subclasses of Layer) take a set of hyperparameters and return a callable instance, which you then invoke with a tensor as input to produce a tensor as output. Here, the result of the call MaxPooling2D(pool_size=(2,2)) is such a callable. You then invoke it with the input tensor conv, and it returns an output tensor.
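The two steps can be seen separately in a small sketch (the array here is just a stand-in for a conv layer's output):

```python
import numpy as np
from tensorflow.keras.layers import MaxPooling2D

# Step 1: configure the layer; this returns a callable layer instance.
pool = MaxPooling2D(pool_size=(2, 2))

# Step 2: call that instance with a tensor; it returns a tensor.
conv = np.random.rand(1, 8, 8, 3).astype("float32")  # stand-in for a conv output
out = pool(conv)
print(out.shape)  # (1, 4, 4, 3)
```

The common one-liner MaxPooling2D(pool_size=(2,2))(conv) just fuses these two steps.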
Please see this thread for a good explanation of how to use both the Keras Sequential and Functional models.
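For a quick side-by-side, here is the same small stack built both ways (layer sizes are illustrative):

```python
from tensorflow.keras import Input, Model, Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D

# Sequential: list the layers; Keras wires each output to the next input.
seq = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(16, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
])

# Functional: call each layer instance explicitly on a tensor.
inp = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation="relu")(inp)
x = MaxPooling2D(pool_size=(2, 2))(x)
fun = Model(inp, x)

print(seq.output_shape == fun.output_shape)  # True
```

The Functional style is more verbose but lets you build models with branches, multiple inputs, or shared layers, which Sequential cannot express.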