Question on "Using a pre-trained classifier"

At 4:07 of the Feature Extraction video at: https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/lecture/xJvev/feature-extraction

It is stated that the size of the pooling layer (e.g., 100 nodes) determines the number of features that the preceding convolutional layers can detect. I am wondering whether there could be a case of very weak weights, or sparse connections, between the convolutional layer and the pooling layer. That is, could some of the weights be very close to 0 and therefore not actually contribute any additional information? For example, if 100 pooling-layer nodes are used in the network, could only about 30 of them carry significant values?

Yes, generally speaking that does happen in neural networks! Some weights can end up very close to zero, so the corresponding feature nodes contribute little to the downstream output.
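One way to see this concretely is to inspect the pooled feature vector itself and count how many entries have a non-negligible magnitude. Below is a minimal, hypothetical sketch (not code from the lecture): it assumes a 100-node pooled feature vector in which only 30 entries are significant and the rest are near zero, and the threshold value 0.1 is an arbitrary choice for illustration.

```python
import numpy as np

# Hypothetical pooled feature vector with 100 nodes:
# 30 features with meaningful magnitude, 70 near-zero features
# (as in the scenario described in the question above).
features = np.concatenate([
    np.full(30, 1.5),    # significant activations
    np.full(70, 1e-3),   # near-zero activations that add little information
])

# Count how many features exceed an (arbitrary) significance threshold.
threshold = 0.1
significant = int(np.sum(np.abs(features) > threshold))
print(significant)  # -> 30
```

In practice you would run real images through the pre-trained classifier, collect the pooling-layer outputs over a batch, and apply the same kind of magnitude check to the averaged activations; techniques like weight pruning exploit exactly this observation by removing the near-zero weights.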