Inception Network - Question

In an inception network, where we add multiple types of layers and let the network learn the parameters, is it possible to identify which layer(s) are the most relevant, so that the less useful (or unused) layers can be eliminated to improve performance?

I don’t know of any research specifically on inception networks, but there has been general research on eliminating the “less relevant” parameters to improve the runtime performance of AI models.

The technique is called neural network pruning; you can read this paper (published in 2020) if you’re interested in learning more. There’s also a tutorial on how to implement it in PyTorch.
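To give a concrete sense of what this looks like, here’s a minimal PyTorch sketch using the built-in `torch.nn.utils.prune` module. The layer shape and pruning amount are just illustrative; structured pruning of whole filters is closest in spirit to eliminating entire units rather than individual weights:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical conv layer standing in for one branch of an inception module
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)

# Structured pruning: zero out the 25% of output filters (dim=0) with the
# smallest L2 norm, i.e. the filters that contribute least to the output
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)

# Pruning is applied through a mask (weight_orig * weight_mask); this bakes
# the zeros into the tensor and removes the reparameterization
prune.remove(conv, "weight")

# Roughly 25% of the weights should now be zero
sparsity = float((conv.weight == 0).sum()) / conv.weight.numel()
print(f"sparsity: {sparsity:.2f}")
```

One caveat: this zeroes the weights but doesn’t shrink the tensors, so you only get real speedups once the pruned structures are physically removed or the hardware/runtime can exploit the sparsity.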
