Week 1: vanishing/exploding gradients

In the lecture, the instructor said that if W is slightly larger than the identity matrix in a very deep network, the activations can explode.
What would happen if a shallow network were used instead?

It might not explode! The growth is roughly exponential in depth (the input gets multiplied by W once per layer), so a shallow network may simply not have enough layers for the activations or gradients to blow up. It depends on how far W is from the identity and how many layers there are.
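A minimal numerical sketch of this, assuming purely linear layers with weights `scale * I` (the function name `growth_factor` and the values 1.02, depth 5 vs. 150 are illustrative choices, not from the lecture):

```python
import numpy as np

def growth_factor(depth, scale=1.02, dim=4, seed=0):
    """Push an input through `depth` linear layers whose weight matrix is
    scale * identity, and return how much the activation norm grew."""
    rng = np.random.default_rng(seed)
    W = scale * np.eye(dim)          # W slightly bigger than the identity
    a = rng.standard_normal(dim)
    a0 = np.linalg.norm(a)
    for _ in range(depth):
        a = W @ a                    # linear activation, no nonlinearity
    return np.linalg.norm(a) / a0   # equals scale ** depth here

print(growth_factor(depth=5))        # shallow: ~1.02**5  ≈ 1.10, harmless
print(growth_factor(depth=150))      # deep:    ~1.02**150 ≈ 19.5, exploding
```

With only 5 layers the activations barely change, while 150 layers of the same W multiply them by roughly 20; the same compounding argument applies to gradients in the backward pass.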