The slide shows that if the hidden layer uses only a linear activation, then mathematically the entire NN collapses into plain old linear regression: composing affine maps gives W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2), which is just another affine map of x.
A non-linear activation function in the hidden layer is therefore essential for an NN to learn complex combinations of the input features.
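A quick numerical sketch of the collapse above (layer sizes and random weights here are arbitrary, chosen just for illustration): with an identity activation, a two-layer network computes exactly the same output as a single combined linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 inputs, 3 hidden units, 1 output.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = rng.normal(size=4)

# Forward pass with a LINEAR (identity) hidden activation.
hidden = W1 @ x + b1
out_two_layers = W2 @ hidden + b2

# The same map collapsed into one linear layer:
# W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)
W = W2 @ W1        # combined weights
b = W2 @ b1 + b2   # combined bias
out_one_layer = W @ x + b

assert np.allclose(out_two_layers, out_one_layer)
```

Replacing the identity with any non-linearity (e.g. `np.tanh(hidden)`) breaks this collapse, which is why the hidden activation matters.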