Inference: making predictions (forward propagation)

The hidden layers start with a higher number of units, and the number of units decreases as you get closer to the output layer.
Does this always happen, in both the forward and backward propagation algorithms?
Is the reverse also possible?
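
For example (just my own toy sketch to show what I mean, not code from the lecture), the layer sizes I am describing look something like this:

```python
import numpy as np

def dense(a_in, W, b):
    # One dense layer: the number of units is the number of columns of W
    return np.maximum(0, a_in @ W + b)   # ReLU activation

x = np.random.rand(1, 10)                # 10 input features

# Hidden layer sizes shrink toward the output: 8 -> 4 -> 1
W1, b1 = np.random.rand(10, 8), np.zeros(8)
W2, b2 = np.random.rand(8, 4), np.zeros(4)
W3, b3 = np.random.rand(4, 1), np.zeros(1)

a1 = dense(x, W1, b1)
a2 = dense(a1, W2, b2)
out = a2 @ W3 + b3                       # output layer (linear here)
print(out.shape)                         # (1, 1)
```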

Can you remind me where this information is in the course?

At 4:36 in the video lecture "Inference: making predictions (forward propagation)".

I did not find a video lecture by that title in MLS Course 2 Week 2.

Week 01.

You’ve posted in the Week 2 forum area. That’s why I was confused.

Backpropagation does not change the number of units in each layer.

Backpropagation isn’t a separate NN method - it’s just how the gradients are computed so you can train a NN.

It’s discussed in more detail next week. Essentially, backpropagation starts by computing the error in the output layer, then uses those errors to compute the errors and gradients in the previous layers, working from right to left in the diagram.
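
Here is a minimal NumPy sketch of that idea (my own toy example, not course code, with made-up layer sizes): the error at the output layer is computed first and then used, via the chain rule, to get the gradients of the layer to its left. Notice that the number of units in each layer never changes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network: 2 inputs -> 3 hidden units -> 1 output (sizes chosen only for illustration)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([0.5, -1.2])   # one training example
y = np.array([1.0])         # its label

# Forward propagation: compute activations layer by layer, left to right
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)            # prediction

# Backpropagation: start with the error in the output layer ...
dz2 = a2 - y                           # gradient of the logistic loss w.r.t. z2
dW2 = np.outer(dz2, a1)
db2 = dz2

# ... then use it to get the error and gradients one layer to the left
dz1 = (W2.T @ dz2) * a1 * (1 - a1)     # chain rule through the sigmoid
dW1 = np.outer(dz1, x)
db1 = dz1
```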

For forward propagation, do the layer sizes always decrease?

Not always. The sizes of the hidden layers are determined by what gives the best performance. All hidden layers might be the same size. Sometimes you might also have more units in the 1st hidden layer than there are input features.
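
For instance, a quick Keras-style sketch (made-up layer sizes, not from any course lab) where the first hidden layer is larger than the input and both hidden layers are the same size:

```python
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

model = Sequential([
    Input(shape=(4,)),               # 4 input features
    Dense(16, activation="relu"),    # 1st hidden layer larger than the input
    Dense(16, activation="relu"),    # hidden layers can also be the same size
    Dense(1, activation="sigmoid"),  # output layer
])
model.summary()
```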


Thanks Tom, I was looking for this response.