Understanding forward propagation

I think I got the idea behind forward propagation: basically, every neuron performs a logistic regression. One thing I didn't understand is how you are supposed to get the parameters of each neuron if you don't have target values to apply gradient descent and everything else that was taught in course 1. At least that's the idea I have so far, and that's why the layers are called 'hidden': because we don't have targets for them. I should mention that I'm just finishing week 1 of course 2, so maybe the answer is in later classes. Either way, thank you in advance.
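To make the "each neuron performs a logistic regression" idea concrete, here is a minimal sketch of one hidden layer's forward pass (all weights and inputs below are made-up example values, not from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A hidden layer with 3 neurons and 2 input features.
# Each neuron computes sigmoid(w . x + b) -- the same
# form as a single logistic regression unit.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])        # input features (example values)
W = rng.normal(size=(3, 2))      # one weight row per neuron
b = np.zeros(3)                  # one bias per neuron

a = sigmoid(W @ x + b)           # activations of the hidden layer
print(a)                         # three values, each between 0 and 1
```

The open question in the post, of course, is how `W` and `b` get learned when no targets exist for `a`.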

Hello @Joao_Victor_Bertolon

You are right - there is no target value for the hidden layers, and yet the weights in the hidden layers get updated during the learning process. This happens through a technique called backpropagation, wherein the derivative of the cost at the output is propagated backwards through the network, all the way to the first layer.
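Here is a minimal numeric sketch of that idea for a tiny 2-input, 3-hidden-neuron, 1-output network (the values, sizes, and learning rate are all illustrative assumptions, not the course's exact setup). Note how the hidden-layer weights `W1` receive a gradient even though the target `y` only exists at the output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example; the target exists only at the output.
x = np.array([0.5, -1.2])
y = 1.0

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 2)); b1 = np.zeros(3)   # hidden layer
W2 = rng.normal(size=(1, 3)); b2 = np.zeros(1)   # output layer

# Forward pass.
z1 = W1 @ x + b1;  a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# Backward pass: start from the cost derivative at the output
# (binary cross-entropy with a sigmoid output gives a2 - y) ...
dz2 = a2 - y
dW2 = np.outer(dz2, a1); db2 = dz2

# ... then propagate it back through W2 to the hidden layer
# via the chain rule (sigmoid'(z) = a * (1 - a)):
dz1 = (W2.T @ dz2) * a1 * (1 - a1)
dW1 = np.outer(dz1, x); db1 = dz1

# Gradient-descent update: the hidden weights move even though
# no target was ever defined for the hidden layer itself.
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```

The key step is `W2.T @ dz2`: the error signal computed at the output is carried backwards through the output layer's weights, which is exactly what "propagated backwards all the way to the first layer" means.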

There is a video on backpropagation - please take a look.

Hello @shanup

Thank you for your reply. I'll make sure to check the video. I'm just starting week 2, so I'm guessing it'll come eventually. Thank you once again.