Training of hidden layers in neural network

I am in week 1 of course 2, and I didn't quite understand how each neuron in the hidden layers gets trained, since each neuron doesn't have a target value to compare against in order to improve its weights and bias. While browsing I came across an answer that said each neuron is initialized with random weights, which helps them show unique behavior to 'break symmetry'. I didn't quite understand what that meant. Is there a simpler way of explaining this, or will it be explained later in the course?


If you are confused about how we compute the error in hidden layers, it will be explained in the coming weeks. Each hidden layer affects the layers that follow it, all the way to the output layer, so if a hidden layer is doing poorly, the output layer suffers too. Gradient descent provides a way to minimise that error by updating the weights. This resource may help: A Beginner's Guide to Neural Networks and Deep Learning | Pathmind
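To make that concrete: a hidden unit has no target of its own, but its "error" can be derived from the output error via the chain rule. Here is a minimal toy sketch (my own example with made-up sizes, not code from the course) for a one-hidden-layer network with sigmoid activations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 1))        # one input sample with 2 features
y = np.array([[1.0]])              # target exists for the OUTPUT layer only

W1, b1 = rng.normal(size=(3, 2)), np.zeros((3, 1))   # hidden layer: 3 units
W2, b2 = rng.normal(size=(1, 3)), np.zeros((1, 1))   # output layer: 1 unit

# Forward pass
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)

# Output-layer error term: compares directly against the target
delta2 = (a2 - y) * a2 * (1 - a2)

# Hidden-layer "error": the output error propagated backwards through W2.
# Each hidden unit's share is weighted by how strongly it feeds the output.
delta1 = (W2.T @ delta2) * a1 * (1 - a1)

# Gradients used to update each layer's weights
dW2 = delta2 @ a1.T
dW1 = delta1 @ x.T
print(dW1.shape, dW2.shape)   # (3, 2) (1, 3)
```

So the hidden units never need their own labels; blame for the output error is distributed back to them through the weights.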

If you want more intuition, I advise you to take the Deep Learning Specialization.

Please feel free to ask any questions.

The weights are initialized to small random values. But during training, the weights are learned using a method called backpropagation of errors. This method is provided for you in this course, so you do not need to implement it yourself.
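A quick numeric illustration of why the random start matters (a toy sketch of my own, not course code): if every weight in a layer starts at the same value, every hidden neuron computes the same output and receives the same gradient, so they stay identical forever. Random initialization "breaks" that symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))              # 4 samples, 3 features
y = np.array([[0.], [1.], [1.], [0.]])   # made-up targets

def grads(W1, W2):
    """Gradients for a tiny tanh network with a linear output and squared error."""
    h = np.tanh(X @ W1)                  # hidden activations
    out = h @ W2
    d_out = out - y                      # dLoss/d_out
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h**2)    # backprop through tanh
    dW1 = X.T @ d_h
    return dW1, dW2

# Symmetric (all-equal) init: both hidden units get IDENTICAL gradients,
# so no amount of training can make them differ.
W1_sym = np.full((3, 2), 0.5)
W2_sym = np.full((2, 1), 0.5)
dW1_sym, _ = grads(W1_sym, W2_sym)
print(np.allclose(dW1_sym[:, 0], dW1_sym[:, 1]))   # True

# Small random init: the two hidden units now get different gradients
# and can learn different features.
W1_rnd = rng.normal(scale=0.1, size=(3, 2))
W2_rnd = rng.normal(scale=0.1, size=(2, 1))
dW1_rnd, _ = grads(W1_rnd, W2_rnd)
print(np.allclose(dW1_rnd[:, 0], dW1_rnd[:, 1]))   # False
```

That is all "breaking symmetry" means: giving each neuron a slightly different starting point so gradient descent can push them toward different roles.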