How do different neurons and layers specialize?

I have completed the first two courses in the machine learning specialization, and am almost done with the reinforcement learning section of the third course.

However, I still don’t understand how different neurons and layers focus on different parts of the input data.

For example, in the lesson Example: Recognizing Images, Dr. Ng states that in the first layer one neuron might look for lines oriented in one direction whereas another neuron might look for lines oriented in another direction. Dr. Ng goes on to say that in the second layer the neurons will focus on identifying parts of a face at a larger scale.

I understand the basics of forward and back propagation, and I understand how in a convolutional neural network convolutions can be used to detect edges.

However, I still don’t understand how the neurons and layers end up specializing in different elements of the input. For the neurons, my initial thought was that creating each neuron with different random weights and b values could explain the specialization, but I thought I read somewhere else that you can initialize all neurons with the same weights.

If anyone can help me understand how the neurons and layers know to focus on different things I would really appreciate the help!

The neurons don’t know what sort of pattern they are looking for. The weight values are simply adjusted to minimize the cost.

If detecting an edge, or a line orientation, is useful to minimize the cost, that’s just how the weights will turn out. There’s no pre-determined assumption about what the neurons will learn.

Andrew is just giving some intuitive context for how you might understand what the NN is doing, and why it gives useful results.
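
To make that concrete, here is a minimal sketch (a made-up two-hidden-neuron NumPy network trained on XOR, not anything from the course code). Nothing tells the two hidden neurons what to look for; after gradient descent their weight vectors simply come out different, because that is what minimizes the cost.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                    # XOR targets

# Small random initialization: each hidden neuron starts out slightly different.
W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(1, 2))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 1.0                                   # learning rate
for _ in range(5000):
    # Forward pass
    A1 = sigmoid(X @ W1.T + b1)               # hidden layer (2 neurons)
    A2 = sigmoid(A1 @ W2.T + b2)              # output layer
    # Backward pass (binary cross-entropy with a sigmoid output)
    dZ2 = (A2 - y) / len(X)
    dZ1 = (dZ2 @ W2) * A1 * (1 - A1)
    # Gradient descent: weights move in whatever direction reduces the cost
    W2 -= alpha * (dZ2.T @ A1)
    b2 -= alpha * dZ2.sum(axis=0)
    W1 -= alpha * (dZ1.T @ X)
    b1 -= alpha * dZ1.sum(axis=0)

print(W1)   # the two rows (one per hidden neuron) end up clearly different:
            # each neuron has "specialized" without any pre-assigned role
```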


Hello @Sashka_Warner,

I disagree: initializing all neurons with the same set of weights does not guarantee that they will capture different features. Please refer to this post for why neurons can differentiate. In the post, there is another link to a post with some mathematical explanations.
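
As a tiny sketch of the symmetry issue (a made-up one-step NumPy example, not the course code or the linked post): if two hidden neurons start with identical weights, identical biases, and identical outgoing weights, they receive identical gradients, so gradient descent alone cannot pull them apart. Random initialization is what breaks that tie.

```python
import numpy as np

def hidden_weight_gradients(W1, b1, W2, b2, X, y):
    """One forward/backward pass of a tiny 2-layer network; returns dJ/dW1."""
    A1 = np.tanh(X @ W1.T + b1)        # hidden activations
    y_hat = A1 @ W2.T + b2             # linear output, squared-error cost
    dZ2 = (y_hat - y) / len(X)         # dJ/d(output)
    dZ1 = (dZ2 @ W2) * (1 - A1 ** 2)   # chain rule back through tanh
    return dZ1.T @ X                   # one row of gradients per hidden neuron

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # toy inputs
y = rng.normal(size=(8, 1))            # toy targets

# Identical initialization: both hidden neurons get exactly the same parameters.
same = hidden_weight_gradients(np.full((2, 3), 0.5), np.zeros(2),
                               np.ones((1, 2)), np.zeros(1), X, y)
print(same)   # both rows are identical -> the two neurons update in lockstep forever

# Random initialization: the symmetry is broken from the very first step.
rand = hidden_weight_gradients(rng.normal(size=(2, 3)), np.zeros(2),
                               np.ones((1, 2)), np.zeros(1), X, y)
print(rand)   # rows differ -> the neurons can drift toward different features
```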

Raymond


Thank you so much for the reply, that’s helpful!

I might be missing the point, but if we continue the example of identifying edges and identifying facial components from an image:

Is it possible for neuron #1 in Layer 1 to identify edges, while at the same time neuron #2 in Layer 1 identifies facial components?

In other words, do neurons in a given layer always work at the same “scale”?

Hello @rmwkwok,

Thank you very much for the reply! The post you shared showing the backpropagation calculations is extremely helpful! I think it would be a great addition to the lecture slides!

Thanks,
Sashka

You are welcome, Sashka!

Hello @Sashka_Warner,

Unfortunately, this is not something that we can consciously control. It is the magic of the math that decides all the pieces, so that they can all cohesively contribute to the output at the final layer.

We set the learning algorithm the task of reducing the cost J, and it does so by following the gradient \frac{\partial J}{\partial w} for every weight. We further allow it to apply the chain rule so that the gradient of J can backpropagate to all the layers. From here on the math takes over. The end result is that different neurons learn different features (edges, parts of the image, etc.), and in this manner each neuron contributes towards the final prediction.
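
To sketch what “the math takes over” looks like, using the course’s W^{[l]}, Z^{[l]}, A^{[l]} notation for a simple two-layer network (a rough sketch, not the exact derivation from the lectures), the chain rule expands the gradient of the cost with respect to the first-layer weights as

\frac{\partial J}{\partial W^{[1]}} = \frac{\partial J}{\partial A^{[2]}} \cdot \frac{\partial A^{[2]}}{\partial Z^{[2]}} \cdot \frac{\partial Z^{[2]}}{\partial A^{[1]}} \cdot \frac{\partial A^{[1]}}{\partial Z^{[1]}} \cdot \frac{\partial Z^{[1]}}{\partial W^{[1]}}

and gradient descent then updates each weight with

W^{[1]} := W^{[1]} - \alpha \frac{\partial J}{\partial W^{[1]}}

Since each neuron’s weights form a different row of W^{[1]} and start from different random values, each row receives its own gradient, and the rows settle into capturing different features.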


No guarantees about that. The layers learn whatever will give the minimum cost.


It’s not really an example - it’s one of Andrew’s intuitive explanations for a very complicated topic.

Hello @shanup ,

Thank you very much for the reply! Got it - thank you for the help!

Best,
Sashka

Thank you very much for the follow-up reply! I see, thank you for answering my questions! I appreciate it!

Got it - makes sense! Thanks!