How do neurons in the same layer ‘coordinate’ to learn similar features?

In the Transfer Learning lesson, under the section “Why does transfer learning work?”, it is shown how different layers are responsible for different tasks, which helps the model classify images. Similarly, in the Coffee Roasting example we can see how different layers concentrate on different parts of the graph to learn the overall distribution.

I’m very curious to know how each layer split up the task and assigned the pieces among themselves, and what’s even more interesting is how all the neurons within one layer decided to pursue one common goal (for example, all neurons in the first layer deciding to detect edges).

To be more clear:

  • Although there is no direct link between neurons within the same layer, how did n⁽¹⁾₁ and n⁽¹⁾₂ both coordinate to detect edges? While n⁽¹⁾₁ decided to detect edges, n⁽¹⁾₂ could have decided to detect curves, but it did not, and this coordination really amazes me 🙈.
  • In simple English: no one tells the model, “hey, listen, all neurons in Layer1 have to concentrate on only one task. It is not allowed that one neuron concentrates on edges, one on curves, and one on corners. It’s up to you to pick which subtask you do, but all of you in Layer1 should have one common goal.” So how did the neurons within a layer come up with this coordination to achieve a common goal?

The units do not cooperate.
They each start from different random initial weights.
Then, from that starting point, each unit adjusts its weights so as to minimize the cost. Because the starting points differ, the units drift toward different features on their own; no assignment or coordination is involved.
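To make this concrete, here is a small sketch (my own toy example, not from the course) of why random initialization matters. If two hidden units start with *identical* weights, they receive identical gradients at every step and stay identical forever; random initialization breaks that symmetry, so each unit is free to learn a different feature:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 samples, 3 features (toy data)
y = rng.normal(size=(8, 1))

def train(W1, w2, steps=100, lr=0.1):
    """Gradient descent on a tiny 3-2-1 network, returns final hidden weights."""
    W1, w2 = W1.copy(), w2.copy()
    for _ in range(steps):
        h = np.tanh(X @ W1)          # hidden activations, shape (8, 2)
        err = h @ w2 - y             # output error
        grad_w2 = h.T @ err / len(X)
        grad_h = err @ w2.T * (1 - h**2)   # backprop through tanh
        grad_W1 = X.T @ grad_h / len(X)
        W1 -= lr * grad_W1
        w2 -= lr * grad_w2
    return W1

# Identical initial weights: both hidden units get identical gradients
# at every step, so their weight columns remain identical (symmetry).
W_sym = train(np.ones((3, 2)) * 0.5, np.ones((2, 1)) * 0.5)
print(np.allclose(W_sym[:, 0], W_sym[:, 1]))   # True

# Random initial weights: the symmetry is broken and the two units
# end up with different weights, i.e. they detect different features.
W_rand = train(rng.normal(size=(3, 2)), rng.normal(size=(2, 1)))
print(np.allclose(W_rand[:, 0], W_rand[:, 1]))  # False
```

So the “coordination” is an illusion: each unit independently follows the gradient from its own random starting point, and the cost function rewards units that end up covering different features rather than duplicating each other.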


What I’m interested in knowing is how the neurons in the same layer are all tasked with achieving the same goal.

They aren’t.