NN layers with several units

Hi everyone,
I encountered a question that bothers me. It will probably be answered during the course, but still:

Suppose, for instance, the first layer outputs a vector "a" with 5 values, and the second layer, which has 3 neuron units, takes it as input. What is the difference between the logistic regressions within that layer (how do those 3 neuron units differ)?
I would have thought they should be completely the same, since they all minimize the cost function, no? Or do the parameter values (w, b) somehow differ between neuron units within one layer?
Maybe I misused some terminology; I would appreciate having that pointed out, as well as an answer to this question.

Yes, they are different and each contributes to minimizing the cost function.

But how? I mean, shouldn't there be only one true pair of values (w, which may actually be a vector, and b) that reaches the global minimum?

I am talking about W, which is an array and has different values for each w_n. The same goes for b.
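A minimal NumPy sketch of the idea above (the layer sizes here are just illustrative, not from the course): each of the 3 units in a layer has its own column of W and its own bias, so from the same 5-value input each unit produces a different activation. If all units started with identical weights they would receive identical gradients and stay identical, which is why weights are initialized randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 5-value input vector feeding a layer of 3 units.
n_in, n_units = 5, 3

# Random initialization breaks the symmetry between units: each unit
# gets its own weight vector (one column of W) and its own bias entry.
W = rng.normal(size=(n_in, n_units)) * 0.01
b = np.zeros(n_units)

a_prev = rng.normal(size=n_in)       # "a" from the previous layer
z = a_prev @ W + b                   # one pre-activation per unit
a = 1.0 / (1.0 + np.exp(-z))         # sigmoid: one activation per unit

print(a.shape)  # (3,)
```

Because the columns of W differ, the three activations in `a` are (almost surely) three different numbers, even though every unit applies the same sigmoid formula.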

In a neural network with one hidden layer, there will be two weight matrices.

W1 connects the input layer and the hidden layer.

  • Its size is the number of input units by the number of hidden layer units.
  • There is a bias weight for each hidden layer unit.

W2 connects the hidden layer and the output layer.

  • Its size is the number of hidden layer units by the number of outputs.
  • There is a bias weight for each output unit. If there is 1 output, then W2 is a vector.

Note that the weight matrices may be transposed; it depends on how the assignment was defined.
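The shapes described above can be sketched in NumPy (the specific sizes 4, 3, 1 are hypothetical, and the layout follows the inputs-by-units convention; your assignment may use the transpose):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 inputs, 3 hidden units, 1 output.
n_x, n_h, n_y = 4, 3, 1

# W1 connects the input layer to the hidden layer: (n_x, n_h),
# plus one bias weight per hidden unit.
W1 = rng.normal(size=(n_x, n_h)) * 0.01
b1 = np.zeros(n_h)

# W2 connects the hidden layer to the output layer: (n_h, n_y),
# plus one bias weight per output. With 1 output, W2 is a vector.
W2 = rng.normal(size=(n_h, n_y)) * 0.01
b2 = np.zeros(n_y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=n_x)
a1 = sigmoid(x @ W1 + b1)   # hidden activations, shape (3,)
y = sigmoid(a1 @ W2 + b2)   # network output, shape (1,)
```

The matrix multiplications only line up because the inner dimensions match, which is a quick way to sanity-check whichever orientation your assignment uses.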