C2-Week3-Transfer Learning

In the Transfer Learning lecture, Prof. Ng describes the parameters from the pre-trained model as w and b, with a vector arrow over b. My question is: why is there a vector arrow on b, when usually the vector notation is on w?

The biases are written as a vector (hence the arrow over 'b') because each neuron in a layer has its own bias term. A layer with n neurons therefore has n biases, which are collected into a bias vector of length n. This matches the layer's architecture and simplifies the math: the pre-activation for the whole layer can be computed in one step as z = Wx + b, where W is the weight matrix, x is the input vector, and b is the bias vector.
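For a concrete picture, here is a minimal NumPy sketch (the layer sizes here are hypothetical, just for illustration): a dense layer with 4 inputs and 3 neurons has a 3×4 weight matrix W and a length-3 bias vector b, one bias per neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)        # input vector, shape (4,)
W = rng.standard_normal((3, 4))   # weight matrix: one row of weights per neuron
b = rng.standard_normal(3)        # bias vector: one bias per neuron

z = W @ x + b                     # pre-activation for the whole layer, shape (3,)
print(z.shape)                    # (3,)
```

Note that b has one entry per output neuron, which is why it is a vector rather than a scalar.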

I hope this helps
