Question regarding week 2 (Machine Learning Specialization, Advanced Learning Algorithms)

Q1. Why does ‘w’ have 2 parameters while ‘b’ has only one for every neuron?
(Topic in week 2: Neural network implementation in Python → Forward prop in a single layer, video time 3:00.)
Q2. I have noticed that element-wise multiplication and the dot product of two vectors give different results. I know we use the dot product to improve performance, but isn't the actual operation in logistic regression (the activation function) a multiplication?
Ex: f(x) = w*x + b (element-wise multiplication) and f(x) = w·x + b (dot product) give different results, so why do we use the dot product?

Q1)

  • w has one value for each feature, so it is a vector whenever there is more than one feature.
  • b has one value for the unit’s activation; it is a scalar (see the sketch below).
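
Here is a minimal sketch of what that looks like in code, assuming a layer with 2 input features and 3 units (the numbers are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([200.0, 17.0])        # one example with 2 features

W = np.array([[1.0, -3.0, 5.0],    # column j holds unit j's weights:
              [-2.0, 4.0, -6.0]])  # one value per feature, so shape (2, 3)
b = np.array([-1.0, 1.0, 2.0])     # one scalar bias per unit, shape (3,)

# For each unit j: a_j = sigmoid(w_j . x + b_j)
a = sigmoid(np.dot(x, W) + b)
print(a.shape)                     # (3,) -> one activation per unit
```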

Q2)

  • When w and x are vectors, the model computes z = w · x + b: the per-feature products are summed into a single number, so the dot product is the mathematically correct operation, not just a faster one (see the sketch after this list).
  • If w and x are scalars (i.e. there is only one feature), then * can be used; with a single feature the dot product and ordinary multiplication give the same result.
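
To see why this is not a contradiction, note that np.dot(w, x) is just w * x followed by a sum. A minimal NumPy sketch (made-up numbers):

```python
import numpy as np

w = np.array([1.0, -3.0, 5.0])
x = np.array([200.0, 17.0, 0.5])

elementwise = w * x          # vector of per-feature products: [200., -51., 2.5]
z = np.dot(w, x)             # single number: 151.5

# The dot product is the element-wise products summed up:
print(np.isclose(z, np.sum(elementwise)))   # True
```

So w * x and np.dot(w, x) do not compute the same quantity: the model needs the summed version, and the dot product computes it in one vectorized step.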