No, b (the bias term) is always a scalar in Logistic Regression. That will no longer be true once we get to real Neural Networks in Week 3. Adding a scalar to a 1 x m row vector simply adds the same value to each element of the vector. This is a trivial example of what is called “broadcasting” in numpy. Here’s a thread which gives examples of that.
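A minimal sketch of what that broadcasting looks like (the values here are just made-up example numbers):

```python
import numpy as np

# In Logistic Regression, Z = w^T X + b: w^T X is a 1 x m row vector
# (one value per training example), while b is a single scalar.
Z = np.array([[0.5, -1.2, 3.0]])  # shape (1, 3): a 1 x m row vector
b = 2.0                           # scalar bias

# numpy "broadcasts" the scalar b across every element of Z:
result = Z + b
print(result)        # [[ 2.5  0.8  5. ]]
print(result.shape)  # (1, 3) -- still a row vector, same shape as Z
```

So no loops or explicit tiling are needed; numpy handles adding the same scalar to each element for you.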