The gradient of an object always has the same shape as the object itself. In Logistic Regression, the weight vector w is formatted as a column vector (that is simply a choice Prof Ng made), so both w and dw have dimensions (n, 1). When we get to full Neural Networks, Prof Ng orients the weight matrices W^{[l]} so that no transpose is required in the forward propagation formula. That is explained in more detail in this thread.
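Here is a quick NumPy sketch of those shapes. The sizes (n, m, n1) and the random data are made up purely for illustration:

```python
import numpy as np

n, m = 4, 10                      # n features, m training examples (illustrative)
X = np.random.randn(n, m)
Y = np.random.randint(0, 2, (1, m))

# Logistic Regression: w is a column vector, so forward prop needs a transpose
w = np.zeros((n, 1))
b = 0.0
Z = np.dot(w.T, X) + b            # shape (1, m)
A = 1 / (1 + np.exp(-Z))          # sigmoid activation
dZ = A - Y                        # shape (1, m)
dw = np.dot(X, dZ.T) / m          # shape (n, 1): same shape as w
db = np.sum(dZ) / m

# Neural network layer: W1 is (n1, n0), so no transpose in forward prop
n0, n1 = n, 3
W1 = np.random.randn(n1, n0) * 0.01
b1 = np.zeros((n1, 1))
Z1 = np.dot(W1, X) + b1           # shape (n1, m), no transpose needed
dZ1 = np.random.randn(*Z1.shape)  # stand-in for the upstream gradient
dW1 = np.dot(dZ1, X.T) / m        # shape (n1, n0): same shape as W1
```

In both cases the gradient has exactly the shape of the parameter it corresponds to, which is what lets the update step `w = w - alpha * dw` work without any reshaping.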