Transpose Understanding

In the first video of Week 2, Andrew says np.dot(w, x) is W transpose X.

where small w's shape is (n_x, 1) and x's shape is (n_x, 1).

Thanks in advance.

Both w and x have the same dimensions: n_x x 1. So if you want to compute the dot product, you need to transpose w to get it to work. The rule for dot products is that the "inner" dimensions must agree: (1, n_x) dot (n_x, 1) works and gives a 1 x 1 result, whereas (n_x, 1) dot (n_x, 1) does not. The formula for the forward propagation of Logistic Regression is:

\hat{y} = \mathrm{sigmoid}(w^T x + b)
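Here is a minimal NumPy sketch of that shape check for a single example (n_x = 3 and the random values are just made-up placeholders):

```python
import numpy as np

n_x = 3
w = np.random.randn(n_x, 1)   # weights, shape (n_x, 1)
x = np.random.randn(n_x, 1)   # one input example, shape (n_x, 1)
b = 0.5                       # scalar bias

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# np.dot(w, x) raises a ValueError: the inner dimensions 1 and n_x disagree.
# Transposing w gives (1, n_x) dot (n_x, 1), which yields a (1, 1) result.
z = np.dot(w.T, x) + b
y_hat = sigmoid(z)
print(y_hat.shape)            # (1, 1)
```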
