W2_Video Lecture_Vectorizing Logistic Regression

I don’t understand how np.dot(W.T, X) works in NumPy.
Why does W need to be transposed, and how does dot operate here?

Could someone explain this in depth?

Thank you a lot :robot: :star_struck:

Hi there,

np.dot(x, y) simply multiplies two matrices, but matrix multiplication requires the inner dimensions to match: the number of columns of the first matrix must equal the number of rows of the second. That is exactly what the transpose is doing here; without it, NumPy raises an error.
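To make the shape requirement concrete, here is a small sketch (sizes are made up for illustration): with w and x both as column vectors, np.dot fails until w is transposed.

```python
import numpy as np

# Hypothetical sizes: 3 features, stored as column vectors.
w = np.random.rand(3, 1)   # shape (3, 1)
x = np.random.rand(3, 1)   # shape (3, 1)

# (3, 1) . (3, 1) -> error, because the inner dimensions 1 and 3 differ.
try:
    np.dot(w, x)
except ValueError as e:
    print("shapes do not align:", e)

# Transposing w gives (1, 3), so (1, 3) . (3, 1) -> (1, 1): the dot product.
z = np.dot(w.T, x)
print(z.shape)  # (1, 1)
```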

Check this video for matrix multiplication:

Thanks a lot for your explanation.

Could I ask another question: what shape does W have in the first place? Why does it need to be transposed to match the X matrix?

Thank you :star_struck:

OK, Prof. Andrew explains this very clearly here:

In the case of logistic regression, both w and x have shape (n, 1) (for one example with n features), but in order to multiply them one of them must be transposed, hence w.T: (1, n)·(n, 1) gives a (1, 1) result.

In the case of many examples, w keeps its shape (n, 1) while X has shape (n, m) (each column is one example), and again w is transposed: (1, n)·(n, m) gives a (1, m) row, one value per example.
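The many-examples case can be sketched like this (n and m are arbitrary illustrative sizes, and b is a hypothetical bias term as in the lecture's z = wᵀx + b):

```python
import numpy as np

n, m = 4, 5                  # hypothetical: 4 features, 5 training examples
w = np.random.rand(n, 1)     # one weight per feature
X = np.random.rand(n, m)     # each column of X is one example
b = 0.5                      # scalar bias, broadcast over all examples

# w.T has shape (1, n), so (1, n) . (n, m) -> (1, m):
# all m values of z computed in a single vectorized call, no for-loop.
Z = np.dot(w.T, X) + b
print(Z.shape)  # (1, 5)
```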


Thank you so much, now I understand.
