Andrew says this w.T matrix is a (4,3) matrix in W3 video Computing a Neural Network’s Output at 4:00.

How does he conclude that this matrix has 3 columns?

In this specialization, number of rows of W is equal to the number of neurons of the current layer, and number of columns is equal to the number of features or neurons of the previous layer. So, what is the number of neurons and features in the attached image?

3… so W has one column for each of the features (x1, etc.) feeding into each node of the layer?

Yes, that’s correct. Of course we then have to extend this idea to the later layers in the network. For W^{[2]}, the number of columns will be the number of output neurons from layer 1.
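A minimal numpy sketch of that shape convention. The layer sizes (3 input features, 4 neurons in layer 1, 1 output neuron) follow the example in the video; the variable names here are my own, not from the course code:

```python
import numpy as np

# Hypothetical 2-layer network: 3 input features, 4 hidden neurons, 1 output neuron
n_x, n_1, n_2 = 3, 4, 1

# Rows = neurons in the current layer, columns = features/neurons feeding in
W1 = np.random.randn(n_1, n_x)   # (4, 3)
W2 = np.random.randn(n_2, n_1)   # (1, 4): columns = output neurons of layer 1

x = np.random.randn(n_x, 1)      # one input sample as a column vector

z1 = W1 @ x                      # (4, 3) @ (3, 1) -> (4, 1)
a1 = np.tanh(z1)
z2 = W2 @ a1                     # (1, 4) @ (4, 1) -> (1, 1)

print(W1.shape, W2.shape, z2.shape)
```

The matrix products only line up because each W's column count equals the previous layer's size, which is exactly why W^{[1]} must be (4, 3) here.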

Here’s another thread which discusses the material on that slide.

Got it, thanks