The dimension of the second-layer weight matrix W^{[2]} is (1, 4), and that of b^{[2]} is (1, 1), as shown in the screenshot below, circled in red.

Why is that?

What are W^{[2]} and b^{[2]}?

We have a 2-layer network here. The input has 3 elements. The first layer has 4 neurons. The second layer has 1 output neuron.

So that means the dimension of W^{[1]} needs to be 4 x 3 and b^{[1]} is 4 x 1.

Then for the second layer, we have 4 inputs and one output, so W^{[2]} needs to be 1 x 4 and b^{[2]} needs to be 1 x 1.

This is all determined by “dimensional analysis” on the “linear activation” equations for the two layers:

Z^{[1]} = W^{[1]} \cdot X + b^{[1]}

and

Z^{[2]} = W^{[2]} \cdot A^{[1]} + b^{[2]}
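The dimensional analysis above can be verified numerically. Here is a minimal NumPy sketch (the variable names and the tanh activation are illustrative choices, not taken from the course code) showing that the stated shapes make both equations well-defined:

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((3, 1))   # input: 3 features, 1 example
W1 = rng.standard_normal((4, 3))  # layer 1: 4 neurons, each seeing 3 inputs
b1 = np.zeros((4, 1))
W2 = rng.standard_normal((1, 4))  # layer 2: 1 neuron, seeing 4 inputs
b2 = np.zeros((1, 1))

Z1 = W1 @ X + b1    # (4, 3) @ (3, 1) + (4, 1) -> (4, 1)
A1 = np.tanh(Z1)    # activation is elementwise, so the shape is unchanged
Z2 = W2 @ A1 + b2   # (1, 4) @ (4, 1) + (1, 1) -> (1, 1)

print(Z1.shape, Z2.shape)  # (4, 1) (1, 1)
```

Any other shape for W^{[2]} or b^{[2]} would make the second matrix product or the addition fail to broadcast.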

Thank you so much! So the reason for the dimension of W^{[2]} is the way the picture is drawn, correct? If the example had three neurons in the first layer and two neurons in the second layer, the dimension of W^{[2]} would be (2, 3), and the dimension of b^{[2]} would be (2, 1), correct? Just wanted to ensure that the given dimension is about the specific figure, not a general rule.
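As a quick sanity check of that generalization, here is a small NumPy sketch (shapes and values are made up for illustration) with three neurons in the first layer and two in the second:

```python
import numpy as np

rng = np.random.default_rng(1)

n_x, n1, n2 = 3, 3, 2                 # input size, layer-1 neurons, layer-2 neurons
X = rng.standard_normal((n_x, 1))
W1 = rng.standard_normal((n1, n_x))   # (3, 3)
b1 = np.zeros((n1, 1))                # (3, 1)
W2 = rng.standard_normal((n2, n1))    # (2, 3) -- as predicted
b2 = np.zeros((n2, 1))                # (2, 1) -- as predicted

Z2 = W2 @ np.tanh(W1 @ X + b1) + b2
print(W2.shape, b2.shape, Z2.shape)   # (2, 3) (2, 1) (2, 1)
```

In general, W^{[l]} has shape (n^{[l]}, n^{[l-1]}) and b^{[l]} has shape (n^{[l]}, 1), where n^{[l]} is the number of neurons in layer l.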

Yes!

Cheers,

Raymond