Hi,

When I looked at the weight matrix dimensions in TensorFlow, they seem to follow [n_l-1, n_l], where ‘n_l-1’ is the input layer dimension and ‘n_l’ is the hidden layer dimension.

From the lectures, I was expecting [n_l, n_l-1]. So is TensorFlow taking the transpose of the weights during the calculation, or doing an elementwise multiplication with the input? Or am I missing something?

I just tried manually taking the dot product between the inputs and the initialized weight matrix and comparing the outputs with the model prediction; they are the same. So I am somewhat confused about the dimension conventions in TensorFlow. From the early lectures I thought the input dimension was (n_x, m), but in TensorFlow it seems to be (m, n_x), where ‘n_x’ is the input length (here 4) and ‘m’ is the number of examples.
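For what it's worth, here is a minimal NumPy sketch (deliberately not using TensorFlow itself) of what I think is going on: the two conventions are just transposes of each other, so both give the same outputs. The shapes here (n_x = 4 inputs, m = 3 examples, 2 hidden units) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_x, n_units = 3, 4, 2

# TensorFlow-style convention: rows are examples.
X_tf = rng.standard_normal((m, n_x))        # inputs, shape (m, n_x)
W_tf = rng.standard_normal((n_x, n_units))  # kernel, shape (n_l-1, n_l)
b = rng.standard_normal((n_units,))

out_tf = X_tf @ W_tf + b                    # shape (m, n_units)

# Lecture-style convention: columns are examples.
X_lec = X_tf.T                              # shape (n_x, m)
W_lec = W_tf.T                              # shape (n_l, n_l-1)
out_lec = W_lec @ X_lec + b[:, None]        # shape (n_units, m)

# Same numbers, just transposed layouts.
print(np.allclose(out_lec.T, out_tf))
```

So (if I understand it right) TensorFlow isn't taking a transpose at run time; it simply stores the kernel as [n_l-1, n_l] and batches examples along the rows, which is the transpose of the lecture notation end to end.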

Thanks