week-3
Is (n_x, m) basically (number of features, number of training examples)?
So, basically, hidden units = features, and the total is how many training examples you have?
Yes, the data is represented with “samples” as the second dimension. The inputs are vectors of dimension n_x x 1, where n_x is the number of features, and you have m of those vectors. In the example where the inputs are 64 x 64 RGB images, each input vector has 64 * 64 * 3 = 12288 pixel values.

Then in the hidden layers, the input vectors will have a number of entries equal to the number of output neurons in the previous layer. So it’s not quite right to call those “features” anymore: they are the outputs of the previous layer’s neurons, so they aren’t really “pixel” values anymore (in the case that the inputs are images).
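The shapes described above can be sketched in NumPy. This is a minimal illustration, assuming m = 5 example images and a hypothetical hidden layer of 4 units; the random data and layer size are not from the thread, just placeholders to show the dimensions:

```python
import numpy as np

# Hypothetical batch: m RGB images of 64 x 64 pixels, one per row.
m = 5
images = np.random.rand(m, 64, 64, 3)

# Flatten each image into a column of n_x = 64 * 64 * 3 = 12288 features,
# then transpose so "samples" is the second dimension: X has shape (n_x, m).
X = images.reshape(m, -1).T
print(X.shape)  # (12288, 5)

# A hidden layer with 4 neurons maps X to activations of shape (4, m).
# These activations are neuron outputs, not pixel "features" anymore.
W1 = np.random.randn(4, X.shape[0])
b1 = np.zeros((4, 1))
A1 = np.tanh(W1 @ X + b1)
print(A1.shape)  # (4, 5)
```

Each column of `X` (and of `A1`) corresponds to one training example, which is what keeps the matrix algebra consistent from layer to layer.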