Am I the only one adding "+1" to the number of output units?

For the final dense layer of my model I'm using units=len(np.unique(training_labels))+1. If I don't add the +1, I get a nan loss and very low accuracy (around 0.04). Why does the model need the +1? Is it because there's a gap in the labels? I noticed the unique values "skip" number 9 (see below).
np.unique(training_labels) yields array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24.]), which has a length of 24
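
Here's a minimal snippet reproducing the mismatch (the label array is rebuilt by hand to match the output above):

```python
import numpy as np

# Labels 0..24 with 9 missing, matching the array printed above
training_labels = np.array([l for l in range(25) if l != 9], dtype=float)

print(len(np.unique(training_labels)))  # 24 unique values...
print(int(training_labels.max()))       # ...but the largest label is 24
```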

Thank you for your thoughts on this

For an input image, the last layer outputs the predicted probability of each class, with one unit per class index starting at 0.
The maximum class label in this case is 24, so the last layer needs 25 units: one for each index 0 through 24, including the unused index 9.
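
Here is a rough sketch of why, assuming the model is trained with sparse categorical crossentropy (which would also explain the nan loss): per example the loss is -log(p[label]), so the label is used directly as an index into the output vector.

```python
import numpy as np

# Sparse categorical crossentropy per example is -log(p[label]),
# so a label of 24 indexes position 24 of the output vector.
label = 24

probs_24 = np.full(24, 1 / 24)  # 24-unit softmax: valid indices are 0..23
# probs_24[label] would raise IndexError -- index 24 does not exist

probs_25 = np.full(25, 1 / 25)  # 25-unit softmax: valid indices are 0..24
print(-np.log(probs_25[label]))  # well defined: ~3.22
```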

So it assumes the label encodings are contiguous indices without gaps, right?

That is correct. Use np.max(training_labels) + 1 rather than counting unique values.
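
Something along these lines (the hidden layer and compile settings are placeholders, assuming a tf.keras setup):

```python
import numpy as np
from tensorflow import keras

training_labels = np.array([l for l in range(25) if l != 9], dtype=float)

# Size the output by the largest label, not the number of unique labels,
# so gaps in the encoding can't produce out-of-range indices
num_classes = int(np.max(training_labels)) + 1  # 25 here

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu"),  # placeholder hidden layer
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```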


Thank you for the explanation