Loss = NaN when training

Hello,
when I’m training my model, the loss comes out as NaN (not a number):

I don’t know how to debug this

Is your loss function correct?

you mean sparse_categorical_crossentropy?

oh gosh… yeah, I was so convinced we were still doing rock paper scissors instead of hand sign recognition… replacing 3 with 26 did the job
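To see why an undersized output layer blows up the loss, here is a minimal numpy sketch of what `sparse_categorical_crossentropy` computes (this is an illustration, not the Keras implementation). With integer labels, the loss looks up the predicted probability at each label's index, so every label must be strictly less than the number of output units. In this numpy version an out-of-range label raises an `IndexError` outright; in TensorFlow, depending on backend and device, the invalid lookup can instead surface silently as a NaN loss, which matches the symptom above.

```python
import numpy as np

def sparse_categorical_crossentropy(logits, labels):
    """Per-example cross-entropy from integer labels (numpy sketch)."""
    # Softmax over the last axis (shifted by the max for numerical stability).
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    # Look up the predicted probability of each example's true class.
    return -np.log(probs[np.arange(len(labels)), labels])

# 3 output units (rock/paper/scissors), but hand-sign labels go higher:
logits = np.random.randn(4, 3)
labels = np.array([0, 1, 2, 25])  # label 25 has no matching output unit
try:
    sparse_categorical_crossentropy(logits, labels)
except IndexError as e:
    print("label out of range:", e)
```

With a large enough output layer (here, at least 26 units for labels up to 25), every lookup is valid and the loss is finite.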


Just to comment on this a little further…
Yes, there are 26 letters in the English alphabet, but I noticed that only 24 unique labels are actually used: the letters J and Z are omitted (in sign language those two letters involve motion, so they can’t be captured in a static image). Because of this technicality, and because the highest label value is 24, setting the output size to 25 should technically still work.
This is essentially a useless observation for our use case though.
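To make the counting above concrete, here is a tiny numpy sketch. The label values are assumed from the description in this thread: indices 0 through 24, with J’s slot (index 9, counting A = 0) never occurring and Z (25) absent entirely. `sparse_categorical_crossentropy` only requires every label index to be strictly less than the number of output units, so the minimum is `labels.max() + 1 = 25` even though only 24 classes occur.

```python
import numpy as np

# Assumed label set, per the discussion: 0..24 with index 9 (J) unused.
labels = np.array([i for i in range(25) if i != 9])

num_classes = len(np.unique(labels))   # 24 distinct letters actually occur
min_output_units = labels.max() + 1    # but 25 units are needed to cover index 24
print(num_classes, min_output_units)
```

So 26 output units (as in the fix above) also works fine; the unused slots simply learn to predict near-zero probability.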