I am working on a sign language classification problem and have been provided with a dataset of hand sign images. While exploring the data I noticed that a couple of the images are out of focus. To make matters worse, it isn't the entire image that is out of focus but only certain parts, right where the hand signs are located, almost as if it was deliberate.

Is it a good idea to enhance those images, and what methods can be used to do that? Or should I just leave them as they are and hope my neural network can still learn some patterns from them? I have attached one image for reference. Thanks in advance.
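
For context, the kind of simple enhancement I was wondering about is something like an unsharp-mask pass. This is only a rough sketch using OpenCV, and the file names are placeholders, not part of my actual pipeline:

```python
import cv2

# Hypothetical example image from the dataset
img = cv2.imread("blurry_hand_sign.jpg")

# Low-pass (blurred) version of the image; sigma chosen arbitrarily here
blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)

# Unsharp mask: boost the original by subtracting part of the blurred copy,
# i.e. sharpened = img + 0.5 * (img - blurred)
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)

cv2.imwrite("sharpened_hand_sign.jpg", sharpened)
```

I'm not sure whether this sort of sharpening actually helps a classifier or just amplifies noise, which is part of what I'm asking.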