After completing Course 4: Convolutional Neural Networks, I feel confident applying CNNs to real-world problems and have started building a model for a practical image classification task involving four classes.
One challenge is that the region of interest is in motion, causing motion blur confined to that area; the rest of the image is static and sharp. Despite the blur, humans can still recognize and identify the object correctly, which gives me hope that a model can do the same.
I’d appreciate guidance on the following:
- Should I crop to the motion-blurred region, even if it’s degraded, or retain the full image for context?
- Are there recommended preprocessing or data augmentation strategies to help the network learn from blurred inputs?
- Are there off-the-shelf models that are known to perform well in cases of partial image degradation (e.g., motion blur) where transfer learning would be effective?
- Any suggestions on architectures that handle this kind of distortion well?
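To make the augmentation question concrete, here is a minimal sketch of the kind of synthetic motion blur one could apply to clean training images so the network sees blur-like inputs during training. This is only an illustrative NumPy implementation; the function name, the horizontal-only direction, and the kernel size are my own assumptions, not something from the course:

```python
import numpy as np

def motion_blur_horizontal(img: np.ndarray, kernel_size: int = 9) -> np.ndarray:
    """Simulate horizontal motion blur by averaging each pixel with its
    kernel_size horizontal neighbors (borders handled by edge clipping).

    Illustrative sketch only: a real pipeline would likely randomize the
    blur direction and length per sample as an augmentation step.
    """
    h, w = img.shape[:2]
    out = np.zeros_like(img, dtype=np.float64)
    half = kernel_size // 2
    for dx in range(-half, half + 1):
        # shift the image horizontally, clamping indices at the borders
        cols = np.clip(np.arange(w) + dx, 0, w - 1)
        out += img[:, cols]
    return (out / kernel_size).astype(img.dtype)
```

In practice this would be applied randomly (varying direction and strength) to a fraction of training images, ideally only over the region of interest, so the distribution of blur at training time resembles what the deployed model will see.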
Thank you so much!