Saturating Image Classification Benchmarks

During C4 W2 lectures we learned about image classification models trained and evaluated on the ImageNet dataset. The plot of top-5 accuracy on ImageNet over time (link below) shows that AlexNet, VGG, and ResNet all contributed big gains, but progress has slowed considerably since late 2019, with top-5 accuracy currently sitting around 99%. Are there newer image classification benchmarks that current models aren't yet able to saturate on accuracy?

Hey @Marco_Morais,
Why explore other benchmarks when existing models still have plenty of room to improve on ImageNet itself? Although accuracy is saturated on top-5 predictions, even the best models in this comparison haven't moved beyond 92% top-1 accuracy. There are also many other image classification benchmarks, which you can find here and explore. I hope this helps.
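For anyone unsure about the top-1 vs top-5 distinction being discussed here, a minimal sketch of how the two metrics are computed (toy scores and class labels are made up purely for illustration):

```python
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    """Fraction of examples whose true label is among the k highest-scoring classes."""
    # Indices of the k largest scores per example (order within the top k doesn't matter)
    top_k = np.argsort(logits, axis=1)[:, -k:]
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Toy scores for 3 examples over 4 classes (illustrative numbers only)
logits = np.array([
    [0.1, 0.2, 0.6, 0.1],   # true class 2 -> top-1 hit
    [0.5, 0.3, 0.1, 0.1],   # true class 1 -> top-1 miss, top-2 hit
    [0.2, 0.2, 0.2, 0.4],   # true class 0 -> miss either way
])
labels = np.array([2, 1, 0])

print(top_k_accuracy(logits, labels, k=1))  # 1/3
print(top_k_accuracy(logits, labels, k=2))  # 2/3
```

The gap between the two numbers is exactly why a model can look saturated on top-5 while still having real headroom on top-1.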


Hey @Elemento,
I see your point that there is still room for improvement in top-1 accuracy, but it still strikes me that both the top-5 and top-1 benchmarks have stalled. My point of view comes from systems benchmarks such as MLPerf, where we expect to see improvements every year and the metric serves as a proxy for progress in the technology. I did find some other benchmarks, such as ObjectNet, that are still showing recent improvements.