What’s the Most Damaging Annotation Pitfall You’ve Faced in an AI Project?

We recently analyzed some common (but often overlooked) pitfalls in data annotation that can derail AI model performance — from missing labels and midstream tag changes to annotator bias and oversized tag lists.

One mistake we’ve seen repeatedly: adding new tags mid-process. It breaks label consistency across the dataset and usually forces re-annotation of everything labeled before the change.
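One lightweight safeguard is to freeze the tag schema before annotation starts and validate incoming records against it, so any tag introduced midstream gets flagged for review instead of silently mixing into the dataset. A minimal sketch (all names and the record format here are hypothetical, not from any specific tool):

```python
# Hypothetical example: guard annotation records against a frozen tag schema.
FROZEN_TAGS = {"person", "vehicle", "traffic_sign"}  # locked before annotation begins

def validate_annotations(records, allowed_tags=FROZEN_TAGS):
    """Split records into (valid, flagged); flagged ones use out-of-schema tags."""
    valid, flagged = [], []
    for rec in records:
        unknown = set(rec["tags"]) - allowed_tags
        if unknown:
            flagged.append((rec["id"], sorted(unknown)))  # needs review
        else:
            valid.append(rec["id"])
    return valid, flagged

records = [
    {"id": 1, "tags": ["person"]},
    {"id": 2, "tags": ["vehicle", "bicycle"]},  # "bicycle" slipped in midstream
]
valid, flagged = validate_annotations(records)
print(flagged)  # out-of-schema records to resolve before formally extending the schema
```

If the team then decides the new tag is genuinely needed, it can be added deliberately, with the flagged (and earlier) records queued for re-annotation under the updated schema.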

We compiled a list of top issues and practical fixes here:
Blog – Data Annotation Pitfalls & How to Prevent Them

Curious — which annotation issue has cost your team the most time or accuracy? And how did you fix it?