Automating the labeling process for supervised learning

In my experience, one of the bottlenecks when carrying out a supervised learning project is the reliability of the labeled samples. It is not easy to maintain a team of labelers, so is there any practical advice for making this process more efficient and reliable than doing it entirely by hand? For instance, could I look at the feature distributions per label and flag samples that fall outside the expected distribution for their assigned label?
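To make the distribution idea concrete, here is a minimal sketch of what I have in mind (all names and the toy data are hypothetical, and the simple per-feature z-score rule is just one possible choice; a density- or model-based check could be substituted):

```python
import numpy as np

def flag_label_outliers(X, y, z_threshold=3.5):
    """Flag samples whose features lie far outside the per-feature
    distribution of their assigned label (candidate mislabels)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    flagged = np.zeros(len(y), dtype=bool)
    for label in np.unique(y):
        mask = y == label
        mu = X[mask].mean(axis=0)
        sigma = X[mask].std(axis=0) + 1e-9  # avoid division by zero
        z = np.abs((X[mask] - mu) / sigma)
        # flag a sample if any of its features exceeds the z-score threshold
        flagged[np.where(mask)[0][z.max(axis=1) > z_threshold]] = True
    return flagged

# toy data: two well-separated clusters, with sample 0 deliberately mislabeled
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[0] = 1  # wrong label: this point belongs to cluster 0
print(np.where(flag_label_outliers(X, y))[0])  # sample 0 should be among the flagged indices
```

Flagged samples would then go back to human labelers for review rather than being relabeled automatically, which keeps the manual effort focused on the suspicious cases.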