Hi,
In one of the videos (here: https://www.coursera.org/learn/machine-learning-modeling-pipelines-in-production/lecture/hBfzT/measuring-fairness at 2:58), the lecturer defines AUC as "the percentage of datapoints which are correctly labeled when each class is given an equal weight, independent of the number of samples".
I used to think of AUC as a measure of how well predictions are ranked. Wikipedia defines AUC as "the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative')", which seems a bit different from the lecturer's definition, at least because the Wikipedia definition doesn't require a prediction threshold separating the classes to be defined.
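To make the difference concrete, here is a small sketch (with made-up toy scores and labels, and an assumed threshold of 0.5) computing both quantities: the rank-based AUC from the Wikipedia definition, and an equal-class-weight accuracy (what sklearn calls balanced accuracy), which is what the lecturer's wording suggests to me. The two come out different on this data:

```python
# Toy scores and labels (illustrative data I made up, not from the lecture).
labels = [0, 0, 0, 1, 1]
scores = [0.1, 0.6, 0.35, 0.8, 0.7]

# Wikipedia definition: probability that a randomly chosen positive
# is ranked above a randomly chosen negative (threshold-free).
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# Lecturer-style reading: per-class accuracy averaged with equal class
# weight ("balanced accuracy"), which needs a threshold (0.5 assumed here).
preds = [1 if s >= 0.5 else 0 for s in scores]
tpr = sum(p == 1 for p, y in zip(preds, labels) if y == 1) / len(pos)
tnr = sum(p == 0 for p, y in zip(preds, labels) if y == 0) / len(neg)
balanced_acc = 0.5 * (tpr + tnr)

print(auc)           # 1.0  -- every positive scores above every negative
print(balanced_acc)  # 0.8333... -- one negative crosses the 0.5 threshold
```

Here the ranking is perfect (AUC = 1.0) while the equal-weight accuracy at threshold 0.5 is only ~0.83, so the two definitions clearly don't measure the same thing on this example.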
Do you think the Wikipedia definition and the one given by the lecturer align well / are consistent with each other?