Chance of accidentally classifying correctly in a binary classification problem

Since a binary classification problem has only two possible outputs, can't even an untrained model correctly classify half of the training set? If so, doesn't this affect the accuracy of the model?

Yes, even a random choice would be correct half the time.

No, this doesn’t affect the accuracy.

Right! Or to give another example, if you’re predicting the results of a coin flip, you can predict “Heads” always and you’ll be right roughly 50% of the time in a reasonably sized sample, but how is that interesting? 50% accuracy is not going to be useful in most applications. :smiley:
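To make that baseline concrete, here's a minimal sketch (plain Python with made-up sample size, not from the course materials) of a "model" that always predicts "Heads" on fair coin flips. Its accuracy hovers around 50%, exactly the chance level being discussed:

```python
import random

random.seed(0)
flips = [random.choice(["Heads", "Tails"]) for _ in range(10_000)]

# A "model" that ignores its input and always predicts Heads
predictions = ["Heads"] * len(flips)

accuracy = sum(p == f for p, f in zip(predictions, flips)) / len(flips)
print(f"Constant-Heads baseline accuracy: {accuracy:.3f}")  # ~0.50
```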

What I meant was that, since half of the results on the training set are correct only by chance, wouldn't this render half of the training set useless, say compared to regression, since the false positives do not improve the model?

No, it does not. All of the examples are correctly labeled, so we can learn the weights and biases that give the best predictions.

Yes, and the point is that the false positives are actually useful: they generate very large error values, which in turn generate large gradients that push the parameters in the direction of a better solution. Of course it may take quite a few iterations of training to get to correct predictions, but this is the fundamental way that training ("machine learning") takes place.
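Here's a small sketch of that effect for a single logistic regression example (NumPy, with hypothetical weights and inputs chosen just for illustration). For binary cross-entropy, the gradient with respect to the weights is (p - y) * x, so a confidently wrong prediction produces a much larger loss and gradient than a correct one, and the misclassified examples dominate the parameter updates:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, b, x, y):
    """Binary cross-entropy loss and its gradient w.r.t. w for one example.

    For logistic regression, dL/dz = p - y, so dL/dw = (p - y) * x.
    """
    p = sigmoid(np.dot(w, x) + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_w = (p - y) * x
    return loss, grad_w

# Hypothetical weights and a single feature vector, for illustration only
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.5, 0.5])  # model predicts p = sigmoid(2.5) ~ 0.92 for this input

# Same input, two possible labels: one the model gets right, one it gets wrong
for y in (1, 0):
    loss, grad_w = loss_and_grad(w, b, x, y)
    print(f"y={y}: loss={loss:.3f}, |grad|={np.linalg.norm(grad_w):.3f}")
```

Running this prints a loss of about 0.08 for the correctly classified label and about 2.58 for the misclassified one, with a gradient roughly an order of magnitude larger in the misclassified case.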