Happy to explain!
Bayes error doesn’t directly determine bias or variance. It is the lowest error any model could possibly achieve on a given problem, so it gives us a reference point: when we see something like 99% accuracy, Bayes error analysis lets us compare the result we got against the best result that is actually achievable, rather than against a perfect 100%.
If the Bayes error is non-zero, the two classes/events overlap to some degree, so even the best possible model will make some wrong predictions. (Prof. Ng mentions this in the course videos, though I’m not sure in which one.)
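To make that concrete, here is a small sketch (my own illustration, not from the course) with two overlapping 1-D Gaussian classes. Even the optimal decision rule, thresholding at the midpoint, is stuck at a non-zero error rate:

```python
import math
import random

def bayes_error_two_gaussians(mu0, mu1, sigma):
    # Equal-prior classes with the same sigma: the optimal threshold is the
    # midpoint, and the Bayes error is the tail mass each class leaks past it.
    d = abs(mu1 - mu0) / (2 * sigma)
    return 0.5 * (1 + math.erf(-d / math.sqrt(2)))  # standard normal CDF at -d

# Analytic Bayes error for classes N(0, 1) vs N(2, 1)
analytic = bayes_error_two_gaussians(0.0, 2.0, 1.0)

# Monte Carlo check: even the *optimal* classifier makes mistakes
random.seed(0)
n, errors = 100_000, 0
for _ in range(n):
    label = random.random() < 0.5
    x = random.gauss(2.0 if label else 0.0, 1.0)
    pred = x > 1.0                      # optimal decision rule: midpoint threshold
    errors += (pred != label)

print(f"analytic  Bayes error ≈ {analytic:.4f}")   # ≈ 0.1587
print(f"simulated error       ≈ {errors / n:.4f}")
```

So on this toy problem no model, however good, can do better than about 84% accuracy — that 16% is the irreducible floor.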
There are many possible reasons for a dataset to have a non-zero Bayes Error. For example:
Poor data quality: Some images in a computer vision dataset are very blurry.
Mislabelled data: The labelling process is inconsistent.
Limited or badly split data: The dataset is too small, or divided into train/dev/test sets in a non-representative way.
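Mislabelled data alone already puts a floor under the achievable accuracy. A tiny sketch of this (the 5% noise rate is an assumption for illustration): even a model that always predicts the true class cannot score 100% when graded against noisy labels.

```python
import random

random.seed(1)
noise_rate = 0.05                    # assumed fraction of mislabelled examples
n = 100_000

correct = 0
for _ in range(n):
    true_label = random.random() < 0.5
    # The labelling process flips the recorded label with probability noise_rate
    recorded = true_label if random.random() >= noise_rate else not true_label
    prediction = true_label          # a *perfect* classifier predicts the truth
    correct += (prediction == recorded)

print(f"accuracy of a perfect model ≈ {correct / n:.3f}")  # ≈ 0.95, never 1.0
```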
So imagine we get 99% accuracy for a model built on feature A, but there is high variance or bias in the dataset.
See the image below:
Bayes error analysis would then check whether the dataset has another feature B that gives similar predictions with or without feature A, and whether the resulting error is non-zero.
The reason for this kind of error analysis is that sometimes a model shows great results after training, but when tested on a similar dataset its performance is much poorer than the training results suggested.
At that point we can be fairly sure that the model which showed 99% accuracy is overfitting, or doesn’t have enough data to bring down the variance or bias behind that 99% figure.
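This is also where Bayes error plugs into the bias/variance diagnosis from the course: the gap between training error and Bayes error is the avoidable bias, and the gap between dev error and training error is the variance. A minimal sketch (the numbers below are made up for illustration):

```python
def diagnose(bayes_error, train_error, dev_error):
    """Split a model's error into avoidable bias and variance."""
    avoidable_bias = train_error - bayes_error   # how far from the best achievable
    variance = dev_error - train_error           # how much we overfit the training set
    focus = "bias" if avoidable_bias >= variance else "variance"
    return avoidable_bias, variance, focus

# A model with 99% training accuracy (1% training error) can still be overfitting:
bias_gap, var_gap, focus = diagnose(bayes_error=0.005,
                                    train_error=0.01,
                                    dev_error=0.10)
print(focus)  # variance: the 9% dev gap dwarfs the 0.5% avoidable bias
```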
Regards
DP