Yes, real-world datasets are not balanced. But we introduced the concept of the “log prior,” which does a great job of dealing with unbalanced sets. So why are unbalanced sets said to be a problem for Naive Bayes?
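For context, a minimal sketch of the log prior referred to above (the counts here are hypothetical, just to illustrate the effect of imbalance):

```python
import math

# Hypothetical toy corpus: 50 positive tweets, 950 negative tweets
n_pos, n_neg = 50, 950

# Log prior as used in Naive Bayes sentiment classification:
# logprior = log P(pos) - log P(neg) = log(n_pos / n_neg)
logprior = math.log(n_pos / (n_pos + n_neg)) - math.log(n_neg / (n_pos + n_neg))

# Strongly negative here, so the prior term pulls every prediction
# toward the majority (negative) class unless the likelihoods outweigh it.
print(round(logprior, 3))  # -2.944
```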
Yes, I also only remember positional independence of features (e.g., of words within a tweet) being listed as the second assumption of Naive Bayes, not the relative frequencies of examples/samples.
That’s an interesting question, actually. You might be interested in this research: “An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics,” which compares three methods for improving classification on unbalanced data. I hope it helps.
Great question. I think you may find this article helpful: Naive Bayes Classifier: Pros & Cons, Applications & Types Explained | upGrad blog