So I was intrigued by the little note at the bottom about the "Bayes error".
ChatGPT was asked:
What is the Bayes error?
and then, after digesting the answer, ChatGPT was asked:
The key is "assuming perfect knowledge of the probability distribution of the data", which means that given X, the input, we can somehow look up the distribution P(Y = y | X). Isn't this idea epistemologically very suspect?
and then, after digesting that answer,
It feels like AIXI, the perfect learning agent, which runs an algorithm based on a noncomputable number.
Much instructive fun was had, with the conclusion:
Determining the Bayes error, i.e. the error of the best possible classifier, needs infinite data (or a perfect oracle).
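To make the point concrete (this is my own toy sketch, not anything from the course): when the distribution is tiny and fully known, the Bayes error is a one-line computation, because the Bayes-optimal classifier just predicts the most probable label at each x. The numbers below are made up for illustration; the trouble with real data is precisely that we never have these posteriors.

```python
# Toy sketch with a fully known (hypothetical) distribution.
# X is uniform over {0, 1}; p[x] = P(Y=1 | X=x).
p = {0: 0.2, 1: 0.9}
p_x = {0: 0.5, 1: 0.5}

# The Bayes-optimal classifier predicts argmax_y P(Y=y | X=x),
# so its error probability at x is min(p[x], 1 - p[x]).
bayes_error = sum(p_x[x] * min(p[x], 1 - p[x]) for x in p)
print(bayes_error)  # 0.5*0.2 + 0.5*0.1, i.e. approximately 0.15
```

With real data we only see samples (x, y), never P(Y | X) itself, which is exactly why the Bayes error is an existence-style concept rather than something you can compute.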
I'm probably missing a lot of the point of your post (which I'm guessing is meant more to be about the joys of chatting with ChatGPT), but where does Prof Andrew ever say that the Bayes Error is actually computable?
We're doing math here: sometimes there are things that can be proven to exist, but the proof gives you precisely zero help in actually finding the aforesaid provably existent thing. That's not a perfect analogy for the Bayes Error specifically, just a general point about math theorems. The UAT (universal approximation theorem) is maybe the better instance of my first sentence in this paragraph.
Actually, Prof. Ng doesn't say anything at all about the Bayes Error, except that this concept exists.
But once it is explained, it becomes clear that this is one of those perfect concepts that just can't be had in practice but are useful in argumentation.
Cool! Then I think we are saying the same thing. 