Complementary questions for ChatGPT on the "Bayes Error" 😂

So I was intrigued by the little note at the bottom about the “Bayes error”.

ChatGPT was asked:

What is the Bayes error?

and then, after digesting the answer, ChatGPT was asked:

The key is “assuming perfect knowledge of the probability distribution of the data”, which means that given X, the input, we can somehow look up the conditional distribution P(Y = y | X). Isn’t this idea epistemologically very suspect?

and then, after digesting that answer,

It feels like AIXI, the theoretically perfect learning agent, which runs an algorithm based on an incomputable quantity.

Much instructive fun was had, with the conclusion:

Determining the Bayes error requires infinite data (or a perfect oracle for the true distribution), because it is the error rate of the best possible classifier.
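To make that concrete for myself, here is a toy sketch (my own illustration, not anything from the course): if we *choose* the data distribution ourselves, we get to play the perfect oracle, we can evaluate P(Y = y | X) exactly, and the Bayes error falls right out. Outside such toy worlds, that oracle is exactly what we never have.

```python
import numpy as np
from scipy.stats import norm

# Toy problem (my own choice of distribution): two equally likely classes,
#   X | Y=0 ~ N(-1, 1)   and   X | Y=1 ~ N(+1, 1).
# Because we picked the distribution, Bayes' rule gives P(Y = y | X = x) exactly.

def posterior_class1(x):
    """P(Y=1 | X=x) under the known toy distribution."""
    p0 = 0.5 * norm.pdf(x, loc=-1.0, scale=1.0)
    p1 = 0.5 * norm.pdf(x, loc=+1.0, scale=1.0)
    return p1 / (p0 + p1)

# Bayes error = E_X[ 1 - max_y P(Y = y | X) ], estimated here by Monte Carlo.
rng = np.random.default_rng(0)
n = 1_000_000
y = rng.integers(0, 2, size=n)                 # Y ~ Bernoulli(0.5)
x = rng.normal(loc=2.0 * y - 1.0, scale=1.0)   # X | Y as above

p1 = posterior_class1(x)
bayes_error_mc = np.mean(1.0 - np.maximum(p1, 1.0 - p1))

# In this symmetric case the Bayes classifier is "predict 1 iff x > 0",
# so the exact Bayes error is Phi(-1) ~ 0.1587, which the Monte Carlo
# estimate should land close to.
print(bayes_error_mc, norm.cdf(-1.0))
```

Replace the oracle posterior with “estimate P(Y | X) from a finite sample” and you are back at the epistemological problem above: the number is perfectly well defined, we just can’t look it up.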

I’m probably missing a lot of the point of your post (which I’m guessing is meant more to be about the joys of chatting with ChatGPT), but where does Prof. Ng ever say that the Bayes Error is actually computable?

We’re doing math here: sometimes there are things that can be proven to exist, but the proof gives you precisely zero help in actually finding the aforesaid provably existent thing. That’s not a perfect analogy for the Bayes Error specifically; I’m just making the general point about math theorems. The Universal Approximation Theorem (UAT) is maybe the better instance of my first sentence in this paragraph.


Actually Prof. Ng doesn’t say anything at all about the Bayes Error, except that this concept exists.

But once it is explained, it becomes clear that this is one of those perfect concepts that just can’t be had in practice but are useful in argumentation.


Cool! Then I think we are saying the same thing. :nerd_face:


We certainly are! :sunglasses:
