Do LLMs hallucinate or confabulate?

So I came across an article in which a neuroscientist argues that LLMs confabulate rather than hallucinate, since confabulation is more closely related to the false, imagined expression of information.

I don’t agree with that statement, as I still feel hallucination is the right word: LLMs tend to forget, or fail to learn, everything in the data they are given, rather than falsely expressing it.

I wanted to get views from our community here: which word holds up better when it comes to LLMs providing incorrect information, do they hallucinate or confabulate?

Tagging some of the mentors I look forward to hearing from: @ai_curious, @paulinpaloalto, @arvyzukai, @Honza_Zbirovsky, @lmoroney

Regards
DP

2 Likes

Here are definitions I found on the websites of two pretty well-regarded US medical organizations, the Cleveland Clinic and the National Institutes of Health.

A hallucination is a false perception of objects or events involving the senses: sight, sound, smell, touch and taste. CC

Confabulation is a neuropsychiatric disorder wherein a patient generates a false memory without the intention of deceit. NIH

The latter seems like a better choice to me, at least based on this admittedly tiny sample.

This is also interesting from the same NIH page…

Two types of confabulation can be distinguished. Provoked confabulations can be discovered by directly questioning and prompting a false memory.[5] This type commonly correlates with an impairment in autobiographical and semantic memory such as dates, places, and common history. For example, one asks the patient, “Who was the forty-fourth president of the United States?” the patient would then reply incorrectly instead of responding with “I don’t know.”

This seems like what LLMs do…provide false information as part of the response to a question.

2 Likes

The reason I felt confabulation isn’t justified in relation to how LLMs learn is that it is described as an illness or disorder, whereas hallucination is termed a false perception based on an event.

When LLMs provide incorrect information because they are unable to learn every part of the data, that can’t be termed a false memory; it is rather a false or incomplete understanding of an event.

I think making stuff up when expected to answer correctly is indeed an illness, whether it’s a human or a machine doing it :slightly_smiling_face:

5 Likes

But here LLMs are not making stuff up; they are providing information based on what they have learnt, which can be incorrect, the same as a human endorsing information based on whatever knowledge or sources they happened to get it from. That cannot be an illness, rather an intellectual or machine misunderstanding!

The fact that the LLM does not understand but thinks it does is the illness I believe.

1 Like

Funny! But you’re raising a great point.

I think an argument can be made that this is exactly what LLMs do - all the time. Many times what they make up, eh, generate, is consistent with their training data. Other times it isn’t. In both cases they are making assertions of truth, but when it’s not actually true, that just seems to me to align better with the medical definition of confabulation. I think an important takeaway is that there is no guarantee that what an LLM outputs is accurate or true, regardless of how we label that phenomenon.
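
As a side note, you can see why there is no such guarantee by stripping generation down to its bare mechanics. Here is a minimal, purely illustrative sketch (a toy dictionary standing in for a real model, with made-up probabilities, borrowing the “44th president” example from the NIH quote above): the sampler picks whatever continuation is statistically plausible, and nothing in the loop checks whether it is true.

```python
import random

# Toy stand-in for a language model: for a given context, a distribution over
# possible next words, learned purely from statistics (numbers are made up).
next_word_probs = {
    "The 44th president of the United States was": [
        ("Obama", 0.70),
        ("Biden", 0.20),
        ("Lincoln", 0.10),
    ],
}

def generate(context: str) -> str:
    """Sample the next word in proportion to its probability.

    There is no step that checks whether the sampled word is factually
    correct -- only whether it is statistically plausible in this context.
    """
    words, weights = zip(*next_word_probs[context])
    return random.choices(words, weights=weights, k=1)[0]

# Run the same prompt several times: most samples happen to be right, some are
# fluent but wrong, and the sampling code cannot tell the difference.
print([generate("The 44th president of the United States was") for _ in range(10)])
```

Whether we call the wrong samples hallucination or confabulation, the mechanism producing them is the same.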

3 Likes

I think it’s concerning that we continue to reappropriate words that have similar-sounding meanings but very different contexts. I also think that we are attributing human-like qualities that just don’t exist. LLMs have neither senses nor memories. They have data. One day that may all change - but not today.

1 Like

A “patient”? Only a human being can be a patient.

Can an LLM validate its answers before giving them out? I guess “not lying” is something humans are taught at a very young age, by parents or at school. And unless I am being lazy, even when I feel I am right and accurate, I still try to get links for every fact, because there is no reason for people to take my word for it.

Not every proof has to be a link. There is a core known as “logic” that stems from “not lying”, but logic is too formal a term. Humans are able to use “common sense reasoning” to know what is true and what works without going to university and taking a course in formal logic. Even if such a course, and playing debate games, improves people’s ability to get down to the “point of conflict”, there are also a lot of streetwise folks who develop reasoning as a survival skill in their environment.

Actually, just having these two words for drawing distinctions and comparing them with what is going on is very helpful for understanding how LLMs work.

Does an LLM really hallucinate, or does it adapt to the new data fed to it, just like the human brain gets better with more knowledge and experience?

The reason I’m asking is the evolution of GPT: in the latest version, image-to-text and text-to-image data were included, along with audio and even “emotions” (lol, that’s how it was pitched in an interview :joy:). So I wanted to know from anyone who has worked on LLMs: does the model forget the previous data, or does it adapt to the new data while also keeping the old data in check? I suppose, because a newer LLM is fed data that is probably labelled with what is true, it is probably trying to improve on the old data rather than forgetting it??
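
To make the “forget vs adapt” part of my question concrete, here is a deliberately tiny, hypothetical toy (a linear model trained with PyTorch on made-up data, nothing like a real LLM training pipeline) showing how “forgetting” is usually measured: check performance on the old data before and after continued training on new data alone. My understanding is that real training runs typically mitigate this by mixing old and new data rather than training only on the new data.

```python
import torch
from torch import nn

torch.manual_seed(0)

# "Old" and "new" datasets drawn from two different underlying relationships
# (purely made-up numbers for illustration).
old_x = torch.randn(200, 4)
old_y = old_x @ torch.tensor([[1.0], [-2.0], [0.5], [3.0]])
new_x = torch.randn(200, 4)
new_y = new_x @ torch.tensor([[-3.0], [1.0], [2.0], [-0.5]])

model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def train(x, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

train(old_x, old_y)
print("loss on old data after training on old data:",
      loss_fn(model(old_x), old_y).item())

train(new_x, new_y)  # continued training on the new data only
print("loss on old data after training on new data:",
      loss_fn(model(old_x), old_y).item())  # typically much worse: "forgetting"
```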

@Deepti_Prasad I think part of the issue here is also in how the terminology shapes the general perception of the technology.

I mean, basically, the LLM makes up ‘knowledge’ and makes mistakes-- but that is not a ‘sexy’ thing for LLM companies to put in their sales pitch;

Yet ‘hallucinates’ both anthropomorphizes the perception of the tech and seemingly provides, to the layman, a conceptual reason for ‘why it is doing it’.

So a big part, I think, is in the sales pitch.

I mean the machine is not conscious, thus the only thing it can do is ‘make errors’; but saying ‘hallucination’ makes it sound like there is something ‘conscious’ in there-- which there isn’t.

1 Like