Does an LLM give the true reasoning behind its analysis, or is it just making things up?

When I ask an LLM to explain the rationale behind its previous answer (in the same prompt thread), does it provide the true reasoning process, or is it generating a post-hoc rationalization?

If it’s just bullshitting, doesn’t that mean an LLM is unsuitable for any kind of serious analysis task?

That is perhaps the most insightful question I have seen on the forums in a very long time.

Personally I do not have an informed opinion, as I have not studied the inner workings of this technology.

I am very interested in reading replies from those who know.

The LLM has learned from human discussions that include some form of genuine reasoning, so its output also reflects some form of that reasoning. These models additionally use reinforcement learning from human feedback as part of their training, to reward responses that align with what humans judge to be good answers.
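
For context on the reinforcement-learning point: RLHF commonly trains a reward model on pairs of responses ranked by humans, often using the Bradley-Terry preference formulation. Here is a minimal sketch of that formula (the scores are hypothetical, not from any real model or library):

```python
import math

def preference_probability(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry model: probability that a human rater prefers
    response A over response B, given scalar reward-model scores."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))

# Hypothetical scores: response A reads as better reasoned, so it is
# preferred roughly 77% of the time.
print(preference_probability(2.3, 1.1))  # ~0.77
```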

However, these models are generative and probabilistic by nature, so their answers include an element of chance, which can appear irrational compared with how the human mind has been conditioned.
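
To make the "probabilistic" point concrete, here is a minimal sketch of temperature sampling, the mechanism that lets the same prompt yield different continuations on different runs (plain NumPy with made-up logits, not any particular model's API):

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample a token index from model logits.

    With temperature > 0 the choice is stochastic: the most likely
    token usually wins, but lower-probability tokens still appear.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()        # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()          # softmax over the scaled logits
    return int(rng.choice(len(probs), p=probs))

# Hypothetical logits for four candidate tokens.
logits = [2.0, 1.0, 0.5, 0.1]
print([sample_next_token(logits) for _ in range(10)])
# e.g. [0, 0, 1, 0, 2, 0, 0, 1, 0, 0] -- varies from run to run
```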
