When I ask an LLM to explain the rationale behind its previous answer (in the same prompt thread), does it provide the true reasoning process, or is it generating a post-hoc rationalization?
If it’s just bullshitting, doesn’t that mean LLMs are unsuitable for any kind of serious analysis task?
The LLM has learned from human discussions, many of which contain genuine reasoning, so its output can reflect some form of that reasoning. These models also use reinforcement learning as part of their training process to align their responses with, and reward, human-like answers.
However, these models are generative and probabilistic by nature, so their output includes an element of chance that can seem irrational compared to the way the human mind has been conditioned to reason.
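That "element of chance" can be made concrete with a minimal sketch of temperature-based token sampling, the mechanism by which most LLMs turn scores into a chosen next token. The logit values here are toy numbers, not from any real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from raw logits using temperature scaling.

    Lower temperature sharpens the distribution (nearly deterministic);
    higher temperature flattens it, adding more chance to the output.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy logits for three hypothetical candidate tokens
toy_logits = [1.0, 5.0, 2.0]

# At very low temperature the highest-scoring token wins almost always;
# at higher temperature other tokens get sampled too.
low_t = sample_token(toy_logits, temperature=0.01)
high_t_samples = {sample_token(toy_logits, temperature=5.0) for _ in range(200)}
```

The same prompt can therefore yield different explanations across runs at nonzero temperature, which is one reason an LLM's stated rationale may be a plausible sample rather than a faithful trace of how the earlier answer was produced.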