Week 1: LLMs as a thought partner

In this video, Andrew mentioned that GenAI or LLMs can hallucinate, but he didn't elaborate further. I wonder what causes them to hallucinate, and what we can do about it?

Hallucination happens because of how neural networks work. Basically, a neural network (NN) is a transformation from inputs to outputs via a "function" that the network learned during its training phase. If the NN is fed an input it has not seen before, it still applies that learned "function" to produce an output, but sometimes that output doesn't make any sense - hence hallucination.

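For intuition, here is a tiny Python sketch (not a real LLM - the vocabulary, the "model", and the prompt are all made up for illustration). It shows why a next-token sampler always produces *some* output: the softmax turns whatever scores the network computes into a probability distribution that sums to 1, so the model must pick something even when it has no grounded knowledge of the question.

```python
import numpy as np

# Toy stand-in for an LLM's next-token step. All names/values are invented.
vocab = ["Paris", "London", "Tokyo", "1889", "1921", "unknown"]

def fake_model_logits(prompt: str) -> np.ndarray:
    # A real LLM computes these scores from learned weights; here we just
    # derive arbitrary numbers from the prompt to mimic an unfamiliar input.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=len(vocab))

def sample_next_token(prompt: str) -> str:
    logits = fake_model_logits(prompt)
    probs = np.exp(logits - logits.max())   # softmax: always a valid
    probs /= probs.sum()                    # probability distribution
    # The model has to choose a token from this distribution,
    # whether or not it actually "knows" the answer.
    return np.random.choice(vocab, p=probs)

# A question about something that never appeared in training data
# still yields a fluent-looking answer - that is the hallucination.
print(sample_next_token("In what year was the fictional Zorblatt Tower built?"))
```
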
If you take some of the further courses here, you will understand the process in more depth!
