In the Guidelines module of the ChatGPT Prompt Engineering course, there is a guideline called "Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion". In the accompanying example, ChatGPT initially gives an incorrect output, but after we instruct it to work out the solution on its own, it produces the correct one.
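
For concreteness, here is roughly what the two prompts look like (a minimal sketch assuming the `openai` Python client and an `OPENAI_API_KEY` in the environment; the arithmetic problem and the wrong student solution are placeholders, not the course's exact example):

```python
# Sketch of Tactic 2: make the model derive its own answer
# before judging the student's solution.
from openai import OpenAI

client = OpenAI()

question = "If apples cost $2 each, how much do 5 apples cost?"
student_solution = "5 apples x $2 = $12"  # deliberately wrong

# Naive prompt: the model tends to skim the student's work and agree with it.
naive_prompt = f"""
Determine if the student's solution is correct.

Question: {question}
Student's solution: {student_solution}
"""

# Tactic 2 prompt: force the model to work out its own solution first,
# then compare it against the student's before rendering a verdict.
improved_prompt = f"""
First work out your own solution to the problem.
Then compare your solution to the student's solution,
and only then decide whether the student's solution is correct.

Question: {question}
Student's solution: {student_solution}
"""

for prompt in (naive_prompt, improved_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(response.choices[0].message.content, "\n---")
```

With the naive prompt the model often just echoes agreement with the (wrong) student solution, while the improved prompt gets it to compute $10 itself and flag the error.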
My question is: why doesn't ChatGPT provide the correct response in the first place, given that the problem it is being asked to evaluate is fairly simple and straightforward?