Is it just me, or have you also noticed these issues with the ChatGPT model?

Hello everyone. I have been using ChatGPT lately and have noticed several issues and aberrations in its responses. It behaves in ways that make it hard to trust. Here are some of the issues:

  1. The model easily contradicts itself: if you ask something, it will give you a response, but if you push back and say that is not how it should be, it will change its answer to agree with your viewpoint, even if it has to contradict itself or go against the facts. I don’t know whether this is an inherent problem with the model or a side effect of how it is trained with reinforcement learning from human feedback (RLHF). In some cases this could be dangerous.
  2. The model drifts from its initial context over time: if you continue a very long conversation within the same chat, it starts to deviate from the topic you were originally discussing. You have to start a new chat to get a meaningful reply on that topic again; it simply fails to keep up the thread of the discussion after a while.
  3. The model avoids answering things in depth: another thing I have noticed is that the model does not like to explain things thoroughly. It feels like a lazy person trying to get out of a job. For a beginner trying to learn something new, this is very challenging, no matter how detailed the prompt.
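On point 2, one likely explanation is the model's fixed context window: once a conversation exceeds its token budget, the oldest turns are silently dropped, so the model literally no longer sees the original topic. Here is a minimal sketch of that truncation idea (the token counter and message strings are hypothetical stand-ins, not ChatGPT's actual implementation):

```python
# Hypothetical sketch of sliding-window context truncation: only the most
# recent messages that fit within a fixed token budget are kept, so the
# earliest turns (the original topic) eventually fall out of view.

def truncate_history(messages, token_budget, count_tokens=lambda m: len(m.split())):
    """Keep the newest messages whose total token count fits the budget.

    `count_tokens` here is a crude whitespace word count; real systems use a
    proper tokenizer, but the dropping behavior is the same.
    """
    kept = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > token_budget:
            break  # this message and everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "user: let's discuss Python decorators",       # the original topic
    "assistant: sure, decorators wrap functions",
    "user: now show me a logging decorator",
    "assistant: here is a long example ...",
]
# With a small budget, the earliest turns no longer fit, so the model
# would answer without ever "seeing" the original topic:
print(truncate_history(history, token_budget=12))
```

This is only an illustration of why starting a fresh chat "fixes" the drift: the new chat puts your topic back inside the window.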

There are other issues that I can’t recall at the moment, but I think these make ChatGPT somewhat unreliable; it can’t be trusted in many cases, especially when it contradicts itself just to agree with my viewpoint.

Have you experienced the same issues? What’s your experience with ChatGPT?

It’s just a language model, not a thinking machine.
Do not expect too much logical behavior from it.