The Guidelines video says (around minute 15) that instructing the model to first work out its own solution and then compare it to the student's solution should yield an assessment that the student's solution is incorrect. But when I run the Jupyter notebook, it says the solution is correct again. See screenshot as proof:
Which course are you attending?
You’ve posted in a general discussion area, not specific to any course.
Welcome, Anna. Thank you for posting a good question.
As your instructor pointed out, giving the chat model more time to process can often lead to better outcomes. Ideally, with step-by-step reasoning, your model should have identified the error in the student's maintenance fee calculation. Unfortunately, it missed that.
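For reference, the pattern the lesson teaches looks roughly like this. This is a minimal sketch: the prompt wording below is paraphrased from the technique, not copied from the notebook, and `build_grading_prompt` is a hypothetical helper name.

```python
# Sketch of the "work out your own solution first" prompting
# pattern from the lesson. Wording is paraphrased, not the
# notebook's exact text.

def build_grading_prompt(question: str, student_solution: str) -> str:
    """Ask the model to solve the problem itself before grading,
    so it does not simply skim and agree with the student."""
    return f"""\
Your task is to determine if the student's solution is correct or not.
To solve the problem, do the following:
- First, work out your own solution to the problem.
- Then compare your solution to the student's solution and evaluate
  if the student's solution is correct or not.
Don't decide if the student's solution is correct until you have
done the problem yourself.

Question:
{question}

Student's solution:
{student_solution}
"""

prompt = build_grading_prompt(
    "What is the total cost for the first year of operations?",
    "Total cost = 450x + 100,000",
)
```

The key idea is the ordering: the model commits to its own answer before it ever sees itself comparing, which makes it less likely to anchor on the student's (possibly wrong) arithmetic.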
It’s important to remember that Large Language Models (LLMs) aren’t infallible. They operate based on probabilities, meaning there’s always a chance they might not produce the desired output. Could you give it another go? Maybe this time, it’ll correctly point out the mistake in the student’s answer.
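To make the "probabilities" point concrete: at each step the model samples the next token from a probability distribution, and with a sampling temperature above zero, identical prompts can produce different outputs on different runs. Here is a minimal sketch of temperature scaling using a plain softmax (this illustrates the mechanism; it is not the OpenAI API):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw next-token scores into sampling probabilities.
    Lower temperature sharpens the distribution; as T approaches 0
    the most likely token wins almost every time."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

sharp = softmax_with_temperature(logits, 0.1)  # near-deterministic
flat = softmax_with_temperature(logits, 2.0)   # much more random

# At T=0.1, almost all probability mass lands on the top token;
# at T=2.0, the alternatives keep substantial probability, so
# repeated runs of the same prompt can diverge.
```

This is also why the notebooks often set `temperature=0` in the API call: it makes the output nearly deterministic, though it still does not guarantee the model reasons correctly.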
It’s worth noting that newer models are continually being developed, with improved accuracy and performance. As these models evolve, the likelihood of getting a correct response increases.
P.S.
As TMosh mentioned, you should post your question in the appropriate section. Your query pertains to the Short Course ‘ChatGPT Prompt Engineering for Developers.’ Please keep that in mind.
Thank you both. I'm sorry I posted in the wrong place; the organisation of these forums was not clear to me. I'll do better next time.
Regarding the problem: I just executed the code that is provided in the course "ChatGPT Prompt Engineering for Developers," so it's odd to me that it gives a different answer than what is expected. The section teaches you to give the model more time to answer, but it looks like the provided solution does not give it enough time. I wonder how probability can give a different answer for the exact same question and context.
I have moved your thread to the correct forum area.
I experienced this same phenomenon. Based on the output from the provided code in this course, the model correctly calculated its own solution, but failed to correctly compare it to the student's solution.