First of all, I am not a native English speaker, so please bear with me.
I found a curious bug in the first course of “ChatGPT Prompt Engineering for Developers”. I got the following output at the step described as "We can fix this by instructing the model to work out its own solution first.":
Is the student’s solution the same as actual solution just calculated:
…
Yes
Student grade:
Correct
This output does not match the course’s video. While checking several things to correct the output by adding or removing strings from the prompt, I found that the response becomes correct with a change of only one line break. It would help me understand how gpt-3.5-turbo comprehends prompts if anyone could explain the reason for these symptoms. Although I have read the NOTE that OpenAI updated gpt-3.5-turbo, it did not help me much.
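For reference, this is roughly how I am sending the prompt. It is a minimal sketch assuming the course notebook’s get_completion helper and the pre-1.0 openai Python library; my setup may differ in details:

```python
import openai  # assumes openai-python < 1.0, as in the course notebook
# openai.api_key is assumed to be set via environment/config

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Send the whole prompt as a single user message; temperature 0 keeps the
    # output as deterministic as the model allows.
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message["content"]
```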
In more detail, the original prompt from the course:
Student grade:
correct or incorrect
Question:
…
The result:
Is the student’s solution the same as actual solution just calculated:
…
Yes
Student grade:
Correct
…
The changed prompt (with one or more line breaks added or removed between "correct or incorrect" and “Question:”):
Student grade:
correct or incorrect
``` (followed by more than two line breaks)
Question:
OR
Student grade:
correct or incorrect
``` (one line break removed compared to the original)
Question:
The result:
Is the student’s solution the same as actual solution just calculated:
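To make the comparison concrete, here is a sketch of how I tested the two variants side by side with the get_completion helper above. The format_part and question_part strings are heavily abbreviated placeholders, not the actual notebook text:

```python
# Hypothetical, abbreviated stand-ins for the full course prompt; "..." marks
# text that the real notebook spells out in full.
format_part = (
    "...\n"              # instructions and format specification, omitted
    "Student grade:\n"
    "correct or incorrect\n"
)
question_part = (
    "Question:\n"
    "...\n"              # the question and the student's solution, omitted
)

# The only difference between the two prompts is the whitespace between the
# format specification and the "Question:" section.
original_prompt = format_part + question_part           # one line break, as in the notebook
changed_prompt = format_part + "\n\n" + question_part   # more than two line breaks

for name, prompt in [("original", original_prompt), ("changed", changed_prompt)]:
    print(f"--- {name} ---")
    print(get_completion(prompt))
```

Only the whitespace differs between the two calls, yet one run grades the incorrect solution as correct while the other works out the solution properly.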
Rewording the last prompt to “Format your response as a comma-separated list of items.” returns the expected result:
Government survey, Job satisfaction, NASA, Social Security Administration, Employee feedback.
What could have triggered ChatGPT to ignore the formatting instruction in the original prompt?
“Format your response as a list of items separated by commas.” seems perfectly acceptable to me.
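In case it helps, here is a sketch for comparing the two phrasings side by side, assuming the same kind of get_completion helper as in the post above. The prompt is reconstructed from memory and story is only a placeholder for the survey text from the lesson, so treat the exact wording as an assumption:

```python
# Hypothetical reconstruction of the topic-extraction prompt from the lesson;
# `story` stands in for the government-survey / NASA text, omitted here.
story = "..."

instructions = [
    "Format your response as a list of items separated by commas.",  # sometimes ignored
    "Format your response as a comma-separated list of items.",      # returns the expected list
]

for instruction in instructions:
    prompt = (
        "Determine five topics that are being discussed in the "
        "following text, which is delimited by triple quotes.\n"
        "Make each item one or two words long.\n"
        f"{instruction}\n"
        f"Text sample: '''{story}'''"
    )
    print(get_completion(prompt))
```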
I also encountered the same symptoms while I was taking this course. If I remember correctly, my workarounds were a bit different from what you’ve found.
Anyway, I’m glad to see that it’s not just my problem.
These might also be part of the reason for pursuing iterative prompting.
I suppose one has to get comfortable with a set of prompt variations for an LLM/SLM, iterating again whenever the language model is updated.