Hoping to learn the reason for this bug (maybe)

First of all, I am not a native English speaker, so please bear with me.

I found a curious bug in the first course of “ChatGPT Prompt Engineering for Developers”. In the cell under the sentence “We can fix this by instructing the model to work out its own solution first.”, I got this output:
Is the student’s solution the same as actual solution just calculated:
Yes

Student grade:
Correct

This output does not match the course video. After trying several things to correct the output by adding or removing strings from the prompt, I found that the response becomes correct with a change of only one line break. It would help me understand how gpt-3.5-turbo interprets prompts if anyone could explain the reason for this behavior. I have read the NOTE that OpenAI updated gpt-3.5-turbo, but it did not help me much.

In detail, the relevant fragment of the original prompt from the course:

Student grade:
```
correct or incorrect
```

Question:

The following result:

Is the student’s solution the same as actual solution just calculated:
Yes

Student grade:
Correct

The changed prompt (adding or removing one or more line breaks between “correct or incorrect” and “Question:”):

With more than two line breaks:

Student grade:
```
correct or incorrect
```




Question:

OR, with the line break removed from the original:

Student grade:
```
correct or incorrect
```
Question:

The following result:

Is the student’s solution the same as actual solution just calculated:
No

Student grade:
Incorrect
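
In case it helps anyone reproduce this, below is a minimal sketch of how the two variants can be compared outside the notebook. It assumes the pre-1.0 openai Python package that the course notebooks use; get_completion mirrors the notebook’s helper, and the placeholder string has to be replaced with the full prompt copied from the notebook cell.

```
import openai

# Minimal sketch, assuming the pre-1.0 openai package used by the course
# notebooks; openai.api_key must be set separately.
def get_completion(prompt, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # same setting as the notebook helper
    )
    return response.choices[0].message["content"]

# Paste the full prompt from the notebook cell here; the second variant
# only drops the blank line between the closing ``` and "Question:".
course_prompt = """<full prompt from the notebook cell>"""
one_less_break = course_prompt.replace("```\n\nQuestion:", "```\nQuestion:")

for label, p in [("original", course_prompt), ("line break removed", one_less_break)]:
    print(label)
    print(get_completion(p))
    print("-" * 40)
```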

Yes, I’m seeing the same thing!

If I run the cell below “We can fix this by instructing the model to work out its own solution first.” as is, I get the wrong response from the AI.

In my case, there is only a single line break above line 34:

>29 Student grade:
>30 ```
>31 correct or incorrect
>32 ```
>33 
>34 Question:

If I remove the empty line 33 and re-run the cell, the AI returns the correct response.
Weird!

Getting more weird results, this time in the l5-inferring notebook.

The original prompt ends with: “Format your response as a list of items separated by commas.”, but the response is a numbered list (see screenshot).

Rewording the last prompt to “Format your response as a comma-separated list of items.” returns the expected result:
Government survey, Job satisfaction, NASA, Social Security Administration, Employee feedback.

What could have triggered ChatGPT to ignore the formatting instruction in the original prompt?
“Format your response as a list of items separated by commas.” seems perfectly acceptable to me.

The lack of consistency is quite confusing.

OK, so Andrew Ng explains that LLMs are unreliable at formatting list responses and advises defaulting to JSON format. Fair enough.
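
For anyone curious, here is a rough sketch of what the JSON-style instruction looks like, assuming the notebook-style get_completion helper; the text variable below is only a stand-in for the survey story used in l5-inferring.

```
# Rough sketch of the "default to JSON" advice; `text` is a stand-in for
# the l5-inferring story and `get_completion` is the notebook-style helper.
text = """
In a recent government survey, NASA was rated the most popular department,
and employees mentioned job satisfaction in their feedback to the
Social Security Administration.
"""

prompt = f"""
Determine five topics that are being discussed in the
following text, which is delimited by triple quotes.

Format your response as a JSON object with a single key "topics"
whose value is a list of strings.

Text: '''{text}'''
"""
print(get_completion(prompt))
```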

There are more examples that no longer work:

l6-transforming

prompt = f"""
Translate the following text to French and Spanish
and English pirate:
I want to order a basketball
"""

=> The English pirate translation does not appear. To see the actual result, add double quotes:

prompt = f"""
Translate the following text to French and Spanish
and "English pirate":
I want to order a basketball
"""
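
If anyone wants to reproduce the comparison, here is a small sketch (again assuming the notebook-style get_completion helper) where the only difference between the two runs is the quoting:

```
# Compare the unquoted and quoted wordings; assumes the notebook-style
# get_completion helper defined earlier in the notebook.
for wording in ('and English pirate:', 'and "English pirate":'):
    prompt = f"""
Translate the following text to French and Spanish
{wording}
I want to order a basketball
"""
    print(wording)
    print(get_completion(prompt))
    print("-" * 40)
```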

I guess we’ll have to get used to examples in this course not quite working as expected :wink:


This is especially true as the tools continue to evolve, and the courses cannot be continually updated to keep up with each new LLM behavior.


I also encountered the same symptoms when I was taking this course. If I remember correctly, my workarounds were different from the ones you found.

Anyway, glad to see that it’s just not my problem.

These might also be reasons to pursue iterative prompting.
I suppose one has to iterate over a set of prompt variations to stay proficient with a given LLM/SLM whenever the language model is updated.

@TMosh I have been looking for a timestamp of when the course material was last updated. Where can I find it?

Sorry, I do not know where to get that information.