Two doubts about L1: Module 2 - LLM, Chat Format and tokens

In L1:
1 - Why is the result the correct answer, "The capital of France is Paris."? In the explanation, they said that since LLMs are based on predicting the next word each time, you could get completions like "What is France's population…", "What is France's capital's population…", and things like this, and not the correct answer…

## Prompt the model and get a completion

```python
response = get_completion("What is the capital of France?")
print(response)
```

Output:

```
The capital of France is Paris.
```
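The continuation behavior described in the lesson can be illustrated with a toy greedy next-word predictor. The bigram table below is invented purely for illustration (it is not the course's model): a base model that always picks its most likely next word can simply extend the prompt with more question-like text instead of answering it.

```python
# Toy greedy next-word predictor.
# The bigram table is hand-made for illustration only.
bigrams = {
    "What": "is",
    "is": "the",
    "the": "capital",
    "capital": "of",
    "of": "France?",
    "France?": "What",  # loops back: keeps generating question text
}

def continue_text(prompt, steps=4):
    """Greedily append the most likely next word, step by step."""
    words = prompt.split()
    for _ in range(steps):
        last = words[-1]
        if last not in bigrams:
            break
        words.append(bigrams[last])
    return " ".join(words)

print(continue_text("What is the capital of France?"))
# → What is the capital of France? What is the capital
```

Instruction-tuned models like the one behind `get_completion` are further trained to answer the question rather than continue it, which is why "The capital of France is Paris." comes back instead of more question text.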

2 - Executing the code:

```python
response = get_completion("Take the letters in lollipop \
and reverse them")
print(response)
```

Output:

```
The reversed letters of "lollipop" are "pillipol".
```

So the answer I got is correct, not an incorrect answer as shown in the video.
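For reference, the exact reversal of the word can be checked locally with plain Python string slicing, independent of the model:

```python
word = "lollipop"
reversed_word = word[::-1]  # slice with step -1 reverses the string
print(reversed_word)
# → popillol
```

Comparing the model's output against this ground truth is a quick way to judge whether a given run really reversed the letters correctly; models often struggle with this task because they see tokens, not individual letters.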