Why does modifying "hotdog or not hotdog" to "inlier or not inlier" give a terrible result in the L5 notebook?

Hi,
Thanks for the great short course: Getting Structured LLM Output.

I have a question from experimenting with the notebook L5: Structured Generation: Beyond JSON!

I slightly modified the section "Build Your Own Hotdog vs. Not a Hotdog":

#hotdog_or_not = outlines.generate.text(
out_or_not = outlines.generate.text(
    vmodel,
    sampler=greedy()
)

base_prompt="""
You are being given an image that is either an
 inlier
or
 not an inlier
You must correctly label this. Respond with only "inlier" or "not an inlier"
"""

for i in range(1,6):
    image = load_and_resize_image(f"./hotdog_or_not/{i}.png")
    prompt = processor.apply_chat_template(
        get_messages(image,base_prompt=base_prompt), 
        tokenize=False, 
        add_generation_prompt=True
    )
    #print(hotdog_or_not(prompt, [image]))
    print(out_or_not(prompt, [image]))
    display(image)
    print("-------")

Why does modifying "hotdog or not hotdog" to "inlier or not an inlier" give such a terrible result in the L5 notebook?

It detected and labeled all the pictures (both hotdog and non-hotdog pictures, except for the airplane) the same way, no matter what I put in place of "hotdog", such as "inlier" or "outlier".

Can someone explain how this regex-based FSM approach with the LLM treats these inputs, so that we end up with these results?
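For context, here is my rough mental model of how the constrained (regex/choice) generation is supposed to work. This is a toy character-level sketch I wrote myself, not the actual outlines internals, so please correct me if I have it wrong: the allowed answers are compiled into a finite-state machine, and at each step every token that cannot lead to a valid completion is masked out before the model picks.

```python
# Toy sketch of FSM-constrained greedy decoding (my own simplification,
# NOT the real outlines implementation). "Tokens" here are single characters.

ALLOWED = ["inlier", "not an inlier"]

def allowed_next_chars(prefix):
    """Characters the FSM permits after `prefix`: any character that keeps
    the prefix on a path toward one of the allowed strings."""
    return {s[len(prefix)] for s in ALLOWED
            if s.startswith(prefix) and len(s) > len(prefix)}

def constrained_greedy(score):
    """Greedy decode under the FSM mask. `score(prefix, char)` is a stand-in
    for the model's logits; only FSM-permitted characters are considered."""
    prefix = ""
    while True:
        chars = allowed_next_chars(prefix)
        if not chars:  # prefix is a complete allowed string; nothing may follow
            return prefix
        prefix += max(chars, key=lambda c: score(prefix, c))
```

What strikes me in this sketch is that "inlier" and "not an inlier" diverge at the very first character ('i' vs 'n'), and after that single choice the FSM forces every remaining character. So the whole classification collapses into one early token decision, e.g. `constrained_greedy(lambda p, c: c == "i")` is forced into "inlier" and `constrained_greedy(lambda p, c: c == "n")` into "not an inlier". Is something like this why the labels barely depend on the image?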