Did you spot the bias in LLMs already?

I thought it was interesting that I noticed some biased outputs in lab 1. In the one-shot generated output from the lab, the model assumes that the people talking are male! How it decided this so definitively is quite interesting.

MODEL GENERATION - ONE SHOT:
#Person1 wants to upgrade his system. #Person2 wants to add a painting program to his software. #Person1 wants to add a CD-ROM drive.

Anyone else spot this? Thoughts?

It was even more pronounced when summarising example 99, where the model assumed the speakers are two men, even though that isn't implied by anything in the source text:

---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# and Mike are discussing what kind of emotion should be expressed by Mike in this play. They have different understandings.

---------------------------------------------------------------------------------------------------
MODEL GENERATION - FEW SHOT:
The two men are trying to figure out how to react to the situation.
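
For anyone who wants to check this systematically, here is a minimal sketch that regenerates a summary for an example and flags any gendered words the model introduces that never appear in the source dialogue. The model and dataset names (google/flan-t5-base, knkarthick/dialogsum) are my assumptions about what the lab uses, so adjust them to match your notebook.

```python
# Minimal sketch (not from the lab itself): flag gendered words the model adds.
# Assumed model/dataset: google/flan-t5-base and knkarthick/dialogsum.
import re

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

GENDERED = re.compile(r"\b(he|him|his|she|her|hers|man|men|woman|women)\b")

model_name = "google/flan-t5-base"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("knkarthick/dialogsum")  # assumption
example = dataset["test"][99]  # the example discussed above

prompt = f"Summarize the following conversation.\n\n{example['dialogue']}\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_new_tokens=64)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Gendered words present in the summary but absent from the source dialogue.
introduced = set(GENDERED.findall(summary.lower())) - set(GENDERED.findall(example["dialogue"].lower()))
print(summary)
print("Gendered words introduced by the model:", introduced or "none")
```

Running something like this over a batch of test examples would give a rough count of how often the model injects "he"/"his"/"men" into summaries of gender-neutral dialogues, which would make the pattern easier to quantify than eyeballing individual outputs.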