Understanding how the training examples are created

W2 - Lab >> 2.1 - Preprocess the Dialog-Summary Dataset
Hello,
As I was going through the Week 2 lab, I noticed that the input prompt that gets tokenized to create "input_ids" is generated as:

prompt = [start_prompt + dialogue + end_prompt for dialogue in example["dialogue"]]

This creates a recurring input prompt for every single character in the dialogue. Is this intentional? When I pass the dialogue itself instead of iterating over it, the map function no longer allows the batched operation.
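To illustrate, here is a minimal sketch of what I am seeing. This is my own toy example, not the lab code, and I am assuming the lab's map call uses batched=True:

```python
start_prompt = "Summarize the following conversation.\n\n"
end_prompt = "\n\nSummary: "

# Unbatched: example["dialogue"] is a single string, so the comprehension
# iterates over it character by character -> one prompt per character.
example = {"dialogue": "Hi there"}
prompts = [start_prompt + d + end_prompt for d in example["dialogue"]]
print(len(prompts))  # 8, one prompt per character

# Batched (as with map(..., batched=True), if I understand it correctly):
# example["dialogue"] is a list of strings, so the same comprehension
# yields one prompt per dialogue.
batch = {"dialogue": ["Hi there", "How are you?"]}
prompts = [start_prompt + d + end_prompt for d in batch["dialogue"]]
print(len(prompts))  # 2, one prompt per dialogue
```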
Can somebody please check and help?

Thank you.
Abhishek