Train the PEFT adapter

Hello, thanks for providing such great lessons.
I am trying to understand the following code for training the PEFT adapter.
Why does the Trainer here receive only train_dataset? I noticed that the full fine-tuning part requires both train_dataset and eval_dataset.

import time
from transformers import Trainer, TrainingArguments

output_dir = f'./peft-dialogue-summary-training-{str(int(time.time()))}'

peft_training_args = TrainingArguments(
    output_dir=output_dir,
    auto_find_batch_size=True,
    learning_rate=1e-3, # Higher learning rate than full fine-tuning.
    num_train_epochs=1,
    logging_steps=1,
    max_steps=1  # Overrides num_train_epochs; training stops after a single step.
)
    
peft_trainer = Trainer(
    model=peft_model,
    args=peft_training_args,
    train_dataset=tokenized_datasets["train"],
)

You can add it like so:

peft_trainer = Trainer(
    model=peft_model,
    args=peft_training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

If it is not added, no evaluation will be run, but I think it is good practice to include it. It may simply have been missed in the lesson code.
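
One caveat: passing eval_dataset on its own does not make the Trainer evaluate during training, because the default evaluation strategy is "no". Here is a minimal sketch of enabling it, assuming the same output_dir, peft_model, and tokenized_datasets from above (the argument is named evaluation_strategy in older transformers releases and eval_strategy in newer ones):

from transformers import Trainer, TrainingArguments

peft_training_args = TrainingArguments(
    output_dir=output_dir,
    auto_find_batch_size=True,
    learning_rate=1e-3,
    num_train_epochs=1,
    logging_steps=1,
    max_steps=1,
    evaluation_strategy="steps",  # Evaluate every eval_steps training steps.
    eval_steps=1,
)

peft_trainer = Trainer(
    model=peft_model,
    args=peft_training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

You can also run evaluation on demand after training with metrics = peft_trainer.evaluate().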