Has anyone successfully run the notebook from the recent DPO workshop?
I am having issues with the step that defines the DPOTrainer: specifically, the notebook passes the params beta and max_length to the TrainingArguments init, and they don't seem to be valid arguments:
from transformers import TrainingArguments

# path where the Trainer will save its checkpoints and logs
output_dir = 'data/zephyr-7b-dpo-lora'

# based on config
training_args = TrainingArguments(
    bf16=True,
    beta=0.01,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=100,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    hub_model_id="zephyr-7b-dpo-qlora",
    learning_rate=5.0e-6,
    log_level="info",
    logging_steps=10,
    lr_scheduler_type="cosine",
    max_length=1024,
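From what I can tell, beta and max_length are DPO-specific settings and are not accepted by transformers' TrainingArguments at all. Depending on the trl version, they seem to belong either on DPOConfig or on the DPOTrainer itself. Below is a minimal sketch of what I have been trying instead; it is not the notebook's exact code, and it assumes a recent trl release that ships DPOConfig (which subclasses TrainingArguments):

from trl import DPOConfig

# Sketch of a possible fix (my assumption, not the notebook's code):
# DPOConfig accepts the usual TrainingArguments params plus the
# DPO-specific ones that TrainingArguments itself rejects.
training_args = DPOConfig(
    output_dir="data/zephyr-7b-dpo-lora",
    bf16=True,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5.0e-6,
    logging_steps=10,
    lr_scheduler_type="cosine",
    # DPO-specific settings that plain TrainingArguments rejects:
    beta=0.01,
    max_length=1024,
)

On older trl releases (around when Zephyr was trained), I believe beta and max_length were instead passed directly to DPOTrainer(..., beta=0.01, max_length=1024) alongside a plain TrainingArguments, so the right place for them may depend on which trl version the notebook pins.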
"In this workshop, Lewis Tunstall and Edward Beeching from Hugging Face will discuss a powerful alignment technique called Direct Preference Optimisation (DPO) which was used to train Zephyr…