Hi, in the reading material for KL Divergence in Week 3,
I see this statement (please see the underlined part in the image below):
Earlier, in the optional video for PPO, it was mentioned that there is a concept of a "trust region" that constrains the updated LLM from straying too far from the initial LLM. But now the KL divergence is said to be used for that? Which one is it?
In this image and in the instructions, the KL divergence is calculated between the frozen-weights copy of the original LLM and the fine-tuned (probably PEFT fine-tuned) copy of the LLM. The KL divergence is computed between these two to measure how much the fine-tuned model has drifted away from the original model while it is also trying to maximize the reward during PPO.
But my question is: if the KL divergence already ensures that the updated LLM doesn't stray far from the original LLM, what is the need for the trust region in the policy loss function?
I had to watch the video carefully: the trust region refers to the inner workings of the PPO policy update, keeping it from moving in large jumps and losing its path towards the better, human-aligned policy.
The KL divergence, in contrast, is between the fine-tuned LLM and the original LLM. The course mentions that PPO may optimize the LLM well for human preferences, but its outputs might stop making sense, at least grammatically, compared to the original LLM.
KL divergence compares the original LLM to the new, fine-tuned LLM.
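As a minimal PyTorch sketch of that comparison (assuming we have the raw logits of both models for the same completion; the function name and the kl_coef parameter are just illustrative, not the course's exact code):

```python
import torch
import torch.nn.functional as F

def kl_penalty(policy_logits, ref_logits, kl_coef=0.1):
    """Per-token KL(policy || reference), scaled by a penalty coefficient.

    policy_logits / ref_logits: [batch, seq_len, vocab] outputs of the
    fine-tuned LLM and the frozen copy of the original LLM for the
    same prompt + completion.
    """
    policy_logprobs = F.log_softmax(policy_logits, dim=-1)
    ref_logprobs = F.log_softmax(ref_logits, dim=-1)
    # KL(p || q) = sum_v p(v) * (log p(v) - log q(v)) over the vocabulary
    kl_per_token = (policy_logprobs.exp() * (policy_logprobs - ref_logprobs)).sum(dim=-1)
    # This term is subtracted from the reward during PPO, so drifting
    # far from the original model costs reward.
    return kl_coef * kl_per_token.mean()
```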
The trust region is the amount of movement allowed for each PPO policy update.
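In practice PPO approximates that trust region with a clipped probability ratio rather than a hard constraint. A rough sketch, with new_logprobs, old_logprobs, advantages, and clip_eps as assumed inputs:

```python
import torch

def ppo_clipped_objective(new_logprobs, old_logprobs, advantages, clip_eps=0.2):
    """Clipped surrogate loss: the ratio between the new and old policy is
    kept inside [1 - clip_eps, 1 + clip_eps], which acts as the trust
    region on each policy update."""
    ratio = torch.exp(new_logprobs - old_logprobs)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic (minimum) objective, then negate it to get a loss.
    return -torch.min(unclipped, clipped).mean()
```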
The ultimate goal is the same, but they operate on different things.
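To show that concretely, here is a hypothetical single training step that reuses the two sketches above; rm_score, the logits, and the advantages are random placeholders just so the snippet runs end to end:

```python
import torch

batch, seq_len, vocab = 2, 8, 32
policy_logits = torch.randn(batch, seq_len, vocab)   # fine-tuned LLM
ref_logits = torch.randn(batch, seq_len, vocab)      # frozen original LLM
rm_score = torch.randn(()).item()                    # reward-model score (placeholder)
new_logprobs = torch.randn(batch, seq_len)
old_logprobs = torch.randn(batch, seq_len)
advantages = torch.randn(batch, seq_len)

# The KL penalty acts on the reward signal: drifting from the original LLM costs reward.
shaped_reward = rm_score - kl_penalty(policy_logits, ref_logits, kl_coef=0.1)

# The trust region (clipping) acts inside the policy loss: it limits how far one
# update can move the policy, whatever the reward says. In a real loop the
# advantages would be derived from shaped_reward (e.g. via GAE).
loss = ppo_clipped_objective(new_logprobs, old_logprobs, advantages, clip_eps=0.2)
```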
Why do we need both? Well, it seems the experts who tested this found that they needed it. Is it complex? It is, but the whole RLHF/PPO LLM fine-tuning setup is a complex system with a lot of tuning going on.