Why use KL divergence in PPO?


The PPO algorithm uses clipping to keep the policy from changing too much in a single update.
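To make the question concrete, here is a minimal sketch of the clipped surrogate objective as I understand it. It assumes PyTorch, and the tensor names (`log_probs`, `old_log_probs`, `advantages`) are just placeholders I picked for illustration:

```python
import torch

def clipped_surrogate_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio r_t = pi_theta(a|s) / pi_old(a|s)
    ratio = torch.exp(log_probs - old_log_probs)
    # Unclipped and clipped surrogate terms
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic minimum; negated because we minimize the loss
    return -torch.min(unclipped, clipped).mean()
```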

But KL divergence also seems to play a role in constraining the policy.

Why would we add a KL divergence penalty when PPO already clips the probability ratio?
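And this is roughly what I imagine adding a KL penalty on top of the clipped loss would look like. The `kl_coef` coefficient and the sample-based KL estimate here are my own assumptions for illustration, not taken from any particular implementation:

```python
import torch

def ppo_loss_with_kl(log_probs, old_log_probs, advantages,
                     clip_eps=0.2, kl_coef=0.1):
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    # Sample-based estimate of KL(pi_old || pi_theta) from stored log-probs
    approx_kl = (old_log_probs - log_probs).mean()
    # Penalize divergence from the old policy in addition to clipping
    return policy_loss + kl_coef * approx_kl
```

Is the extra KL term redundant given the clipping, or does it serve a different purpose?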

Hello, please also take a look at this related post if you can: