Exploding gradients in RNNs

Since we do gradient clipping in order to overcome the problem of exploding gradients, wouldn't information be lost in doing so?

Not much is lost: norm-based clipping only rescales the gradient's magnitude when it exceeds a threshold, while its direction is preserved, so the update still points the same way. The main cost is that gradient clipping might make convergence slightly slower.
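
As a minimal sketch of what clipping by global norm does (the helper `clip_by_global_norm` and the threshold `max_norm=5.0` here are just illustrative; frameworks provide their own equivalents, e.g. PyTorch's `torch.nn.utils.clip_grad_norm_`):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm; the direction is left unchanged."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# An "exploded" gradient is scaled down; a small one passes through untouched.
big = [np.array([300.0, -400.0])]    # norm = 500
small = [np.array([0.3, 0.4])]       # norm = 0.5
print(clip_by_global_norm(big, max_norm=5.0))    # [array([ 3., -4.])]
print(clip_by_global_norm(small, max_norm=5.0))  # [array([0.3, 0.4])]
```

Because the scaled gradient keeps its direction, the only "loss" is the step size on those rare exploding steps, which is why the effect is usually just slightly slower convergence rather than lost information.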