I have recently finished the videos section of Week 1 in NLP using Attention Models and have a doubt regarding MBR. If I understand it correctly, MBR takes several candidates via random sampling, compares each candidate's similarity with every other candidate, and averages those similarities, repeating this with each candidate as the reference, to pick the best output. In simple words, we are comparing random outputs and picking the one most similar to the rest. But there is no comparison with an actual source of truth, i.e. a reference, here. So how can we get a good output by comparing random outputs whose similarity to the ground truth is unknown? In beam search, we at least had probabilities as a metric for how sane our sentences were.
@arvyzukai Can you please help me with this?
I doubt I can explain better than it is explained in the video and I’m not sure where your confusion lies.
In simple words, Minimum Bayes Risk (MBR) is a method for selecting one of the translations. For example, if your model generated the candidates ["I like learning", "I love learning", "I really love learning", "I adore learning", "I just love learning"] for "ich liebe es zu lernen" (German for "I love learning"), the question is: which translation is the best?
We don't know the ground truth; maybe even humans would disagree on which is best. So one way to decide is the MBR method, which compares these 5 candidates with each other and picks as the best the one that is most similar to the other 4.
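To make that concrete, here is a minimal sketch of the selection step. This is not the course's implementation: real MBR typically scores pairs with ROUGE or BLEU, while this sketch uses a simple word-overlap (Jaccard) similarity just to show the mechanics.

```python
def overlap_similarity(a: str, b: str) -> float:
    """Jaccard similarity over word sets - a stand-in for ROUGE/BLEU."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def mbr_select(candidates: list[str]) -> str:
    """Return the candidate with the highest average similarity to the others."""
    best, best_score = None, -1.0
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        score = sum(overlap_similarity(cand, o) for o in others) / len(others)
        if score > best_score:
            best, best_score = cand, score
    return best

candidates = [
    "I like learning",
    "I love learning",
    "I really love learning",
    "I adore learning",
    "I just love learning",
]
print(mbr_select(candidates))  # -> "I love learning"
```

With this toy similarity, "I love learning" wins because its words appear in almost every other candidate, which matches the intuition: no reference is needed, consensus among the samples is the signal.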
Also, just to be sure: these candidates are not completely random - they are only a bit random (the temperature parameter regulates how "creative"/random the model can get).
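If it helps, here is a small sketch of what temperature does, assuming some made-up logits for three next-word candidates (the numbers are purely illustrative):

```python
import math
import random

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Turn logits into sampling probabilities.

    Temperature near 0 sharpens the distribution (almost greedy);
    temperature above 1 flattens it (more random/"creative" samples)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next words
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.1)   # top word gets almost all the mass
hot = softmax_with_temperature(logits, 10.0)   # probabilities nearly uniform

# Sampling one candidate index from the tempered distribution:
idx = random.choices(range(len(logits)), weights=hot)[0]
```

So the sampled candidates all stay close to what the model considers likely; temperature only controls how far from the top choice they are allowed to wander.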
Got it. Guess I didn’t factor in the impact of temperature in this. Thanks!