Model evaluation | BLEU | Translation?

In the course it is mentioned that BLEU is used as a translation-specific evaluation metric.
I'm not clear what "translation" means here: language translation or something else? In the presented content, BLEU scores were shown for English sentences generated by the model, which looks more like a similarity score.

Any clarifications on this?

Yes, BLEU (bilingual evaluation understudy) in this context means language translation, but it can also be used to compare similarity, as you point out!
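For intuition, BLEU compares a candidate sentence to a reference using clipped n-gram precision combined with a brevity penalty, which is why it works both for translations and for general sentence similarity. A minimal sketch (the function names and tokenization are illustrative, not from the course; real toolkits like NLTK or sacreBLEU add smoothing and multi-reference support):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n), scaled by a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate the score.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if overlap == 0 or total == 0:
            return 0.0  # no smoothing in this sketch
        log_prec_sum += math.log(overlap / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec_sum / max_n)

identical = bleu("the cat sat on the mat", "the cat sat on the mat")
partial = bleu("the cat sat on the mat", "the cat sat on a mat", max_n=2)
```

An identical candidate scores 1.0, and any mismatch or length difference pulls the score below 1, which is exactly the "similarity" behavior you saw for English-to-English comparisons.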

You can read further here:

Thanks @gent.spah for clarifying. I thought the translation was from one language to another, but I believe it's the similarity between two sentences: the reference and the machine-generated one. The word "translated" confused me.
