Where is the training for the "Question answering" part?

Hi all,

This week (NLP week 3) we learned about question answering, but where is the actual "question answering" part?

All I see is predicting words with an encoder-only model (especially BERT), e.g. creating word embeddings. So how do we do question answering? Do we just load a pre-trained model, then input a (context, question) pair with the answer as the label, and train it like any other generative task?
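For what it's worth, a common way to fine-tune an encoder-only model like BERT for extractive QA is not generative at all: the model scores each context token as a possible answer *start* and answer *end*, and the answer is the best-scoring span copied out of the context. Below is a minimal sketch of that span-selection step; the logits and the `best_span` helper are made up for illustration, not part of any real library.

```python
# Sketch of the extractive-QA idea (hypothetical numbers, not a real model).
# An encoder like BERT would produce, for every context token, a "start" logit
# and an "end" logit; the predicted answer is the highest-scoring valid span.

def best_span(start_logits, end_logits, max_len=5):
    """Return (start, end) indices of the highest-scoring span with end >= start."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

context = ["BERT", "was", "released", "by", "Google", "in", "2018"]
# Made-up logits where "Google" scores highest as the answer
# to a question like "Who released BERT?".
start = [0.1, 0.0, 0.2, 0.1, 4.0, 0.1, 0.3]
end   = [0.0, 0.1, 0.1, 0.2, 3.5, 0.2, 0.4]

s, e = best_span(start, end)
print(" ".join(context[s:e + 1]))  # -> Google
```

Training then just means using the gold answer's start and end positions as labels (e.g. with cross-entropy on the start and end logits), rather than generating answer text token by token.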

Thanks, all.

Yes, I think you are right. In this context, for question answering you build an input from the question and the context, and the model then produces a prediction of the answer.