What are the major changes in results if we use GPT?
GPT is not an encoder-decoder model, which might make it less ideal for generating summaries. As highlighted in the lectures, decoder-only models are frequently employed for text generation, and larger decoder-only models have demonstrated robust zero-shot inference capabilities across a variety of tasks. If you want to assess GPT's performance on text summarization, you can experiment with a simple approach mentioned on the Hugging Face website: append "TL;DR:" to the end of the input text and have GPT-2 generate the summary as a continuation.
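A minimal sketch of that "TL;DR" trick, using the Hugging Face `transformers` text-generation pipeline with the `gpt2` checkpoint; the helper names and generation parameters here are my own illustrative choices, not from the course material:

```python
def build_tldr_prompt(text: str) -> str:
    # GPT-2 was never fine-tuned for summarization; appending
    # "TL;DR:" nudges it to continue with a summary-like text.
    return text.strip() + "\nTL;DR:"

def summarize_with_gpt2(text: str, max_new_tokens: int = 60) -> str:
    # Heavy dependency imported lazily; requires downloading the
    # gpt2 checkpoint on first use.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = build_tldr_prompt(text)
    output = generator(prompt, max_new_tokens=max_new_tokens,
                       do_sample=False)[0]["generated_text"]
    # The pipeline returns prompt + continuation; keep only the
    # continuation, which is the candidate summary.
    return output[len(prompt):].strip()

if __name__ == "__main__":
    article = "Large language models can perform many tasks zero-shot. ..."
    print(summarize_with_gpt2(article))
```

Because GPT-2 was not instruction-tuned, the quality of these zero-shot summaries is typically noticeably worse than FLAN-T5's, which is worth keeping in mind when comparing results.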
FLAN-T5 is open-source, which I believe is why it is used here.