Week 1: What LLMs can and cannot do - Context limitation

In this video, it's mentioned that LLMs have a context limitation, which is the total input and output length. But in the opening video, Andrew mentioned that he used generative AI to read his friend's article and summarize it into a short paragraph. So how was he able to use an LLM on a long article? Did he have to copy and paste the content from the article, page after page, into the LLM web interface and then stitch the outputs together?

Most LLMs trained on Internet data have a knowledge cutoff date. For instance, a model may have been trained on Internet data only up to June of this year, which means that if you ask about something that happened in August, it won't know about it. Hence the analogy with a fresh college graduate who has no access to the Internet.
However, chatbots are nowadays equipped with tool calling, which lets them browse the Internet to obtain information that is not in their training data.
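To make that concrete, here is a minimal sketch of what tool calling looks like with the OpenAI Python SDK. The `web_search` tool name and its schema are made up for this example; the application itself has to implement the search and feed the result back to the model:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition: we only describe the tool here;
# the application must actually run the search when asked.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any tool-capable model works
    messages=[{"role": "user", "content": "What happened at the event last week?"}],
    tools=tools,
)

# If the model needs information past its cutoff, it returns a tool call
# instead of a final answer; the app runs the search and sends the
# result back in a follow-up message.
print(response.choices[0].message.tool_calls)
```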


Thanks for the reply! But I think this is a reply to another question I posted, not to this one. :sweat_smile:

Sorry, my bad. I was on my phone :slightly_smiling_face:
Nope. There are different ways to go about this. He could have supplied the article as a document to the LLM, or, if the chatbot supports browsing, asked it to read the article directly from its URL. Retrieval Augmented Generation (RAG) is one way to supply external documents as input to an LLM: only the passages relevant to the question are retrieved and placed in the prompt, so the whole document never has to fit in the context window. There's a toy sketch of the idea just below.
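The helper names here are hypothetical, and real RAG systems use embedding models for retrieval rather than this keyword-overlap score, but the shape of the idea is the same:

```python
def score(chunk: str, question: str) -> int:
    # Crude relevance score: count question words that appear in the chunk.
    # Real RAG systems compare embedding vectors instead.
    q_words = set(question.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in q_words)

def retrieve(article: str, question: str, k: int = 3, chunk_size: int = 500) -> list[str]:
    # Split the article into fixed-size chunks and keep the k most relevant.
    chunks = [article[i:i + chunk_size] for i in range(0, len(article), chunk_size)]
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(article: str, question: str) -> str:
    # Only the retrieved chunks go into the prompt, so the whole article
    # never has to fit in the context window.
    context = "\n---\n".join(retrieve(article, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```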
Nowadays, LLMs also have bigger and bigger context windows, so the possibilities keep expanding. And even when an article still doesn't fit, you can split it into chunks, summarize each chunk, and then summarize the summaries; no manual copy-pasting needed, as sketched below.
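This chunk-then-combine approach is often called map-reduce summarization, and it's basically a programmatic version of the copy-paste loop from the original question. A minimal sketch, assuming a generic `llm(prompt) -> str` callable (hypothetical, standing in for whatever model API you use):

```python
def summarize_long_article(article: str, llm, chunk_size: int = 8000) -> str:
    # Map step: summarize each chunk that fits in the context window.
    chunks = [article[i:i + chunk_size] for i in range(0, len(article), chunk_size)]
    partials = [llm(f"Summarize this passage in a few sentences:\n{c}") for c in chunks]
    # Reduce step: combine the partial summaries into one short paragraph.
    return llm("Combine these summaries into one short paragraph:\n" + "\n".join(partials))
```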