I’m new to the AI for Everyone course and it’s a great introduction. I have a hunch that the information in the Week 1 video is no longer accurate now that ChatGPT has launched?
Min 3:40: ‘Here’s something that AI today cannot do, which is if you want the AI to take an email as input and automatically generate a response like, “Oh, sorry to hear that. I hope your niece had a good birthday. Yes, we can help with, and so on.” So, for an AI to output a complicated piece of text like this is very difficult by today’s standards of AI, and in fact to even empathize about the birthday of your niece, that is very difficult to do for every single possible type of email you might receive.’
Yes, I think you have a good point there. That’s the thing with AI and ML: the state of the art is evolving all the time. I’m not sure exactly when AI4E was published, but I believe it was at least 2 or 3 years ago. The last couple of years have seen a very significant improvement in the performance of “Large Language Models” like GPT-3 and now ChatGPT, which is based on GPT-3. I’m sure you’ve seen lots of examples of ChatGPT output on the web, but one that especially impressed me was this letter to a child explaining that Santa Claus doesn’t really exist. It was actually quite lovely and touching and really sounded like something a parent would aspire to say in that instance.
To go further, right at the beginning of the course, the professor states that AGI is so far away as to not be worth discussing at all. This makes me question the relevance of what the professor is saying for the rest of the course, because GPT and similar LLMs are quickly approaching AGI…
There is a fair amount of debate on that point and some people in the field (Yann LeCun included) have said that LLMs are just an offramp on the road to AGI.
Speaking of Yann LeCun and Andrew Ng, they just announced a special event: a discussion between them on the important current issue of AI Safety and LLMs.
I wouldn’t say that’s an accurate description of modern LLMs. I’d say they’re approaching being able to conversationally ‘mimic’ an AGI, given the appropriate prompts from a trained user. Even then, no one would say these models are doing anything close to ‘thinking’ or ‘understanding’ their mechanisms and outputs to the degree a true AGI would.
Hence how current models can still trip over basic math or be tricked into ‘thinking’ that 2 + 2 = 5.
Exactly. Mimicking something that sounds intelligent does not constitute AGI. If you listened to the discussion earlier today between Yann LeCun and Andrew Ng about LLMs and AI Safety, one of the points LeCun made is that current-generation LLMs are not capable of planning: they can only repeat a version of a plan that was part of their training set. That may sound clever in some circumstances, but I hope we can agree it does not count as being capable of “general problem solving”.
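LeCun’s point about repetition versus genuine reasoning can be loosely illustrated with a deliberately silly toy sketch. To be clear, this is not how real LLMs work internally — it is just an analogy for retrieval-style mimicry: a “model” that has memorized a few arithmetic facts and, when asked something unseen, can only echo the answer to the most similar fact it has seen, rather than actually computing.

```python
# Toy analogy only (NOT a real LLM): a "model" that can merely repeat
# answers from its training data instead of computing new ones.
import difflib

# Hypothetical "training set": a few memorized arithmetic facts.
TRAINING = {
    "2+2": "4",
    "3+3": "6",
    "10+10": "20",
}

def toy_answer(prompt: str) -> str:
    """Answer by retrieving the closest memorized prompt, not by calculating."""
    if prompt in TRAINING:
        return TRAINING[prompt]
    # Unseen prompt: fall back to the most similar memorized one.
    closest = difflib.get_close_matches(prompt, TRAINING, n=1, cutoff=0.0)[0]
    return TRAINING[closest]

print(toy_answer("2+2"))   # memorized, so it "looks" correct
print(toy_answer("2+3"))   # unseen: it echoes a nearby fact instead of computing 5
```

On the memorized prompt the toy looks competent; on the unseen one it confidently returns a wrong answer, because nothing in it ever computes — which is the gap between sounding intelligent and general problem solving.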
Me too: I felt the course was not up to date. It would be great if the professor could clarify whether he still stands by what he said in the Week 1 video, or whether those points should now be considered outdated.
Thanks to you and others here for the info about the discussion - I’ll definitely check that out, as I am curious to know how Mr Ng’s position may be changing given the current state of the technology and how rapidly it seems to be advancing.
As discussed earlier on this thread, the material in the course lectures is at least a couple of years old at this point. Given how rapidly the state of the art moves in AI/ML, they may not view updating the lectures frequently as a practical way to address that. A good way to keep in touch with Andrew Ng’s current thinking in this space is to subscribe to his weekly newsletter The Batch, which is also available here on the DeepLearning.AI Forums.
Thanks for your advice, which I will follow. However, if the course is outdated, it is an important issue: some people may never check this forum or the newsletter, and will simply keep in mind what was said in the course videos, whether it is outdated or not…
I just generated an automated response with ChatGPT. This course is still useful, though, as it brings us up to speed on the pre-ChatGPT era, and from there we can catch up.
Certainly, here’s an example response you could use to approve the refund request:
Dear [Customer Name],
Thank you for contacting us about the delayed delivery of your toy. We apologize for any inconvenience this may have caused you. We understand that you would like to return the toy for a refund, and we are happy to assist you with that.
After reviewing your request, we have determined that you are eligible for a refund according to our company’s refund policy. We will process the refund as soon as we receive the returned toy, and the refund should appear on your original payment method within [timeframe].
We want to make sure that you are fully satisfied with your experience with us, and we appreciate your business. If you have any further questions or concerns, please don’t hesitate to reach out to us.