"What machine learning can and cannot do" should be updated

Dear Team, I think the lesson “What machine learning can and cannot do” should be updated. I just asked ChatGPT: “The company policy is to allow returns of purchased products if they were bought recently (within 30 days). Please write a response to the letter received from the customer: ‘The toy arrived two days late, so I wasn’t able to give it to my niece for her birthday. Can I return it?’”
The answer is quite good, I would say. Here it is:

Dear Customer,

Thank you for your letter and for bringing this matter to our attention. We apologize for the delay in the delivery of your toy. We understand that this may have caused inconvenience for you and your niece.

Based on our company policy, we do allow returns of recently purchased products within 30 days of the purchase date. As the toy was just recently purchased, you are welcome to return it for a full refund. Please let us know if you would like to proceed with the return and we will provide you with instructions on how to do so.

Again, we apologize for the delay in the delivery of your toy and any inconvenience it may have caused. If you have any further questions or concerns, please don’t hesitate to contact us.

Sincerely,
[Your Name]
Customer Support Department
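
If anyone wants to reproduce this themselves, here is a rough sketch of sending the same prompt through the OpenAI Python client. The model name, system message, and exact wording are my own assumptions, not something from the lesson:

```python
# Minimal sketch: send the return-policy prompt through the OpenAI chat API.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment;
# the model name below is just an example.
from openai import OpenAI

client = OpenAI()

policy = (
    "The company policy is to allow returns of purchased products "
    "if they were bought recently (within 30 days)."
)
customer_letter = (
    "The toy arrived two days late, so I wasn't able to give it to my niece "
    "for her birthday. Can I return it?"
)

prompt = (
    f"{policy}\n\n"
    "Please write a response to the letter received from the customer:\n"
    f"\"{customer_letter}\""
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; swap in whichever chat model you use
    messages=[
        {"role": "system", "content": "You are a customer support agent."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```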

12 Likes

I agree with you. I asked ChatGPT to do market research on the fashion industry in my country and produce a 50-page report, and the results were unbelievable.

2 Likes

Hi @Volo & @Ahmed_Gamal97,

Thank you for bringing this to our attention; I shall take note of it.

Best,
Mubsi

4 Likes

@Volo

Hi Volo. I agree with you here. It would be nice if the team could update this part. But we should also not forget how lightning-fast the development of models and algorithms has been in the last few months. It is amazing how far we have come.

2 Likes

I was just about to say this: AI technology is moving faster than predicted, and scenarios that were initially perceived as impossible for AI to achieve are now possible.

3 Likes

I concur with this. Even though the course is fairly recent, the entire section feels dated because of the fast pace of AI improvement (e.g. ChatGPT). I hope the team can revise this section so that it holds up despite the current pace of AI development.

2 Likes

Hello everyone,

There is still no update to the content. Is there going to be one?

Thank you

2 Likes

Hi @Charlie_Nielson,

Not any time soon.

Best,
Mubsi

2 Likes

Although I agree with the comments expressed here, and I was also surprised to hear Andrew say that AI is bad at responding to the toy email (I also rushed to test it in ChatGPT), I think the video is still valuable as it is.

In fact, the video warns everyone clearly, both at the beginning and in its conclusions, that “AI is not magic,” however much it may seem so to us. I believe the seemingly very outdated example is actually a plus: it helps us better appreciate how much data (and, in ChatGPT’s case, how much fine-tuning) is needed for an AI to produce the kind of response to the email that ChatGPT now gives.

1 Like

While I agree that ChatGPT no longer behaves like the example in this video, I wonder whether the point still holds within a given company. ChatGPT, for example, was trained on a wide variety of resources and tons of data, which may not be true of the internal resources of any particular company. So unless companies open up their internal databases to external tools, might what is being discussed in this video still hold true?

2 Likes

I agree with you. Things might have been different just six months ago, but the landscape has changed drastically. I just finished a course on prompt engineering with ChatGPT, and it made me realize the incredible potential of GPT-3.5. The more familiar you become with prompt patterns and prompting methods, the more accurate, reliable, consistent, and coherent the results you can get. The email example given in the lesson is easily handled by ChatGPT using CoT (chain-of-thought) prompting or, even better, LtM (least-to-most) prompting. The LLM (large language model) then divides the problem into smaller sub-problems and solves each one until it reaches an answer. It’s pretty amazing.
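
To make the LtM idea concrete, here is a rough sketch of how that decomposition could look for the return-policy email. The sub-questions and prompt wording are purely illustrative, and the sketch assumes the OpenAI Python client (v1+) with an API key in the environment:

```python
# Rough sketch of least-to-most (LtM) prompting for the return-policy email.
# Sub-questions and wording are illustrative, not a canonical recipe.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumed model name

policy = "Purchased products can be returned within 30 days of purchase."
letter = ("The toy arrived two days late, so I wasn't able to give it to my niece "
          "for her birthday. Can I return it?")

# Step 1: break the task into easier sub-questions, to be solved in order.
sub_questions = [
    "Does the customer's situation fall within the 30-day return window?",
    "What tone should the reply take, given the late delivery?",
    "What concrete next steps should the reply offer the customer?",
]

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 2: answer each sub-question, feeding earlier answers into later prompts.
context = f"Policy: {policy}\nCustomer letter: {letter}\n"
answers = []
for q in sub_questions:
    prior = "\n".join(f"- {a}" for a in answers)
    answers.append(ask(f"{context}\nAnswers so far:\n{prior}\n\nQuestion: {q}"))

# Step 3: compose the final reply email from the intermediate answers.
final_reply = ask(
    f"{context}\nUsing these intermediate answers:\n"
    + "\n".join(f"- {a}" for a in answers)
    + "\n\nWrite the full reply email to the customer."
)
print(final_reply)
```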

2 Likes

I totally agree with your thoughts.

1 Like

I agree with you.

1 Like

It is clear that the state of the art in generative AI is advancing faster than the course materials can keep up with.

Closing the thread.

1 Like