I am just a student.
But I can say, since I recently completed it (and you really do need to go through all the preceding classes first to understand what is going on), that you might like the ‘Emojify’ assignment in the Deep Learning Specialization, Course 5 (Sequence Models), Week 2, where we learn to automatically predict the proper ‘emoji’ for a sentence from a labeled (i.e. supervised) set of example text.
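For context, the core idea of that assignment's baseline version can be sketched as: average the pre-trained word vectors of a sentence, then feed that average to a softmax classifier over emoji classes. The embeddings and weights below are made-up toy values purely for illustration, not the assignment's actual GloVe vectors or learned parameters (in the assignment, the weights are learned by gradient descent on the labeled set):

```python
import numpy as np

# Toy 4-dimensional "word embeddings" standing in for pre-trained GloVe
# vectors; these values are invented for illustration only.
EMBEDDINGS = {
    "i":    np.array([0.1, 0.0, 0.0, 0.1]),
    "love": np.array([0.9, 0.1, 0.0, 0.0]),
    "hate": np.array([0.0, 0.9, 0.1, 0.0]),
    "food": np.array([0.0, 0.0, 0.9, 0.1]),
    "this": np.array([0.1, 0.1, 0.1, 0.1]),
}
EMOJIS = ["❤️", "😞", "🍴"]  # hypothetical class labels

def sentence_to_avg(sentence):
    """Average the embeddings of every known word in the sentence."""
    vecs = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return np.mean(vecs, axis=0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def predict(sentence, W, b):
    """Pick the emoji whose softmax score on the averaged vector is highest."""
    probs = softmax(W @ sentence_to_avg(sentence) + b)
    return EMOJIS[int(np.argmax(probs))]

# Hand-set weights for the demo; the real assignment learns W and b
# from the labeled training sentences.
W = np.array([[5.0, 0.0, 0.0, 0.0],
              [0.0, 5.0, 0.0, 0.0],
              [0.0, 0.0, 5.0, 0.0]])
b = np.zeros(3)

print(predict("i love this", W, b))
```

Even this bag-of-averaged-vectors baseline classifies simple sentiment sentences surprisingly well; the later part of the assignment replaces the averaging with an LSTM so word order matters.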
Two things are notable to me here, though. There is a footnote at the end of the lab saying it was developed by ‘Alison Darcy and the Woebot team’ (I have no idea who that person is).
But if you click the link to the site, it has apparently gone from being a ‘curiosity’ to a major company.
I kind of don’t feel comfortable expressing my personal feelings here, or to anyone I don’t know, but when I was a Senior Lecturer at Northeastern University for a number of years, I got to hear a talk by (and meet) David Ferrucci, who was then the head of the IBM Watson team, fresh off their Jeopardy! championship.
And in front of a large crowd I outright asked him, ‘Well, do you think your system can understand poetry?’ I think he kind of brushed it aside and, in the wave of hype, said ‘Yes’. But I’ve met poets ranging from chairs at Harvard to (later) Nobel Prize winners.
In my mind, a really, really good poem is both comprehensible and yet has no prior pattern or correlation in ‘common language’.
Even with the LLMs of today, you’re never going to ‘discover’ that perfect outlier; that is neither how they are designed nor how they are structured to work.
If this is not clear, don’t ‘do this’:
Citation: Google’s AI Overviews misunderstand why people use Google | Ars Technica
I mean, I still study this technology deeply, because I think it is really interesting and really could help with certain very specific questions we are challenged to understand or see by ourselves…
Anyway, I am going to end my input here. But if you continue through the MLS and DLS specializations, this topic does come up.