Artificial Emotion

Hi everyone! I’m Deiadora.

I’m curious if anyone here has been following the trends in artificial emotion (AE)?

I’m interested in ethical and emotionally intelligent language models and how to create them. My specialty is AI content design, an up-and-coming function that focuses on the end-user experience using a host of skills, including content strategy, mega-prompt engineering, RLHF, information architecture (ontologies, taxonomies, and metadata), content modelling, and more.

It started when I led a project at Airbnb last year to make the chatbot more personable to each user. After this success, I’ve gone on to work on a few more projects. I would love to find like-minded collaborators and colleagues here.

Kind regards,

@Deiadora I have not studied this area myself, but I suppose you could build at least a very basic engine by starting with words (and certain phrases) that share a similar context/tone/‘mood’ and dividing them, by their semantic word vectors/embeddings, into categories (happy, sad, mad, etc.).

You would also want to train on categories of text or sentences for which there is some consensus as to what emotion the statement conveys.
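A minimal sketch of the embedding idea above: assign each word to an emotion category by cosine similarity to a hand-picked "seed" vector per category. The tiny 3-d vectors below are invented purely for illustration; a real system would use pretrained embeddings such as GloVe or word2vec.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings: each made-up vector loosely encodes
# (valence, arousal, dominance) for illustration only.
embeddings = {
    "joyful":   (0.9, 0.6, 0.5),
    "cheerful": (0.8, 0.5, 0.4),
    "gloomy":   (-0.8, -0.4, -0.3),
    "tearful":  (-0.7, -0.2, -0.4),
    "furious":  (-0.6, 0.9, 0.7),
    "irate":    (-0.5, 0.8, 0.6),
}

# One seed vector per emotion category anchors the comparison.
seeds = {
    "happy": (0.85, 0.55, 0.45),
    "sad":   (-0.75, -0.3, -0.35),
    "mad":   (-0.55, 0.85, 0.65),
}

def categorize(word):
    """Pick the emotion category whose seed is most similar to the word."""
    vec = embeddings[word]
    return max(seeds, key=lambda cat: cosine(vec, seeds[cat]))

print(categorize("cheerful"))  # happy
print(categorize("gloomy"))    # sad
print(categorize("irate"))     # mad
```

With real pretrained embeddings you would swap the toy dictionary for model lookups, but the nearest-seed logic stays the same.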

These are some rough thoughts, though; I do not have experience in this area.

However, as a lifelong lover of literature, particularly poetry, and at times a rather emotive/well-read person myself…

I also feel emotion can be very tricky. Some people will more or less blurt out, or at least explicitly express, what they are feeling. But often it is not direct, or even obviously in the specific words they use, but in the ‘implication’ of that speech, which in many cases may relate to causes, goals, or (mis)directions not even contained within the speech or expression itself.

It is that panoply of side cases that I think would be much harder to quantify.

@nevermnd These are excellent ideas! Thank you for sharing! What is your role/function in AI?

Right now, I’m building a content strategy for an AE/AI solution that was inspired by Richard Rudd’s Gene Keys and would love collaborators and colleagues to connect with as I develop it!

I am just a student.

But I can say, since I recently completed it (and you really need to go through all the classes first to understand what is going on there), you might like the ‘Emojify’ assignment in course 5 of the Deep Learning Specialization (Sequence Models, Week 2), where we learn to automatically predict the proper emoji from a pre-trained, supervised (i.e., labeled) subset of text.
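To give a flavor of the idea (this is not the course’s actual code): represent a sentence as the average of its word vectors, then pick the emoji whose labeled examples are closest on average. The tiny 2-d vectors and the two emoji classes below are invented for illustration; the real assignment uses pretrained 50-d GloVe embeddings and a trained softmax/LSTM layer.

```python
# Made-up 2-d word vectors; words absent here are simply skipped.
word_vecs = {
    "love": (0.9, 0.1), "adore": (0.8, 0.2), "great": (0.7, 0.1),
    "hate": (-0.9, -0.1), "awful": (-0.8, -0.2), "terrible": (-0.7, -0.1),
    "i": (0.0, 0.0), "this": (0.0, 0.0), "is": (0.0, 0.0),
}

def sentence_vec(sentence):
    """Average the vectors of the known words in a sentence."""
    words = [w for w in sentence.lower().split() if w in word_vecs]
    n = len(words) or 1
    return tuple(sum(word_vecs[w][d] for w in words) / n for d in range(2))

# A handful of labeled sentences stands in for the supervised dataset.
labeled = {
    "❤️": ["i love this", "i adore this"],
    "😞": ["i hate this", "this is awful"],
}

# One centroid per emoji class, averaged over its labeled sentences.
centroids = {}
for emoji, sents in labeled.items():
    vecs = [sentence_vec(s) for s in sents]
    centroids[emoji] = tuple(sum(v[d] for v in vecs) / len(vecs) for d in range(2))

def emojify(sentence):
    """Pick the emoji whose class centroid is nearest the sentence vector."""
    v = sentence_vec(sentence)
    return min(centroids,
               key=lambda e: sum((v[d] - centroids[e][d]) ** 2 for d in range(2)))

print(emojify("this is great"))     # ❤️
print(emojify("this is terrible"))  # 😞
```

The nearest-centroid step is a stand-in for the trained classifier; the averaging trick is the part the assignment actually builds on.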

Two things are notable to me here. There is a footnote at the end of the lab that it was developed by ‘Alison Darcy and the Woebot team’ (I have no idea who that person is).

But if you click the link for the site, it has apparently gone from being a ‘curiosity’ to a major company.

I kind of don’t feel comfortable expressing my personal feelings here, or to anyone I don’t know, but when I was a Senior Lecturer at Northeastern University for a number of years, I got to hear a talk by (and meet) David Ferrucci, who was at the time the head of the IBM Watson team, fresh off their Jeopardy! Championship.

And in front of a large crowd I outright asked him, ‘Well, do you think your system can understand poetry?’ I think he kind of brushed it aside and, in the wave of hype, said ‘Yes.’ But I’ve met poets ranging from Chairs at Harvard to (later) Nobel Prize winners.

In my mind, a really, really good poem is comprehensible, yet has no prior precedent or correlation in ‘common language’.

Even in the LLMs of today, you’re never going to ‘discover’ the perfect outlier; that is neither how they are designed nor how they are structured to work.

If this is not clear, don’t ‘do this’:

Citation: Google’s AI Overviews misunderstand why people use Google | Ars Technica

I mean, I still study this technology deeply because I think it is really interesting and really could help with certain very specific questions we are challenged to understand/see by ourselves…

Anyways-- I am going to end my input here. But if you continue through the MLS and DLS specializations, it does come up.

Hi again! I’m going through the Specializations too. Great, I’ll check out the Emojify assignment when I get to it. Thanks for the insight!

The world of Artificial Emotion is really fascinating. I think as we develop AI and learn how to better train and tune models, having frameworks for ethical emotional intelligence-based responses will become more and more pertinent.

It’s a fascinating field to get into right now. I’d imagine you’d have a deep understanding of the subtleties of human emotion with your background in poetry.


I am really surprised after reading this, not because of the artificial emotion itself, but because I was thinking about this very thing a couple of days ago, when I was imagining an AI partner for humans.

And an idea struck me: as we do in reinforcement learning, we could do the same here, but instead of only choosing the outcome with the greater result, we could make it react in a way that is happy (if it receives a positive incentive) or sad (if it receives a negative incentive).

I think this was a stupid idea,
or maybe, the sol…
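The reward-sign idea above could be sketched roughly as follows: keep the usual greedy action choice of reinforcement learning, and separately map the sign of the received reward to a displayed emotional reaction. The action names, Q-values, and reaction labels here are all invented for illustration.

```python
def react(reward):
    """Map a scalar reward to an affect label for the agent to display."""
    if reward > 0:
        return "happy"
    if reward < 0:
        return "sad"
    return "neutral"

def choose_action(q_values):
    """Standard greedy choice: pick the action with the highest Q-value."""
    return max(q_values, key=q_values.get)

# Toy interaction: the agent acts, the environment returns a reward,
# and the agent pairs its next response with an emotion.
q_values = {"tell_joke": 0.8, "change_topic": 0.2}
action = choose_action(q_values)
reward = 1.0 if action == "tell_joke" else -1.0  # stand-in for user feedback
print(action, react(reward))  # tell_joke happy
```

The point of the sketch is that the emotional display is a thin layer over the reward signal the agent already receives, not a change to the learning rule itself.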

Hi @arihant.18! It is a whole world of technological innovation. We are doing our best to make AI emotionally intelligent and ethical…as human-centric as possible. Your ideas are right in line with where we are headed.

Emotionally intelligent, though, would mean that if a user came to the AI solution (say, for therapy or mental health), the AI would respond in an empathetic way, helping to lift up the user with mindfulness strategies, inclusivity, and understanding. That’s what would make it emotionally intelligent! :slightly_smiling_face:

Okay, now I get it; previously I took it another way. I was thinking that we would integrate emotion into machines to make them companions for people.

But in your case you are developing AI for therapy-type purposes, which is really cool and inspiring. I wish you luck.

@arihant.18, great. I’m glad that made sense!

That’s correct. Using it for therapy, personal, and professional development. I appreciate it! Best of luck to you too.
