"Ethical use" of recommender systems


I find Andrew Ng's comment in the lecture “Ethical use of recommender systems” about “filtering out problematic content” quite disturbing.

Of course, anything that is illegal has to be filtered out. This is not an ethical issue, but a legal one.

But who decides what is a “conspiracy theory”, what is “misinformation”, or even what is “hate speech”? Is it “hate speech” to call Fauci a liar because he claimed that the vaccine prevents you from getting infected and from infecting other people (which he did claim, and which turned out to be simply false)?

Do we really want computer scientists or social media companies to make this decision for us? Anyone who followed the content of the “Twitter Files” should doubt that the collaboration between government and social media companies over the last few years has been good for society. I do not doubt, however, that it has been good for the people in power, for the billionaires, and for big pharma and big tech. But it was in fact anti-science, anti-human-rights, and anti-free-speech. So what is good, what is bad, and who decides?

Andrew should at least have mentioned that it is very concerning to “filter out” content, since doing so per se violates essential aspects of democracy such as free science and free speech. It should not be Anthony Fauci, the FBI, Twitter, or Facebook who decides such things, because if that were the case, we could end up in a Chinese model of society quite soon.

Andrew, in this respect, seems to have adopted the “establishment perspective”, which has brought great harm to our societies in recent years.

Just my two cents.

Hi there,

thank you for your feedback!

From an AI development perspective, I believe that during the design and implementation process you can contribute to an unbiased solution by following best practices, and you can validate that with data!

However, deciding what counts as ground truth in controversial discussions does not appear to be a developer’s responsibility. A good leadership team and product management should provide orientation and take governance into account in addition to strategic goals.
Note: as an AI engineer or developer (both of which, in my understanding, are the intended audience of the MLS specialization), you might be able to come up with proposals backed up with data, to prepare high-quality decision proposals or policies, for example a smart labelling strategy that takes trusted parties into account.
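To make the idea of such a labelling strategy concrete, here is a minimal hypothetical sketch (the function name, the 2/3 threshold, and the labels are my own illustration, not from the course): labels from several independent trusted reviewers are aggregated by majority vote, and items where the reviewers disagree are escalated for a human policy decision instead of being auto-filtered.

```python
from collections import Counter

def aggregate_labels(annotations, min_agreement=2/3):
    """Majority-vote label with an agreement threshold.

    annotations: labels assigned by independent trusted reviewers.
    Returns (label, agreed); agreed is False when no label reaches
    the threshold, signalling that the item should be escalated to
    a human policy decision rather than filtered automatically.
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    agreed = votes / len(annotations) >= min_agreement
    return label, agreed

# Example with three hypothetical reviewers per item:
print(aggregate_labels(["ok", "ok", "flag"]))      # ('ok', True)
print(aggregate_labels(["ok", "flag", "unsure"]))  # agreed=False -> escalate
```

This keeps the contested judgment call with people and policy, while the code only measures how much the trusted parties actually agree.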

Also, out of curiosity, you can check how other companies deal with misinformation or misleading information.

Hope that helps!

Best regards

Hi Christian,

yes, you are of course right: as a newbie in the field and in the role of a developer or engineer, I will probably not be in a situation where I have to decide about “ground truth” questions.

However, if I understand Andrew correctly, ethical questions should also be considered when “choosing my job”. So I might be in a situation where I have to decide whether the philosophy of the company that might soon pay me is in sync with my ethical position, or with what I think might be harmful or helpful for society or humanity.

From that perspective, my thoughts above were just an addition to what Andrew said in the course. I simply wanted to point out that filtering “possibly harmful” content might itself be harmful, an aspect Andrew somehow ignored.

Best regards