Is AI a real opportunity for mankind or another potential danger to our society?

We all agree on the benefits of AI and how it allows us to make substantial changes to our daily lives. However, one may wonder whether the excessive development of that technology could lead to situations where it turns against mankind. What if this technology falls into the wrong hands, spirals out of control, and threatens the very existence of humanity?
Do not get me wrong. I am not trying to encourage prospective users to fear AI. But, as human beings, we have to put in place mechanisms that will prevent us from losing control of that amazing technology.


Haven't you heard the expression "with that lighter you can cook your meal, or you can burn your house down too"? This is the threat of all technology: it can be misused, and the way to avoid that is to make humanity more understanding, more conscious of what's happening. This goes beyond AI and any particular technology.


It is a good point that this same observation applies to pretty much any technology. You can use it for its intended benign purpose or you can intentionally misuse it and cause various types of harm. But there’s another level of complexity with something like AI: there is also what you might call the “Sorcerer’s Apprentice” problem where your intentions are benign, but the technology “gets away” from you and has harmful effects that you honestly did not anticipate or intend. People refer to this as the “Control Problem” or the “Goal Alignment Problem” or the “Unintended Consequences” problem.

This is a deep and complex area for discussion. There is a huge amount of prior art in this space. Many people who are involved in the technology, including Prof Andrew Ng, have thought deeply and spoken often on this and other related topics. There are lots of ways to go deeper here. One would be to subscribe to Prof Ng’s “The Batch” newsletter, which frequently mentions topics relevant to the ethics of AI and prevention of unintentional harm.

The Stanford Human-Centered AI Institute, founded by Prof Fei-Fei Li, is doing a lot of work in this space. Take a look at their website and the resources available there.

Here’s a discussion between Prof Li and Prof Ng about AI in Medicine, in which the risk and ethics issues figure prominently.

Here’s an interview with Prof Stuart Russell of UCB on this topic. He’s also written several books on the subject, most recently Human Compatible: Artificial Intelligence and the Problem of Control.

Other well known scholars who have thought a lot on this subject include Nick Bostrom, Sam Harris, Max Tegmark and many others. If you search TED, you’ll find talks by all those people and more that relate to the questions here.

Another influential thinker in this space is Ray Kurzweil, whose books The Age of Spiritual Machines and The Singularity Is Near were widely read and discussed.

And on the general subject of unintended consequences, the famous essay Why the Future Doesn't Need Us by Bill Joy is also thought-provoking.