A dilemma for AI

Of course it is a great and crucially important question. Unfortunately, it’s not easy to just “clarify” it in a few well-chosen paragraphs. The best minds in the field and in many related fields have been debating this for quite a while now, with no concrete resolution in sight. Maybe the best thing we can do here is give some references to books and articles by thinkers worth listening to on this and related subjects.

Look up the works of Max Tegmark (Life 3.0), Stuart Russell (Human Compatible), Ray Kurzweil (The Singularity Is Near), Gary Marcus (you can find him on Substack), Bill Joy (“Why the Future Doesn’t Need Us”), and many more. Of course Prof. Ng is one of the recognized leaders in the field, and he has spoken and written about this on many occasions as well. He covers these issues and much more every week in his newsletter, The Batch. If you haven’t yet signed up for notifications on that, it’s highly recommended. He also had a discussion with Yann LeCun a few months ago about the proposed “AI pause” that is definitely worth a listen.

If you want a quick intro to some of the debate beyond what Prof. Ng and Yann LeCun discussed, there are a number of TED Talks on AI and related subjects. Max Tegmark and Sam Harris (not an AI scholar, but good at approaching the question from an ethical and moral-philosophy standpoint) both have TED Talks on the risks and benefits of AI that are worth a look, and I’m sure there are more.

Disclaimer: the references above are not in any way complete; they are just a few sources I’m aware of. I’m sure there are many more, and perhaps others with wider exposure in this area can give us additional references.
