A dilemma for AI

Do the negative aspects of AI, though fewer in number than the positive aspects, still outweigh them?

It depends on what you count as a “negative” or “positive” aspect.
And you’ve assumed some numerical weighting for each class.
Your question lacks clarity.

Hi @TMosh

The negative aspect that I want to discuss is cognition in AI.

Regards
Muhammad John Abbas

Still not specific enough.
“Cognition” could be either positive or negative, depending on your goals.

My point is the possibility of AI gaining self-awareness, enough to modify and think for itself. So far, AI hasn’t achieved that level of cognitive ability, but in the near future there could be such cases. I have heard a lot of debate about AI turning against the world, so I want to clarify this point.

Of course it is a great and crucially important question. Unfortunately, it’s not so easy to just “clarify” it in a few well-chosen paragraphs. The very best minds in the field and many related fields have been discussing this for quite a while now with no real concrete resolution in sight. Maybe the best thing we can do here is to give some references to books and articles by thinkers worth listening to on this and related subjects.

Look up the works of Max Tegmark (Life 3.0), Stuart Russell (Human Compatible), Ray Kurzweil (The Singularity Is Near), Gary Marcus (you can find him on Substack), Bill Joy (Why the Future Doesn’t Need Us) and many more. Of course Prof Ng is one of the industry-recognized leaders in the field and he has spoken and written about this on many occasions as well. He covers these issues and much more every week in his newsletter The Batch. If you haven’t yet signed up for notifications on that, it’s highly recommended. He also did a discussion with Yann LeCun a few months ago about the proposed “AI Pause” that is definitely worth a listen.

If you want a quick intro to some of the debate beyond what Prof Ng and Yann LeCun discussed, you can find a number of TED Talks on AI and related subjects. At least Max Tegmark and Sam Harris (not an AI scholar, but good at looking at things from the standpoint of ethics and moral philosophy) have TED Talks on the risks and benefits of AI that are worth a look, and I’m sure there are more.

Disclaimer: the references I gave above are not in any way complete, but are a few sources that I am aware of. I’m sure there are very many more and perhaps others with wider exposure in this area can give us additional references.


AI’s cognitive abilities scare humans because we believe AI may behave as we do.

Maybe AI will be to us what we are to the other species on the planet.

Yes, that is one of the ways in which the idea seems scary. But some thinkers in this space (e.g. Jeff Hawkins in his recent book A Thousand Brains) make the point that an intelligence created from logic running on silicon wouldn’t have the biological and evolutionary motivations that drive the types of human behavior that can be brutish or evil. In other words, an artificial intelligence would not have the same underlying motivations. But that then takes you to the question of how the algorithms are defined: how do the programmers define those motivations? What is the loss function? How do you ensure that the loss function or functions really capture everything required for the behavior of the AI to align with our goals? That is the “Alignment Problem”. There are lots of thought experiments in this space, e.g. the famous “Paper Clip Problem”; try googling that to learn more (a toy sketch of the idea follows below). Another related and fundamental point is that it’s an open question how to create an algorithm that models what we call “common sense”. We need to solve that problem before the idea of an Artificial General Intelligence (AGI) becomes less than terrifying.
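
To make the “Paper Clip” point concrete, here is a minimal toy sketch in Python. None of this comes from the sources above; the reward functions, names, and numbers are all invented for illustration. An exhaustive optimizer that maximizes a reward counting only paperclips converts every available resource into paperclips; adding a term for an otherwise unmodeled value changes the optimum. The hard part in reality, of course, is that we can’t enumerate every value we forgot to write down.

```python
# Toy illustration (invented example): the optimizer does exactly what the
# reward function says, not what we meant.

def naive_reward(paperclips: int, resources_left: int) -> float:
    """Counts only paperclips; everything else we care about is ignored."""
    return float(paperclips)

def patched_reward(paperclips: int, resources_left: int) -> float:
    """Also values keeping at least a few resources for everything else."""
    return float(paperclips) + 10.0 * min(resources_left, 5)

def best_plan(reward, total_resources: int = 100) -> int:
    """The 'agent': exhaustively choose how many resources become paperclips."""
    return max(
        range(total_resources + 1),
        key=lambda used: reward(used, total_resources - used),
    )

print(best_plan(naive_reward))    # 100 -- converts every last resource
print(best_plan(patched_reward))  # 95  -- leaves 5 resources untouched
```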

Of course there are also deep philosophical issues here: I’m not a cognitive scientist and don’t even play one on TV, but my understanding from what I’ve read and listened to is that we don’t even really understand what Consciousness actually is and what the fundamental biological mechanisms are that give rise to it. There are various proposed theories, but it’s still very much an open question. What would it mean for an algorithm to have “Consciousness” or a sense of self? Do you believe that Consciousness is fundamentally a biological process? Or if you believe that the human nervous system is just a computer made of meat running an algorithm trained by evolution and our past experiences, why couldn’t a computer made of silicon running an algorithm of equivalent complexity to our biological ones also have “feelings”? Hard questions and we have a lot more to learn. It seems likely that we will not run out of interesting questions in this space anytime soon. :nerd_face:


I came across this podcast from Radio Davos of the World Economic Forum, where Stuart Russell talked candidly about “What could possibly go wrong with AI”.

In this podcast, he also mentioned the main purpose of the open letter that he and other AI leaders signed urging a six-month halt to AI development.
