Does AI have the potential to destroy human civilization?

Many prominent people like Elon Musk have warned that AI could even destroy entire human civilization. Even the Godfather of AI recently quit Google and is now openly talking about AI's potential to harm mankind… How do we cope with this problem?

1 Like

Hi @Engineer4!

While it might be true that some advancements have inherent downsides or may be used in illegitimate ways, I think statements like these are quite alarmist.

I would rather pay attention to the specific cases in which these AI systems may cause harm, say, generative AI used for fake news, or bias against certain groups of people arising from unbalanced data, instead of conceiving of AI as some kind of “terminator”, which is obviously not true.

2 Likes

Hi @Engineer4, nuance is indeed important, and it is something that is often missing in communication (mostly from people who don't know much about AI). I agree that there are aspects to control, and those indicated by @alvaroramajo are the main problems (fake news and bias against certain groups of people). Of course, technology keeps evolving: Photoshop retouching of images was already frowned upon in its day, and robotics was already blamed for job losses. These phenomena are even more pronounced now with generative AI and the difficulty of telling the real from the fake. Two things are often missing from the conversation: first, AI is not magic but is based on mathematical models, and the training data is provided by human beings; second, we should also talk about the advances in medicine and other fields instead of always highlighting the bad side of things. We are not going to stop evolution.

2 Likes

My point is that AI is not the problem; the problem is who controls it and how they are going to use it… AI is, for example, like nuclear technology: some use it for the betterment of mankind while others use it for mass destruction. How do we make sure that AI is, and will remain, in safe hands?

1 Like

@Engineer4, I think your comparison between nuclear energy and AI is excellent. It is not the researchers and the technological developments that we should criticize, but the use that is made of them. And controlling that aspect is not an easy exercise.

1 Like

Since the technology is already available, it’s a little late to do anything about safety.

Humans don't need AI to harm others at scale. But humans do need AI to address the causes that make people want to hurt each other: energy, food, health, the loss of forests and life, the overpopulation crisis. Every hundred years or so people kill each other to free up some space, and so far human civilization hasn't been able to fix that. Instead of concentrating on how people implement death, better to concentrate on how people support life. Better for life: yours, mine, and everyone else's. Without conflict there is no reason to support death.

Be part of the solution, that’s how.

Find a problem or cause that's dear to you and work on it. I find that too much doomscrolling about how AI will be the end of the world is unhelpful and demotivating. Of course anything powerful can be used for good or for evil. The choice of what you do is up to you.

The next stage is AGI. What we have now is nothing in comparison; it is controlled by humans. But AGI would be like us, only more intelligent and smarter than us. So what the prominent figures out there are demanding is not to discourage advancement in AI; they are calling for regulation on how far to go and where to stop. Recently, in a webinar, I heard a very interesting scenario about AGI used for, say, quality control. The AGI is set up to maximize productivity, and suppose it and a human are in a disagreement about something. The AGI is correct while the human is wrong. Since the AGI focuses on productivity alone, it has to eliminate any obstacle, and now the human looks like an obstacle to it. That is how such an AGI might act… Up to now we have no limitations or regulations on how far we can go.