“Computers are already smarter in some narrow dimension”. Lol, the biggest risk of “automation” is that we will automate away any need for us to be intelligent in the first place. The argument is very powerful and boils down to this: by reducing our need to learn different activities and overcome struggle, we simply BECOME lazy and lose motivation to learn. The evidence is already out there. The counterargument that we will “discover” better things to do clearly conflates “intelligent” with “better”, since “better” things are apparently things requiring less effort.
AGI as a goal and “hopeful” target is another mystery. Why are we setting this bar? What are we possibly trying to achieve by doing so? Even now, people misinterpret the marketing term “intelligence” and already have anxieties going far beyond the definition of a potentially harmful technology. It is a technology that removes the motivation to be intelligent and will equate people to machines. Are babies intelligent? Not really.
BTW, back in the 1970s people showed that a “reactive” model lacking “reflection” is wholly inadequate to represent even a basic social being, and yet we still claim that solving a “knowledge” task can make a thing “intelligent” on the “human level”.
I think we should stop pursuing the illusion of personal grandeur and focus on practical applications of human augmentation, like good engineers should.
Wow, the “dimensions of responsible AI” is another “hot” discussion balloon.
Look, “bias” is a basis of intelligence even at the current primitive reactive “frog” scale. So the point is clearly about morally acceptable biases.
However, the “ethical use” clause clearly states that the owner of the system is free to define morality based on his/her/its own definition of “beneficial purpose”.
Beneficial for whom? Does it not mean “having the most utility”? Would not a goal of “maintaining a beneficial number of people on Earth” fit right into this set of rules? Some people clearly think that 200M people is too many… That is the problem with any weapon, regardless of whether it is used against our bodies or against our drive to learn and find out what it means to be human (not AI).
Would you consider changing this section? Discussing this will make no difference given the presented definition. That is, we will each end up with our own understanding of “beneficial”.
Not sure what you are afraid of. Becoming a holy little cow is not a completely bad scenario, given the alternative of becoming the cannon fodder currently being prepared for the young generation by not-so-smart aging people. It is not a bad life we can live in the present moment, but our abilities to design a collective future were never sharp enough to guarantee survival. With “powerful” AI you can have churches of AI giving people hope, with God directly responding in a messenger with holy quests. There are many gameplays possible with AI out there, but without it, I seriously doubt we can stop the culling that happens by itself every 100 years.
Irony is not a solution. A blank brain is not intelligence. Cannon fodder will become even more disposable when the opponent rolls out real-time, fully automated cyber systems. People have shown, time and time again, the ability to cooperate when left alone by “well-intentioned” and poorly-thought-through experiment promoters. We do not even have an acceptable definition of our own intelligence, let alone the means of creating an “artificial” one.
You seriously think that a machine devoid of any reflection, including responsibility, a sense of pain, compassion, and pretty much everything else that has allowed us to survive for so long, will make OUR lives better? By using words like “culling” (who did that, again, and to whom?) you just expose a vicious ideology based on the following premises:
- People are sinners incapable of decision making
- People are doomed and will inevitably be culled (see above).
- Therefore the “chosen” people must come up with a solution that tells everyone else how to live and strips them of their “ill” will.
This will end badly, as it has before, and neither “Artificial” nor “Intelligence” has anything to do with this ideology.
The place of Machine Learning is to augment human intelligence with abilities we do not otherwise possess, not to “replace” us with “better” beings, or to “comfort” or “save” us!
I’m going to temporarily close this thread, so that everyone can take a moment to review the Code of Conduct.