Hi everyone, I’m Prakash Hinduja, born in India and now living in Geneva, Switzerland. I’ve been thinking a lot about the challenge of balancing innovation and safety in AI systems. If you have any opinions or suggestions, please share them with me.
Yes, you walk slowly, step by step, and keep your eyes open to see where you are putting your feet, if you get what I mean :)!
Absolutely
AI Safety is (obviously) a huge and consequential question. But it’s kind of like asking “Do you have any thoughts about physics?”
There is no simple, clean answer that someone can give you in a single well-thought-out paragraph.
The topic has come up here before, e.g. here’s a thread with some references and links.
There was a global conference on AI safety at Bletchley Park in the UK (a famous place in the history of cryptography) in 2023. In reaction to that, here’s a concise, high-level response to the issues raised there from Professor Stuart Russell of UC Berkeley, who has also written a number of books on the subject.
Here’s the Wikipedia page about the Bletchley Park summit. Follow-up summits have been held since then.
There’s also a recently published book by Eliezer Yudkowsky and Nate Soares titled If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They have been doing their “book tour” mostly in podcast form, so you can hear what they have to say on a number of recent podcasts. Of course, Yudkowsky is one of the prominent “AI doomers,” as you might infer from the book’s title.