Concerns about AI reaching superintelligence?

Absolutely (do I start to sound like ChatGPT? …:downcast_face_with_sweat:), but frankly he seems to worry about the wrong things. He also worries about completely untestable metaphysics like the “simulation hypothesis”, which is really the domain of sci-fi fun of the Greg Egan variety. Nothing useful can be said about it, because nothing indicates what the simulation could be running on, and that substrate could be anything of arbitrary power.

Anyway, as posted some time ago:

I posted the above in the context of an article on the “Future of Life Institute” safety assessment report: