Bostrom's Superintelligence - Does AI constitute an Existential Risk?

The folks at OUP kindly sent me a review copy of Nick Bostrom's new book Superintelligence, exploring AI risk.  It's a topic that lends itself to eyerolls and easy mockery ("Computers taking over the world? No thanks, I already saw that movie.") -- but I don't think that's quite fair.  So long as you accept that there's a non-trivial chance of an Artificial General Intelligence eventually being designed that surpasses human-level general intelligence, then Bostrom's cautionary discussion is surely one well worth having.  For he makes the case that imperfectly implemented AGI constitutes an existential risk more dangerous than asteroids or nuclear war.  To mitigate that risk, we need to work out in advance if/how humanity could safely constrain or control an AGI more intelligent than we are.

Is superintelligent AGI likely?  There doesn't seem any basis for doubting its in-principle technological feasibility.  Could it happen anytime soon?  That's obviously much. . .

Continue reading . . .

News source: Philosophy, et cetera
