What are the moral implications of intelligent AGI? [excerpt]

The possibility of human-level Artificial General Intelligence (AGI) remains controversial. While its timeline is uncertain, the question stands: if engineers are able to develop truly intelligent AGI, how would human–computer interactions change? In the following excerpt from AI: Its Nature and Future, artificial intelligence expert Margaret A. Boden discusses the philosophical consequences of truly intelligent AGI.

Would we—should we?—accept a human-level AGI as a member of our moral community? If we did, this would have significant practical consequences. For it would affect human–computer interaction in three ways. First, the AGI would receive our moral concern—as animals do. We would respect its interests, up to a point. If it asked someone to interrupt their rest or crossword puzzle to help it achieve a “high-priority” goal, they’d do so. (Have you never got up out of your armchair to walk the dog, or to let a ladybird out into the garden?) The more we judged that its. . .

Continue reading . . .

News source: OUPblog » Philosophy
