Expected Value without Expecting Value

I'm currently teaching a class on "Effective Altruism" (vaguely related to this old idea, but based around MacAskill's new book).  One of the most interesting and surprising (to me) results so far is that most students really don't accept the idea of expected value.  The vast majority of students would prefer to save 1000 lives for sure rather than have a 10% chance of saving a million lives, even though the latter choice has 100 times the expected value.

One common sentiment seems to be that a 90% chance of doing no good at all is just too overwhelming, no matter how high the potential upside (in the remaining 10% chance), when the alternative is a sure thing to save some lives.  It may seem to neglect the "immense value of human life" to let the thousand die in order to choose an option that will in all likelihood save no-one at all.  (Some explicitly assimilate the low chance of success to a zero chance: "It's practically as though there's no. . .
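The expected-value comparison above can be checked with a quick calculation (a minimal sketch using the post's own numbers; expected value is just probability times payoff):

```python
# Expected value (in lives saved) of each option from the post.
sure_thing = 1.0 * 1000        # save 1000 lives with certainty
gamble = 0.10 * 1_000_000      # 10% chance of saving a million lives

print(sure_thing)              # 1000.0
print(gamble)                  # 100000.0
print(gamble / sure_thing)     # 100.0 -- the gamble has 100x the expected value
```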

Continue reading . . .

News source: Philosophy, et cetera
