I like philosophy that engages with the world it lives in, and that has an impact not just on academia but on institutions, corporations, and public discourse. Philosophers such as Luciano Floridi (an information theorist and ethicist who has served on an advisory council for Google) and Nick Bostrom (who also earned a PhD in Economics, and whose book Superintelligence was a New York Times bestseller) are redefining what it means to practice philosophy outside the ivory tower.
Last week I finished reading The Most Good You Can Do, a book by Peter Singer that supports the “effective altruism” movement, a highly utilitarian approach to maximizing the total amount of good in the world and minimizing needless suffering and death among sentient beings. In a nutshell, the book argues that if you have a choice between (1) becoming a charity worker and going to a poor and war-torn country to save lives, and (2) becoming an investment banker, saving a substantial part of your income, and giving it to a charity that will use it to send many people to poor and war-torn countries to save lives, then the ethical thing is to choose the latter, because it is the choice that maximizes total good. The book also highlights a number of resources in the field of charity evaluation, including the Centre for Effective Altruism, an umbrella organization for an evidence-based, analytical approach to philanthropic giving. You have seen me quote Singer before in the context of vegetarianism, or at least decreased meat consumption, as a way to reduce animal suffering (here and here); the scope of this book, however, is much broader, and I believe it will appeal to the – hopefully many more – people who care about human suffering. It will also challenge those of us who are proud of our support for museums, the performing arts, and other institutions whose funding issues can be called “first-world problems”: in the effective altruism framework, a new wing for a museum is extremely poor value compared to an intervention that uses the same money to prevent malaria or cure trachoma.
The book gets even more intellectually challenging in the final chapter, “Preventing Human Extinction”, which calculates that – considering not just present lives on Earth, but all future lives that would be annihilated by an asteroid collision with Earth – we as a society should find it very good value to give NASA or some other organization funding ranging anywhere between $100 billion and $100 trillion to be spent on a system able to prevent a catastrophic hit by an extinction-sized asteroid. Here, Singer quotes Bostrom’s definition of existential risk, a situation in which “an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” There are many existential risks, such as nuclear war, pandemics, global warming, and malevolent artificial superintelligence. Singer’s somewhat surprising conclusion is that “it seems that reducing existential risk should take precedence over doing other good things”, including helping people in extreme poverty today, but he argues that causes such as reducing extreme poverty are more likely to attract people to effective altruism than the more abstract cause of reducing existential risk.
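The reasoning behind that striking dollar range is essentially an expected-value calculation: a tiny annual probability of catastrophe, multiplied by an astronomically large number of future lives, still yields an enormous expected loss. Here is a minimal sketch of that arithmetic; all three numbers are purely hypothetical placeholders of mine, not figures from Singer or Bostrom:

```python
# Illustrative expected-value sketch of the asteroid-defence argument.
# Every number below is a hypothetical assumption chosen for illustration only.

annual_impact_probability = 1e-8   # assumed yearly chance of an extinction-sized hit
future_lives_at_stake = 1e16       # assumed number of future lives that would never exist
value_per_life = 1e6               # assumed dollar value assigned to one life

# Expected dollar loss per year from doing nothing about the risk.
expected_annual_loss = (
    annual_impact_probability * future_lives_at_stake * value_per_life
)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $100 trillion
```

With these placeholder inputs the expected annual loss lands at the top of the $100 billion to $100 trillion range Singer discusses, which is why even a very expensive deflection system can look like good value once all future lives are counted.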
The book ends with a counterintuitive – but surprisingly optimistic – point of view on artificial intelligence and its perceived dangers:
Some effective altruists have shown special interest in the dangers inherent in the development of artificial intelligence (AI). They see the problem as one of ensuring that AI will be friendly, by which they mean, friendly to humans […] The replacement of our species by some other form of conscious intelligent life is not in itself, impartially considered, catastrophic […] The risk posed by the development of AI […] is not so much whether it is friendly to us, but whether it is friendly to the idea of promoting well-being in general for all sentient beings it encounters, itself included […] There is some reason to believe that, even without any special effort on our part, superintelligent beings, whether biological or mechanical, will do the most good they possibly can.