Reviews

The Precipice: Existential Risk and the Future of Humanity by Toby Ord

lizardcha's review against another edition

adventurous dark informative inspiring reflective medium-paced

4.0

johnwean's review against another edition

challenging hopeful informative inspiring reflective medium-paced

4.25

funsized327's review

challenging dark hopeful informative slow-paced

2.5

sophie_pesek's review against another edition

3.0

Part 2 of my longtermist reading (so I can better refute the more out-there angles). I liked this more than "What We Owe the Future," and it was actually pretty clearly written. I still disagree, especially with the final sections on space travel and exponential growth, but found the AI section frighteningly poignant.

jan_prellert's review against another edition

challenging informative inspiring reflective medium-paced

4.0

ben_mullenger's review against another edition

challenging hopeful informative inspiring reflective

4.25

elsazetterljung's review against another edition

challenging informative inspiring reflective medium-paced

4.25

huddycleve's review against another edition

reflective slow-paced

1.75

I am very fond of the basic premises of longtermism, so it is extremely unfortunate that it has been heralded and dominated by the worst tech bro brain rot imaginable. This book reads like a quaint 1960s item that we look back on with a shrug at its sheer simplicity. Ord hands the highest existential risk rating for humanity to general AI, a claim operating in a reality alternate to our own. And yes, I am aware that the claim is *existential* risk, not just general risk — I still think the fearmongering about AI by Silicon Valley randos is warranted on a small scale, but when it's inflated into an EXISTENTIAL risk, it makes me think these guys have been reading way too much Isaac Asimov (and not critically engaging with his work, to boot). For a taste, the possible future Ord describes, one ruled by AI, is (to paraphrase/summarize) one in which humanity has its autonomy taken from it by a proportionally small conglomerate of intelligent “life” that makes cost-benefit analyses without human wellbeing/happiness/utility in mind or priority; one which, even in the absence of physical robotic avatars to seize a monopoly on violence and wrest control of decision-making itself, could feasibly entice a sufficient portion of humans to act as executors of its will.

Toby, my dude, you’ve just restated the problem of neoliberal, late-stage capitalism. This is already in effect, albeit without a scary HAL 9000 steering the ship.

Anyway, my biggest gripe with the dominant strains of longtermism (other than a myopic praise of individualist effective altruism) is that it seems too focused on preserving the human *species*, and not humanity as a concept. It bends over backward to abstract itself, trying to make readers imagine humanity thousands or even millions of years hence, while also feeling a step or two away from a weirdly genetic anthropocentrism.

Let me put it this way: I am less interested in humanity’s ultimate “potential,” and more in securing a place for people to freely be themselves with no worries over food, housing, infrastructure, etc. That is, if you were to put two things on a scale — (1) a quiet, nonviolent extinction of humanity after a sustained era of universalized peace and general prosperity (be it a century, a millennium, or more from now) VS. (2) a small-scale genocide in a timeline where humanity somehow achieves, way further down the line, a kind of infinite existence — I would absolutely opt for the first. Whereas I think Ord is OK with forgoing the cessation of major conflict (even if that means humanity will certainly go extinct in a nearer future, albeit peacefully) so long as this “potential” is ultimately met. That’s the key difference; to put it plainly: preserving a base genetic lineage and an at least modern form of “civilization” (dare I say, WESTERN civilization??) matters more to the longtermist than solving the systemic problems that lead to general prosperity, even if that means humanity inevitably goes extinct and maybe our tech won’t be quite as cool as it could have been.

Also, Ord’s writing style is … terrible. It would be fairly impressive for a high school essay in an ethics 101 class, but the style is turgid and can only get across overly simplistic ideas. Other gripes: a lack of intersectionality (“humanity” is put on entirely the same scale, with only lip service paid to the existence of systemic inequities); really ass-pully “math” (he just “quantifies” the risk factor of every item discussed, with unaligned AI for instance getting a one in ten chance of causing the end of humanity … like, what?); and zero engagement with the philosophy/critical theory that has been mulling over this basic question for decades.

Conclusion: Ord needed someone to actually tell him “no” or “bad idea” more often when he was young. And if you’re reading this, Prof Ord: it’s OK to fear death; we don’t have to construct an ineffective socio-philosophical movement to eliminate that fear (which is not possible, btw). Just let the void take over in the end…

eralon's review against another edition

4.0

This book conveys the philosophical line of thinking that, in the realm of philanthropy, our greatest responsibility is to the future, since it holds the potential for the greatest number of humans, in what could be the time of greatest human flourishing. Mostly it discusses the potential threats to this amazing future.

ellemental's review against another edition

1.0

i had to read this book for my required environmental and technological ethics class, so i’m prefacing this review by saying that i did not voluntarily read this book and i am uninterested in the study of ethics as a whole. i think ord’s exploration of existential risks and humanity-destroying catastrophes is interesting, but i think he neglects that each of these catastrophes will generally have a build-up that affects marginalized groups more than others, and that we can’t ignore the smaller problems to focus on the big ones. it is necessary to study and understand events that could destroy humanity and our planet, but we can’t do that at the expense of current populations, especially those facing more pertinent issues like war, poverty, et cetera. this book is not my cup of tea and i hopefully won’t have to read any of his works again.