A review by huddycleve
The Precipice: Existential Risk and the Future of Humanity by Toby Ord

reflective slow-paced

1.75

I am very fond of the basic premises of longtermism, so it is extremely unfortunate that the movement has been heralded and dominated by the worst tech bro brain rot imaginable. This book reads like a quaint 1960s artifact that we look back on with a shrug at its sheer simplicity. Ord hands the highest existential risk rating for humanity to general A.I., a claim operating in a reality alternate to our own. And yes, I am aware that the claim is about *existential* risk, not just general risk. I still think the fear of AI stoked by Silicon Valley randos is warranted on a small scale, but inflating it into an EXISTENTIAL risk makes me think these guys have been reading way too much Isaac Asimov (and not critically engaging with his work, to boot). For a taste, the possible AI-ruled future Ord describes is, to paraphrase/summarize, one in which humanity has its autonomy taken from it by a proportionally small conglomerate of intelligent “life” that makes cost-benefit analyses without human wellbeing/happiness/utility in mind or priority; and one which, even in the absence of physical robotic avatars to enforce a monopoly on violence and wrest control of decision-making processes directly, could feasibly entice a sufficient portion of humans to act as executors of its will.

Toby, my dude, you’ve just restated the problem of neoliberal, late-stage capitalism. This is already in effect, albeit without a scary HAL 9000 steering the ship.

Anyway, my biggest gripe with the dominant strains of longtermism (other than a myopic praise of individualist effective altruism) is that it seems too focused on preserving the human *species* rather than humanity as a concept. It bends over backward to abstract itself, trying to make readers imagine humanity thousands or even millions of years hence, while also feeling a step or two away from a weirdly genetic anthropocentrism.

Let me put it this way: I am less interested in humanity’s ultimate “potential,” and more in securing a place for people to freely be themselves with no worries over food, housing, infrastructure, etc. That is, if you were to put two things on a scale: (1) a quiet, nonviolent extinction of humanity after a sustained era of universalized peace and general prosperity (be it a century, a millennium, or more from now) vs. (2) a small-scale genocide in a timeline where humanity somehow achieves, way further down the line, a kind of infinite existence, I would absolutely opt for the first. Whereas I think Ord would forgo option (1), the cessation of major conflict followed by a peaceful (if certain) extinction, so long as this “potential” is ultimately met. That’s the key difference. To put it plainly: the longtermist cares more about preserving a base genetic lineage and an at least modern form of “civilization” (dare I say, WESTERN civilization??) than about solving the systemic problems that stand in the way of general prosperity, even if solving them means humanity inevitably goes extinct and our tech won’t be quite as cool as it could have been.

Also, Ord’s writing style is … terrible. It would be fairly impressive for a high school essay in an ethics 101 class, but the prose is turgid and can only get across overly simplistic ideas. Other gripes: a lack of intersectionality (“humanity” is put on entirely the same scale, with only lip service paid to the existence of systemic inequities); really ass-pully “math” (he just “quantifies” the risk factor of every item discussed, pegging unaligned AI at a one in ten chance of causing the end of humanity, and total existential risk this century at one in six … like, what?); and zero engagement with the philosophy/critical theory that has been mulling over this basic question for decades.

Conclusion: Ord needed someone to actually tell him “no” or “bad idea” more often when he was young. And if you’re reading this, Prof Ord: it’s ok to fear death; we don’t have to construct an ineffective socio-philosophical movement to eliminate that fear (which is not possible, btw). Just let the void take over in the end…