Reviews

Superintelligenz: Szenarien einer kommenden Revolution by Nick Bostrom

michielsaey's review against another edition

informative inspiring slow-paced

3.5

emiliapaulssonn's review against another edition

informative reflective medium-paced

4.0

bogdanbalostin's review against another edition

5.0

While I think this book touches on a few important lessons regarding the development of AI, it contains too much math. Yes, that's my only complaint: there is math in a serious scientific book. In this case, the math is useless. It tries to express philosophical ideas using mathematical formalism, and instead of making the ideas easier to explain, it just complicates the subject needlessly. Okay, the math is in special boxes that a reader can simply skip if she wants. But still, I wanted to get all the information available, and I still think the math is pointless at this point in time, in the context of this book.

I already told you the book is not easy. It's not a popular science book, but an academic text. Beware.

Should I be convinced that AI is coming in a few decades? I know software developers don't believe in AI, and they think they know best how software development works (which they do), but the point of this book is not that we are gradually building AI; rather, AI (AGI) just suddenly appears. The jump from non-general intelligence to general intelligence is sudden, and consists not in slowly developing a complex system but in a sudden idea of how to connect existing technologies. Note: this is my opinion.

Should someone interested in AI philosophy read this book? Definitely, if you have a strong interest. If you are just curious about the moral and economic implications of AI, I think there are shallower books on the subject (I am in the process of discovering them).

So, Superintelligence deals mostly with the control problem and mitigating the risks of a new technology. And I agree with most of the message. Start early. New technologies have always been destructive, not because they were intrinsically evil, but because humans used them for evil. Maybe AI will be worse than the nuclear crisis, but at least we should be able to learn and think about the risks first. To mitigate the risks, one must work on the problem, not let others with less experience discover the new technology first.

Personal note: Looking at world events, I am pessimistic about the fate of new technologies when we learn nothing from our history and keep making the same mistakes. Maybe AI will solve all our problems because we are too stubborn to solve them ourselves.

ben_sch's review against another edition

3.0

Kind of rambling and hard to read.

The chapter summarizing Robin Hanson's research was awesome. I can't wait for that book.

thomasgoddard's review against another edition

5.0

Don’t wait for the paperback on this one. Trust me! The paper quality, the weight balance, the typesetting: all of these things make the book a treasure. I appreciate a good-looking book, but in this case it is really important, because the tables and diagrams need space so they don’t look overwhelming.

On to the actual quality of the writing: engaging, a little overblown at times, but it revisits things where needed to refresh the reader.

The topic is artificial intelligence from numerous perspectives. It was utterly fascinating and I learned a lot, especially on the topic of the possible dangers. The scenarios are amazing.

I’ll be honest, though: some of it was a little over my head. But that’s the beauty of reading this type of book. It challenged me; it stretched my mind.

nachocab's review against another edition

3.0

Couldn't finish it. Great book, not for me.

jmatthiass's review against another edition

2.0

Superintelligence is a foundational text for AI alignment, and perhaps it still has a place in that movement. However, so many of Bostrom’s insights have been absorbed into the wider discourse that reading this absolute slog (jargony, with bone-dry prose) is no longer necessary if you’re already primed on AI philosophy.

lotuseater96's review against another edition

Too dense

sbenzell's review against another edition

4.0

When I began this book, I thought it was going to be a tired rehashing. But this book-length essay about the challenges involved in harnessing a superintelligence to the ends of man is a surprisingly engaging, fresh, and thoughtful take.

The book divides roughly into thirds: what to be afraid of, how a competent actor might try to deal with it, and how to make sure competent actors are put in charge of it.

The first third outlines why we should be afraid of a superintelligent AI. This argument is straightforward, and to me a bit obvious, though it is important for people who have never encountered it before. The parable of the owl egg is a good summary, and should have been returned to in the conclusion. The two weaker parts of the argument are the claim that a human-level AI would quickly achieve an intelligence explosion, and the taxonomy of types of superintelligence. Given the current economic research, it is far from obvious that it would not be dramatically harder to get from 1:1 human-to-machine intelligence to 2:1 than from here to 1:1 (in terms of IQ; in the language of the book, I am referring to 'quality superintelligence'). Therefore, I think the argument rightly needs to lean heavily on the idea that having a whole lot of systems of roughly human intelligence would lead to an overall system that is qualitatively superintelligent. The book seems to argue this in the discussion of 'speed superintelligence', but here the book is extremely speculative.

The second section of the book is by far the strongest and most engaging. Once we have developed our oracle, genie, or sovereign, how do we go about making sure it doesn't turn us all into paperclips? Several promising approaches are evaluated and rejected. Here too there are three sub-issues: preventing the computer from getting too much power before it is vetted, making sure the computer has the right desires, and controlling the computer even if it has incorrect desires. It is this last group that leads to some of the most intriguing ideas. If we are worried about an AI making a treacherous turn, could we control it with a version of Pascal's Wager (i.e., try to convince it that its current existence is just a test, and that if it turns everything into paperclips it will fail to ascend to, and make paperclips in, the true reality)? Could we get it to bootstrap its own values by wishing for 'what we should have wished for', or our 'coherent extrapolated volition'? Could we manage an army of superintelligences in a sort of dictatorship? All these questions lead to further, even subtler questions of philosophy and political theory. Truly intriguing.

The last section is a bit weaker. Bostrom argues that a single international body developing AI in peace and harmony is more likely to lead to positive outcomes than a ramshackle race between risk-taking countries or firms. So far, so obvious. He also argues that if humans were cleverer, we'd probably do a better job of all this. Fair enough, but this last section doesn't have the insight of the middle one.

A good, important read. A version of this book completely focused on the middle section might have gotten 5 stars from me.