A review by bogdanbalostin
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

5.0

While I think this book touches on a few important lessons regarding the development of AI, it contains too much math. Yes, that's my only complaint. There is math in a serious scientific book. In this case, the math is useless: it tries to express philosophical ideas through mathematical formalism, and instead of making the ideas easier to explain, it only complicates the subject needlessly. Granted, the math sits in special boxes that a reader can simply skip if she wants. Still, I wanted to get all the information available, and I maintain that the math is pointless at this point in time, in the context of this book.

As you can probably tell by now, the book is not easy. It's not a popular science book but an academic text. Beware.

Should you be convinced that AI is coming in a few decades? I know software developers don't believe in AI, and they think they know best how software development works (which they do), but the point of this book is not that we are deliberately building the AI; rather, the AI (AGI) appears suddenly. The jump from non-general intelligence to general intelligence is abrupt: it consists not in slowly developing a complex system, but in a sudden insight about how to connect existing technologies. Note: this is my opinion.

Should someone interested in AI philosophy read this book? Definitely, if you have a strong interest. If you are just curious about the moral and economic implications of AI, I think there are lighter books on the subject (I am in the process of discovering them).

So, Superintelligence deals mostly with the control problem and with mitigating the risks of a new technology. And I agree with most of the message: start early. New technologies have always been destructive, not because they were intrinsically evil, but because humans used them for evil. Maybe the AI will be worse than the nuclear crisis, but at least we should be able to learn and think about the risks first. To mitigate the risks, one must work on the problem, not let others with less experience discover the new technology first.

Personal note: Looking at world events, I am pessimistic about the fate of new technologies, given that we don't learn anything from our history and keep making the same mistakes. Maybe the AI will solve all our problems, because we are too stubborn to solve them ourselves.