Superintelligence
Paths, Dangers, Strategies
Nick Bostrom
read on November 1, 2014

This is probably the most disappointing book of the year. It's not bad; it's just really unpleasant to read. I expected it to be fantastic, given all of the following:

  • Great premise: That advanced machine intelligence is coming, and that we may not be able to control it once created.
  • Great author: Bostrom was early to the 'Matrix' style argument that we are likely already living inside a computer simulation.
  • Great cover. I mean, right?

Anyway, with all that going for it, the book is barely tolerable. Bostrom's prose and general narrative are epically boring. This book became a massive roadblock in my usual reading volume because I avoided finishing it for so long; it seemed like such a waste of time. It wasn't all bad, though. There were several things I did like:

  • First, Bostrom makes a clear argument that superintelligence (which he defines very formally over many pages, but which I can define here just as well as "something very, very much smarter than human beings") is coming, probably inevitably. Humans are making progress at genetically selecting embryos to be smarter. We're also getting really good at scanning/mapping/recreating biological brains digitally, which we would then be able to improve upon. And lastly, it looks like we might actually be getting somewhere with artificial intelligence software. Those three things combined make it pretty much inevitable that we will be able to build/design superintelligent agents anywhere from 50 to a few hundred years from now. (Bostrom also makes a compelling, if not immediately obvious, case that this will happen sooner than we think, due to the snowballing effect of building things that are more intelligent than the builder.)
  • At one point, Bostrom argues that humanity's defining trait is not our intelligence, but our unique(ish) ability to preserve knowledge across generations. I thought this was interesting.
  • Bostrom describes (though I don't recall whether he names) a broad class of real-world problems that behave very much like jigsaw puzzles: the beginning is often easy because you know the constraints, and the end is often easy because you have a clear goal and can see exactly what still needs to be done (and how) before it's achieved. It's the middle of a problem that's hard: the ambiguous stretch where you don't really know how to move forward, or what strategy can be used to make progress. I'm not sure the comparison helps a person actually solve problems, but it was interesting.
  • Bostrom points out that humans (unsurprisingly) have a very anthropocentric understanding of general intelligence. The same way we think of "cold" as being around 0 degrees and "hot" as being around 100 degrees, we think of "unintelligent" as a toddler and "very intelligent" as a rocket surgeon. There is no reason for this to be so. It's useful for us to think of things that way, since that's what's relevant to our usual lives, but we need to understand that "intelligence" can run a gamut much wider than our typical understanding. Humans are likely to lump everything dumber than a toddler into "stupid" and therefore fail to see or appreciate when significant progress is being made in artificial intelligence. For example, if we pretend that the complete range of all possible intelligence ran from 0 to 100, human intelligence (toddler to Rain Man) might span only 31 to 36 on that scale. Because of that, we may fail to appreciate when robotic/artificial intelligence improves from 10 to 12, because from our view such an advancement still goes from "completely unintelligent" to "completely unintelligent". And in that same spirit, once we do start making significant progress in this field, that progress may come very, very quickly. We may take 20 years to go from 0 to 10 on the scale, then 20 years to go from 10 to 20, then 20 more years to go from 20 to 30. At that point (arguably, right about now) we'll feel like we're "finally" making progress. Then, in the next 20 years we may go from 30 to 40! (And keep in mind, once we're past the "36" Einstein point, it stands to reason that the robots themselves would start speeding this progress up.) In the 20 years after that we could go from 40 to 50+, maybe 60, maybe 80! Anyway, point being this could explode quickly. (The short sketch just after this list tries to make the perception point concrete.)
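
To make that anthropocentric-perception point concrete, here's a tiny sketch in Python. Everything in it is my own toy illustration of the 0-to-100 scale above (the 31-36 "human band", the labels, the function name), not anything from the book itself:

```python
# Toy model of Bostrom's point: humans collapse a wide "absolute
# intelligence" scale (0-100) into just three perceivable labels.

def human_label(score: float) -> str:
    """Label that an observer stuck in the human frame would apply."""
    if score < 31:    # everything below toddler-level looks identical to us
        return "completely unintelligent"
    if score <= 36:   # the narrow toddler-to-Einstein band we can perceive
        return "toddler ... genius"
    return "incomprehensibly smart"  # everything above us also looks identical

# Twenty years of real progress, 10 -> 12, registers as no change at all:
print(human_label(10), "->", human_label(12))
# The same-sized step near the human band looks like an overnight explosion:
print(human_label(30), "->", human_label(40))
```

Same-sized steps on the underlying scale, wildly different perceived jumps; that's the whole "we won't notice progress until it explodes" argument in three branches.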

Unfortunately, these interesting tidbits are the exception, not the rule. I have no idea who the audience of the book is supposed to be. Engineers working on advanced AI, maybe? The book reads like a self-aggrandizing "look at all the stuff I'm already thinking about; I'm way ahead of you guys," without actually being useful in explaining how to build an advanced AI. I'm glad this book exists, I'm sure someone will adore it, and I'm sure it will encourage more research, curiosity, and attention to this field, but I can't recommend it to real human beings that I know.

Author Bio:

Nick Bostrom is a Swedish philosopher at St. Cross College, University of Oxford, known for his work on existential risk, the anthropic principle, human enhancement ethics, the reversal test, and consequentialism.