Wow, this was a good one. Several times along the way I had to stop reading so I could just sit and think about the implications of it. This book puts human intelligence into a very tangible context, taking almost all of the mystique out of it and boiling it down to a formal model that we can work to replicate and improve with technology. I don't yet know enough about all this to really challenge the claims, or enough to judge their plausibility for myself - but if just half of what's in this book is true, the next 100 years are going to be the most turbulent in the history of our species. Kurzweil compellingly argues that it's possible to replicate a brain with computers, and that once we do it will quickly become possible to essentially make ourselves super smart and immortal - oh, and we'll all be machines. Good times. A few memorable points:
- Interesting observation about how brains recognize visual information - we "sparse code" it, meaning that our eyes/brains reduce the information to the least amount necessary. When we "see" something, we're really only observing a few characteristics of the object - the edges, the shading, etc. Then our brains "fill in" the rest with whatever pattern we expect to be there. This is far less cognitively expensive for us than actually trying to notice every little detail of everything, all the time. (A toy sketch of this idea is at the end of this review.)
- Great discussion towards the end of the book about free will, with many compelling examples of confabulation in split-brain patients. It's very clear that our actions are not always as deliberately reasoned as we think they are; rather, we "decide" at some point, often unconsciously or intuitively, and then our brains are fantastic machines for giving that decision a narrative. Thus, we don't critically reason and then make a decision. We make the decision, and then use reason to justify having done so. This is super creepy stuff, but compelling.
- In discussing the implications of advancements in artificial intelligence, Kurzweil spends some good time on his thoughts about the nature of consciousness. He reasonably assumes that we will soon have sufficiently advanced AI to create beings with personalities (think Transformers or I, Robot) and asserts that such empathetic characters, despite being non-biological, are conscious. "If you do accept the leap of faith that a non-biological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies. Namely, that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on."
- Scientists are working on an artificial hippocampus, the part of your brain that recognizes novel events and stores them to memory. In rats, they've been able to insert an artificial hippocampus (with an on/off switch) into the brain, replacing the original one. When they turn the switch on, the rat gains the "knowledge" stored in the artificial hippocampus; when they turn it off, the rat loses it. Similarly, when they give a rat an "extra" hippocampus instead of a replacement, it learns tasks much faster. The implications here for human brain augmentation are amazingly powerful. Not only is the "I know Kung Fu" scene in The Matrix totally possible, but this really made me expand what I considered bionics and brain activity to be. I have often thought about what people would be like with larger brains, or what a biological superior to humans would be (in the same way we're superior to cats, for instance). The answer is obvious - instant learning. Unlimited working memory. It's not x-ray vision or jax-arms like you'd see in comic books; it will be many-orders-of-magnitude increases in intelligence that mark the transformation. This is not science fiction. It's happening right now. Honestly, how long until we're immortal?
- For most of the book I was reminded that, in all likelihood, we are already living in a computer civilization.
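
To make the sparse-coding bullet above a bit more concrete, here's a toy sketch of my own (not from the book): represent a signal using only its few strongest frequency components and let the reconstruction "fill in" the rest, loosely analogous to the brain keeping edges and shading and inferring the detail. The signal, the Fourier basis, and the number of kept coefficients are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "scene" with a little noise: 256 samples built from a few sine waves.
t = np.linspace(0, 1, 256)
signal = (np.sin(2 * np.pi * 3 * t)
          + 0.5 * np.sin(2 * np.pi * 7 * t)
          + 0.1 * rng.standard_normal(t.size))

# Transform to a frequency basis and keep only the k largest coefficients (the "sparse code").
coeffs = np.fft.rfft(signal)
k = 8
keep = np.argsort(np.abs(coeffs))[-k:]   # indices of the strongest components
sparse = np.zeros_like(coeffs)
sparse[keep] = coeffs[keep]

# Reconstruct from the sparse code: most of the structure survives
# even though we kept only ~6% of the coefficients.
reconstruction = np.fft.irfft(sparse, n=signal.size)
error = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
print(f"kept {k} of {coeffs.size} coefficients, relative error ~ {error:.2f}")
```

Real sparse coding (in brains and in machine learning) uses learned, overcomplete feature dictionaries rather than a fixed Fourier basis, but the intuition is the same: keep only the strongest features and reconstruct everything else from what you expect.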