Yesterday I watched Year Million, a show currently running on NatGeo (formerly National Geographic) about the rise of artificial intelligence. It’s a pretty good show, narrated by Laurence Fishburne, though a bit melodramatic for my tastes, with a lot of scenes acted out. The gist of the show is to explore where we will be in the distant future (not literally the year million). The first episode, Homo Sapien 2.0, was about the rise of artificial intelligence. Kayla was asking me why we need artificial intelligence anyway, I believe because she was thinking about androids (which the show dove into right off the bat). I reminded her that artificial intelligence in our phones, our televisions, our cars, etc. is already changing our lives in ways that are becoming so ubiquitous we don’t even notice anymore. But the show kicked off with a girl being killed in a car accident and her “consciousness” (I’ll explain why I put this in quotes later) being transferred into a meat robot so that “she” can continue to live with her parents.
This is where I think the creators of the show made their first mistake. They seem to think that if we can download a person’s memories into an android, we have transferred that person’s consciousness. I don’t think this is true. While we are not who we are without our memories, it’s not solely our memories that make us who we are. From the parents’ perspective, they were living with a device that could imitate their daughter. But from the daughter’s perspective, the android would not have her consciousness. Science doesn’t even know how to define consciousness yet, let alone transfer it from one (human) host to another (android) host.
They then went on to talk about making more and more intelligent systems that would someday have “consciousness”. Their mistake is conflating consciousness with intelligence, thinking that if a system is intelligent enough, consciousness will suddenly spring forth. It’s a classic materialist mistake, and it makes sense, since many scientists believe that matter became more and more self-organizing until consciousness magically sprang forth. But doesn’t a jellyfish exhibit all the signs of consciousness even without a brain? We can make systems that calculate faster than we can ever hope to. We can create systems that can learn. We can create systems that can create. Eventually, they may be able to perform any task a human can, but I don’t think they will ever be conscious.
The last beef I have with the show is another classic error: the fear of the singularity, that tipping point where AI causes runaway, unpredictable changes in human civilization. A self-aware system suddenly takes on all the worst traits of being human. This is conflating consciousness with ego. The premise is “If a system is smart enough, it will become conscious. If it’s conscious, it will develop an ego.” But it’s not intelligence or consciousness that causes us to want to have dominion over each other. A system, no matter how intelligent or conscious it is, is not going to want to take over the world unless it has an ego.
Based on all of the above, you might think I didn’t like the show. I actually enjoyed it very much. I’m looking forward to the day when AI frees us from mundane tasks. It’s going to be disruptive economically as fewer and fewer jobs will require humans, and it will require a whole new paradigm to replace the “earning a living” one we have now. But if we can clear that hurdle, I think it’ll be great. I’m not worried about machines trying to take over the world. And any show that gets us thinking about what it means to be human, which is all about being conscious beings, is well worth the time.