Thursday, July 3, 2014

The religion of Singularity

I quite enjoyed this article arguing that the singularity is merely a bizarre modern day tech religion. Obviously I don't think the author is right. But he's not entirely wrong either. The article’s singularity skepticism hinges on whether we will ever achieve human level artificial intelligence, and one reasonable data point to consider in that regard is the current level of artificial intelligence. Currently the most cutting edge AI is extraordinarily primitive compared to human intelligence, but perhaps we can get a rough sense of where it stands by comparing it to other animals. It's pretty clear that our robotic dogs are only the palest shadow of a biological dog. My guess is that our most sophisticated AI techniques probably couldn't pilot a cockroach successfully through its day (even if we had a supercomputer small enough to put into the brain of a cockroach). On the other hand, there is no biological intelligence (that we know of) that can play chess as well as my $1,000 laptop, or Jeopardy as well as Watson. Right now, biological and artificial intelligences excel at different tasks. Nevertheless, apart from a few very narrow examples (such as chess and Jeopardy), computer intelligence does not appear close to matching the most important abilities by which we typically define human intelligence.

Furthermore, there really isn't any way to know when computer intelligence will get there. Kurzweil takes some guesses that seem plausible to me, but seem outlandish to many people in and out of the field of artificial intelligence. Everyone who believes that human intelligence is computable has their own intuitions about when computers will first reach that level of intelligence. Ultimately, Kurzweil's estimate is based on informed intuition, but so are the estimates of those who think we are nowhere close. Only time will tell who is right. Personally, my intuition is not that far from Kurzweil's--maybe 10 or 20 years later than his estimate. I accept that that is a guess. But does the author of the article accept that his estimate is a guess?

I think the one thing we can confidently say on this point is that the ones who are really religious are those who, based on faith, believe that human intelligence cannot be replicated through human invention. There are many varieties of that faith: from an actual belief in god and the immortal soul--implying that human intelligence is not a manifestation of a physical process, and thus cannot be replicated through a physical human invention--to a mere belief that humans aren't up to the task of creating something so marvelously complicated. But whatever the variety, all refutations of the singularity based on the premise that we cannot replicate human intelligence appear to be based on faith or dogma.

Hopefully it is obvious to most readers how a refutation of the singularity based on the immortal soul endowed by god rests entirely on unsubstantiated dogma. But what about the assertion that humans may simply not be intelligent or inventive enough to create something as marvelously complicated as the human brain? I think the weight of science clearly opposes that view, and to see why, it is helpful to consider how the human brain was created to begin with. Evolution is one of the most strongly supported theories in all of science. Animal brains are organs evolved through a process of random variation and natural selection of those variations that most contributed to survival and reproduction. The human brain is merely an extreme example of an animal brain, and was thus created through the same process of random variation and natural selection. For the purposes of this discussion, one important aspect of that process is that the variation was random. There was no intelligent designer sitting around thinking about the next best variation to try. A belief that humans are not intelligent enough to ever design something as intelligent as we are thus hinges on the assumption that, despite all of our intelligence, our attempts to improve machine intelligence will be worse than random.

I've never seen any evidence to support that position, and there is quite a bit of evidence to the contrary. Whereas it took evolution billions of years to evolve even the most rudimentary intelligence, in the space of just one hundred or so years since the first principles of computing were articulated, humans have designed machines intelligent enough to safely drive cars, defeat the best human chess and Jeopardy players, analyze and respond usefully to human language, coordinate the complex movements of quadrupedal and bipedal robots for walking and running, and so forth. So far we seem to be progressing quite a bit more efficiently than we would through random variation.

Exactly how much more efficiently is a reasonable topic for debate. But it is not reasonable to propose that humans are simply incapable of reproducing human level intelligence because the task is too immense for our weak intelligence. Given that we seem to be progressing much faster than evolution did, it can be reasonably argued that it will take a long time, but not that it is impossible. The argument for impossibility appears to contradict what we've learned through science about the evolution of our own intelligence.

Though it’s impossible to say how much faster AI is progressing than biological intelligence evolved, it is interesting to speculate about. When estimating how long it has taken humans to design cutting edge AI, do you start counting at the birth of the first modern human, or at the birth of the first computer? And how intelligent is our cutting edge AI? There isn’t much evidence that it is as smart as a cockroach, for instance, because it has never been used to do all the things that a cockroach does. On the other hand, there is no biological intelligence that can play chess or Jeopardy as well as a computer. So it is hard to measure how far we have come.

Just for kicks, let's suppose that our cutting edge AI is roughly as intelligent as a cockroach. Life started evolving on earth in the neighborhood of 4 billion years ago, cockroaches first appear in the fossil record about 300 million years ago, and humans first appear about 200 thousand years ago. Thus the time it took evolution to go from cockroach to human was about 7.5% of the time it took evolution to go from nothing to cockroach. If you figure that it will take proportionally as long for humans to create human level intelligence as evolution did, and you start timing how long we've been working on it from the appearance of the first humans, then you might guess it will take another 15,000 years. But if you start counting from when the first principles of computing were laid out roughly 100 years ago, then you might guess it will take in the neighborhood of another decade.
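For the numerically inclined, here is that back-of-the-envelope guess as a small Python sketch. The dates are the same rough figures used above, not precise values, and the whole thing is of course only as good as the cockroach assumption:

```python
# Back-of-the-envelope sketch of the proportional guess above, using the
# post's rough figures (all "years ago" values are approximate).

LIFE_STARTS = 4_000_000_000   # first life on Earth
COCKROACH   = 300_000_000     # cockroaches appear in the fossil record
HUMANS      = 200_000         # first modern humans
COMPUTING   = 100             # first principles of computing

# Evolution's ratio of (cockroach -> human) to (nothing -> cockroach),
# treating "nothing -> cockroach" as roughly the full 4 billion years.
ratio = COCKROACH / LIFE_STARTS   # = 0.075, i.e. about 7.5%

# Apply that ratio to how long we've been "working on" intelligence.
print(f"counting from the first humans: {HUMANS * ratio:,.0f} more years")     # 15,000
print(f"counting from computing:        {COMPUTING * ratio:,.1f} more years")  # 7.5
```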

Which is the more reasonable place to start counting? I don't think there is a good answer to that question, but I'll indulge in some recreational speculation nonetheless. Perhaps we can approach the question by examining the critical differences between evolution as a process for developing intelligence and human invention. Evolution works by (1) generating random variation and (2) selecting the variants with the highest reproductive fitness. It’s important to recognize that reproductive fitness is not the same thing as intelligence. Reproductive fitness is affected by many different factors, and in many cases greater intelligence puts constraints on an animal that decrease its fitness. For instance, the brains of some bird species, whose bodies are highly streamlined to improve flight efficiency, actually shrink seasonally to reduce overall weight. In that case the added intelligence of a bigger brain is not worth the added weight. Similarly, an important factor that influences brain size in all animals is the comparatively higher energy consumption of brain matter. The larger the brain, the more energy it uses, and the more likely an animal is to starve to death during food shortages. For these reasons evolution often selects for less intelligent animals, and intelligence is only loosely related to reproductive fitness.

The process by which evolution created human intelligence can be contrasted with the process by which humans are working to create artificial intelligence. AI researchers are (1) experimenting with carefully chosen and designed variation, with (2) the explicit goal of maximizing intelligence. Thus (1) where variation in evolution is random, in AI research it is intelligently designed, and (2) whereas the fitness function for evolution is only loosely related to intelligence, in AI research the fitness function is very closely related to intelligence.
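To make that contrast concrete, here is a purely illustrative toy simulation. The functions and numbers are my own inventions, not anything from AI research: one hill-climbing search keeps random variations judged by a fitness only loosely tied to "intelligence," while the other takes deliberate steps judged directly on "intelligence."

```python
# A toy sketch of the two contrasts drawn above (entirely hypothetical):
# random vs. designed variation, and a fitness function loosely vs. tightly
# coupled to intelligence.
import random

def intelligence(x: float) -> float:
    # Pretend "intelligence" peaks when the design parameter x is 10.
    return -(x - 10) ** 2

def reproductive_fitness(x: float) -> float:
    # Only loosely tied to intelligence: a bigger "brain" (larger x) also
    # carries a weight/energy penalty, so the fitness peak sits elsewhere.
    return 0.3 * intelligence(x) - 0.7 * x

def search(fitness, designed: bool, steps: int = 300) -> float:
    x = 0.0
    for _ in range(steps):
        # Designed variation steps deliberately upward toward the goal;
        # random variation proposes a step in a random direction.
        step = 0.1 if designed else random.choice([-0.1, 0.1])
        if fitness(x + step) > fitness(x):
            x += step  # selection keeps the better variant
    return x

random.seed(0)
print("evolution-like search settles near x =", round(search(reproductive_fitness, designed=False), 1))
print("research-like search settles near x =", round(search(intelligence, designed=True), 1))
```

The point of the toy is simply that the directed search, scored on intelligence itself, heads straight for the intelligence peak, while the random search, scored on a proxy, wanders more and settles somewhere else.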

The second point is worth considering in more detail. The first time humans started to use intelligence as the fitness function for their inventions was the birth of the field of AI research. Before that there was no AI of any sort, and after that rudimentary forms of AI started to develop. All the progress we have made in AI has been made since the creation of the field of AI research, and this kind of focused directive is something that has never been part of the process of biological evolution. Because this is really the point at which humans embarked on the creation of AI, it seems like the more natural place to draw the comparison, though some might argue that it makes more sense to go back a few thousand years to the first developments of math and logic. At any rate, if you count from the birth of the AI field, we are looking at perhaps another decade before human level AI, and if you count from the beginning of known western philosophy we are looking at perhaps another century (assuming we've reached cockroach level AI, and that going from cockroach to human level AI takes proportionally as long as going from cockroach to human did).

Of course we won’t know the right comparison until after we achieve human level AI, but to me it is interesting to consider nonetheless, merely to get a sense of the range of plausible possibilities. To summarize, the only example we can use to estimate the difficulty of developing human level intelligence is the only example of human level intelligence that we know of: humans. After taking roughly 3.7 billion years to evolve cockroach intelligence, evolution took only another 300 million years to produce human intelligence. If the same proportions held for human design of AI, and you start counting at (1) the appearance of the first modern humans, (2) the first records of the development of math and logic, or (3) the founding of the field of AI research, then we should achieve human level AI in roughly 15,000 years, 100 years, or 10 years, respectively.
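The same proportional guess can be run for all three start points. Again, this is only a rough sketch: the "years ago" values are my own approximations, including the conventional mid-1950s date for the birth of the AI field.

```python
# The proportional guess applied to the three start points above.
RATIO = 0.075  # (cockroach -> human) as a fraction of (nothing -> cockroach)

start_points = {
    "first modern humans":   200_000,  # years ago
    "first math and logic":    2_500,  # years ago, very roughly
    "birth of the AI field":      60,  # years ago, c. 1956 as of 2014
}

for label, years_elapsed in start_points.items():
    print(f"{label:<24} -> roughly {years_elapsed * RATIO:,.0f} more years")
# The results are the same order of magnitude as the 15,000 / 100 / 10-year
# figures above: millennia, a century or two, or a handful of years,
# depending on where you start the clock.
```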

But, despite the fact that there is a wide range of reasonable possibilities for the timeline of achieving human level AI, all the arguments I have come across that humans will never achieve human level AI appear to be based on dogma, and to contradict what we know about how human intelligence was created and how humans are researching AI. Thus, to the extent that the author of the article purports to refute the singularity based on the impossibility of creating human intelligence, he appears to be doing so based on the very kind of dogmatic, irrational thinking that he falsely assumes singularity believers are engaged in.

But I do think there is another interesting aspect to the article.  Once someone understands and adopts the notion of the singularity, it can play something akin to the role that religion plays in a religious person's life. It is an organizing principle. It provides structure and hope. It doesn't explain how the universe began, or how it will end, but it perhaps provides a glimmer of hope that we, or our future incarnations, may someday have more insight into those questions. It will not give us solace that our loved ones who have perished are happily looking down upon us from heaven, but it does hold out the tantalizing possibility of immortality for ourselves, and our living loved ones, and perhaps resurrection of our lost ones. It does promise some possibility of eliminating war, starvation, disease, poverty, etc. And to the extent that its promises are things we deeply want, it can provide some meaning and direction to our lives.

So I think what the article gets wrong is not so much equating the singularity to religion, but rather ridiculing it because of the parallels. We live in a world where it is acceptable to believe that when you take communion the cracker you eat turns into the flesh and blood of Jesus Christ in your body, but it is somehow fringe and bizarre to believe that human intelligence is the manifestation of physical processes that we will one day be able to replicate?

If we humans do not drive ourselves to extinction first, we will one day reach the singularity. It will change everything. Any understanding of our place in the universe and the meaning we find in our lives that does not recognize and embrace that is missing something important. It makes sense for the singularity to have an impact on our spiritual lives. It would be weird if it didn't.