Tuesday, November 18, 2014

The AI Gold Rush

Seven days ago, Geoff Hinton, perhaps the most important figure in the deep learning movement, made the following comment on his reddit AMA:
A few years ago, I think that traditional AI researchers (and also most neural network researchers) would have been happy to predict that it would be many decades before a neural net that started life with almost no prior knowledge would be able to take a random photo from the web and almost always produce a description in English of the objects in the scene and their relationships. I now believe that we stand a reasonable chance of achieving this in the next five years.
Today, a mere seven days later, the NY Times reports that multiple teams have solved the problem of generating English descriptions of objects in a scene and their relationships. In fact, five different teams published similar breakthroughs at about the same time.

It's starting to feel like we are in the midst of an AI gold rush where people are realizing that deep learning algorithms can be easily applied to solve many longstanding AI problems, and everyone is racing to do it first. Deep learning has revealed a massive amount of low-hanging fruit.

Was Hinton's five-year estimate of the time it would take to develop AI capable of describing relationships in pictures off by the entire five years? Perhaps this is what Elon Musk was referring to when he said AI progress is now moving "at a pace close to exponential":
The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year time frame. 10 years at most. This is not a case of crying wolf about something I don't understand.
I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...
I've been reading and writing about the pitfalls of super-intelligent AI for many months, but my gut feeling has been that we probably have at least a decade or two to figure out some of the issues. Now I'm wondering if I overestimated how long it will take by a decade or two...