Thursday, December 4, 2014

Faith based denial of the Singularity

Here is another article comparing concern about and interest in the singularity to religion.  The argument boils down to this:  early AI researchers, at the very beginning stages of investigating the challenges of building intelligent machines, predicted the singularity would happen really soon, but it didn't.  Therefore, the singularity will never happen or is so far off that it is not worth thinking about or planning for.  That is precisely the author's argument and, as stupid as it is, I know a lot of smart people who feel that way.

I would concede that people who speculate that the singularity is coming soon don't have much solid evidence to point to.  But the same is true of people who speculate that it is a long way off.  One difference is that those speculating that it will come soon do at least articulate coherent arguments about the trends that support their speculations.

For instance, deep learning neural nets appear to scale very well with increased data and processing power and have made brain-inspired AI architectures a part of everyday life.  Long-standing trends in the growth of information technologies mean that computers will soon have processing power equivalent to our best estimates of the processing power of the human brain.  And two massive projects to understand the brain, on the scale of sequencing the genome, are just getting underway.  Is it foolish to think that a combination of (1) a better understanding of how the brain works and (2) computing power equivalent to the processing power of the brain might allow us to build computers that can perceive and think as well as the human brain?
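
To make the computing-power part of that argument concrete, here is a rough back-of-the-envelope sketch.  The numbers are illustrative assumptions of mine (a brain estimate of roughly 10^16 operations per second, an affordable machine today at roughly 10^13, and a two-year doubling time), not established facts:

```python
# Back-of-the-envelope sketch: how long until affordable hardware matches a
# commonly cited (and very uncertain) estimate of the brain's raw processing power?
# Every number below is an illustrative assumption, not an established fact.
import math

brain_ops_per_sec = 1e16      # assumed estimate of brain-equivalent operations/sec
current_ops_per_sec = 1e13    # assumed operations/sec of an affordable machine today
doubling_period_years = 2.0   # assumed doubling time for compute per dollar

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
years_needed = doublings_needed * doubling_period_years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years to brain-equivalent compute under these assumptions: {years_needed:.0f}")
```

Under those assumptions, brain-equivalent hardware arrives in roughly twenty years, which is at least consistent with the twenty- or thirty-year time frames people throw around.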

A twenty- or thirty-year time frame for the singularity may not be a sure thing, but it certainly isn't foolish to think it may happen in that time.  What is foolish is certainty that it won't happen in that time frame, merely because it hasn't happened yet.  As comforting as that certainty may be, it is pure faith.

Thursday, November 27, 2014

Stuart Russell nails it with an analogy between the development of nuclear power and AI

Russell makes the following analogy in the same conversation where Elon Musk predicts a dangerous AI event in 5 to 10 years:
"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

Good mainstream press coverage of Superintelligence and early precedent for regulating AI research

These articles in the mainstream press are giving decent coverage of the dangers and opportunities of AI research without making big mistakes or presenting weak counterarguments:

It seems like Bostrom, Hawking and Musk are quickly starting to have an impact on our public debate and awareness of these issues.  I think that is pretty hopeful.  If this becomes an area of intense research and concern (similar to climate change) perhaps we will survive the singularity after all!

On a related note, this seems like an interesting early precedent for regulating scientific research that poses existential threats, such as AI research:

Tuesday, November 18, 2014

The AI Gold Rush

Seven days ago, Geoff Hinton, perhaps the most important figure in the deep learning movement, made the following comment in his Reddit AMA:
A few years ago, I think that traditional AI researchers (and also most neural network researchers) would have been happy to predict that it would be many decades before a neural net that started life with almost no prior knowledge would be able to take a random photo from the web and almost always produce a description in English of the objects in the scene and their relationships. I now believe that we stand a reasonable chance of achieving this in the next five years.
Today, a mere seven days later, the NY Times reports that multiple teams have solved the problem of generating English descriptions of objects in a scene and their relationships. Actually, there were five different teams that all published similar breakthroughs right about the same time.

It's starting to feel like we are in the midst of an AI gold rush where people are realizing that deep learning algorithms can be easily applied to solve many longstanding AI problems, and everyone is racing to do it first.  Deep learning has revealed a massive amount of low-hanging fruit.

Was Hinton's five-year estimate of the time it would take to develop AI capable of describing relationships in pictures off by the entire five years?  Perhaps this is what Elon Musk was referring to when he said AI progress is now moving "at a pace close to exponential":
The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast--it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year time frame. 10 years at most. This is not a case of crying wolf about something I don't understand.
I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...
I've been reading and writing about the pitfalls of super-intelligent AI for many months, but my gut feeling has been that we probably have at least a decade or two to figure out some of the issues.  Now I'm wondering if I overestimated how long it will take by a decade or two...

Monday, October 27, 2014

More thoughts on implications of singularity for how we live our lives today

In my last post I asked what implications our awareness of the coming singularity should have for how we live our lives today.  It's hard to imagine a bigger change.  As far as implications for our hopes and dreams for the future, it is in the same league as the second coming of Christ or a meteor extinction event.  If I really believe it will happen in my lifetime (which is my sincere best guess), it seems like it should have a pretty profound impact on how I live my life today, and yet I struggle to identify any shift in my day-to-day routines, or even my long-term plans (e.g. saving and investing, family planning, etc.).

I've also frequently pondered and discussed with friends what a good singularity outcome would be.  My current opinion is that ideally "I" (and my loved ones) would survive and get to experience a transformation of our minds, but it is not clear how different that is from not surviving.  I'll explain.

In the scenario where I personally survive and get to make choices about how to evolve my consciousness, I would almost certainly choose to augment it.  Who wouldn't choose to be a little bit smarter; to have a slightly better memory; to have greater insight into their lives and the world around them; to be able to learn new skills and hobbies more quickly and deeply?  Every day I bump up against the limits of my mental capabilities.  If there were a safe way to improve my mind, I would do it.  I already make this choice daily.  For instance, one motivation for my daily exercise routine is that I feel it improves the clarity and sharpness of my thinking.  Certainly, I would be more hesitant to change or augment the structure of my brain, but if, over time, its safety and effectiveness had been sufficiently demonstrated, then I wouldn't have any philosophical objection.

At first, however, I would probably be reluctant to make large modifications that would undermine my sense of having survived the improvement.  Going directly from me to a godlike intelligence seems indistinguishable from dying.  The person I am now would not survive.  The godlike intelligence would be a very different "person", with a radically different purpose and perspective on the world.  So I might choose not to take that jump.  But a small jump that lets me perform better and make more progress on the tasks and agenda that I have right now wouldn't be that concerning (just as exercising every day to improve my mental clarity doesn't make me suspect that the couch potato I otherwise could be has committed suicide).

Nevertheless, if I continually chose to make small improvements to my intellect, over time it would have the cumulative effect of a fundamental transformation of my personality.  Many small changes over time eventually add up to very large changes.  The me I am today would be dead.

Of course that is already true to some degree.  I am not the same person I was twenty years ago, and I generally do not mourn the death of my twenty-years-younger self.  The existence and continuity of the self can be thought of as largely an illusion.  The sense of self and the drive for survival are useful tools of evolution that facilitate the propagation of our genetic material, but upon closer inspection it has always been difficult to pin down a coherent philosophical justification for them.

But, since I am human, and I do suffer from (or revel in) the illusion of self, I would prefer a singularity where that illusion is not immediately and completely destroyed.  I would prefer the opportunity to gradually augment myself and experience a slow transition (and the awe of a rapidly growing understanding of the universe) rather than a sudden transition that destroys who I am today in a single moment.  I would prefer this, even knowing that the me of today will ultimately die in both scenarios.

Now consider a slightly different singularity scenario where the me of today is simply eliminated rather than evolved.  Instead of my consciousness being suddenly or gradually augmented, it is immediately extinguished and the matter of my body is recycled for use in the consciousness of someone else whose consciousness does get augmented.  In this scenario I end up dead just like I end up dead in the scenario of sudden transformation, so there isn't much to distinguish them.  Perhaps some trace of me would survive in the sudden augmentation scenario, but if I died today, some trace of me would also continue to exist in the hearts and minds of my loved ones, and that provides me little comfort.  A faint trace of my existence is cold comfort for death.  So does it matter whether I "survive" the singularity?  In all the scenarios above (gradual transformation, sudden transformation, and simple elimination) I die--even in the gradual transformation scenario the me of today eventually dies.

I prefer the scenario of gradual transformation, though it's hard to come up with a good justification apart from the fact that it sounds fun and gratifying.  One might argue that with the gradual transformation I haven't died at all, just as I don't really feel like the me of twenty years ago is dead.  I'm changed, but there has been continuity of my personality throughout, and in fact there are important elements of my personality that have not changed.  These constant elements of my personality, however, undermine the analogy.  Human biology places constraints on how much someone changes over the course of twenty years... a core personality generally survives.  There is no particular reason to suspect that a gradual transformation in the singularity, unlimited by the constraints of human biology, will leave any of my personality intact.  Thus, in the post-singularity, the me of twenty years ago may be nearly 100% gone, with virtually none of my distinct personality surviving.  In that case, the only thing that a gradual transformation achieves is an exciting, interesting and wonderful death.  Though to be fair, a good death is nothing to be scoffed at, especially since I've already said that I would choose it over living forever unchanged.

(One thing I might choose to do if I survived would be to appropriate the memories of the rest of humanity.  My own personal memories are certainly cherished and useful to me, but there is no reason everyone else's memories wouldn't also be valuable.  If I could acquire everyone's memories with a trivial expenditure of my resources I might as well do that.  But in that case it really makes no difference at all that "I" survived the singularity, because I then become an amalgamation of all humanity.  Similarly, even if I don't survive the singularity, if someone else does, and she incorporates my memories, then perhaps in the only way I could survive, I have survived.)

Interestingly, if the line of thinking in this blog post is valid, it really calls into question the enthusiasm of singularity proponents like Ray Kurzweil, who are hoping to achieve immortality in the singularity.  Immortality is an illusion, and Ray Kurzweil will die no matter what.  What he should be excited about is having an exciting and wonderful death.

What does all this have to do with how belief in the singularity should impact our day-to-day lives right now?  Well, if you believe that the singularity is coming in your lifetime, it implies that you will die and the human race will go extinct.  Even if humanity "survives" as the seeds of future intelligences, there is a good chance those intelligences will bear little resemblance to anything human.  So these are the last decades of human existence.  Whatever happens, there probably will not be a human appreciation of the beautiful moments of existence.  We are a species moments away from extinction.  Exactly what impact that understanding has on an individual will probably vary significantly from person to person.  For me, it evokes a sense of love and expansiveness.  These are our final moments... let's make them our best.  Let's be kind to one another and see the beauty of each person's unique take on what it means to be human.  Let's revel in our own peculiar human appreciation of existence.  And let's work together to launch the next stage of intelligence with a purpose and motivation that we can be proud of.  This is our opportunity to leave our mark on the universe.  The next stage of intelligence can either be a monomaniacal chess-playing robot or something else that is more deeply moving to our human sensibilities.

At base, what are those human sensibilities that I care about?  Kindness, love, curiosity, exploration, joy, wonder.  If that is the mark I want us to leave on the universe, then, in these final moments of human existence, what better than to work towards embodying those attributes on a day-to-day basis?  Perhaps I would hope to do that regardless of my beliefs about the singularity, but I think it does make a difference.  Should I prioritize saving for retirement or taking time to do someone an extra kindness?  I think my beliefs about whether there will be a retirement make a big difference there.  (Though perhaps that isn't the best example, because I think there is a decent chance that I will just barely eke out a retirement before the singularity.)

Tuesday, October 14, 2014

How should anticipated advances in AI change how we live right now?

Usually when we anticipate massive changes in the future, it has a profound impact on how we live right now.  Much of our daily lives is taken up with preparing for the future: earning money, educating our children, exercising, preparing healthy foods, etc.  Each of those things has some immediate benefit, e.g. there is a certain amount of money I need right now to survive.  Only a small percentage is saved for next month, next year, or retirement.  Similarly, educating my children brings some degree of immediate joy because I enjoy spending time with them and watching their joy at mastering new skills.  But a large fraction of educating my children has less to do with our immediate joy and more to do with planning for the future.  So, one would think, a radically transformed expectation for the future would lead to some pretty significant changes in my daily life.  But for the most part it hasn't.  My daily routine (or, depending on my mood, daily grind) is basically the same.

If I were solidly convinced that 20-30 years would bring the second coming of Christ, or a nuclear holocaust, or the collapse of civilization, or a social revolution that made property and money irrelevant, it would surely change my day-to-day priorities today.  The development of superintelligence is, in my estimation, without question an equivalent or greater change--a change that will have a more profound impact on what it means to be human, alive, me.  So why is my day-to-day mostly unchanged?

Part of the answer is probably that I'm not convinced about the timeline.  Perhaps the tipping point into superintelligence will come right after I die of old age instead of in my late 50s or 60s.  I don't want to be a pauper in my 70s and 80s.  Well... the truth is I don't spend much time planning for retirement, but I do spend a lot of time trying to increase my wealth, in part because I think wealth might be an important factor in determining whether the singularity and the pre-singularity turn out well for me and my family.  So perhaps a large part of why my day-to-day remains the same is that my day-to-day without the expectation of the singularity is largely taken up with earning money, and it just so happens that earning money seems like it might be important even in light of the singularity.

So how should expectation of the singularity change how I live right now?

Here is one idea:  it should change what I'm teaching my kids.

What else?

Wednesday, July 23, 2014

Critique of Brynjolfsson and McAfee's The Second Machine Age

In January, Erik Brynjolfsson and Andrew McAfee published The Second Machine Age as a follow-up to their first book, Race Against the Machine.  Both are excellent (though they say essentially the same thing, so if you read The Second Machine Age there is no need to read Race Against the Machine).  Since those books were published, there has been a growing discussion of how new AI technologies will affect employment, and what we should do about it.  While I think the books do a great job of laying out the problem and have promoted more discussion about these important issues, I was struck by how inadequate their policy recommendations seem to be.

The Problem

The books do a great job of illustrating how machines are rapidly encroaching on cognitive labor.  Whereas the first machine age was characterized by machines replacing human (and other animal) physical labor, the second machine age, which we are experiencing right now, is characterized by machines replacing human cognitive labor.  They give many examples (self-driving cars, etc.) that really drive the point home.

If we understand human work to be composed of essentially two parts, physical work and cognitive work, then we should expect the consequences of the second machine age to be radically different from the consequences of the first.  As machines replaced physical labor, the human workforce merely moved to jobs that were inseparable from cognition and thus beyond the capacity of machines.  The types of jobs changed, but over the long term, the number of jobs was still adequate.  As machines replace cognitive labor, however, there is no other type of job for the human workforce to move to.  Machines won't replace all cognitive jobs at once, since their cognitive abilities will ramp up over time.  However, as machines become more and more cognitively capable they will squeeze humans into a smaller and smaller niche of remaining jobs.
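
To make the "shrinking niche" dynamic concrete, here is a toy simulation.  It assumes each job can be summarized by a single "cognitive difficulty" score and that machine capability rises steadily every year; both assumptions are gross simplifications of my own, not anything from the book:

```python
# Toy model (my own simplification, not from the book): every job has a single
# "cognitive difficulty" score, and machines master all jobs below a capability
# threshold that rises each year.  Humans keep whatever remains above the threshold.
import random

random.seed(0)
job_difficulties = [random.random() for _ in range(100_000)]  # difficulty scores in [0, 1)

starting_capability = 0.2   # assumed fraction of jobs machines can do today
annual_gain = 0.05          # assumed yearly rise in machine capability

for year in range(0, 21, 5):
    threshold = min(1.0, starting_capability + annual_gain * year)
    human_share = sum(d > threshold for d in job_difficulties) / len(job_difficulties)
    print(f"Year {year:2d}: machines handle jobs below {threshold:.2f}, "
          f"humans keep {human_share:.0%} of jobs")
```

The point of the toy model is not the particular numbers, but the shape of the curve: the human share shrinks every year and eventually hits zero, with no new category of work for people to move to.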

Eventually, if machines become as cognitively capable as humans, there will be no work left which cannot be done more cost effectively by a machine.  But even before that point, unless we find some remedy, there will be massive unemployment as machines replace humans faster than we can reallocate humans to the types of cognitive jobs which machines have not yet mastered.  So what do we do about this?

Policy Recommendations of The Second Machine Age

The policy recommendations of The Second Machine Age boil down to a focus on education (MOOCs, Khan Academy, etc., and higher salaries for teachers), entrepreneurship (immigration reform and cutting red tape), public science funding, and public infrastructure updates.  Brynjolfsson and McAfee propose paying for these initiatives with taxes on activities with negative externalities and on economic rents.

Importantly, they reject any fundamental adjustments to capitalism:
We are also skeptical of efforts to come up with fundamental alternatives to capitalism. By ‘capitalism’ here, we mean a decentralized economic system of production and exchange in which most of the means of production are in private hands (as opposed to belonging to the government), where most exchange is voluntary (no one can force you to sign a contract against your will), and where most goods have prices that vary based on relative supply and demand instead of being fixed by a central authority. All of these features exist in most economies around the world today. Many are even in place in today’s China, which is still officially communist. 
These features are so widespread because they work so well. Capitalism allocates resources, generates innovation, rewards effort, and builds affluence with high efficiency, and these are extraordinarily important things to do well in a society. As a system capitalism is not perfect, but it’s far better than the alternatives. Winston Churchill said that, “Democracy is the worst form of government except for all those others that have been tried.” We believe the same about capitalism.
They also recommend against reconsidering the basic income ("Will we need to revisit the idea of a basic income in the decades to come? Maybe, but it’s not our first choice.") because work is beneficial to human happiness. Instead they advocate a negative income tax that augments the incomes of the working poor. Presumably, the notion is that computers will not be ready to take over all human labor even in the "long run", so better to keep humans working alongside computers for the foreseeable future (and they give a number of current examples in which humans working with machines out-compete either working alone).
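
For readers unfamiliar with the mechanics, a negative income tax tops up low earnings rather than paying everyone a flat grant, so working more always increases total income.  The sketch below uses a $20,000 threshold and a 50% phase-out rate, which are purely illustrative parameters of mine, not figures proposed by Brynjolfsson and McAfee:

```python
# Illustrative negative income tax (parameters are my own assumptions, not the
# authors'): workers earning below a threshold receive a subsidy equal to a
# fraction of the shortfall, so every extra dollar earned still raises total income.
def negative_income_tax(earned_income, threshold=20_000, subsidy_rate=0.5):
    """Return the subsidy paid to someone with the given earned income."""
    shortfall = max(0.0, threshold - earned_income)
    return subsidy_rate * shortfall

for income in (0, 5_000, 10_000, 20_000, 30_000):
    subsidy = negative_income_tax(income)
    print(f"Earned {income:>6,}: subsidy {subsidy:>8,.0f}, total {income + subsidy:>8,.0f}")
```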

Finally, they list some "Wild Ideas" that they don't endorse but that seem worth further consideration: public mutual funds providing an inalienable income to citizens (maybe not that different from a basic income), incentives to develop human-augmenting rather than human-replacing tech, setting aside certain categories of work for humans, using vouchers to create a minimum standard of living, and a more massive public infrastructure campaign.

Inadequacy of Their Policy Recommendations

Brynjolfsson and McAfee do a great job in the first two thirds of the book, but when it comes to policy recommendations, they seem to miss their own point. While their policy recommendations are not terrible general recommendations for economic prosperity, they are potentially terrible distractions when discussed in the context of machines rapidly replacing humans throughout every sector of our economy.

The important point, which they make so well, is that whereas the first machine age was about machines replacing human physical labor, the second machine age is about machines replacing human cognitive labor. Replacing physical labor simply meant that human labor had to be reallocated to cognitive labor, but as machines replace human cognitive labor there is nothing to reallocate humans to, except on a temporary basis to whatever particular types of cognitive labor machines have yet to master.

Improved education, entrepreneurship, public science funding, and public infrastructure are great general recommendations.  They are particularly good recommendations for the first machine age, where the challenge was smoothly reallocating human labor to a different category of work.  But how do they address the particular problems of the second machine age, where the set of cognitive tasks in which humans still have a comparative advantage is constantly dwindling?  Where we can only guess at which major employment sector will evaporate over the space of just a few years (e.g. driving)?  And where, ultimately, there is no reason to expect there to be any tasks in which humans have a comparative advantage?

Not only are their policy recommendations utterly inadequate to address the problems they set forth; they also prematurely limit the discussion of policies that might actually address those problems to only those that do not "fundamentally" adjust capitalism.

Huh?!?  Machines are rapidly replacing all human labor, but let's not change our economic system?  Hasn't our economic system been, at heart, a means for allocating, organizing, augmenting and incentivizing human labor?  How can it be that human labor will be rapidly disappearing, squeezed into a smaller and smaller niche by machines, and yet no big changes to our economic system are called for?

Particularly, at the point where machines have replaced virtually all human labor, will it still make sense to vest control over most of earth's productive capital with an elite capitalist class?  What important contribution will capitalists be making when they are already employing machines to make the important decisions about how to most effectively allocate, preserve and expand their productive capital?  At that point can't we just get rid of the capitalists and have a democratically controlled government give direction to the machines?

And if that is our ultimate destination, shouldn't the policies we adopt now be aimed at smoothing our transition to that destination, and ensuring that we can successfully navigate there at all?  (If so, it would seem the most important thing we can be doing right now is strengthening and safeguarding our democracy, perhaps by addressing economic inequality much more forcefully.)

But Brynjolfsson and McAfee do not answer any of these questions in their books.  They simply say, in effect, "let's not have any big changes to our economic system, even though we are facing the elimination of HUMAN LABOR... surely that doesn't merit a rethinking of the system."

Since the rest of The Second Machine Age is quite excellent, I am curious about their answer to this critique.  Perhaps they see the transition as taking centuries rather than decades.  In that case it might be more important to focus on maximizing human productivity, leaving the less pressing questions of how to transform our economy for later generations.  Or perhaps they do not anticipate that machines will replace humans, but rather that humans and machines will ultimately merge.  In that case, the issue of machines replacing humans may not be relevant in the long run.  Or perhaps they think that advances in AI will hit a wall or plateau.  In that case the primary concern is just reallocating humans to whatever jobs will long remain out of reach of machines.

Who knows.  They don't say.