I recently attended The Future of Technology: Benefits and Risks, the inaugural event of the Future of Life Institute. I was glad to see more evidence that these issues are being taken seriously by high-profile thinkers. The event consisted of presentations by each of six panelists, followed by a discussion moderated by Alan Alda. For anyone who has been following this area of thought, there wasn't much in the way of new ideas, but the panelists were able to draw on interesting examples from their own fields. The panelists' intuitions about the most pressing threats seemed to differ substantially from one another, and I got the impression that they are still at a very early stage in their thinking about these issues.
For me the most interesting part was my brief discussion with Andrew McAfee (author of The Second Machine Age and Race Against the Machine) after the formal discussion period ended. The idea of post-scarcity economics came up, and I wanted to get his reaction to my analysis implying that the notion of post-scarcity economics is actually a critical conceptual mistake among singularity thinkers. After a brief back-and-forth in which he alluded to past mistaken predictions of resource constraints, such as those by Malthus, McAfee exclaimed, "I'm not concerned about resource constraints," and it was obvious to me that he meant that to end the conversation so he could answer other people's questions.
Admittedly, it's hard to summarize and understand a complex idea in just a few minutes, especially in the chaotic environment after a presentation, with dozens of people approaching the panelists to ask follow-up questions. So perhaps he has more interesting things to say on the topic. Nevertheless, his response reinforced my sense that there is an important blind spot among thinkers in this area.
His apparent blind spot is all the more striking given the topic of his two books with Erik Brynjolfsson, which are in large part explorations of how technological change is increasing inequality and will continue to do so unless we adopt policies to address that tendency. As he says, he is a "technology optimist," and I guess part of what that means for him is that he does not believe in long-run resource constraints.
I think this may be related to my other criticism of his books. For him, dealing with increasing inequality in the face of rapid technological change is about adopting sensible policies within our existing economic system. For instance, he says, "We are also skeptical of efforts to come up with fundamental alternatives to capitalism. By ‘capitalism’ here, we mean a decentralized economic system of production and exchange in which most of the means of production are in private hands..." While I agree with him about some of the virtues of capitalism, it is hard for me to imagine a sensible economic system that is still recognizable as capitalism in a world where robots perform virtually all labor and humans are by and large unemployed. I plan to write more about that in subsequent posts.
It seems that McAfee imagines that with some policy tweaks to capitalism we can solve the problems that arise when computers replace humans in the workforce and capital becomes the only factor of production. If we merely implement those tweaks effectively, we can overcome the obvious tendency for capital accumulation to produce radical inequality in a post-labor economy, and thereby make it to a post-scarcity utopia that will somehow still be capitalist. This seems pretty far-fetched to me. Why should we organize our economic system around private ownership of productive capital in a world where computers are superior to humans at nearly all productive work--including determining the most efficient ways to invest productive capital?
I think part of the reasoning behind his position must have to do with a focus on the pre-singularity period: the span of time in which humans retain an edge in some subset of productive activities. Or perhaps he is assuming a pre-singularity merger of humans and computers, such that the distinction between what computers can do and what people can do becomes less meaningful. Perhaps he believes that if we are ultimately headed toward a world that loses the distinction between human and computer work, then it continues to make sense for humans to use their self-interested judgment about how to invest their productive capital, for all the same reasons we think capitalism is the least bad system right now. Who knows? He doesn't say in his books. He just says, let's keep capitalism because past efforts to do something better have failed... even though many of the factors that make capitalism effective are set to change radically. That is a profoundly unconvincing position.
Perhaps if I had gotten to talk to him a little longer, he would have been able to fill out his vision in a more compelling manner (though I don't think he does so in either of his books on the subject).