Saturday, July 12, 2014

Further correspondence with Kevin Kelly regarding Pinnacle-ism vs Thinkism

A couple of days ago I posted a response to Kevin Kelly's thinkism article.  He responded and the exchange below ensued.

Kelly's response to my last post:

I am not a pinnacle-ist by any stretch. I think we have not even begun to create the variety and quality of artificial minds that are possible. In 100 years we'll look back and say we had not yet made any kind of AI by 2014. But unlike you, I don't think we can create these super smart minds by merely thinking about them. We need to invent the chips and software, and that requires making new real things, making mistakes, and trying stuff. We have to continue experimenting with human and mammalian brains to discover how they work. We can't just think about human minds, which is what you would prefer to do. We lack data, not just ideas. All this experimentation, investigation, trials, and dead ends takes time. Each probe takes time. Evolution spent several million years working around the clock to experiment. Of course we will greatly accelerate this process, as we already have done in the last 100 years. Through technology we've accelerated evolution, particularly in the last 20 years. But interestingly, our IQ has not changed in the last 20 years. We used kinds of technology other than IQ to speed things up. The evidence is that increasing our own intelligence was not needed to create some artificial mind-like things. Will we need increased intelligence to make an intelligence smarter than us? And will intelligence compoundly accelerate? Those are unknown.

Increased intelligence can help make super intelligence, but unlike thinkists, I don't see any evidence it will be sufficient alone. Thinkism is the primitive idea that the only thing you need to invent new things, to make breakthroughs, to bootstrap into a new level, is a higher IQ. High IQ is necessary but not sufficient to accomplish science and technology. Without the time-consuming work of testing, gathering data in real life (which works in real time), building complicated apparatus that never works the first time, and trying many things which don't work at all, IQ alone is not going to get you progress. At least that's the evidence so far. Can we invent a type of science that does not require gathering data? Possible. But the evidence so far points the other way. Lately we've been doing a kind of science that has less theory and only data collection, so a data-less science remains a possibility without evidence.

We can accelerate data collection (as we have) and we will accelerate the creation of artificial intelligence (as we have) but increased IQ is only a small part of the innovations needed to accelerate it further. Thinkism is the crazy idea that higher IQ will solve all problems.


My Response:

I never said or implied that experimentation will be unnecessary for smarter-than-human AIs, nor do I know of any well-known Singularity thinkers (Kurzweil, Yudkowsky, etc.) who've said anything that implies that.

Their point, which I agree with, is that experimentation will be accelerated to such an extreme degree that it appears to be a difference in kind. In my response to your thinkism article, I emphasized that AIs will still need to experiment, likening the Singularity to the difference between chimps and humans. Both humans and chimps experiment with their environments, but humans do it so much better that, by comparison, chimps appear to be making virtually no progress. Chimps do learn new concepts and technologies and culturally propagate them, but humans have moved so much faster that chimps appear to be at a nearly complete standstill.

The superior intelligence of humans allows them to skip over the thousands of experimental iterations a chimp needs when, for instance, solving the puzzle of how to remove a banana from a box. Whereas the human solves that puzzle nearly instantly with a single experiment (and moves on to experiment on more challenging problems), the chimp spends all day puzzling over and poking the box, engaging in thousands of little experiments before mastering the puzzle.

Similarly, smarter-than-human AIs will still need to experiment, but their experiments, and their thoughts about the results, will be so much more sophisticated than humans' that, in comparison, human science will appear to have been at a near standstill.

I coined the word "pinnacle-ism" to respond to your "thinkism," because your intuition seems to be that a similar increase in intelligence (like that from chimp to human) will not result in similar improvements in experimental efficiency. The only way I can imagine this being the case is if humans have somehow reached some sort of experimental pinnacle, past which profoundly greater intelligence no longer yields profoundly better experiments and profoundly better insights gained from them.

In your thinkism article you say:
There is no doubt that a super AI can accelerate the process of science, as even non-AI computation has already sped it up. But the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up. If we want to know what happens to subatomic particles, we can't just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider they would know nothing new. Sure, we can make a computer simulation of an atom or cell (and will someday). We can speed up these simulations by many factors, but the testing, vetting and proving of those models also has to take place in calendar time to match the rate of their targets.
But both humans and chimps need to experiment with the world in real time, and yet we see a Singularity-like difference in the move from chimp to human intelligence. Humans also need to wait for the results of experiments, and yet our experimental efficiency is so much greater than chimps' that it appears to be a different thing entirely. Why? Because a human can integrate vastly more information, construct more sophisticated models that account for that information, identify the important model variations, and devise experiments that more effectively cut through the noise to find the most promising variation. A human skips more than 99% of the experiments that a chimp must iterate through before mastering a puzzle. (Not to mention that for most problems, a chimp is simply incapable of conceiving of the proper models.)

Your claim is that similar increases in intelligence will not result in similar increases in experimental efficiency. Why, apart from pinnacle-ism, would that be the case?

Kelly's Response:

My guess: we are nowhere near any pinnacle. The acceleration of intelligence will continue, but it will be much, much slower than the Singularitans believe; this difference will be significant, a matter of kind, not degree.

My Response:

So you think it will be some (undefined) degree slower than the Singularitans believe. Can't really argue with that. Who knows...

I do agree that Singularitans tend to be overly optimistic, particularly the Abundance crowd (e.g., Diamandis), who overlook the fact that material wealth has been rapidly increasing for centuries, yet human consumption has always grown fast enough to leave many people wanting. I don't believe in the post-scarcity singularity. That is a fantasy that has never been justified with any compelling theory.

Kelly's Response:

Whenever possible I prefer to base my expectations on data. I have been seeking evidence that artificial intelligence has followed Moore's Law over the last 50 years but have found none. The inputs -- processor speed, number of transistors, links, you name it -- have all been increasing exponentially, but there is no evidence the output -- the IQ of machine learning -- has. Artificial smartness is increasing linearly -- at best. And I see no evidence of it becoming exponential in the future either. If you have evidence of such, that could change my mind.

My Response:

Hmm... hard to say, given that we are still so far away from anything resembling intelligent computers (and regardless, we don't have any good metric of general intelligence). If cutting-edge AI is currently about 0.001 times as smart as a person, and it doubles to 0.002, I'm not sure we would even notice the difference. Twice as smart as really stupid is still really stupid.
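
To make that concrete, here's a toy calculation; the 0.001 starting level and the per-period doubling are the hypothetical figures from the paragraph above, not measurements of anything:

```python
# Toy illustration: steady exponential doubling from a tiny base looks flat
# for a long time, then crosses human level (1.0) and explodes right after.
# The 0.001 starting level and per-period doubling rate are hypothetical.

level = 0.001  # fraction of human-level intelligence (assumed)
for period in range(1, 16):
    level *= 2
    note = "  <- crosses human level" if level >= 1.0 > level / 2 else ""
    print(f"period {period:2d}: {level:9.3f}x human{note}")
```

For the first nine periods the level never reaches even 0.6x human, so perfectly exponential progress would be indistinguishable from "linear at best"; yet five periods after crossing, it's 32x human.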

The only example I can think of is the one I keep citing from the biological world: a threefold difference in brain mass between chimps and humans has resulted in what seems like a much greater than threefold difference in intelligence (at least when measured in terms of technological achievement).

I imagine the intelligence curve would have increasing returns to computational power over some portions, decreasing returns over others, and roughly constant returns over still others. I don't know of any principled way to tell where humans are on the curve, but given that a threefold increase in brain mass produced the difference between chimps and us, I don't see any reason to think we have reached a portion of the curve with severely diminishing returns to computational power.
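
For what it's worth, a logistic curve is a toy function with exactly those three regions: its marginal returns rise, stay roughly flat near the midpoint, and then diminish. The sketch below is purely illustrative; the shape, scale, and midpoint are my assumptions, not claims about real brains:

```python
import math

# A hypothetical intelligence-vs-computational-power curve with regions of
# increasing, roughly constant, and diminishing marginal returns: a logistic
# with an arbitrary midpoint at power = 5. Purely illustrative.
def intelligence(power: float) -> float:
    return 1.0 / (1.0 + math.exp(-(power - 5.0)))

# Marginal return: extra intelligence gained per extra unit of power.
prev = intelligence(0.0)
for power in range(1, 11):
    cur = intelligence(float(power))
    print(f"power {power:2d}: intelligence {cur:.3f}  marginal {cur - prev:.3f}")
    prev = cur
```

The marginal column rises up to the midpoint and falls after it; the argument above is simply that we have no principled way to say which region of such a curve humans currently occupy.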

Kelly's Response:

Don't confuse brain mass with IQ. It doesn't tell you much. Whales have brains about 4 times as large as ours. They have more cortical convolutions, too. Neanderthal brains were larger than ours. The computers with the most chips, FLOPS, or neurons won't necessarily be the smartest. You overestimate the importance of chimps vs humans.

My Response:

Actually, I was simplifying by saying brain mass, because most people don't know what encephalization quotient is (see http://en.wikipedia.org/wiki/Encephalization_quotient). When measured by encephalization quotient, whales don't have very big brains (dolphins do, though theirs are still much smaller than humans').
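
For the curious, here is a rough sketch of how EQ is computed, using Jerison's classic formula (expected brain mass is about 0.12 times body mass to the 2/3 power, masses in grams); the species figures below are approximate values I've filled in for illustration, not numbers from this exchange:

```python
# Rough encephalization quotient (EQ) sketch using Jerison's formula:
# expected brain mass ~= 0.12 * (body mass in grams) ** (2/3).
# All species figures below are approximate textbook values.

def eq(brain_g: float, body_g: float) -> float:
    """EQ = actual brain mass / brain mass expected for the body size."""
    return brain_g / (0.12 * body_g ** (2 / 3))

animals = {
    "human":              (1_350, 65_000),      # ~1.35 kg brain, ~65 kg body
    "chimpanzee":         (400, 45_000),
    "bottlenose dolphin": (1_600, 200_000),
    "sperm whale":        (7_800, 40_000_000),  # ~8 kg brain, ~40 t body
}

for name, (brain_g, body_g) in animals.items():
    print(f"{name:>18}: EQ ~ {eq(brain_g, body_g):.1f}")
```

With these rough inputs the sperm whale comes out around 0.6 (a smaller brain than expected for its body size), the bottlenose dolphin around 4, and the human around 7, which is the comparison being made above.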

Anyway, I agree it's not all about size, but there is a very strong relationship between brain size and intelligence in animals.

But by no means was I confusing brain size with IQ. As I said, I expect that if you expressed the relationship between brain size (computational power) and IQ (intelligence) as a curve, there would be regions of diminishing, increasing, and constant returns to brain size. So obviously the two are not synonymous.

I may be overestimating the importance of the chimps-vs-humans example, but unfortunately we don't have any other examples of intelligence, or of the development of intelligence. Since humans and animals are the only things we know of that exhibit real intelligence, I think it is useful to look at the development of human intelligence as a guide. I prefer to use the little data we do have rather than no data at all.

See the previous two parts of this exchange here and here.