Here are two criticisms of the Singularity that are more interesting than snidely assuming (without explanation) that we'll never be able to create machines as intelligent as humans.
In the first, the author Kevin Kelly supposes that even if we have super intelligent machines, it probably won't change very much, because the evolution of technology does not depend simply on intelligence; it also requires experimentation. Even if we can speed up the analysis of experimental results, we will still need to conduct the experiments. In some cases, such as particle physics, conducting those experiments requires enormous construction projects. If we simply cut out all the time between experiments that is currently occupied by human thought, we would have a faster pace, but not that much faster.
On the surface the point seems vaguely plausible, but on deeper reflection, I think it's almost certainly wrong. First of all, one implication of smarter-than-human machine intelligence is that labor will be drastically less expensive, particularly the highest levels of cognitive labor. Building a super collider will be much less expensive and much faster when you don't have to pay a single human and instead have an army of unpaid robots working 24/7 on design and construction. Another implication is that investigation will be much more parallelized. Creating minds that can do PhD-level analysis will not require 30 years of schooling. There will be more expert-level intelligence and experimentation applied simultaneously to more paths of inquiry.
But the more important point is that intelligence is probably a substantial research bottleneck right now, in ways that are impossible for us to comprehend. The human brain is only about 3 times larger than a chimp brain, but that 3 times difference in size results in a nearly infinite difference in capability. A 3 times difference in brain matter is the difference between sitting in the jungle eating bananas and achieving all the greatest and smallest feats of human civilization. The simplest concepts that we take for granted are inherently beyond the reach of a chimp, which will never learn to read or write, do simple math, or program a computer, much less fly to the moon or build a skyscraper.
Saying that a computer with 3 times the capacity of the human brain is unlikely to make technological progress much faster than a human because it will still need time to carry out experiments is like saying that humans are unlikely to make technological progress much faster than chimps because we still need time to carry out experiments. But this is obviously false, because a chimp cannot conceive of and carry out the same kinds of experiments, and even if it could, it would have no idea what to make of the results.
Furthermore, we aren't talking about a computer with 3 times the capacity of a human. We are talking about computers with a thousand (ten years after parity), a million (twenty years after), a billion (thirty years after), or a trillion (forty years after) times the capacity of a human.
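To make the arithmetic behind those figures explicit, here is a rough back-of-the-envelope sketch. It assumes machine capacity keeps roughly doubling every year after reaching human parity, which works out to about a thousandfold increase per decade; the function name and the exact doubling interval are my own illustrative choices for the sake of the example, not figures from Kelly or Kurzweil.

```python
def capacity_multiple(years_after_parity, doubling_time_years=1.0):
    """Rough capacity multiple relative to a human brain, assuming
    machine capacity doubles every `doubling_time_years` after parity."""
    return 2 ** (years_after_parity / doubling_time_years)

# With a one-year doubling time, each decade multiplies capacity ~1000x:
for years in (10, 20, 30, 40):
    print(f"{years} years after parity: ~{capacity_multiple(years):.0e}x human capacity")
# 10 years -> ~1e+03x (a thousand)
# 20 years -> ~1e+06x (a million)
# 30 years -> ~1e+09x (a billion)
# 40 years -> ~1e+12x (a trillion)
```

The exact doubling time matters less than the shape of the curve: any steady exponential turns a modest head start into orders of magnitude within a few decades.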
Thus, when you think about it in a little more detail, it is absurd to presume that human technological progress is already moving nearly as fast as possible simply because a greater intelligence would still have to carry out experiments. And we aren't merely talking about a doubling or tripling of pace. A chimp could never build a skyscraper no matter how many millennia it had to learn. Nor are we talking about an increase in progress comparable to the difference between humans and chimps, which corresponds to only a 3 times difference in brain matter. We are talking about a difference in pace so vast it is essentially inconceivable.
So while the thinkism argument is interesting, I think it is almost certainly wrong.
The second criticism I think is more compelling. Here Kelly is not criticizing the Singularity itself, but rather the likely timeline for its occurrence, noting that Singularity prognosticators curiously pick dates right around the time when they would be likely to die, thus perhaps providing them some comfort that it will arrive just in time to save their lives. I don't really think there is a good response to this apart from: let's wait and see. Nobody knows how long it will take to replicate human intelligence in machines. Apart from saying that it is possible and will probably take less than 15,000 years, the best we can do is speculate based on our intuitions about the nature of intelligence and the progress of technology. My speculations are more in line with Kurzweil's than with those who think we are centuries away. Is that just wishful thinking, strongly influenced by the hope of immortality? Perhaps. But it is also my best guess. From all the reading and thinking I've done on the subject, and from my decades of following the evolution of technology, that is my gut estimate. Unfortunately, it is impossible to remove the influence of hopefulness (whatever that may be) on my estimate.
I do think it's worth noting, however, that the same thing can also work in reverse. There are people whose attachment to the uniqueness and divinity of human cognition leads them to estimate that the Singularity is very far away. And there are people whose political agendas depend on a distant estimate for the arrival of the Singularity (e.g. if it is coming in thirty years, as Kurzweil thinks, climate change probably shouldn't be quite as big a concern).
We all have our biases. When I sit around and think about the most likely time for the Singularity, it seems about thirty to fifty years away, even if I'm trying really hard not to be influenced by my own hopes for immortality. What else can I do? We'll just have to wait and see.
See Kevin Kelly's response to this post here and the ensuing discussion between the two of us here.