This comment on this post asked whether my argument meant that AI is impossible.
I don't think that follows from my argument. The greatest challenge with AI is that we have no good theories of the non-material representations that constitute consciousness. We're beginning to get a handle on brain mechanics, and you'll find plenty of people who agree that consciousness is an emergent property of a number of largely independent subsystems, but there is no compelling theory that says "To get to AI, we need to start here, then go here, and then go here..." There are appeals to intuition -- Doug Lenat's Cycorp says "It's just common sense..." that a massive database of facts is necessary, while MIT's Rodney Brooks says that Kismet-style "emotional robots" are the best route -- but I could just as easily argue that the problems of internal representation, or of language, are the first step.
As a matter of fact, I do believe that language is the key -- once we have a system that can reliably interpret Web pages (say), I think it will be a small step to a system that can generate them, and that, in my opinion, will bring us into the gray area of "maybe we have AI and maybe we don't." For instance, the algorithms that produce Google News have a surprising penchant for cricket -- neither the world's most popular sport nor the world's most written-about one. It's quirky, and that's a very interesting thing to say about a program.
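To make the idea of a machine "interpreting" pages a little more concrete: one plausible guess at the flavor of a news aggregator is statistical grouping, where each story is reduced to a bag of words and stories with similar word vectors get clustered together. The sketch below is a toy illustration in Python of that general technique; the tokenizer, the 0.3 similarity threshold, and the greedy clustering strategy are all my own assumptions, and none of it describes how Google News actually works.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Crude lowercase bag-of-words; real systems use far richer features.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(snippets, threshold=0.3):
    # Greedy single-pass clustering: attach each snippet to the first
    # cluster whose centroid it resembles, else start a new cluster.
    # The threshold is an arbitrary choice for illustration.
    clusters = []  # list of (centroid Counter, list of member texts)
    for text in snippets:
        vec = Counter(tokenize(text))
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                centroid.update(vec)
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

if __name__ == "__main__":
    stories = [
        "Australia beat England in the third cricket test at Lords",
        "England collapse as Australia dominate cricket test match",
        "New browser released with tabbed browsing support",
    ]
    for group in cluster(stories):
        print(group)
```

Notice that a system like this has no notion of cricket's importance: whatever "tastes" it appears to have fall out of the word correlations in the text it happens to ingest, which is one plausible source of the quirkiness described above.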