Microsoft is running a contest called "Made in Express" that will pay out $10,000 to the best application written in Microsoft's entry-level version of Visual Studio. The proposals chosen as finalists are, to put it kindly, ambitious: an AI psychotherapist (assuming that the person doesn't just cut-and-paste Eliza); an autonomous robot capable of traversing rough terrain; a poker bot simulator with drag-and-drop AI. There are others that are more feasible (the one I like is the "absurd comparisons" calculator: how many hummingbirds weigh as much as a blue whale?).
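For what it's worth, the "absurd comparisons" calculator really is the feasible one: it's just unit arithmetic over a lookup table. A minimal sketch, using ballpark masses I'm assuming for illustration (not authoritative figures):

```python
# Rough "absurd comparisons" calculator. The masses are ballpark
# figures assumed for illustration only.
MASSES_KG = {
    "blue whale": 140_000,   # roughly 140 tonnes for a large adult
    "hummingbird": 0.004,    # roughly 4 grams
    "house cat": 4.0,
}

def how_many(small: str, big: str) -> float:
    """How many of `small` weigh as much as one `big`?"""
    return MASSES_KG[big] / MASSES_KG[small]

print(f"{how_many('hummingbird', 'blue whale'):,.0f} hummingbirds weigh "
      f"as much as one blue whale")
# → 35,000,000 hummingbirds weigh as much as one blue whale
```

A weekend project, in other words, which is exactly why it stands out among the finalists.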
Lately, I've been thinking about how people learn programming, and it's very interesting to note how these programmers (who know enough about programming to find and enter the contest) have, let's say, "unrealistic" understandings of AI and machine sensing. If you think about it, it's not absurd for someone to reason: "Hey, I hear Tablet PCs understand handwriting, I hear speech recognition is almost there, so how hard could it be to include '3D stereo vision' in my contest entry?" Except, sadly, it is absurd.
Once upon a time, I did AI "technology transfer" work: basically, translating LISP-based algorithms into (primarily) C and C++-based algorithms (it's one reason I believe Sapir-Whorf applies to programming languages). I haven't in a long time, but I try to track development and I have a fair understanding of progress in the field. We are so friggin' far from a computational model of mental states that it's laughable (alternately, it's humbling).
AI is decades behind genetics in terms of a model. At least we know how DNA codes. And, until recently, we believed that there was something approaching a 1:1 correspondence between genes and functions. Then we started sequencing genomes a few years ago and discovered that there are many, many fewer genes in a genome than there are traits in an organism. Even with DNA-based genetics, which is an incredible natural computational abstraction, it turns out that reality is complex. Even chaotic in the sense of depending on environmental starting conditions. (The idea that "well, the genome didn't explain as much as we thought, but the proteome (our protein complement) will" is equally hubristic and short-sighted.)
And just as we can assemble certain helpful drugs based on biologics, even with our incredibly limited understanding of how those biologics came to be, we can compute certain helpful functions (handwriting, speech) with a vastly more limited understanding of how we do it natively. But at least with DNA we have a (limited) model of how things move from small to large. With cognition and perception, we have nothing even close, just very, very general theories.
For instance, there seems to be widespread agreement that "minds are what brains do." Further, I think most researchers would probably more-or-less agree that mental states are the work of specialized, individually non-accessible subsystems working in parallel, whose complex (and chaotic) interactions construct higher-level systems. I hope that's enough for that guy to create the Web-based psychotherapist.
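And the cut-and-paste Eliza I mentioned earlier really is that shallow. A minimal sketch of the mechanism (the patterns and reflections here are invented for illustration; the real Eliza script was larger, but the trick is the same): regex match, pronoun reflection, canned template. No model of mental states anywhere.

```python
import re

# Eliza-style toy: a few illustrative rules, invented for this sketch.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # the all-purpose fallback

print(respond("I feel that my code ignores me"))
# → Why do you feel that your code ignores you?
```

That's the whole act: surface transformation of the input, no cognition required. Which is precisely the gap between what's demonstrable and what the psychotherapist proposal imagines.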
Then we have this mechanistic understanding of neurons, which, just like DNA, have features that can be modeled computationally. And a lot of fascinating research is coming from functional imaging, so we're beginning to understand the physical structures associated with mental states. But, I guarantee you, the way that mental states arise from neurons will turn out to be far, far more messy and complex and contingent and filled with feedback loops and dependent on environmental conditions than the way living organisms arise from DNA.
And don't even get me started on consciousness.