We should be quick to ask whether the astonishment we feel at Watson's performance on Jeopardy is a projection. Eliza, the computerized Rogerian psychotherapist, does little more than slightly disguise the question "why do you feel that?" -- a question that people are so delighted to answer that they overlook the mechanical blankness generating it.
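For anyone who has never looked inside Eliza, the whole trick is a handful of pattern-and-reflection rules. Here is a toy sketch of the technique in Python; the patterns and canned phrasings are invented, and none of this is Weizenbaum's actual DOCTOR script.

```python
import re

# Toy Eliza-style rules: reflect the user's own words back as a question.
# An illustration of the pattern-matching technique only, with invented patterns.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Why do you feel that?"  # the all-purpose fallback

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```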
Is Watson just a 21st century Eliza?
At a certain level we know the answer is "yes." We know that Watson is not the product of a grand theory of consciousness. We know that Watson is essentially a (very impressive) parser backed by a blackboard system that applies various known techniques against a (very capacious) database of facts and frames.
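IBM has not published Watson as a tidy loop, so what follows is only a generic sketch of the blackboard idea: independent "knowledge sources" each post candidate answers with a confidence score to a shared blackboard, and a controller keeps the best-supported candidate. The sources, the clue, and the scores below are all invented.

```python
from collections import defaultdict

def structured_lookup(clue):
    # Hypothetical source: exact matches against a structured fact base.
    return [("Socrates", 0.5)] if "hemlock" in clue.lower() else []

def passage_search(clue):
    # Hypothetical source: fuzzy matches against unstructured text.
    return [("Socrates", 0.25), ("Plato", 0.125)]

def answer(clue, sources):
    blackboard = defaultdict(float)           # candidate -> accumulated evidence
    for source in sources:
        for candidate, confidence in source(clue):
            blackboard[candidate] += confidence   # naive evidence merging
    return max(blackboard.items(), key=lambda item: item[1])

print(answer("This philosopher was executed by drinking hemlock",
             [structured_lookup, passage_search]))
# -> ('Socrates', 0.75)
```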
We also know that the capacious database of facts Watson draws on could not have been hand-crafted. There has simply not been time for people to type in the names of every Olympic athlete, their results, their unusual physical characteristics, and so on. We know that Watson's answers in that category, whether correct or incorrect, were based on facts that it read and processed on its own (or at least largely on its own). That's astonishing.
We can also be certain that Watson is capable of inference. Given the premises "All men are mortal" and "Socrates is a man," Watson would surely conclude that "Socrates is mortal" (or, in Jeopardy form, "What is Socrates is mortal?").
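Whatever machinery actually sits inside Watson, this sort of deduction is trivial to mechanize. A minimal forward-chaining sketch (the relation names are invented, and nothing about it is Watson-specific):

```python
# Start with one fact and one rule, then apply the rule until nothing new appears.
facts = {("is_a", "Socrates", "man")}
rules = [
    # "All men are mortal": if X is_a man, then X has_property mortal.
    (("is_a", "man"), ("has_property", "mortal")),
]

changed = True
while changed:
    changed = False
    for (premise_rel, premise_obj), (conc_rel, conc_obj) in rules:
        for rel, subj, obj in list(facts):
            if rel == premise_rel and obj == premise_obj:
                conclusion = (conc_rel, subj, conc_obj)
                if conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

print(("has_property", "Socrates", "mortal") in facts)   # True
```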
Although we're sure that "Socrates is a man" is the type of fact Watson can extract from unstructured input, what about "All men are mortal"? Can Watson induce that rule on its own, or does it have to be hand-crafted by a programmer and spoon-fed to Watson?
Everything I know about the field of AI makes me think that just as there has not been enough time to hand-craft every fact fed into Watson, so too has there not been enough time to hand-craft enough higher-order associative logic to allow Watson to perform as well as it has. (The Cyc Project has essentially been hand-crafting such logic for more than 25 years, with little sign of progress -- but perhaps the Cyc database plays a role in Watson?)
Or can Watson induce "All men are mortal" from the facts that it has absorbed? Can it draw (tentative, statistically hedged) conclusions? If that is the case, then it seems logical that Watson's learning can continue in an autonomous or semi-autonomous way. Such an inflection point in learning has long been seen as the critical moment in the generation of intelligence, whether in humans or, presumably, machines.
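One toy way to picture that kind of hedged induction: scan the absorbed facts, count how often members of a class share a property, and propose a general rule only when the support clears a threshold. The data and thresholds below are invented, and the sketch says nothing about how Watson actually learns.

```python
from collections import defaultdict

# Invented observations: (individual, class, property)
observations = [
    ("Socrates", "man", "mortal"),
    ("Plato", "man", "mortal"),
    ("Aristotle", "man", "mortal"),
    ("Zeus", "god", "immortal"),
]

support = defaultdict(lambda: defaultdict(int))   # class -> property -> count
totals = defaultdict(int)                         # class -> individuals seen
for individual, cls, prop in observations:
    support[cls][prop] += 1
    totals[cls] += 1

MIN_EXAMPLES = 3     # don't generalize from a single Zeus
THRESHOLD = 0.9      # require 90% of the examples to agree
for cls, props in support.items():
    for prop, count in props.items():
        confidence = count / totals[cls]
        if totals[cls] >= MIN_EXAMPLES and confidence >= THRESHOLD:
            print(f"Tentative rule: every {cls} is {prop} "
                  f"(confidence {confidence:.2f}, n={totals[cls]})")
# -> Tentative rule: every man is mortal (confidence 1.00, n=3)
```

The point is only that a rule induced this way arrives with a confidence attached -- exactly the tentative, statistically hedged kind of conclusion the question envisions.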