Casey Chesnut, who’s my favorite Tablet PC programmer because he does all this stuff apparently without realizing that it’s supposed to be hard, has written a neural network-based character recognizer for the Tablet PC (via [Tech Blender]). He normalizes an ink stroke in x, y, and t (time) (a technique I discussed in two recent DevX articles) and quantizes it into a discrete number of inputs (50) for the NN.
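Casey’s code is C#, of course, and I haven’t seen exactly how he does the normalization, but the idea is simple enough that a rough Python sketch gets it across. The 50-point resampling and the interleaved-vector encoding below are my guesses, not his implementation:

```python
# Rough sketch (not Casey's actual code) of normalizing an ink stroke in
# x, y, and t and quantizing it into a fixed-size input vector for a NN.
import numpy as np

N_SAMPLES = 50  # number of resampled points; whether the "50 inputs" are
                # points or scalars is my assumption

def stroke_to_inputs(points):
    """points: list of (x, y, t) tuples from the digitizer."""
    pts = np.asarray(points, dtype=float)
    x, y, t = pts[:, 0], pts[:, 1], pts[:, 2]

    # Normalize position into a unit box, preserving aspect ratio so that
    # tall letters ('l') and squat letters ('c') stay distinguishable.
    scale = max(np.ptp(x), np.ptp(y)) or 1.0
    x = (x - x.min()) / scale
    y = (y - y.min()) / scale

    # Normalize time to [0, 1] so writing speed doesn't change the shape.
    t = (t - t[0]) / ((t[-1] - t[0]) or 1.0)

    # Resample to a fixed number of points, evenly spaced in time.
    ti = np.linspace(0.0, 1.0, N_SAMPLES)
    xi = np.interp(ti, t, x)
    yi = np.interp(ti, t, y)

    # Interleave into a single input vector (here, 2 * N_SAMPLES values).
    return np.stack([xi, yi], axis=1).ravel()
```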
That's actually fairly close to my understanding of how Microsoft's recognizer uses neural nets as well, although for continuous writing you obviously have to deal with letter pairs or even triples (I would think) and use some kind of sliding window across the input.
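Again, I have no inside knowledge of Microsoft’s recognizer, but the sliding-window idea is easy to picture: slide a fixed-width window along the resampled (x, y) sequence for a whole word and classify each window the way a single character is classified above. A hypothetical fragment (the window width and step are made up):

```python
# Hypothetical sliding window over continuous ink: each window of
# resampled (x, y) points gets classified like an isolated character.
def sliding_windows(points, width=50, step=10):
    """points: resampled (x, y) sequence for a whole word or line."""
    last_start = max(len(points) - width, 0)
    for start in range(0, last_start + 1, step):
        yield points[start:start + width]
```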
Post-neural net, you have an activation level per character, which you can feed into a Markov model of letter pairs and triples (if the last letter was a 'q', then the odds of this one being a 'u'...). You then feed your letter-based options into a dictionary, which you in turn feed to a language model (the simplest again being a Markov model).
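Here’s a toy sketch of that letter-pair step: a standard Viterbi pass that combines the per-character activations with bigram probabilities. The alphabet, probabilities, and smoothing value are placeholders, not anything from a shipping recognizer; the dictionary step then amounts to filtering the resulting strings against a word list before the sentence-level model sees them.

```python
# Toy sketch: combine per-character NN activations with a letter-pair
# (bigram) Markov model via Viterbi decoding. All data here is made up.
import math

def viterbi_decode(activations, bigram_logp, alphabet):
    """activations: list of dicts mapping char -> NN activation in (0, 1].
    bigram_logp(prev, cur): log P(cur | prev) from a letter-pair model."""
    # Best log-score for each possible character at the first position.
    best = {c: math.log(activations[0].get(c, 1e-9)) for c in alphabet}
    back = [{}]
    for acts in activations[1:]:
        new_best, pointers = {}, {}
        for cur in alphabet:
            obs = math.log(acts.get(cur, 1e-9))
            prev, score = max(
                ((p, best[p] + bigram_logp(p, cur) + obs) for p in alphabet),
                key=lambda ps: ps[1])
            new_best[cur], pointers[cur] = score, prev
        best = new_best
        back.append(pointers)
    # Trace back the best-scoring character sequence.
    cur = max(best, key=best.get)
    out = [cur]
    for pointers in reversed(back[1:]):
        cur = pointers[cur]
        out.append(cur)
    return ''.join(reversed(out))
```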
Or, post-neural net, you move directly to a BNF-like grammar.
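The grammar route is just swapping the dictionary check for something stricter. A minimal sketch, using a regex as a stand-in for a real BNF grammar and a made-up date field as the example:

```python
# Minimal sketch of grammar-constrained recognition: accept only strings
# the field's grammar allows. The regex and field are invented examples.
import re

DATE_FIELD = re.compile(r'^\d{1,2}/\d{1,2}/\d{2,4}$')  # e.g. 3/14/2005

def best_grammatical(candidates):
    """candidates: list of (string, score) from the letter-level decoder."""
    legal = [(s, score) for s, score in candidates if DATE_FIELD.match(s)]
    return max(legal, key=lambda p: p[1])[0] if legal else None
```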
We gotta’ get Casey to Windows Anywhere...