Not wanting the hassle of learning OpenCV while also fighting an edit-compile-execute cycle, I decided to use my OpenCV project as an excuse to play around with Python.
I'm still very much a beginner, but I'm starting to understand why Python gets the use it does.
Anyhow, it only took a couple of days to integrate Tesseract OCR, PIL, and OpenCV so that I could open multi-frame TIFF images, perform some basic feature detection, and then use the output of that detection to focus OCR on a specific region.
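The multi-frame TIFF handling and region cropping can be sketched roughly like this. This is just a minimal illustration, not my actual code: it builds a two-frame TIFF in memory so it runs standalone, the crop box is a made-up placeholder for whatever feature detection reports, and the `pytesseract` call in the comment is one common Python binding for Tesseract, assumed here rather than taken from the project.

```python
import io
from PIL import Image, ImageSequence

# Build a small two-frame grayscale TIFF in memory so the sketch is self-contained.
frames = [Image.new("L", (200, 100), color=c) for c in (255, 128)]
buf = io.BytesIO()
frames[0].save(buf, format="TIFF", save_all=True, append_images=frames[1:])
buf.seek(0)

tiff = Image.open(buf)
for i, frame in enumerate(ImageSequence.Iterator(tiff)):
    # Pretend feature detection reported this (x0, y0, x1, y1) box of interest.
    region = frame.crop((10, 10, 190, 60))
    print(i, region.size)
    # In the real pipeline the crop would be handed to Tesseract, e.g.:
    # text = pytesseract.image_to_string(region)
```

PIL's `ImageSequence.Iterator` is what makes the multi-frame part painless: each `frame` is just another image, so the same crop-and-OCR code works per page.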
I will admit to having a few false starts. The first was following an older C++ tutorial that relied on deprecated OpenCV features while ignoring some newer ones. For example, the tutorial used Hough line detection to find squares on a printed page, and getting to that point required thresholding, dilating, eroding, inversion, flood filling, and so on. Even then I wasn't getting the correct results.
I'm pretty impressed with Microsoft's System.Speech API. It took less than three days to throw together a proof-of-concept application. The hardest part was probably coming up with the grammar -- documentation for that is pretty thin on the ground.