Dear Mr. Wagner,
While I found your presentation of Quixote compelling and engaging, I hesitate to fully support your claim of its "potential in the enterprise as well as the consumer world". Undoubtedly, the ability to teach artificially intelligent agents to complete tasks with greater flexibility would redefine the role of these machines in our lives. However, in considering the limitations of this approach, I find myself doubting the potential for impact outlined in your article.

From a technical perspective, it appears that a system trained under Quixote would suffer from a machine learning phenomenon known as overfitting, wherein the program learns to replicate the input-output relationships it is trained on but fails to generalize the "lessons" of training to new situations, a key feature of human problem solving. I would be surprised, for instance, if the Quixote model could generalize instructions for "pick up my prescription" to "fulfill this lunch order", even though both follow a common path: go to the location, find the item, make the purchase, and deliver it. Indeed, the success of such programs appears entirely contingent on their ability to abstract specific commands into high-level goals and concepts, an ability apparent in humans but not in the technological state of the art. This shortcoming may result in the system learning the symptoms of behavior rather than its causes; and while humans may learn by hearing stories, it is our ability to generalize beyond the tales of our childhood that allows us to reason in the face of uncertainty.
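To make the worry concrete, the failure mode I have in mind can be caricatured in a few lines of Python. This is a deliberately simplified sketch of my own, not a description of Quixote's internals: the "agent" is a lookup table over the commands it was trained on, so any unseen command falls outside its knowledge, even when the underlying plan structure is identical.

```python
# Training data: a command paired with the plan observed in training stories.
# (The command and plan steps are the examples from my letter, not real data.)
training_stories = {
    "pick up my prescription": [
        "go to the pharmacy",
        "find the prescription",
        "make the purchase",
        "deliver the prescription",
    ],
}

def rote_agent(command):
    """Replicates trained input-output pairs; performs no abstraction over them."""
    # dict.get returns None for any command absent from training.
    return training_stories.get(command)

print(rote_agent("pick up my prescription"))  # known command: returns the plan
print(rote_agent("fulfill this lunch order"))  # unseen command: returns None
```

A system that had abstracted the shared pattern (go, find, purchase, deliver) could instantiate it for the lunch order; a system that merely memorized its training pairs cannot.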
Even if machines could learn how a human would normally act, their impact on the enterprise and consumer fields might still be limited. The power of human behavior lies not in its adherence to rules but in its ability to adapt to deviations from the plan. This fundamental pillar of human cognition remains woefully absent in our artificially intelligent counterparts: machines trained with a Quixote-like approach may replicate patterns of rules, but learning when to abandon one plan and adopt another may be impossible if that deviation never appeared in the stories used to train the program.

That is not to say that this new generation of machines has no value; Quixote's progress in allowing natural language input has fantastic potential. That the common man or small business, for example, could communicate with an AI system without the need for "somebody with expertise to set these systems up" is indeed revolutionary. But if that communication fails to manifest in meaningful behavior, it becomes difficult to argue for the impact of the technology as a whole. Although I welcome an AI revolution and envision a future in which artificial intelligence augments our everyday experiences, I remain skeptical of claims of significant progress in this domain. Fundamental hurdles must be cleared not only in the realm of computer programming but also in the field of cognitive psychology before significant improvements can be made. I look forward to hearing your thoughts on the matter and thank you again for your presentation of the technology.