Exploring Flexible Multimodal Instruction of Intelligent Systems via Natural Language and Sketching


Sponsor: Defense Advanced Research Projects Agency

Principal Investigators: Kenneth D. Forbus and Tom Hinrichs

Project Summary: The purpose of this study is to explore how to achieve flexible multimodal instruction of intelligent systems via natural language and sketching. A major limitation of today's intelligent systems is that they require programming to adapt them to new domains, which is expensive and time-consuming. Being able to teach intelligent systems via natural language and sketching, the way we teach new personnel, would be a radical step towards more flexible, autonomous intelligent systems. To explore the issues involved in combining language and sketching for instruction, we will develop a catalog of spatial semantic depiction conventions that capture broadly applicable regularities in how people interpret diagrams, and a set of multimodal dialogue strategies that enable intelligent systems to participate in human-like conversations involving natural language and sketching. To ensure the generality of our results, we will use two domains. The first, sketched games, will enable us to explore how to communicate domain rules, strategies, and tactics. The second, everyday causal reasoning, will enable us to explore how to communicate causal models that support common-sense inferences about the physical world, which are relevant to a wide range of task areas. We will use our Companion cognitive architecture, which already incorporates natural language capabilities that have been used for learning by reading, and CogSketch, our sketch understanding system, which is being used for cognitive modeling and for education.

Selected publications: None yet.
