Reasoning for Social Autonomous Agents


Sponsor: Computational Cognition and Machine Intelligence Program, Air Force Office of Scientific Research

Principal Investigators: Kenneth D. Forbus and Tom Hinrichs

Project Summary: The problems facing the nation are growing ever more complex, while the cognitive capabilities of people remain essentially unchanged. Today's AI/ML software often becomes part of the problem: it is frequently brittle, uninspectable, and requires considerable expertise and data to create and maintain. The Companion cognitive architecture (Forbus, 2016; Forbus & Hinrichs, 2017) is exploring a radically different model. Our goal is to understand how to create software social organisms, systems that adapt to us, learning over time and becoming more effective partners. This project will take a large step towards that goal by exploring the reasoning capabilities needed for software social organisms. This research involves three lines of investigation:

  1. We will explore social reasoning, including norms (e.g. information sharing) and legal concepts (e.g. torts). Social reasoning is also performed over organizations, treating them as agents, as is done, for example, in intelligence analysis and business analysis (see the first sketch after this list).
  2. We will explore cognitive control, the signals and methods an agent uses to maintain its internal environment and to guide its own reasoning and learning. These signals range from evaluations within a task (e.g. the "click" of a successful solution, the tip-of-the-tongue phenomenon) to signals used to help decide when to switch tasks and how to manage learning goals (e.g. surprise, curiosity, frustration, and boredom). We plan to experiment with ways to implement such signals and with how they should be used to manage reasoning and learning, making agents more autonomous (see the second sketch after this list).
  3. We will explore broad reasoning, the ability to handle open-ended questions with little problem-specific information. We plan to explore how analogical reasoning from experience and commonly used human heuristics can be implemented in a knowledge-rich cognitive architecture. Furthermore, we plan to explore the computational basis for the kinds of cognitive illusions that people, even highly trained analysts, exhibit. This could provide the foundation for creating systems that are less susceptible or even immune to those illusions, providing a valuable complement to human reasoning.
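
To make the first line of investigation more concrete, the following is a minimal, purely illustrative sketch (in Python) of one generic way a norm could be represented as a context, a behavior, and a deontic status, and a proposed action checked against it. All names here (Norm, Deontic, evaluate) are hypothetical and are not the Companion architecture's actual representation.

```python
# Purely illustrative sketch: hypothetical names, not the Companion architecture's representation.
from dataclasses import dataclass
from enum import Enum, auto


class Deontic(Enum):
    OBLIGATORY = auto()
    PERMITTED = auto()
    PROHIBITED = auto()


@dataclass(frozen=True)
class Norm:
    """A norm pairs a behavior, in a context, with a deontic status."""
    context: str    # e.g. "coalition planning meeting"
    behavior: str   # e.g. "share classified information with non-members"
    status: Deontic


def evaluate(action_context: str, action_behavior: str, norms: list[Norm]) -> Deontic:
    """Return the deontic status of a proposed action under the known norms.

    Anything not covered by an explicit norm defaults to PERMITTED here;
    which default is appropriate is itself a substantive modeling question.
    """
    for norm in norms:
        if norm.context == action_context and norm.behavior == action_behavior:
            return norm.status
    return Deontic.PERMITTED


if __name__ == "__main__":
    norms = [Norm("coalition planning meeting",
                  "share classified information with non-members",
                  Deontic.PROHIBITED)]
    print(evaluate("coalition planning meeting",
                   "share classified information with non-members",
                   norms))  # Deontic.PROHIBITED
```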

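For the second line of investigation, this minimal sketch (again Python, purely illustrative; ControlSignals, arbitrate, and the thresholds are hypothetical, not the Companion architecture's API) shows one way such self-monitoring signals might be represented and combined into a coarse decision to continue, switch tasks, or post a learning goal.

```python
# Purely illustrative sketch: hypothetical names and thresholds, not the Companion architecture's API.
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    CONTINUE = auto()             # keep working on the current task
    SWITCH_TASK = auto()          # set the task aside and pick another
    SPAWN_LEARNING_GOAL = auto()  # post a goal to learn what is missing


@dataclass
class ControlSignals:
    """Coarse self-monitoring signals, each normalized to [0, 1]."""
    progress: float = 0.0     # within-task: how close a solution feels (the "click")
    surprise: float = 0.0     # mismatch between expectation and outcome
    curiosity: float = 0.0    # estimated value of exploring something new
    frustration: float = 0.0  # repeated failure on the current task
    boredom: float = 0.0      # low information gain from continued effort


def arbitrate(signals: ControlSignals) -> Decision:
    """Map the current signals to a coarse control decision.

    The thresholds are arbitrary placeholders; how such signals should be
    computed and combined is exactly the research question described above.
    """
    if signals.surprise > 0.7 or signals.curiosity > 0.7:
        return Decision.SPAWN_LEARNING_GOAL
    if signals.frustration > 0.6 or signals.boredom > 0.6:
        return Decision.SWITCH_TASK
    return Decision.CONTINUE


if __name__ == "__main__":
    print(arbitrate(ControlSignals(frustration=0.8)))  # Decision.SWITCH_TASK
```
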
We will build upon our Companion cognitive architecture for this research. We will also collaborate with AFRL's ACT3, both to help transfer our existing and newly developed technology to them and to draw on them as a source of relevant problems and data for our research.

Selected publications:

  1. Olson, T., & Forbus, K. (under review). Learning Norms via Natural Language Teachings.

