Companion Cognitive Systems
We are developing Companion Cognitive Systems, a new architecture for software that can effectively be treated as a collaborator. Our vision: Companions will help their users work through complex arguments, automatically retrieving relevant precedents and providing cautions and counter-indications as well as supporting evidence. Companions will be capable of effective operation for weeks and months at a time, assimilating new information and generating and maintaining scenarios and predictions. Companions will continually adapt and learn about the domains they work in, their users, and themselves.
The ideas we are using to achieve this vision include:
Analogical learning and reasoning: Our working hypothesis is that the flexibility and breadth of human common sense reasoning and learning arise from analogical reasoning and learning from experience. Within-domain analogies provide rapid, robust predictions. Analogies between domains can yield deep new insights and facilitate learning from instruction. First-principles reasoning emerges slowly, as generalizations are created incrementally from examples via analogical comparisons. This hypothesis suggests a very different approach to building robust cognitive software than is typically proposed. Reasoning and learning by analogy are central operations, rather than exotic ones undertaken only rarely. Accumulating and refining examples becomes central to building systems that can learn and adapt. Our cognitive simulations of analogical processing (SME for analogical matching, MAC/FAC for similarity-based retrieval, and SEQL for generalization) form the core components for learning and reasoning.
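The two-stage character of similarity-based retrieval can be illustrated with a toy sketch: a cheap, structure-blind first pass over all of memory, followed by a more expensive comparison applied only to the survivors. The data, scoring functions, and case format below are purely illustrative; the real MAC/FAC operates over structured predicate-calculus cases and uses SME for the second stage.

```python
# Toy sketch of two-stage MAC/FAC-style retrieval. Cases here are lists of
# fact tuples whose first element is a predicate name (an assumption for
# illustration, not the actual Companion representation).

from collections import Counter

def content_vector(case):
    """First (MAC-like) stage summary: count predicates, ignoring structure."""
    return Counter(pred for pred, *_ in case)

def mac_score(probe_vec, case_vec):
    """Cheap dot product over shared predicates."""
    return sum(probe_vec[p] * case_vec[p] for p in probe_vec)

def fac_score(probe, case):
    """Stand-in for the expensive structural stage (SME in the real system):
    here, simply count facts that align exactly."""
    return len(set(probe) & set(case))

def mac_fac(probe, memory, k=3):
    probe_vec = content_vector(probe)
    # Stage 1: scan all of memory with the cheap scorer, keep the best few.
    candidates = sorted(memory,
                        key=lambda c: mac_score(probe_vec, content_vector(c)),
                        reverse=True)[:k]
    # Stage 2: apply the expensive comparison only to those candidates.
    return max(candidates, key=lambda c: fac_score(probe, c))
```

The design point is that the cheap vector pass keeps retrieval tractable over a large case library, while the structural pass preserves the quality of the final match.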
Distributed agent architecture: Companions will require a combination of intense interaction, deep reasoning, and continuous learning. We believe that we can achieve this by using a distributed agent architecture, hosted on cluster computers, to provide task-level parallelism. The particular distributed agent architecture we are using evolved from our RoboTA distributed coaching system, which uses KQML as a communication medium between agents. A Companion will be made up of a collection of agents, spread across the CPUs of a cluster. We are assuming roughly ten CPUs per Companion, so that, for instance, analogical retrieval of relevant precedents proceeds entirely in parallel with other reasoning processes, such as the visual processing involved in understanding a user’s sketched input.
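Since the agents communicate via KQML, inter-agent requests take the form of performatives rendered as s-expressions. The helper and the agent/content names below are illustrative assumptions (a real deployment would route such messages over the network between cluster nodes), but the message shape follows standard KQML conventions.

```python
# Minimal sketch of composing a KQML performative for inter-agent messaging.
# Agent names and query content are hypothetical examples.

def kqml(performative, **params):
    """Render a KQML message as an s-expression string."""
    fields = " ".join(f":{k.replace('_', '-')} {v}" for k, v in params.items())
    return f"({performative} {fields})"

# One agent asks a retrieval agent for precedents; this request proceeds
# in parallel with the other reasoning the sender is doing.
msg = kqml("ask-one",
           sender="session-manager",
           receiver="retrieval-agent",
           reply_with="q1",
           language="KIF",
           content="(precedent-for current-scenario ?case)")
```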
Robustness will be enhanced by making the agents “hot-swappable”: the logs maintained by agents in operation will enable another copy to pick up (at a very coarse granularity) where a previous copy left off. This allows an agent whose memory is clogged (or that has crashed) to be taken off-line, its results assimilated while another agent carries on with the task. This scheme will require replicating the knowledge base and case libraries as necessary to minimize communication overhead, broadcasting working-memory state incrementally via a publish/subscribe model, and logging to disk. These logs will also be used for adaptation and knowledge reformulation. Just as a dolphin sleeps with only half of its brain at a time, our Companions will use several CPUs to test proposed changes by “rehearsing” them against logged activities, evaluating the quality and performance payoffs of proposed learned knowledge and skills.
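The hot-swap idea can be sketched minimally: an agent publishes each working-memory delta to an append-only log, so a fresh copy can replay the log and resume at coarse granularity. The `Agent` class and log format below are illustrative assumptions, not the actual Companion implementation.

```python
# Sketch of hot-swapping via incremental logging and replay (hypothetical
# Agent class; a list stands in for the disk log / broadcast channel).

import json

class Agent:
    def __init__(self, log):
        self.log = log            # shared, append-only log of state deltas
        self.memory = {}
        for entry in log:         # replay: rebuild state from prior deltas
            self.memory.update(json.loads(entry))

    def assert_fact(self, key, value):
        self.memory[key] = value
        self.log.append(json.dumps({key: value}))  # publish the delta

# The original agent does some work, then is taken off-line; a replacement
# replays the log and picks up where the original left off.
log = []
a1 = Agent(log)
a1.assert_fact("scenario", "flooding")
a1.assert_fact("prediction", "levee-breach")
a2 = Agent(log)   # hot-swapped replacement
```

Replaying deltas rather than copying live memory is what makes the granularity coarse: the replacement recovers the logged state, not the in-flight computation.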
The first-generation Companion architecture, a subset of the vision above, became operational in October 2004.
Selected Relevant Papers
Forbus, K. and Hinrichs, T. (2004). Self-modeling in Companion Cognitive Systems: Current Plans. DARPA Workshop on Self-Aware Systems, Washington, DC.
Forbus, K. and Hinrichs, T. (2004). Companion Cognitive Systems: A step towards human-level AI. AAAI Fall Symposium on Achieving Human-level Intelligence through Integrated Systems and Research, Washington, DC, October 2004.