Analogical Problem-Solving Tools

A critical resource in decision-making is the ability to draw upon experience: to detect problems, reason about their possible impact, and make appropriate plans by analogy with previous similar situations. Case-based reasoning (CBR) attempts to approximate this ability. While today's CBR technology can be very useful, it falls far short of the flexibility of human analogical reasoning. Most indexing schemes are generated by hand, and little research has been done on adaptation. Thus CBR must evolve substantially before it can approach the potential of mechanizing human analogical problem solving abilities.

The limitations of today's CBR arise from two sources. First, most CBR systems have little formally encoded knowledge in their cases, using instead human-understandable media such as text and video. Such information cannot be automatically processed to generate advice specific to the situation at hand. The formal representations used are often simply sets of features, essentially unary predicates, without relational information. This is a serious limitation. Sets of features cannot capture argument structure. Dependency graphs, logical proofs, and causal connections are three important kinds of information that simply cannot be represented solely by unary predicates.

Feature-based representations also cannot capture human similarity judgments. To accurately evaluate the plausibility of a surmise, a CBR system must estimate the similarity of the retrieved experience to the current situation. This similarity judgment must coincide with that of human partners for the system's advice to be believable. Most CBR work draws inspiration from earlier psychological models, like those of Tversky, which were feature-based. Unfortunately, there is now considerable evidence suggesting that such models cannot account for human similarity judgments. More recent psychological models, based on deeper computational insights, highlight the importance of comparison and relational knowledge even in what was previously considered low-level, perceptual similarity judgments.

The second source of limitations for today's CBR systems comes from the lack of robust, domain-independent algorithms for matching and adaptation. Most CBR matchers do not process relational information. Those which do handle relational information typically rely on domain-specific and/or task-specific constraints. There is ample psychological evidence suggesting that human similarity judgments rely on the same mechanisms used for analogical reasoning, and that these mechanisms are domain-independent. These mechanisms can be guided by task constraints, but they can also provide reasonable results without them. The HPKB effort could potentially revolutionize CBR practice by facilitating the construction of libraries whose cases contained formally represented relational information. However, exploiting this potential requires analogical processing techniques that capture the power and flexibility of human analogical problem solving. We believe that the analogical processing tools we have been developing, in close collaboration with psychologists, provide the best approximation to human abilities available today. We propose to forge these tools into an efficient technology that can be widely used in performance systems. Next we outline these tools and describe how we will accomplish this.

SME: Given two descriptions, SME produces a mapping describing how they might match, including plausible inferences that follow from the correspondences. SME is based on Gentner's structure-mapping theory of analogy and similarity. This theory explains analogy and similarity in terms of comparisons involving structured representations, rather than just lists of features. There is already a large body of evidence verifying the predictions of structure-mapping for analogical reasoning, and its role in similarity judgments and decision making is also being explored by psychologists. SME has been used successfully to model a variety of psychological data, and for modeling learning in physical domains, a model of concept formation, a system that generates natural language explanations involving analogy, and a problem solver. It has been used by a variety of groups worldwide for their own research, and features of SME are now showing up in other matchers.

Viewed as a performance system, SME has three advantages.

  1. It avoids the combinatorics associated with general graph matching. It uses efficient local tests to generate a limited set of local hypotheses, and combines these hypotheses into mappings by using a greedy merge process.
  2. SME operates incrementally. Matches can be updated as new information arrives without starting over from scratch. This ability is critical for efficient analogical processing in situations where the available information is rapidly changing.
  3. SME generates psychologically plausible candidate inferences, the surmises that a person might draw by comparing those two descriptions. Many matchers do not generate candidate inferences, producing only a numerical rating or a set of correspondences. Generating psychologically plausible inferences is important in a performance context so that a program's suggestions are seen as acceptable by its human partners.

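The local-test-plus-greedy-merge strategy above can be sketched in miniature. The following is a hypothetical simplification, not the actual SME algorithm: statements are nested tuples of the form `(predicate, arg, ...)`, local match hypotheses pair statements with identical predicates and arities, and a greedy merge keeps the most deeply nested (most systematic) hypotheses whose entity correspondences remain one-to-one. The water-flow/heat-flow data in the comments is an illustrative example, not taken from SME's actual case format.

```python
def depth(expr):
    """Structural depth of a nested-tuple expression; entities are strings."""
    if not isinstance(expr, tuple):
        return 0
    return 1 + max((depth(arg) for arg in expr[1:]), default=0)

def entity_pairs(base, target):
    """Entity correspondences implied by aligning two expressions."""
    pairs = set()
    for b, t in zip(base[1:], target[1:]):
        if isinstance(b, tuple) and isinstance(t, tuple):
            pairs |= entity_pairs(b, t)
        elif not isinstance(b, tuple) and not isinstance(t, tuple):
            pairs.add((b, t))
    return pairs

def greedy_match(base_desc, target_desc):
    # 1. Cheap local test: same predicate, same arity.
    hypotheses = [(b, t) for b in base_desc for t in target_desc
                  if b[0] == t[0] and len(b) == len(t)]
    # 2. Prefer deeply nested (systematic) alignments.
    hypotheses.sort(key=lambda h: depth(h[0]), reverse=True)
    # 3. Greedy merge: accept hypotheses whose entity pairings stay one-to-one.
    fwd, rev, mapping = {}, {}, []
    for b, t in hypotheses:
        pairs = entity_pairs(b, t)
        if all(fwd.get(x, y) == y and rev.get(y, x) == x for x, y in pairs):
            for x, y in pairs:
                fwd[x], rev[y] = y, x
            mapping.append((b, t))
    return mapping, fwd

# Example: aligning water flow with heat flow maps water onto heat.
# base   = [("cause", ("greater", "pressure-beaker", "pressure-vial"),
#                     ("flow", "beaker", "vial", "water"))]
# target = [("cause", ("greater", "temp-coffee", "temp-ice"),
#                     ("flow", "coffee", "ice", "heat"))]
```

A polynomial-time greedy merge like this trades the guaranteed optimality of exhaustive graph matching for tractability, which is the design choice the advantage list above describes.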
MAC/FAC: Rapid retrieval of relevant information from memories containing thousands or even millions of rich, structured descriptions is a daunting problem. Any structural matcher, no matter how efficient, imposes too high a computational burden to be run against every description in memory. Yet the desirable matches, whose candidate inferences can provide advice relevant to the current situation, are those which only a structural matcher will find. MAC/FAC is based on two insights:

  1. Retrieval can be factored into two stages: an initial, wide-net stage (MAC) that filters a huge pool of candidates using a cheap nonstructural match, producing one candidate (or a handful), followed by a structural matching stage (FAC, i.e., SME) that produces the desired advice.
  2. A special kind of vector, a content vector, can be automatically generated from structured representations such that the dot product of two content vectors provides an estimate of the quality of a structural match between the corresponding structural descriptions.

These insights mean that the first stage of retrieval can be implemented by computing dot products between the content vector of a probe and the content vectors of the descriptions in memory, selecting the best scores, followed by a second stage in which SME provides a more expensive, but accurate, structural match between the probe and a few of the best MAC candidates. The MAC stage is highly parallel, and is reasonably efficient even on serial hardware.
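The two-stage scheme can be sketched as follows. This is an illustrative simplification, not the actual MAC/FAC code: a content vector here is simply a count of how often each predicate occurs in a description (again assuming a nested-tuple `(predicate, arg, ...)` format), and the dot product of two such vectors cheaply estimates how well the descriptions would align structurally. The case names and data are invented for the example.

```python
from collections import Counter

def content_vector(description):
    """Count predicate occurrences, including nested subexpressions."""
    counts = Counter()
    stack = list(description)
    while stack:
        expr = stack.pop()
        if isinstance(expr, tuple):
            counts[expr[0]] += 1
            stack.extend(expr[1:])
    return counts

def dot(u, v):
    return sum(u[k] * v[k] for k in u if k in v)

def mac_stage(probe, memory, keep=3):
    """Wide-net stage: return the best few cases by dot-product score."""
    pv = content_vector(probe)
    scored = sorted(((dot(pv, content_vector(case)), name)
                     for name, case in memory.items()), reverse=True)
    return [name for score, name in scored[:keep] if score > 0]

# A structural matcher (the FAC stage, i.e., SME) would then be run
# only on the handful of names mac_stage returns.
```

Because the MAC stage reduces to dot products, it never inspects relational structure, which is exactly why it is cheap enough to sweep an entire memory and parallel enough to accelerate with stock hardware.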

MAC/FAC accurately simulates several psychological phenomena in similarity-based retrieval, and provides more robust results than other simulations attempting to capture the same phenomena. We believe this robustness is critical for providing human-like flexible retrieval with large case libraries.

To turn these tools into the substrate for a new generation of CBR tools, we plan to:

  1. Rework the software to provide scale-up and integration.
  2. Experiment with hardware accelerators for the MAC/FAC retrieval algorithm.
  3. Develop automatic techniques for generating new representations for facilitating expert-like retrieval.

Scaling up and integration: While both SME and MAC/FAC are already quite robust, they will need additional development to handle the challenges of this project. SME, for instance, is routinely used with descriptions ranging from 10 to 400 statements, with the average being roughly 70. MAC/FAC has been used with memories of up to 125 descriptions. The predicate vocabularies used in most of these experiments have been slightly in excess of 1,000 predicates. While adequate for our initial cognitive simulations, we believe the demands of knowledge-rich CBR will be more extensive.

We plan to rework these systems so that they can handle descriptions involving up to 1,000 statements, over memories that contain 1,000 or more descriptions of that size, with ontologies involving 10,000 or more predicates. This will mostly involve recoding; our current software is deliberately optimized for flexibility over performance. Given their initial successes, it is now worth reimplementing them with performance as the primary constraint. We will stay with Common Lisp as much as possible to enhance productivity, but gaining the desired efficiency on stock hardware may require C/C++.

Currently the descriptions used by SME and MAC/FAC are either encoded in their native format or automatically translated into this format. The ability to interface them to other knowledge representation formats without translation would substantially enhance their utility. We are creating a generic protocol (using CLOS) to allow these tools to automatically link into other representation systems.
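The protocol itself is CLOS-based; purely as an illustration of the idea, here is a hypothetical Python analogue. Any representation system that can answer a few generic questions (enumerate a case's expressions, name an expression's predicate, list its arguments) could feed the matchers without translating its cases into a native format. All names below are invented for the sketch.

```python
from typing import Iterable, Protocol, runtime_checkable

@runtime_checkable
class RepresentationSystem(Protocol):
    """Generic questions a matcher needs answered about any case store."""
    def expressions(self, case_id: str) -> Iterable: ...
    def predicate(self, expr) -> str: ...
    def arguments(self, expr) -> Iterable: ...

class TupleStore:
    """Trivial adapter for a nested-tuple case format (hypothetical)."""
    def __init__(self, cases):
        self._cases = cases
    def expressions(self, case_id):
        return self._cases[case_id]
    def predicate(self, expr):
        return expr[0]
    def arguments(self, expr):
        return expr[1:]
```

In CLOS the same effect comes from generic functions with methods specialized on each representation system's classes; the matcher dispatches through the protocol and never sees the underlying format.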

Hardware accelerators for retrieval: As the size of case libraries grows, hardware acceleration is likely to become necessary. We will explore inexpensive stock hardware methods for accelerating MAC/FAC. For instance, the simplicity of the MAC retrieval stage provides an interesting opportunity to exploit relatively inexpensive digital signal processing hardware to speed up retrieval. Another alternative is to use symmetric multiprocessors, which are now appearing in relatively inexpensive workstations.

Interactive evolution of representation vocabularies for expert-like retrieval: The goal of a CBR retrieval system is to approximate the performance of a human domain expert, rather than a novice. Psychological data suggests that better encoding leads to better retrieval, and indeed the CBR community builds special-purpose representations to facilitate retrieval. However, constructing such vocabularies by hand is difficult, and will become impossible as the size of case libraries grows. We are working on methods for automatically augmenting representation vocabularies, so that domain experts who are not familiar with knowledge representation techniques can train CBR systems to improve retrieval.
