Artificial Intelligence Project Case Study

Researched and wrote this case study based on primary source material and interviews with experts.

You can view the original on the Charles River Analytics website.

CAMEL

Advancing the emerging field of explainable AI

You’re a small-unit leader defending a strategic fortified position from an approaching enemy. You don’t have the manpower to do it on your own, but an unmanned ground vehicle (UGV) could provide the logistics and reconnaissance support you need. The UGV is controlled by an artificial intelligence (AI) that performs effectively on average, but it can make inexplicable errors that lead to catastrophic results. Without knowing when and whether to trust this potential teammate, you can’t take the risk of relying on it.

Soldier operating UGV (unmanned ground vehicle)

CAMEL (Causal Models to Explain Learning) aims to change this scenario by providing accurate, understandable explanations of AI system decisions. For example, CAMEL can explain how an AI performs classification, such as detecting pedestrians in images, or autonomous decision-making, such as in game environments.

DARPA’s concept for XAI (Image courtesy of DARPA)

The CAMEL Framework

Explanations based on causal models

CAMEL is a novel framework that explains deep reinforcement learning (RL) techniques used for data analysis and autonomous systems. It unifies causal modeling with probabilistic programming. Causal models describe how one part of a system influences other parts, capturing the real driving forces behind the system's behavior.
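To make the idea concrete, the sketch below shows a toy causal model in Python. The variables (enemy health, agent health, threat level) and the mechanisms that connect them are illustrative assumptions, not CAMEL's actual models; the point is that each quantity is produced by an explicit mechanism from its causes, so the model exposes what drives the system's behavior.

```python
# Minimal sketch of a causal model: each variable is computed by a mechanism
# from its causes. Variables, mechanisms, and numbers are illustrative
# assumptions, not CAMEL's actual models.
import random

def simulate(seed: int) -> dict:
    rng = random.Random(seed)
    # Exogenous variables: set from outside the model.
    enemy_health = rng.uniform(0.0, 1.0)
    agent_health = rng.uniform(0.0, 1.0)
    # Mechanism: perceived threat is caused by both health values (plus noise).
    threat = 0.7 * enemy_health + 0.3 * (1.0 - agent_health) + rng.gauss(0.0, 0.05)
    # Mechanism: the chosen action is caused by the threat level.
    action = "attack" if threat < 0.5 else "retreat"
    return {"enemy_health": enemy_health, "agent_health": agent_health,
            "threat": threat, "action": action}

print(simulate(seed=0))
```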

CAMEL creates counterfactuals: explanations of how the agent would have behaved under different conditions, which are especially useful when its behavior is not what the user expected. For example, a user asks, “Why did the agent attack?” and CAMEL’s Explanation User Interface answers, “If the health of enemy-294 had been 50% higher, the agent would have retreated.”
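The sketch below illustrates the idea behind a counterfactual query using a toy decision rule in Python. The rule, its inputs, and the numbers are assumptions for illustration, not CAMEL's policy or interface: the model is run once with the observed inputs and once with a single input intervened on, and the explanation is the change in the resulting action.

```python
# Counterfactual query sketch (illustrative only): answer "Why did the agent
# attack?" by intervening on one input and re-running the same decision model
# to see whether the action changes.

def agent_action(enemy_health: float, agent_health: float) -> str:
    # Toy decision rule standing in for a learned policy.
    threat = 0.7 * enemy_health + 0.3 * (1.0 - agent_health)
    return "attack" if threat < 0.5 else "retreat"

factual = {"enemy_health": 0.6, "agent_health": 0.9}
print(agent_action(**factual))  # factual behavior: "attack"

# Intervention: set the enemy's health 50% higher, hold everything else fixed.
counterfactual = dict(factual, enemy_health=min(1.0, factual["enemy_health"] * 1.5))
print(agent_action(**counterfactual))  # counterfactual behavior: "retreat"
```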

When developing CAMEL, Charles River Analytics led a team that included Brown University, the University of Massachusetts Amherst, and Roth Cognitive Engineering. The four-year contract with DARPA was valued at close to $8 million. The project drew on Charles River’s expertise in machine learning and human factors.

Results

CAMEL approach leads to enhanced user trust

CAMEL user interface

The Charles River project team demonstrated that the CAMEL approach led to enhanced user trust and system acceptance when classifying images of pedestrians. The team then took on a more demanding challenge: training RL agents that teamed with humans to play the real-time strategy game StarCraft II.

An evaluation of CAMEL supported all major hypotheses, showing that participants:

  • Gave high ratings to CAMEL explanations, finding them usable, useful, and understandable
  • Formed a more accurate mental model of the AI agent
  • Showed improved use of the AI agent, following its recommendations in situations where it performed well but not in situations where it performed poorly
  • Showed improved task performance, achieving higher scores in StarCraft II