Computer Security Resource Center


White Paper (Draft)

An Application of Combinatorial Methods for Explainability in Artificial Intelligence and Machine Learning

Date Published: May 22, 2019
Comments Due: July 3, 2019 (public comment period is CLOSED)
Email Questions to: xai@nist.gov

Author(s)

Richard Kuhn (NIST), Raghu Kacker (NIST)

Announcement

This short paper introduces an approach to producing explanations or justifications of decisions made by some artificial intelligence and machine learning (AI/ML) systems, using methods derived from fault location in combinatorial testing. We show that validation and explainability are closely related to the fault-location problem in combinatorial testing, and that certain methods and tools developed for fault location can be applied to this problem as well. The approach is particularly useful in classification problems, where the goal is to determine an object’s membership in a set based on its characteristics. It uses a conceptually simple scheme that makes classification decisions easy to justify: identifying combinations of features that are present in members of the identified class but absent or rare in non-members. The method has been implemented in a prototype tool called ComXAI, and examples of its application from a range of domains illustrate the utility of these methods.
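The core scheme described above can be sketched in a few lines. The function below is a simplified illustration, not the ComXAI tool itself: it enumerates t-way feature-value combinations, keeps those shared by every member of the target class, and reports the ones that are absent (or rare, under an assumed threshold parameter) among non-members. All names and parameters here are illustrative assumptions.

```python
from itertools import combinations

def distinguishing_combinations(members, non_members, t=2, max_nonmember_rate=0.0):
    """Illustrative sketch: find t-way feature-value combinations present in
    every class member but occurring in at most max_nonmember_rate of the
    non-members. Each sample is a dict mapping feature name -> value."""
    features = sorted(members[0].keys())
    results = []
    for combo in combinations(features, t):
        # Candidate assignment: the values the first member takes on this combo.
        values = tuple(members[0][f] for f in combo)
        # Keep only combinations shared by every member of the class.
        if not all(tuple(s[f] for f in combo) == values for s in members):
            continue
        # Count how often the same combination appears among non-members.
        hits = sum(1 for s in non_members
                   if tuple(s[f] for f in combo) == values)
        if hits / max(len(non_members), 1) <= max_nonmember_rate:
            results.append(dict(zip(combo, values)))
    return results
```

For example, if every member of a class has color=red and shape=round while no non-member has both, the pair {color: red, shape: round} would be returned as a justification for membership decisions.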

Keywords

artificial intelligence (AI); assurance of autonomous systems; combinatorial testing; covering array; explainable AI; machine learning
Documentation

Publication:
Draft White Paper

Topics

Security and Privacy
assurance; testing & validation