
Other (Initial Public Draft)

An Application of Combinatorial Methods for Explainability in Artificial Intelligence and Machine Learning

Date Published: May 22, 2019
Comments Due: July 3, 2019 (public comment period is CLOSED)

Richard Kuhn (NIST), Raghu Kacker (NIST)


This short paper introduces an approach to producing explanations or justifications of decisions made in some artificial intelligence and machine learning (AI/ML) systems, using methods derived from those for fault location in combinatorial testing. We show that validation and explainability issues are closely related to the problem of fault location in combinatorial testing, and that certain methods and tools developed for fault location can also be applied to this problem. The approach is particularly useful in classification problems, where the goal is to determine an object's membership in a set based on its characteristics. We use a conceptually simple scheme to make classification decisions easy to justify: identifying combinations of features that are present in members of the identified class but absent or rare in non-members. The method has been implemented in a prototype tool called ComXAI, and examples from a range of application domains are included to show its utility.
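The core idea described in the abstract, finding t-way feature combinations that appear in members of a class but are absent or rare among non-members, can be sketched in a few lines. This is an illustrative sketch only, not the ComXAI implementation; the function name, the dict-based feature encoding, and the `rarity` threshold are assumptions made for the example.

```python
from itertools import combinations

def explaining_combinations(members, non_members, t=2, rarity=0.0):
    """Return t-way (feature, value) combinations shared by every class
    member whose frequency among non-members is at most `rarity`.
    Each example is a dict mapping feature name -> value.
    Illustrative sketch only; not the ComXAI algorithm."""
    # Candidate t-way combinations: take them from the first member,
    # then keep only those present in every other member.
    first = members[0]
    candidates = []
    for combo in combinations(sorted(first), t):
        pairs = tuple((f, first[f]) for f in combo)
        if all(all(ex.get(f) == v for f, v in pairs) for ex in members):
            candidates.append(pairs)
    # Keep combinations that are rare (or absent) among non-members.
    result = []
    for pairs in candidates:
        hits = sum(all(ex.get(f) == v for f, v in pairs)
                   for ex in non_members)
        if not non_members or hits / len(non_members) <= rarity:
            result.append(dict(pairs))
    return result
```

For instance, if every member of a class has `wings="yes"` and `legs=2` while no non-member does, the pair `{"wings": "yes", "legs": 2}` is returned as a justification for the classification.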



artificial intelligence (AI); assurance of autonomous systems; combinatorial testing; covering array; explainable AI; machine learning
Control Families

None selected


Draft White Paper (pdf)

Supplemental Material:
None available

Document History:
05/22/19: Other (Draft)


Security and Privacy

assurance, testing & validation


artificial intelligence