Computer Security Resource Center


Automated Combinatorial Testing for Software

Explainable Artificial Intelligence and Autonomous Systems

Combinatorial methods make possible an approach to producing explanations or justifications of decisions in artificial intelligence and machine learning (AI/ML) systems. The approach is particularly useful in classification problems, where the goal is to determine an object's membership in a set based on its characteristics. We use a conceptually simple scheme to make classification decisions easy to justify: identifying combinations of features that are present in members of the identified class but absent or rare in non-members. The method has been implemented in a prototype tool called ComXAI, which we are currently applying to machine learning problems. Explainability is key both to using autonomous systems and other AI/ML applications, and to assuring their safety and reliability.
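The core idea can be sketched in a few lines of code. The following is a minimal illustration, not the ComXAI implementation; the function names and the exact selection criterion (a combination must appear in every member and at most a chosen fraction of non-members) are assumptions chosen for clarity. It enumerates t-way combinations of (feature, value) pairs and keeps those that distinguish class members from non-members:

```python
from itertools import combinations
from collections import Counter

def combo_counts(rows, t):
    """Count t-way (feature index, value) combinations across rows."""
    counts = Counter()
    for row in rows:
        pairs = list(enumerate(row))  # (feature index, feature value)
        for combo in combinations(pairs, t):
            counts[combo] += 1
    return counts

def distinguishing_combos(members, nonmembers, t=2, max_nonmember_rate=0.0):
    """Return t-way feature combinations present in every member row but
    absent (or rare) among non-members.  Such combinations serve as
    simple, human-readable justifications for a classification decision."""
    m_counts = combo_counts(members, t)
    n_limit = max_nonmember_rate * len(nonmembers)
    n_counts = combo_counts(nonmembers, t)
    return [combo for combo, count in m_counts.items()
            if count == len(members) and n_counts.get(combo, 0) <= n_limit]

# Toy data: each row is a vector of binary feature values.
members = [(1, 0, 1), (1, 1, 1)]
nonmembers = [(0, 0, 1), (1, 0, 0)]
print(distinguishing_combos(members, nonmembers, t=2))
# The combination "feature 0 = 1 AND feature 2 = 1" holds for every
# member and for no non-member, so it justifies membership.
```

A returned combination such as `((0, 1), (2, 1))` reads directly as an explanation: the object is in the class because feature 0 has value 1 and feature 2 has value 1, a property shared by all members and no (or few) non-members.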

R. Kuhn, R. Kacker, An Application of Combinatorial Methods for Explainability in Artificial Intelligence and Machine Learning. NIST Whitepaper, May 22, 2019. 

Related: D.R. Kuhn, D. Yaga, R. Kacker, Y. Lei, V. Hu, Pseudo-Exhaustive Verification of Rule Based Systems, 30th Intl. Conf. on Software Engineering and Knowledge Engineering, July 2018.


Figures: accuracy-explainability tradeoff in ML; fault location; ML features; feature classification; feature combinations; summary of combinatorial methods and explainable AI.

Created May 24, 2016, Updated August 20, 2019