NISTIR 8269 (Draft)

A Taxonomy and Terminology of Adversarial Machine Learning

Date Published: October 2019
Comments Due: January 30, 2020 (public comment period is CLOSED)
Email Questions to: ai-nccoe@nist.gov

Planning Note (12/16/2019): The public comment period has been extended until Thursday, January 30, 2020 (the original due date was 12/16/2019).

Author(s)

Elham Tabassi (NIST), Kevin Burns (MITRE), Michael Hadjimichael (MITRE), Andres Molina-Markham (MITRE), Julian Sexton (MITRE)

Announcement

This report is intended as a step toward securing applications of Artificial Intelligence (AI), especially against adversarial manipulations of Machine Learning (ML), by developing a taxonomy and terminology of Adversarial Machine Learning (AML). Although AI also includes various knowledge-based systems, the data-driven approach of ML introduces additional security challenges in the training and testing (inference) phases of system operation. AML is concerned with the design of ML algorithms that can resist security challenges, the study of attacker capabilities, and the understanding of attack consequences.

This document develops a taxonomy of concepts and defines terminology in the field of AML. The taxonomy, which builds on and integrates previous AML surveys, is arranged in a conceptual hierarchy that includes key types of attacks, defenses, and consequences. The terminology, arranged in an alphabetical glossary, defines key terms associated with the security of the ML components of an AI system. Taken together, the terminology and taxonomy are intended to inform future standards and best practices for assessing and managing the security of ML components by establishing a common language and understanding of the rapidly developing AML landscape.

NOTE: A call for patent claims is included on page iv of this draft. For additional information, see the Information Technology Laboratory (ITL) Patent Policy – Inclusion of Patents in ITL Publications.

Abstract

Keywords

adversarial; artificial intelligence; attack; cybersecurity; defense; evasion; information technology; machine learning; oracle; poisoning

Documentation

Publication:
NISTIR 8269 (Draft)

Supplemental Material:
Submit Comments
Project Homepage

Document History:
10/30/19: NISTIR 8269 (Draft)

Topics

Security and Privacy
threats

Technologies
artificial intelligence