Published: October 25, 2022
Citation: Computer (IEEE Computer), vol. 55, no. 11 (November 2022), pp. 94-99
Author(s)
Alina Oprea (Northeastern University), Anoop Singhal (NIST), Apostol Vassilev (NIST)
Many practical applications benefit from Machine Learning (ML) and Artificial Intelligence (AI) technologies, but their security needs to be studied in more depth before the methods and algorithms are actually deployed in critical settings. In this article, we discuss the risk of poisoning attacks when training machine learning models and the challenges of defending against this threat.
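As an illustration of the threat the abstract describes (not an example from the article itself), the following sketch shows a simple availability poisoning attack: an adversary injects mislabeled points into the training set of a nearest-centroid classifier, dragging one class's centroid away from its true region and degrading test accuracy. All data, class layouts, and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),   # class 0
               rng.normal(+2, 1, size=(100, 2))])  # class 1
y = np.array([0] * 100 + [1] * 100)

# Held-out test set drawn from the same distribution.
Xt = np.vstack([rng.normal(-2, 1, size=(50, 2)),
                rng.normal(+2, 1, size=(50, 2))])
yt = np.array([0] * 50 + [1] * 50)

def train_centroid(X, y):
    """Nearest-centroid classifier: store the mean of each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1)
                      for c in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]

clean_acc = (predict(train_centroid(X, y), Xt) == yt).mean()

# Poisoning: the adversary injects 300 points far from class 1's true
# region but labels them class 1, pulling that class's centroid away
# so genuine class-1 test points are no longer recognized.
X_poison = np.full((300, 2), -6.0)
y_poison = np.ones(300, dtype=int)
Xp = np.vstack([X, X_poison])
yp = np.concatenate([y, y_poison])

poisoned_acc = (predict(train_centroid(Xp, yp), Xt) == yt).mean()

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Even this toy attack requires no access to the model internals, only the ability to contribute labeled training data, which is why poisoning is a practical concern whenever training data is crowdsourced or scraped.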
Keywords
artificial intelligence technologies; machine learning trustworthiness; poisoning attacks; security