Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes

Like any software or process, machine learning (ML) is vulnerable to attack. To protect a system, you must first understand where and how it is vulnerable. In this course, Diana Kelley shows experienced threat modelers how ML shifts the threat-modeling focus, both through the potential impact of a compromised model and through the vast amounts of data ML systems need to fuel their operation. Diana shows the many ways ML can fail under adversarial attack, and how design flaws can also lead to operational failure, data leakage, and other security and privacy risks.

Learn why resilient ML matters, the consequences of failing to build security into ML, and where and how ML is vulnerable, both to intentional adversaries and to design and implementation flaws. Plus, discover some of the most effective approaches and techniques for building robust and resilient ML.