Thursday 2
13:30h - 14:20h
Track 4

Defense Against the Dark Arts and Potions: AI Models Can Be Easily Poisoned

Some like to compare AI to magic: we have some data, we insert it into a mysterious black box, and we get a good result. However, as with magic in fantasy fiction, AI can be used for good or for ill, and many "spells" can be cast with data (our "magic wand"). In many situations, AI models rely on data collected from untrusted sources, such as humans (Muggles), sensors, or IoT devices that can be compromised. These scenarios expose AI models to poisoning attacks, where attackers (Death Eaters and You-Know-Who in our analogy) can manipulate the training data to subvert learning (Unforgivable Curses), e.g., to increase the error of the AI model or to make an agent learn a wrong policy. This is especially concerning because AI models are widely used in mission-critical systems, including medical and military applications, and industrial practitioners rank poisoning among the most feared AI threats. In this talk, I will give an overview and show some examples of how AI models can be exposed to these attacks, explain how attackers can bypass current defenses, and describe some methodologies (Defense Against the Dark Arts and Potions) that can help protect AI models against them.
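To make the threat concrete, here is a minimal, self-contained sketch (not taken from the talk) of a data-injection poisoning attack: a simple least-squares linear classifier is trained on a clean two-class dataset, then retrained after an attacker injects mislabeled points deep inside one class's region. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (illustrative only): two well-separated Gaussian classes in 2-D ---
n = 200
X_clean = np.vstack([rng.normal(-2.0, 1.0, size=(n, 2)),   # class 0
                     rng.normal(+2.0, 1.0, size=(n, 2))])  # class 1
y_clean = np.array([0] * n + [1] * n)

def fit(X, y):
    """Least-squares linear classifier; bias handled via an appended column of 1s."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)  # labels mapped to ±1
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0).astype(int) == y).mean())

w_clean = fit(X_clean, y_clean)

# --- Poisoning: inject points deep inside class 1's region, but labeled class 0 ---
# High-leverage points like these drag the least-squares decision boundary badly.
X_poison = rng.normal(+4.0, 0.5, size=(n, 2))
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, np.zeros(n, dtype=int)])
w_poisoned = fit(X_train, y_train)

print(f"clean-data model accuracy:    {accuracy(w_clean, X_clean, y_clean):.2f}")
print(f"poisoned-data model accuracy: {accuracy(w_poisoned, X_clean, y_clean):.2f}")
```

On this toy problem the clean model separates the classes almost perfectly, while the poisoned model's accuracy on the same clean data collapses: a handful of carefully placed, mislabeled points is enough to subvert learning, which is exactly the attack class the talk examines.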

Javier Carnerero Cano

PhD Researcher in AI Security | Telecom Engineer

Imperial College London

PhD Researcher at Imperial College London. Telecom Engineer at heart. His current research interest is the security of AI, with a special focus on data poisoning attacks.

Cybersecurity / Privacy
Artificial Intelligence
Scientific dissemination
Data Science / Data Engineering