How a Chinese scientist in the US is helping defend AI systems from cyberattacks
- Artificial intelligence can be vulnerable to attacks that are invisible to humans but wreak havoc on computer systems
- Li Bo, an award-winning scientist, leads a team that works with IBM and financial institutions to defend against cyberattacks

Three years ago, during her postdoctoral research at the University of California, Berkeley, the Chinese scholar Li Bo and her collaborators from other institutions designed an experiment showing how easily AI-based autonomous driving systems could be fooled.
The team pasted custom-made stickers on a stop sign, introducing new patterns while leaving the original letters visible. Humans could still read the sign perfectly, but the object recognition algorithm picked up something entirely different: a 45mph speed limit sign. Had this happened on a real road, the consequences could have been severe.
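The sticker attack is a physical example of what researchers call adversarial examples: small, carefully chosen changes to an input that flip a model’s prediction. As a rough illustration only, and not the researchers’ actual sticker method, the widely taught fast gradient sign method nudges each pixel of an image in the direction that most increases the classifier’s error; the `model` and `image` below are hypothetical placeholders.

```python
# Illustrative sketch of an adversarial example via the fast gradient
# sign method (FGSM). This is a generic textbook technique, not the
# physical sticker attack described above; `model`, `image` and
# `true_label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that the model is
    more likely to misclassify. `epsilon` bounds how far each pixel moves."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)  # how wrong is the model now?
    loss.backward()                                   # gradient of the loss w.r.t. the pixels
    # Step each pixel in the direction that increases the loss most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()             # keep pixel values in a valid range
```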
“This experiment has made the public realise how important security is for artificial intelligence,” said Li. “AI is likely to change the fate of human beings, but how safe is it?”

Born and raised in China, Li currently serves as an associate professor at the University of Illinois at Urbana-Champaign, where she is at the forefront of US research on so-called adversarial learning. The field pits AI systems against each other in game-theoretic contests between attacker and defender, an approach that is driving new methods for making AI more robust.
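One standard defence in this field is adversarial training, in which a model is repeatedly trained on the worst-case inputs crafted against it, so the attacker and defender improve together. The loop below is a minimal sketch of that idea under stated assumptions, not a description of Li’s specific methods; `model`, `optimizer` and `train_loader` are hypothetical placeholders, and `fgsm_attack` is the sketch shown earlier.

```python
# Minimal adversarial-training loop (a common defence in adversarial
# learning). All objects passed in are hypothetical placeholders.
import torch.nn.functional as F

def adversarial_train_epoch(model, optimizer, train_loader, epsilon=0.03):
    model.train()
    for images, labels in train_loader:
        # Inner "attacker" step: craft worst-case inputs for the current model.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        # Outer "defender" step: update the model to classify those inputs correctly.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```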