Attacks on Traffic Light Recognition
“Attacks on Traffic Light Recognition” demonstrates practical real-world attacks against neural networks in autonomous driving (AD). By exploiting backdoor and inference-time attacks, an adversary can manipulate the perception module’s predictions and thereby cause hazardous actions, such as running red lights. While such attacks were previously demonstrated only on offline datasets, we are the first to effectively compromise a full-fledged autonomous vehicle under real-world conditions. Our research demonstrates that attacks against camera-based perception in AD are practical. To mitigate these threats, we examine the security of XAI-based defenses and propose anti-backdoor learning techniques.
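To illustrate the backdoor threat model at a high level, the following minimal sketch shows how a training set for a traffic light classifier could be poisoned with a small visual trigger so that the trained model misclassifies red lights as green whenever the trigger is present. All names, class ids, and parameters are illustrative assumptions and do not reflect the demonstrator's actual implementation.

```python
# Sketch of BadNets-style data poisoning for a traffic light classifier.
# All identifiers (poison_sample, TRIGGER_SIZE, class ids) are hypothetical.
import numpy as np

RED, GREEN = 0, 1          # hypothetical class ids
TRIGGER_SIZE = 8           # side length of the square trigger in pixels

def poison_sample(image: np.ndarray, label: int) -> tuple[np.ndarray, int]:
    """Stamp a small bright patch into the image corner and relabel
    'red' as 'green', so training associates the patch with the
    attacker-chosen class."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:, :] = 255   # white square trigger
    return poisoned, (GREEN if label == RED else label)

# Example: poison 5 % of a toy training set
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=100)
for i in rng.choice(len(images), size=5, replace=False):
    images[i], labels[i] = poison_sample(images[i], labels[i])
```

At inference time, the same trigger (e.g., a sticker placed near a physical traffic light) activates the implanted behavior, which is what makes such attacks relevant outside of offline datasets.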
Design & Development Methods for Secure Automotive Software Systems
With this demonstrator, we focus on engineering secure software systems using constructive and analytical methods, spanning multiple design and development phases, including legal analysis.