Intel, Georgia Tech team to mitigate ML deception attacks

April 14, 2020 //By Rich Pell
Intel has announced that it is joining the Georgia Institute of Technology in a Defense Advanced Research Projects Agency (DARPA) program to mitigate machine learning (ML) deception attacks.

Together, Intel and Georgia Tech have been selected to lead a Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program team for DARPA, which aims to develop a new generation of defenses against adversarial deception attacks on ML models and applications. The program is designed to address vulnerabilities that allow ML systems to be altered, corrupted, or deceived, such as tricking the ML used by a self-driving car by making visual alterations to a stop sign.
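The announcement does not describe these attacks in code, but as background, one widely studied class of deception attack perturbs an input image along the gradient of the model's loss (the fast gradient sign method). Below is a minimal, hypothetical PyTorch sketch, assuming a differentiable classifier `model`, images scaled to [0, 1], and integer class labels; it is an illustration of the general technique, not the GARD program's method.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft an FGSM-style adversarial example (illustrative sketch).

    image: tensor of shape (N, C, H, W), values in [0, 1]
    label: tensor of shape (N,) with true class indices
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step a small amount in the direction that increases the loss,
    # keeping the perturbation visually imperceptible.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Physical-world attacks, such as stickers placed on a stop sign, follow the same principle but optimize a printable patch rather than a per-pixel perturbation.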

"Intel and Georgia Tech are working together to advance the ecosystem's collective understanding of and ability to mitigate against AI and ML vulnerabilities," says Jason Martin, principal engineer at Intel Labs and principal investigator for the DARPA GARD program from Intel. "Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks.”

In the first phase of GARD, Intel and Georgia Tech are enhancing object detection technologies through spatial, temporal, and semantic coherence for both still images and videos. These three coherence properties give an object detector contextual clues for deciding whether an anomaly or attack is occurring (a hypothetical sketch of a temporal-coherence check follows below). Georgia Tech researchers say that while no real-world attacks on such systems are known, they first identified security vulnerabilities in object detectors in 2018 with a project known as ShapeShifter.
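Neither DARPA nor the team has published implementation details for these coherence checks. Purely as an illustration, a temporal-coherence cue might flag detections that appear or vanish between adjacent video frames; the function names and the IoU-matching heuristic below are assumptions made for this sketch.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def temporal_flicker_score(detections_t, detections_t1, iou_thresh=0.5):
    """Fraction of boxes in frame t with no matching box in frame t+1.

    A high score means objects are appearing or disappearing between
    adjacent frames, one contextual clue that a detection may have been
    suppressed or spoofed by an adversarial input.
    """
    if not detections_t:
        return 0.0
    unmatched = sum(
        1 for box in detections_t
        if not any(iou(box, other) >= iou_thresh for other in detections_t1)
    )
    return unmatched / len(detections_t)
```

In a real system this kind of cue would be combined with spatial consistency (do overlapping detections agree?) and semantic consistency (does the detected object make sense in its surroundings?) before raising an alert.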

The ShapeShifter project, the researchers say, demonstrated adversarial machine learning techniques that could mislead object detectors and even make stop signs disappear from an autonomous vehicle's detections.

"As ML technologies have developed, researchers used to think that attacking object detectors would be difficult," says Polo Chau, who serves as the lead investigator from Georgia Tech on the GARD program. "ShapeShifter showed us that was not true, they can be affected, and we can attack them in a way to have objects disappear completely or be labeled as

Picture: ML deception