MIT, Microsoft model identifies AI 'blind spots'

January 28, 2019 //By Rich Pell
Researchers at MIT (Cambridge, MA) and Microsoft (Redmond, WA) have developed a model that identifies instances when autonomous systems have "learned" from training examples that don't actually reflect what's happening in the real world.

Such learned actions could result in costly and dangerous errors in real-world situations. The new model, say the researchers, could allow engineers to improve the safety of artificial intelligence (AI) systems - such as driverless vehicles and autonomous robots - by identifying such "blind spots" beforehand.

For example, say the researchers, the AI systems powering driverless cars are trained extensively in virtual simulations to prepare a vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should - but doesn't - alter the car's behavior.

An example cited by the researchers considers a driverless car that wasn't trained - and, more importantly, doesn't have the necessary sensors - to differentiate between distinctly different scenarios, such as a large white car and an ambulance with red flashing lights. If the ambulance turns on its sirens, the car may not know to slow down and pull over, because it does not perceive the ambulance as different from just another big white car.

To uncover such training blind spots, the researchers' model uses human input to closely monitor a trained system's actions as it acts in the real world, providing feedback when the system makes - or is about to make - a mistake. The original training data is then combined with the human feedback data, and machine-learning techniques are used to produce a model that pinpoints situations where the system most likely needs more information about how to act correctly.
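
To make that process concrete, below is a minimal, hypothetical sketch in Python. It is not the researchers' implementation: it simply trains an off-the-shelf classifier on situation features labelled by human feedback, and uses the predicted probability to flag situations where the system most likely needs more information. The feature vectors, labels, and the 0.5 threshold are illustrative assumptions.

```python
# Sketch only: learn a "blind spot" predictor from human-feedback labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: each row describes a situation the system encountered
# (e.g. sensor-derived features). Label 1 means a human flagged the system's
# action as wrong (or about to be wrong); label 0 means it was acceptable.
# In the researchers' approach this feedback is combined with the original
# training data; here a single feature matrix stands in for both.
rng = np.random.default_rng(0)
situations = rng.normal(size=(500, 8))                  # placeholder features
human_feedback = (situations[:, 0] > 1.0).astype(int)   # placeholder labels

# Train a classifier whose predicted probability marks likely blind spots.
blind_spot_model = RandomForestClassifier(n_estimators=200, random_state=0)
blind_spot_model.fit(situations, human_feedback)

# At run time, flag new situations whose blind-spot probability exceeds a
# chosen threshold, and defer to a human or a safe fallback behavior there.
new_situations = rng.normal(size=(3, 8))
p_blind_spot = blind_spot_model.predict_proba(new_situations)[:, 1]
needs_help = p_blind_spot > 0.5
print(p_blind_spot, needs_help)
```

The point of the sketch is the split between training time (pinpointing where the learned behavior and human feedback disagree) and run time (treating high-probability situations as "unknowns" rather than acting confidently on them).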

"The model helps autonomous systems better know what they don't know," says Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory and first author of a paper on the study. "Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans

