Open source AI toolkit tackles machine learning 'explainability'

August 12, 2019 // By Rich Pell
Multinational information technology company IBM (Armonk, NY) has announced a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability (or understanding) of machine learning models.

The AI Explainability 360 toolkit is designed to enable machine learning model users to gain insight into the machine’s decision-making process. It has been engineered, says the company, with a common interface for all of the different ways of explaining, and is extensible to accelerate innovation by the community advancing AI explainability.
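
To picture what such a common interface might look like, here is a minimal sketch in Python (the class and method names are illustrative assumptions, not the toolkit's actual API): explainers generally expose a fit step plus either a per-instance or a whole-model explain step.

```python
# Hypothetical sketch of a common explainer interface (the names are
# illustrative assumptions, not the toolkit's actual classes).
from abc import ABC, abstractmethod
from typing import Any

import numpy as np


class LocalExplainer(ABC):
    """Explains individual predictions of a fitted model."""

    @abstractmethod
    def fit(self, X: np.ndarray, model: Any) -> "LocalExplainer":
        """Learn whatever the explainer needs from data and the model."""

    @abstractmethod
    def explain_instance(self, x: np.ndarray) -> dict:
        """Return an explanation (e.g. feature attributions) for one sample."""


class GlobalExplainer(ABC):
    """Summarizes a model's overall behavior, e.g. as rules or prototypes."""

    @abstractmethod
    def explain(self, X: np.ndarray, model: Any) -> dict:
        """Return a global explanation of the model on dataset X."""
```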

"To provide explanations in our daily lives, we rely on a rich and expressive vocabulary: we use examples and counterexamples, create rules and prototypes, and highlight important characteristics that are present and absent," says the company in a blog post announcing the toolkit. "When interacting with algorithmic decisions, users will expect and demand the same level of expressiveness from AI."

When it comes to explaining decisions made by algorithms, says the company, there is no single approach that works best. While there are many ways to explain, the appropriate choice depends on the persona of the consumer and the requirements of the machine learning pipeline.

AI Explainability 360 was created with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more; a generic sketch of a post hoc local explanation appears after the list below. Given that there are so many different explanation options, says the company, it has created helpful resources in a single place:

  • an interactive experience that provides a gentle introduction through a credit scoring application;
  • several detailed tutorials to educate practitioners on how to inject explainability into other high-stakes applications such as clinical medicine, healthcare management, and human resources;
  • documentation that guides the practitioner on choosing an appropriate explanation method.
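
To make the "post hoc local explanation" category concrete, the following generic sketch (plain scikit-learn and NumPy, not code from the toolkit, with a stock dataset standing in for a credit-scoring one) attributes a single prediction to its input features by measuring how the model's predicted probability changes when each feature is replaced by its training mean.

```python
# Generic perturbation-based local explanation (illustrative only; the
# AI Explainability 360 toolkit ships its own, more sophisticated methods).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def local_attribution(model, X_train, x):
    """Score each feature by how much replacing it with the training mean
    changes the predicted probability for the positive class."""
    baseline = X_train.mean(axis=0)
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]          # "remove" feature j
        p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        scores[j] = p_orig - p_pert      # positive: feature pushed prediction up
    return scores

scores = local_attribution(model, X, X[0])
top = np.argsort(np.abs(scores))[::-1][:5]
print("Most influential features for sample 0:", top, scores[top])
```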

The company says it is open sourcing the toolkit to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. The initial release contains eight algorithms recently created by IBM Research, as well as metrics from the community that serve as quantitative proxies for the quality of explanations.
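
One common proxy of this kind is a faithfulness score: the correlation between the importance an explanation assigns to each feature and the drop in the model's confidence when that feature is actually perturbed away. The sketch below (again generic illustrative code, not the toolkit's own metric implementation) reuses the perturbation idea from the earlier example.

```python
# Faithfulness-style metric: do features the explanation calls important
# actually change the model's output when removed? (Illustrative sketch.)
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def faithfulness(model, X_train, x, attributions):
    """Pearson correlation between |attribution| and the probability drop
    caused by replacing each feature with its training mean."""
    baseline = X_train.mean(axis=0)
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    drops = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        drops[j] = p_orig - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return np.corrcoef(np.abs(attributions), drops)[0, 1]

# Use the model's own coefficients (scaled by the input) as a crude explanation.
attributions = model.coef_[0] * X[0]
print("Faithfulness of this explanation:", faithfulness(model, X, X[0], attributions))
```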

IBM Research
