
Now Red Hat Is Giving Away This For Free


Have you ever used a machine learning algorithm and been confused by its predictions? How did it make that decision? How do we ensure trust in these systems? To answer these questions, a team of researchers at Red Hat recently introduced a new library known as TrustyAI.

TrustyAI brings explainable artificial intelligence (XAI) solutions to the machine learning and decision services landscape, addressing the trustworthiness of these systems. The library helps increase trust in decision-making processes that depend on AI predictive models.

Why this research

Automating decisions is crucial for handling complex business processes that must respond to changes in business conditions and scenarios. The researchers stated, “The orchestration and automation of decision services is one of the key aspects in handling such business processes. Decision services can leverage different kinds of predictive models underneath, from rule-based systems to decision trees or machine learning-based approaches.”

“One important aspect is the trustworthiness of such decision services, especially when automated decisions might impact human lives. For this reason, it is important to be able to explain decision services,” the researchers added. This is why they created the XAI library, which leverages different explainability techniques to explain decision services and black-box AI systems.

Tech behind

TrustyAI Explainability Toolkit is an open-source XAI library that offers value-added services for business automation solutions. It combines machine learning models with decision logic, enriching automated decisions with predictive analytics. In particular, the toolkit leverages three explainability techniques for black-box AI systems:

  • LIME: Local Interpretable Model-agnostic Explanations (LIME) is one of the most widely used approaches for explaining a single prediction generated by a black-box model (a usage sketch follows this list).
  • SHAP: SHapley Additive exPlanations (SHAP) attributes a prediction to individual input features; its popular open-source implementation works with virtually any machine learning or deep learning model.
  • Counterfactual explanation: a counterfactual explanation identifies the smallest change to an input that would alter the model's prediction, an essential approach for providing transparency into the results of predictive models.
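
To make the first technique concrete, here is a minimal sketch of LIME on tabular data using the open-source Python `lime` package. Note that TrustyAI itself is a Java library; the dataset, model and parameter choices below are illustrative assumptions, not TrustyAI's own API.

```python
# Minimal LIME sketch with the open-source `lime` package (illustrative;
# not TrustyAI's implementation). Model and dataset are arbitrary choices.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black-box model, and fits a
# local linear surrogate whose weights serve as the explanation.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, weight) pairs
```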

The researchers investigated the techniques mentioned above, benchmarking both the LIME and counterfactual methods against existing implementations. For this purpose, they introduced three explainability algorithms: TrustyAI-LIME, TrustyAI-SHAP and the TrustyAI counterfactual search.
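
For context, the snippet below shows how SHAP's model-agnostic KernelExplainer is typically used via the open-source Python `shap` package; the background-data choice it requires is the aspect TrustyAI-SHAP sets out to improve. The model and sample sizes are illustrative assumptions, not the paper's setup.

```python
# Minimal SHAP sketch with the open-source `shap` package (illustrative;
# not TrustyAI-SHAP). KernelExplainer treats the model as a black box.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The background set anchors the baseline that attributions are measured
# against; selecting it well is the problem TrustyAI-SHAP targets.
background = shap.sample(data.data, 20, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Per-feature contributions for one instance (the return shape varies by
# shap version: a list of per-class arrays or a single 3-D array).
shap_values = explainer.shap_values(data.data[:1])
print(shap_values)
```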

Contributions of this research

The important contributions made by the researchers are:

  • TrustyAI Explainability Toolkit is the first comprehensive set of tools for explainable AI that works well in the decision service domain.
  • An extended approach for generating Local Interpretable Model-agnostic Explanations, built specifically for decision services.
  • A counterfactual explanation generation approach based on a constraint problem solver (a toy illustration follows this list).
  • An extended version of SHAP that enables background data identification and includes error bounds when generating confidence scores.
  • In terms of sparsity, TrustyAI fully satisfies the requirement of changing as few features as possible.
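
The sketch below illustrates the counterfactual idea with a toy random-perturbation search in Python. It is only a didactic stand-in: TrustyAI formulates counterfactual generation as a constraint problem and solves it with a constraint solver, which also enforces the sparsity property mentioned above. The function, dataset and parameters here are hypothetical.

```python
# Toy counterfactual search by random perturbation (didactic only; TrustyAI
# solves this as a constraint problem with a solver, not by sampling).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def counterfactual(x, desired_class, n_tries=5000, scale=1.0, seed=0):
    """Return a nearby input that the model assigns to `desired_class`."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_tries):
        candidate = x + rng.normal(0.0, scale, size=x.shape)
        if model.predict(candidate.reshape(1, -1))[0] == desired_class:
            # Keep the closest valid counterfactual; a real implementation
            # would also minimise the number of changed features (sparsity).
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

x = data.data[0]  # an instance the model predicts as class 0
print("counterfactual:", counterfactual(x, desired_class=1))
```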

Wrapping up

According to the researchers, local explanations generated with TrustyAI-LIME are more effective than those produced by the LIME reference implementation. TrustyAI-LIME does not require training data to accurately sample and encode perturbed instances, making it a better fit for the decision service scenario.

The planned extensions to SHAP within TrustyAI-SHAP have the potential to greatly improve diagnostic ability when designing explainers. TrustyAI-SHAP aims to enrich feature attributions with accuracy metrics and confidence intervals. Lastly, the TrustyAI counterfactual search performed well relative to the Alibi baseline, requiring significantly less time to retrieve a valid counterfactual.
