AIMultiple Research

Explainable AI (XAI) in 2024: Guide to enterprise-ready AI

Cem Dilmegani
Updated on Jan 11

According to PwC’s report on XAI, AI represents a $15.7 trillion opportunity by 2030. However, as AI tools become more advanced, more of their computations take place in a “black box” that humans can hardly comprehend. This lack of explainability fails to satisfy the need for transparency, trust, and a clear understanding of expected business outcomes. Explainability is key to enterprise adoption of AI because people do not easily trust a machine’s recommendations that they don’t thoroughly understand.

Explainable AI (XAI) aims to solve this black box problem. It explains how models draw specific conclusions and what the strengths and weaknesses of the algorithm are. XAI improves the interpretability of AI models and helps humans understand the reasons behind their decisions.

What is XAI?

XAI is the ability of algorithms to explain why they arrive at specific results. While AI-based algorithms help humans make better business decisions, humans may not always understand how the AI reached a conclusion. XAI aims to explain how these algorithms reach their conclusions and which factors affect them. Wikipedia defines XAI as:

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that human experts can understand the results of the solution. It contrasts with the concept of the “black box” in machine learning, where even their designers cannot explain why the AI arrived at a specific decision.

Why is it relevant now?

In short, XAI creates a transparent environment where users can understand and trust AI-made decisions.

Businesses use AI tools to improve their performance and make better decisions. However, besides benefiting from the output of these tools, understanding how they work is also vital. The lack of explainability prevents companies from running relevant “what-if” scenarios and creates trust issues, because they don’t understand how the AI reached a specific result.

Gartner states that the lack of explainability is not a new problem. However, as AI tools have grown more sophisticated to deliver better business results, the problem now draws more attention. These more complex AI tools operate in a “black box,” where it is hard to interpret the reasons behind their decisions.

While humans can explain simpler AI models like decision trees or logistic regression, more accurate models like neural networks or random forests are black-box models. The black box problem is one of the main challenges of machine learning: these algorithms arrive at specific decisions, but it is hard to interpret the reasons behind them.

XAI is relevant now because it explains black box AI models and helps humans perceive how they work. Besides the reasoning behind specific decisions, XAI can explain which conditions would lead to different conclusions and where the model’s strengths and weaknesses lie. As businesses better understand AI models and how their problems are being solved, XAI builds trust between enterprises and AI. As a result, this technology helps companies use AI models to their full potential.

How does it work?

Source: DARPA

Today’s AI technology delivers a decision or recommendation for businesses by leveraging different models. However, users can’t easily perceive how the results were achieved or why the model didn’t deliver a different result. Besides delivering accurate and specific results, XAI pairs an explainable model with an explanation interface to help users understand how the model works. We can categorize XAI approaches under two types:

Explainable models like Decision Trees or Naive Bayes

These models are simple and quick to implement. Their algorithms consist of calculations simple enough for humans to carry out themselves. Thus, these models are explainable, and humans can easily understand how they arrive at a specific decision. To observe the reasons behind a specific decision, users can quickly analyze the algorithm to find the effects of different inputs.
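To make this concrete, here is a minimal sketch (using scikit-learn purely as an illustration; the article does not prescribe any particular library) of how a shallow decision tree’s learned rules can be printed and traced by hand:

```python
# Minimal sketch: train a shallow decision tree and print its rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the tree as nested if/else rules, so the path behind
# any single prediction can be followed step by step.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Because the whole model is a small set of if/else rules, the explanation is the model itself; no separate interface is needed.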

Explaining black box models

Explaining more complex models like Artificial Neural Networks (ANNs) or random forests is more difficult. They are also referred to as black box models due to their complexity and the difficulty of understanding the relations between their inputs and predictions.
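As one hedged illustration of how such a black box can still be probed, the sketch below uses permutation importance, a generic model-agnostic technique chosen here for illustration rather than a method named by any vendor in this article. It shuffles each feature and measures how much the model’s score drops, hinting at which inputs drive the predictions:

```python
# Sketch: probe a black-box random forest with permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy.
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]
for idx in ranking[:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f}")
```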

Businesses use these models more widely because they outperform explainable models in most commercial applications. To explain them, XAI approaches build an explanation interface with data visualization and scenario analysis features, which makes the models easier for humans to understand. Features of such an interface include:

Visual analysis

Source: Google Cloud

XAI interfaces visualize the outputs of different data points to explain the relationships between specific features and the model’s predictions. In the example above, users can observe the X and Y values of different data points and read the corresponding absolute prediction error from the color code. In this specific image, the feature on the X axis appears to determine the outcome more strongly than the feature on the Y axis.
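A rough sketch of this kind of chart, assuming an arbitrary public dataset and two illustrative features rather than the exact data behind the image, might look like this:

```python
# Sketch: plot two features and color each point by absolute prediction error.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
abs_error = np.abs(model.predict(X_test) - y_test)

# Feature choice ("MedInc", "AveRooms") is an assumption for illustration.
plt.scatter(X_test["MedInc"], X_test["AveRooms"], c=abs_error, cmap="viridis", s=8)
plt.xlabel("MedInc (median income)")
plt.ylabel("AveRooms (average rooms)")
plt.colorbar(label="absolute prediction error")
plt.show()
```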

Scenario analysis

Source: Fiddler Labs

XAI analyzes each feature’s effect on the outcome. With this analysis, users can create new scenarios and understand how changing input values affects the output. In the example above, users can see the factors that positively or negatively affect the predicted loan risk.
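A minimal what-if sketch, using an entirely synthetic “loan” dataset whose column names and risk logic are assumptions for illustration (not Fiddler’s actual data or API), could look like this:

```python
# Sketch: sweep one input while holding the rest fixed and watch the prediction.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),        # hypothetical columns
    "debt_ratio": rng.uniform(0, 1, n),
    "credit_years": rng.integers(0, 30, n),
})
# Synthetic label: higher debt ratio and lower income raise default risk.
default = (df["debt_ratio"] * 2 - df["income"] / 100_000 + rng.normal(0, 0.3, n)) > 0.8

model = GradientBoostingClassifier(random_state=0).fit(df, default)

# What-if: take one applicant and vary only their debt ratio.
applicant = df.iloc[[0]].copy()
for ratio in (0.1, 0.3, 0.5, 0.7, 0.9):
    applicant["debt_ratio"] = ratio
    risk = model.predict_proba(applicant)[0, 1]
    print(f"debt_ratio={ratio:.1f} -> predicted default risk {risk:.2f}")
```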

What are its advantages?

Rather than improving performance, XAI aims to explain how specific decisions or recommendations are reached. It helps humans understand how and why AI behaves in certain ways and builds trust between humans and AI models. The main advantages of XAI are:

  • Improved explainability and transparency: Businesses can understand sophisticated AI models better and perceive why they behave in certain ways under specific conditions. Even if it is a black-box model, humans can use an explanation interface to understand how these AI models achieve certain conclusions.
  • Faster adoption: As businesses understand AI models better, they can trust them with more important decisions.
  • Improved debugging: When a system behaves unexpectedly, XAI helps developers identify and debug the issue.
  • Regulatory auditability: Explanations make it possible to audit models to meet regulatory requirements.

How does XAI serve AI ethics?

As AI becomes more integrated into our lives, the importance of AI ethics grows. However, the complexity of advanced AI models and their lack of transparency create doubts about them. Without understanding these models, humans can’t decide whether they are socially beneficial, trustworthy, safe, and fair. Thus, AI models need to follow specific ethical guidelines. Gartner groups AI ethics under five main components:

  • Explainable and transparent
  • Human-centric and socially beneficial
  • Fair
  • Secure and safe
  • Accountable

One of the primary purposes of XAI is to help AI models serve these five components. Humans need a deep understanding of AI models to judge whether they follow them, and they can’t trust a model whose workings they don’t know. By understanding how these models work, humans can decide whether AI models meet all five characteristics.

What are the companies providing XAI?

Most XAI vendors provide explanation interfaces to clarify complex AI models. Some example vendors include:

  • Google Cloud Platform: Google Cloud’s XAI platform scores each factor of your ML models to show how much it contributes to the final prediction. It can also manipulate data to create scenario analyses.
  • Flowcast: This API-based solution aims to unveil black box models by integrating with different company systems. Flowcast creates models that clarify the relationships between the input and output values of different models, relying on transfer learning and continuous improvement.
  • Fiddler Labs: This US startup provides users with different charts to explain AI models. These charts include similarity levels between different scenarios. By using available data, Fiddler analyzes the impact of each feature and creates different what-if scenarios.


