A Quick Guide to Model Explainability

Published: January 11, 2024 by Julie Lee

Model explainability has become a hot topic as lenders look for ways to use artificial intelligence (AI) to improve their decision-making. Within credit decisioning, machine learning (ML) models can often outperform traditional models at predicting credit risk.

ML models can also be helpful throughout the customer lifecycle, from marketing and fraud detection to collections optimization. However, without explainability, using ML models may result in unethical and illegal business practices.

What is model explainability? 

Broadly defined, model explainability is the ability to understand and explain a model’s outputs at either a high level (global explainability) or for a specific output (local explainability).1

  • Local vs global explanation: Global explanations attempt to explain the main factors that determine a model’s outputs, such as what causes a credit score to rise or fall. Local explanations attempt to explain specific outputs, such as what leads to a consumer’s credit score being 688. But it’s not an either-or decision — you may need to explain both.

Model explainability can also mean different things depending on who asks you to explain a model and how detailed an explanation they require. For example, a model developer may require a different explanation than a regulator.

Model explainability vs interpretability

Some people use model explainability and interpretability interchangeably. But when the two terms are distinguished, model interpretability may refer to how easily a person can understand and explain a model’s decisions.2 We might call a model interpretable if a person can clearly understand:

  • The features or inputs that the model uses to make a decision.
  • The relative importance of the features in determining the outputs.
  • What conditions can lead to specific outputs.

Both explainability and interpretability are important, especially for credit risk models used in credit underwriting. In what follows, however, we will use model explainability as an overarching term that encompasses both the explanation of a model’s outputs and the interpretability of its internal workings.

ML models highlight the need for explainability in finance

Lenders have used credit risk models for decades. Many of these models have a clear set of rules and limited inputs, and they might be described as self-explanatory. These include traditional linear and logistic regression models, scorecards and small decision trees.3
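To see why such models are often called self-explanatory, consider a points-based scorecard: every attribute maps to a fixed, visible number of points, so any score can be explained simply by listing the rules that fired. A minimal sketch (all thresholds and point values below are hypothetical, for illustration only):

```python
# Minimal sketch of a points-based credit scorecard.
# All thresholds and point values are hypothetical.

def scorecard(utilization: float, on_time_payments: int, file_age_years: float) -> int:
    """Each attribute contributes a visible, fixed number of points."""
    score = 300  # base points (hypothetical)

    # Lower revolving utilization earns more points.
    if utilization < 0.10:
        score += 120
    elif utilization < 0.30:
        score += 80
    elif utilization < 0.50:
        score += 40

    # Payment history: 5 points per on-time payment, capped at 24 payments.
    score += min(on_time_payments, 24) * 5

    # Length of credit history: 10 points per year, capped at 10 years.
    score += int(min(file_age_years, 10) * 10)
    return score

# The explanation for any score is just the list of rules that fired:
# 0.25 utilization -> +80, 24 on-time payments -> +120, 6 years -> +60.
print(scorecard(utilization=0.25, on_time_payments=24, file_age_years=6.0))  # 560
```

Because every contribution is additive and fixed, both global explanations (which attributes matter most) and local explanations (why this applicant scored 560) fall out directly.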

AI analytics solutions, such as ML-powered credit models, have been shown to better predict credit risk. And most financial institutions are increasing their budgets for advanced analytics solutions and see their implementation as a top priority.4 

However, ML models can be more complex than traditional models and they introduce the potential of a “black box.” In short, even if someone knows what goes into and comes out of the model, it’s difficult to explain what’s happening without an in-depth analysis.

Lenders now have to navigate a necessary trade-off. ML-powered models may be more predictive, but regulatory requirements and fair lending goals require lenders to use explainable models.

READ MORE: Explainability: ML and AI in credit decisioning

Why is model explainability required?

Model explainability is necessary for several reasons:

  • To comply with regulatory requirements: Decisions made using ML models need to comply with lending and credit-related regulations, including the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA). Lenders may also need to ensure their ML-driven models comply with newer AI-focused regulations, such as the AI Bill of Rights in the U.S. and the E.U. AI Act.
  • To improve long-term credit risk management: Model developers and risk managers may want to understand why decisions are being made so they can audit, manage and recalibrate models.
  • To avoid bias: Model explainability is important for ensuring that lenders aren’t discriminating against groups of consumers.
  • To build trust: Lenders also want to be able to explain to consumers why a decision was made, which is only possible if they understand how the model comes to its conclusions.

There’s a real potential for growth if you can create and deploy explainable ML models. In addition to offering a more predictive output, ML models can incorporate alternative credit data* (also known as expanded FCRA-regulated data) and score more consumers than traditional risk models. As a result, the explainable ML models could increase financial inclusion and allow you to expand your lending universe.

READ MORE: Raising the AI Bar

How can you implement ML model explainability?

Navigating the trade-off and worries about explainability can keep financial institutions from deploying ML models. As of early 2023, only 14 percent of banks and 19 percent of credit unions have deployed ML models. Over a third (35 percent) list explainability of machine learning models as one of the main barriers to adopting ML.5 

Although a cautious approach is understandable and advisable, there are various ways to tackle the explainability problem. One major differentiator is whether you build explainability into the model or try to explain it post hoc—after it’s trained.

Using post hoc explainability

Complex ML models are, by their nature, not self-explanatory. However, several post hoc explainability techniques are model agnostic (they don’t depend on the model being analyzed) and they don’t require model developers to add specific constraints during training.

Shapley Additive Explanations (SHAP) is one widely used approach. It can help you understand the average marginal contribution of each feature to an output, such as how much each input affected the resulting credit score.

The analysis can be time-consuming and expensive, but it works with black box models even if you only know the inputs and outputs. You can also use the Shapley values for local explanations, and then extrapolate the results for a global explanation.
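The averaging-over-coalitions idea behind Shapley values can be sketched exactly for a model with only a few features. The toy model, input, and baseline below are hypothetical; note that the explainer only ever calls the model, so its internals stay hidden, which is what makes the approach black-box compatible:

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a black-box model with few features.
# Missing features are represented by baseline values.

def model(x):
    # Hypothetical "black box" with an interaction between the first two features.
    return 3 * x[0] + 2 * x[1] + x[0] * x[1] + 0.5 * x[2]

def shapley_values(model, x, baseline):
    """Average each feature's marginal contribution over all coalitions."""
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for coalitions of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi += w * (model(with_i) - model(without_i))
        values.append(phi)
    return values

x = [1.0, 2.0, 4.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)

# Local accuracy: the Shapley values sum to model(x) - model(baseline).
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
print(phi)
```

The cost of this exact computation grows exponentially with the number of features, which is why production SHAP implementations rely on sampling or model-specific shortcuts; that cost is the "time-consuming and expensive" part noted above.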

Other post hoc approaches also might help shine a light into a black box model, including partial dependence plots and local interpretable model-agnostic explanations (LIME).
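A partial dependence plot is model-agnostic in the same way: you sweep one feature across a grid of values while holding the rest of each record fixed, then average the model's predictions over your data. A minimal sketch, with a hypothetical model and dataset:

```python
# Minimal partial dependence sketch (model-agnostic).
# The model and dataset below are hypothetical.

def model(x):
    return 2 * x[0] - x[1]

data = [[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]]

def partial_dependence(model, data, feature_idx, grid):
    """Average prediction over the dataset as one feature sweeps a grid."""
    curve = []
    for v in grid:
        preds = []
        for row in data:
            row = list(row)
            row[feature_idx] = v  # force the feature to the grid value
            preds.append(model(row))
        curve.append(sum(preds) / len(preds))
    return curve

# The resulting curve shows how the average prediction responds to feature 0.
print(partial_dependence(model, data, 0, [0.0, 1.0, 2.0]))
```

Plotting the returned curve against the grid gives the partial dependence plot; a flat curve suggests the feature has little average effect on the model's output.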

READ MORE: Getting AI-driven decisioning right in financial services 

Build explainability into model development

Post hoc explainability techniques have limitations and might not be sufficient to address some regulators’ explainability and transparency concerns.6 Alternatively, you can try to build explainability into your models. Although you might give up some predictive power, the approach can be a safer option. 

For instance, you can identify features that could potentially lead to biased outcomes and limit their influence on the model. You can also compare the explainability of various ML-based models to see which may be more or less inherently explainable. For example, gradient boosting machines (GBMs) may be preferable to neural networks for this reason.7
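One concrete way to build explainability in is to enforce, and then verify, constraints such as monotonicity, for example requiring that a score never falls as on-time payments rise (several GBM libraries accept monotone constraints at training time). A minimal verification sketch, with a hypothetical one-feature model:

```python
# Sketch of a built-in explainability check: verify that a model's output
# is monotone in a feature. The model below is hypothetical.

def score(on_time_payments: int) -> float:
    # Hypothetical model restricted to one feature for the check.
    return 300 + 5 * min(on_time_payments, 24)

def is_monotone_increasing(f, grid):
    """True if f never decreases as the input increases along the grid."""
    outputs = [f(v) for v in grid]
    return all(a <= b for a, b in zip(outputs, outputs[1:]))

print(is_monotone_increasing(score, range(0, 36)))  # True: the score never drops
```

A check like this makes the model's behavior easy to state in plain language ("more on-time payments never lowers the score"), which is exactly the kind of guarantee regulators and consumers can understand.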

You can also use ML to blend traditional and alternative credit data, which may provide a significant lift — around 60 to 70 percent compared to traditional scorecards — while maintaining explainability.8

READ MORE: Journey of an ML Model 

How Experian can help

As a leader in machine learning and analytics, Experian partners with financial institutions to create, test, validate, deploy and monitor ML-driven models. Learn how you can build explainable ML-powered models using credit bureau, alternative credit, third-party and proprietary data. And monitor all your ML models with a web-based platform that helps you track performance, detect drift and prepare for compliance and audit requests.

*When we refer to “Alternative Credit Data,” this refers to the use of alternative data and its appropriate use in consumer credit lending decisions, as regulated by the Fair Credit Reporting Act. Hence, the term “Expanded FCRA Data” may also apply and can be used interchangeably.

1-3. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

4. Experian (2022). Explainability: ML and AI in credit decisioning

5. Experian (2023). Finding the Lending Diamonds in the Rough

6. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

7. Experian (2022). Explainability: ML and AI in credit decisioning

8. Experian (2023). Raising the AI Bar
