A Quick Guide to Model Explainability

Published: January 11, 2024 by Julie Lee

Model explainability has become a hot topic as lenders look for ways to use artificial intelligence (AI) to improve their decision-making. Within credit decisioning, machine learning (ML) models can often outperform traditional models at predicting credit risk.

ML models can also be helpful throughout the customer lifecycle, from marketing and fraud detection to collections optimization. However, without explainability, using ML models may result in unethical and illegal business practices.

What is model explainability? 

Broadly defined, model explainability is the ability to understand and explain a model’s outputs at either a high level (global explainability) or for a specific output (local explainability).1

  • Local vs global explanation: Global explanations attempt to explain the main factors that determine a model’s outputs, such as what causes a credit score to rise or fall. Local explanations attempt to explain specific outputs, such as what leads to a consumer’s credit score being 688. But it’s not an either-or decision — you may need to explain both.

Model explainability can also have varying definitions depending on who asks you to explain a model and how detailed of a definition they require. For example, a model developer may require a different explanation than a regulator.

Model explainability vs interpretability

Some people use model explainability and interpretability interchangeably. But when the two terms are distinguished, model interpretability may refer to how easily a person can understand and explain a model’s decisions.2 We might call a model interpretable if a person can clearly understand:

  • The features or inputs that the model uses to make a decision.
  • The relative importance of the features in determining the outputs.
  • What conditions can lead to specific outputs.

Both explainability and interpretability are important, especially for credit risk models used in credit underwriting. In the rest of this article, however, we will use model explainability as an overarching term that encompasses both the explanation of a model’s outputs and the interpretability of its internal workings.

ML models highlight the need for explainability in finance

Lenders have used credit risk models for decades. Many of these models have a clear set of rules and limited inputs, and they might be described as self-explanatory. These include traditional linear and logistic regression models, scorecards and small decision trees.3
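To see why such models are often called self-explanatory, note that a logistic regression’s behavior can be read directly from its fitted coefficients. Below is a minimal sketch; the two-feature dataset and feature names are purely illustrative, not a real credit model:

```python
# Minimal sketch: logistic regression is "self-explanatory" because its
# coefficients directly describe how each input moves the predicted odds.
# The dataset and feature names are illustrative, not a real credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds of approval per unit change
# in the corresponding feature, so the model explains itself.
for name, coef in zip(["utilization", "payment_history"], model.coef_[0]):
    print(f"{name}: {coef:+.2f} log-odds per unit")
```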

AI analytics solutions, such as ML-powered credit models, have been shown to better predict credit risk. And most financial institutions are increasing their budgets for advanced analytics solutions and see their implementation as a top priority.4 

However, ML models can be more complex than traditional models, and they introduce the potential of a “black box.” In short, even if someone knows what goes into and comes out of the model, it’s difficult to explain what’s happening inside without an in-depth analysis.

Lenders now have to navigate a difficult trade-off: ML-powered models may be more predictive, but regulatory requirements and fair lending goals require lenders to use explainable models.

READ MORE: Explainability: ML and AI in credit decisioning

Why is model explainability required?

Model explainability is necessary for several reasons:

  • To comply with regulatory requirements: Decisions made using ML models need to comply with lending and credit-related regulations, including the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA). Lenders may also need to ensure their ML-driven models comply with newer AI-focused regulations, such as the AI Bill of Rights in the U.S. and the E.U. AI Act.
  • To improve long-term credit risk management: Model developers and risk managers may want to understand why decisions are being made so they can audit, manage and recalibrate models.
  • To avoid bias: Model explainability is important for ensuring that lenders aren’t discriminating against groups of consumers.
  • To build trust: Lenders also want to be able to explain to consumers why a decision was made, which is only possible if they understand how the model comes to its conclusions.

There’s real potential for growth if you can create and deploy explainable ML models. In addition to offering more predictive outputs, ML models can incorporate alternative credit data* (also known as expanded FCRA-regulated data) and score more consumers than traditional risk models. As a result, explainable ML models could increase financial inclusion and allow you to expand your lending universe.

READ MORE: Raising the AI Bar

How can you implement ML model explainability?

Navigating this trade-off, along with worries about explainability, can keep financial institutions from deploying ML models. As of early 2023, only 14 percent of banks and 19 percent of credit unions had deployed ML models, and over a third (35 percent) list explainability of machine learning models as one of the main barriers to adopting ML.5

Although a cautious approach is understandable and advisable, there are various ways to tackle the explainability problem. One major differentiator is whether you build explainability into the model or try to explain it post hoc—after it’s trained.

Using post hoc explainability

Complex ML models are, by their nature, not self-explanatory. However, several post hoc explainability techniques are model agnostic (they don’t depend on the model being analyzed) and they don’t require model developers to add specific constraints during training.

Shapley Additive Explanations (SHAP) is one widely used approach. It estimates the average marginal contribution of each feature (input) to an output, such as how much each feature affected the resulting credit score.

The analysis can be time-consuming and expensive, but it works with black box models even if you only know the inputs and outputs. You can also use Shapley values for local explanations and then aggregate the results into a global explanation, as the sketch below illustrates.
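As a sketch of how this might look in practice, the open-source shap package exposes Shapley-value explainers for common model types. The model, data and feature count below are illustrative assumptions, not a prescribed method:

```python
# Sketch: local SHAP explanations, aggregated for a global view.
# Assumes the open-source `shap` package; model and data are illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of feature contributions per applicant

# Local explanation: the contributions behind a single applicant's output.
print("Applicant 0 contributions:", shap_values[0])

# Global explanation: mean absolute contribution of each feature.
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```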

Other post hoc approaches, including partial dependence plots and local interpretable model-agnostic explanations (LIME), can also help shine a light into a black box model.
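For instance, scikit-learn ships a partial dependence utility. The following is a minimal sketch with illustrative data rather than a production credit model:

```python
# Sketch: partial dependence plots with scikit-learn; data are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Plot the model's average response as each selected feature varies,
# giving a rough global picture of that feature's effect on predictions.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
plt.show()
```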

READ MORE: Getting AI-driven decisioning right in financial services 

Build explainability into model development

Post hoc explainability techniques have limitations and might not be sufficient to address some regulators’ explainability and transparency concerns.6 Alternatively, you can try to build explainability into your models. Although you might give up some predictive power, the approach can be a safer option. 

For instance, you can identify features that could potentially lead to biased outcomes and limit their influence on the model. You can also compare the explainability of various ML-based models to see which may be more or less inherently explainable. For example, gradient boosting machines (GBMs) may be preferable to neural networks for this reason.7
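One concrete way to limit a feature’s influence is a monotonic constraint, which forces the feature’s effect to move in only one direction. The sketch below uses scikit-learn’s gradient boosting support for this; the data, feature roles and constraint choices are illustrative assumptions, not the only way to build explainability in:

```python
# Sketch: building explainability in with monotonic constraints.
# Each monotonic_cst entry forces a feature's effect on the prediction to be
# increasing (+1), decreasing (-1) or unconstrained (0). Data are illustrative.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Approval likelihood must rise with feature 0, fall with feature 1;
# feature 2 is left unconstrained.
model = HistGradientBoostingClassifier(monotonic_cst=[1, -1, 0]).fit(X, y)
print(f"Training accuracy: {model.score(X, y):.2f}")
```

Because the constrained model can only respond to each feature in a known direction, its behavior is easier to explain and defend.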

You can also use ML to blend traditional and alternative credit data, which may provide a significant lift — around 60 to 70 percent compared to traditional scorecards — while maintaining explainability.8

READ MORE: Journey of an ML Model

How Experian can help

As a leader in machine learning and analytics, Experian partners with financial institutions to create, test, validate, deploy and monitor ML-driven models. Learn how you can build explainable ML-powered models using credit bureau, alternative credit, third-party and proprietary data. And monitor all your ML models with a web-based platform that helps you track performance, detect drift and prepare for compliance and audit requests.

*When we refer to “Alternative Credit Data,” this refers to the use of alternative data and its appropriate use in consumer credit lending decisions, as regulated by the Fair Credit Reporting Act. Hence, the term “Expanded FCRA Data” may also apply and can be used interchangeably.

1-3. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

4. Experian (2022). Explainability: ML and AI in credit decisioning

5. Experian (2023). Finding the Lending Diamonds in the Rough

6. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

7. Experian (2022). Explainability: ML and AI in credit decisioning

8. Experian (2023). Raising the AI Bar
