
Explainable and Ethical AI: The Case of Fair Lending

Published: March 5, 2020 by Kelly Nguyen

Last week, artificial intelligence (AI) made waves in the news as the Vatican and tech giants signed a statement with a set of guidelines calling for ethical AI. These concerns have arisen as the use of artificial intelligence continues to increase across industries – with the market for AI technology projected to reach $190.61 billion by 2025, according to a report from MarketsandMarkets™.

In the “Rome Call for AI Ethics,” these new principles require that AI systems adhere to ethical AI guidelines to protect basic human rights. The doctrine says AI must be developed with a focus on protecting and serving humanity, and that all algorithms should be designed according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security and privacy. In addition, according to the document, organizations must consider the “duty of explanation” and ensure that decisions made as a result of these algorithms are explainable, transparent and fair.

As artificial intelligence becomes increasingly used in many applications and ingrained into our everyday lives (facial recognition, lending decisions, virtual assistants, etc.), establishing new guidelines for ethical AI and its usage has become more critical than ever.

For lenders and financial institutions, AI is poised to shape the future of banking and credit cards. AI is now being used to generate credit insights, reduce risk and make credit more widely available to creditworthy consumers.

However, one of the challenges of AI is that these algorithms often can’t explain their reasoning or processes. That’s why AI explainability, or the methods and techniques in AI that make the results of the solution understandable by human experts, remains a large barrier for many institutions when it comes to AI adoption.

The concept of ethical AI goes hand-in-hand with Regulation B of the Equal Credit Opportunity Act (ECOA), which protects consumers from discrimination in any aspect of a credit transaction and requires that consumers receive clear explanations when lenders take adverse action. Adverse action letters, which are intended to inform consumers of why their credit applications were denied, must be transparent and include the reasons the decision was made – in order to promote fair lending.
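To make the idea concrete, here is a minimal sketch of one common explainability technique for a linear credit-scoring model: each feature's contribution to the score is its model weight times the applicant's deviation from a baseline value, and the most negative contributions become candidate adverse action reasons. All feature names, weights and numbers below are hypothetical illustrations, not any lender's actual model.

```python
# Hypothetical linear credit model: score contribution of each feature is
# weight * (applicant value - baseline value). The features that pull the
# score down the most become candidate adverse action reason codes.

WEIGHTS = {
    "utilization_ratio": -2.0,     # higher utilization lowers the score
    "years_of_history": 0.5,       # longer history raises the score
    "recent_delinquencies": -1.5,  # delinquencies lower the score
}

# Baseline (e.g., population averages) the applicant is compared against.
BASELINE = {
    "utilization_ratio": 0.30,
    "years_of_history": 8.0,
    "recent_delinquencies": 0.0,
}

def reason_codes(applicant, top_n=2):
    """Return the top_n features that pushed the score down the most."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # Keep only score-lowering features, most negative first.
    negatives = sorted(
        (f for f, c in contributions.items() if c < 0),
        key=lambda f: contributions[f],
    )
    return negatives[:top_n]

applicant = {
    "utilization_ratio": 0.85,
    "years_of_history": 2.0,
    "recent_delinquencies": 1.0,
}
print(reason_codes(applicant))
# → ['years_of_history', 'recent_delinquencies']
```

For this applicant, the short credit history contributes -3.0 to the score and the recent delinquency -1.5, so those two surface as the top reasons. Real explainability methods (such as Shapley-value attributions) generalize this idea to nonlinear models, where per-feature contributions are no longer a simple weight-times-deviation product.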

While ethical AI has made recent headlines, it’s not a new concept. Last week’s news highlights the need for explainability best practices for financial institutions as well as other organizations and industries. The time is now to implement these guidelines into algorithms and business processes of the present and future.

Join our upcoming webinar as Experian experts dive into fair lending with ethical and explainable AI.

Register now
