Credit Lending

By: Wendy Greenawalt

This blog kicks off a three-part series exploring some common myths regarding credit attributes. Because Experian has relationships with thousands of organizations spanning multiple industries, we often hear the same types of questions from clients of all sizes. One of the most frequent is this: “We already have credit attributes in place, so there is little to no benefit in implementing a new attribute set.” Our response is that while existing credit attributes may continue to be predictive, changes in the type of data available from the credit bureaus can provide real benefit when evaluating consumer behavior.

To illustrate this point, consider a problem most lenders face today: collections. Delinquency and charge-off rates continue to increase, and many organizations have difficulty determining the appropriate action to take on an account because consumer credit behavior has changed drastically. New codes and fields are now reported to the credit bureaus and can be used effectively to improve collection-related activities. Specifically, attributes can now be built to identify consumers who are rebounding from previous account delinquencies. Lenders can also evaluate the number and outstanding balances of collection trades and other trade types, while considering the percentage of accounts that are delinquent and the specific account types affected.

Using this type of data lets an organization make collection decisions based on very granular account-level information, while accounting for new consumer trends such as strategic defaulters. Understanding all of these consumer variables enables an organization to decide whether an account should be allowed to self-cure, whether immediate action should be taken, or whether the account terms should be modified.
Incorporating new data sources and updating attributes on a regular basis allows lenders to react to market trends quickly by proactively managing strategies.  
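To make the attribute discussion concrete, here is a minimal sketch of the kind of collection-focused attributes described above: collection trade counts and balances, the share of trades currently delinquent, and a “rebounding” flag. The record layout, field names and thresholds are illustrative assumptions, not an Experian attribute specification.

```python
def collection_attributes(trades, cure_months=3, lookback=12):
    """Summarize collection-related behavior across one consumer's trades.

    Each trade is a dict with a "type", a "balance", and a "status_history"
    list of days-past-due per month, where index 0 is the most recent month.
    """
    collections = [t for t in trades if t["type"] == "collection"]
    delinquent_now = [t for t in trades if t["status_history"][0] >= 30]

    # "Rebounding": delinquent within the lookback window, but current on
    # every trade for the most recent cure_months reporting periods.
    recent = [s for t in trades for s in t["status_history"][:cure_months]]
    past = [s for t in trades for s in t["status_history"][cure_months:lookback]]

    return {
        "num_collection_trades": len(collections),
        "collection_balance": sum(t["balance"] for t in collections),
        "pct_trades_delinquent": len(delinquent_now) / len(trades) if trades else 0.0,
        "is_rebounding": all(s == 0 for s in recent) and any(s >= 30 for s in past),
    }
```

A consumer flagged as rebounding might be a self-cure candidate, while a high collection balance with rising delinquency would argue for immediate action.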

Published: October 20, 2009 by Guest Contributor

When reviewing offers for prospective clients, lenders often face a significant amount of missing information in assessing the outcomes of their lending decisions: Why did a consumer accept an offer with a competitor? What differentiated the competing offers from mine? What happened to the consumers we declined? Did they perform as expected, or better than anticipated? While lenders can readily understand the implications of the loans they offered and booked, they often know little about two important groups of consumers:

1. Lost leads: consumers to whom they made an offer that did not book.
2. Proxy performance: consumers to whom financing was not offered, but who found financing elsewhere.

Performing a lost lead analysis on approved and declined applications can provide considerable insight into the outcomes and credit performance of consumers who were not added to the lender’s portfolio. It can also help answer key questions for each of these groups: How many of these consumers accepted credit elsewhere? What were their credit attributes? Were these loans booked by one of my peers or by another type of lender? What were the terms and conditions of those offers, and how did the loans perform?

Within each group, further analysis can give lenders actionable feedback on the implications of their lending policies and identify opportunities for changes that better fulfill lending objectives: Are competitors offering longer repayment terms? Are peers offering lower interest rates to the same consumers, or accepting lower-scoring consumers to increase market share?

The results of a lost lead analysis can either confirm that the competitive marketplace is behaving in a manner that matches the lender’s perspective, or shine a light on aspects of the market where policy changes may lead to superior results. In both cases, the information is invaluable in making the best decisions in today’s highly sensitive lending environment.
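As a rough illustration, a lost lead analysis might begin with a summary like the one sketched below, comparing booked loans against lost leads matched to loans booked elsewhere. The record fields (APR, term, a booked-elsewhere flag) are hypothetical stand-ins for the bureau-matched data such an analysis would actually use.

```python
def lost_lead_summary(offers):
    """Compare a lender's booked loans against lost leads booked elsewhere.

    Each record: {"decision": "booked" | "lost", "booked_elsewhere": bool,
    "apr": float | None, "term_months": int | None}. Competitor APRs and
    terms would come from matching lost leads to bureau tradelines.
    """
    booked = [o for o in offers if o["decision"] == "booked"]
    lost = [o for o in offers if o["decision"] == "lost"]
    lost_elsewhere = [o for o in lost if o["booked_elsewhere"]]

    def avg(rows, key):
        return sum(r[key] for r in rows) / len(rows) if rows else None

    return {
        "capture_rate": len(booked) / len(offers) if offers else 0.0,
        "lost_booked_elsewhere_rate": len(lost_elsewhere) / len(lost) if lost else 0.0,
        "own_avg_apr": avg(booked, "apr"),
        "competitor_avg_apr": avg(lost_elsewhere, "apr"),
        "own_avg_term": avg(booked, "term_months"),
        "competitor_avg_term": avg(lost_elsewhere, "term_months"),
    }
```

Comparing `own_avg_apr` against `competitor_avg_apr`, or the two term averages, directly answers the pricing and repayment-term questions raised above.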

Published: October 11, 2009 by Kelly Kent

By: Kristan Keelan

What do you think of when you hear the word “fraud”? Someone stealing your personal identity? Perhaps the recent news story of the five individuals indicted for gaining more than $4 million from 95,000 stolen credit card numbers? It’s unlikely that small business fraud was at the top of your mind. Yet, just like consumers, businesses face a broad range of first- and third-party fraud behaviors, varying significantly in frequency, severity and complexity. These business-related fraud trends call for new fraud best practices to minimize losses.

First, let’s look at first-party fraud. A first-party, or victimless, fraud profile is characterized by some form of material misrepresentation by the business owner (for example, misstating revenue figures on the application) without the intent or immediate capacity to repay the loan. Historically, this type of fraud becomes more common during periods of economic downturn or misfortune. That makes intuitive sense: individuals under extreme financial pressure are more likely to resort to desperate measures, such as misstating financial information on an application to obtain credit.

Third-party commercial fraud occurs when a third party steals the identification details of a known business or business owner in order to open credit in the business victim’s name. With creditors becoming more stringent in their credit-granting policies on new accounts, we’re seeing seasoned fraudsters shift their focus to taking over existing business or business owner identities.

Overall, fraudsters seem to be migrating from consumer to commercial fraud. I think one of the most common reasons is that commercial fraud doesn’t receive the same amount of attention as consumer fraud, so it has become easier for fraudsters to slip under the radar by perpetrating their crimes through the commercial channel. Keep in mind, too, that businesses are often not seen as victims in the same way consumers are. For example, victimized businesses aren’t afforded the protections that consumers receive under identity theft laws, such as access to credit information. These factors, coupled with the fact that business-to-business fraud is approximately three to ten times more “profitable” per occurrence than consumer fraud, are leading fraudsters increasingly toward commercial fraud.

Published: September 24, 2009 by Guest Contributor

In a recent article, www.CNNMoney.com reported that Federal Reserve Chairman Ben Bernanke said the pace of recovery in 2010 would be moderate, and added that the unemployment rate would come down quite slowly due to headwinds from ongoing credit problems and the effort by families to reduce household debt. While some media outlets promote an optimistic economic viewpoint, clearly there are signs that significant challenges lie ahead for lenders. As Bernanke forecasts, many of the issues that have plagued credit markets will persist in the coming years, so lenders need to be equipped to monitor these continued credit problems if they wish to survive this protracted period of distress.

While banks and financial institutions are implementing increasingly sophisticated and thorough processes to monitor fluctuations in credit trends, they have little intelligence with which to compare their credit performance to that of their peers. Lenders frequently cite concern about their lack of awareness of the credit performance and status of their peers. Market intelligence solutions are important for risk management, loan portfolio monitoring and related decisioning strategies. Currently, many vendors offer data on industry-wide trends, but few provide the information a lender needs to understand its position relative to a well-defined group of firms it considers its peers. As a result, too many lenders are benchmarking against data sources that are biased, incomplete, inaccurate, or that lack the detail necessary to derive meaningful conclusions.

If you were going to measure yourself against a group to understand your comparative performance, why would you compare yourself against people who had little or nothing in common with you? Does an elite runner measure himself against a weekend warrior to gauge his performance?
No; he segments the runners by gender, age, and performance class to understand exactly how he stacks up. Today’s lending environment is not forgiving enough for lenders to make broad industry comparisons if they want to ensure long-term success. Lenders cannot presume they are leading the pack, when, in fact, the race is closer than ever.  

Published: September 24, 2009 by Kelly Kent

Vintage analysis, and specifically vintage pools, presents numerous opportunities for any firm seeking to further understand the risks within specific portfolios. While most lenders have relatively strong reporting and metrics at hand for monitoring their own loan portfolios and understanding their specific performance characteristics, the ability to observe trends and benchmark against similar industry characteristics can enhance their insights significantly.

Assuming a lender possesses the vintage data and analysis capability necessary to benchmark its portfolio, the next step is defining the specific metrics upon which comparisons will be made. As mentioned in a previous posting, three aspects of vintage performance are often used to define these points of comparison:

• Vintage delinquency, including charge-off curves, which allows for an understanding of the repayment trends within each pool. Standard delinquency measures (such as 30+ days past due (DPD), 60+ DPD, 90+ DPD and charge-off rates) capture early- and late-stage delinquency in each pool.
• Payoff trends, which reflect the pace at which pools are being repaid. While planning for losses through delinquency benchmarking is a critical aspect of this process, so too is understanding prepayment tendencies and trends. Prepayment can significantly impact cash-flow modeling and can add insight to interest income estimates and loan duration calculations.

As part of the Experian-Oliver Wyman Market Intelligence Reports, these metrics are delivered each quarter and provide a consistent, static-pool base upon which vintage benchmarks can be conducted. Clearly, this is a simplified perspective on what can be a very detailed analysis exercise. A properly conducted vintage analysis also needs to consider aspects such as the lender’s portfolio mix at origination, its portfolio footprint at origination, and its payoff trends and how they differ from the benchmarked industry data, in order to properly balance the benchmark against the lender’s portfolio.
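A minimal sketch of the vintage delinquency curves described above: for each origination cohort, the share of loans at or beyond a delinquency threshold at each month on book. The loan record layout here is an assumption for illustration, not a specification of the Experian-Oliver Wyman data.

```python
def vintage_curves(loans, dpd_threshold=30):
    """Build vintage delinquency curves by origination cohort.

    Each loan: {"vintage": "2008Q1", "dpd_by_month": [0, 0, 30, ...]},
    where dpd_by_month[m] is days past due at month m on book.
    Returns {vintage: [rate at month 0, rate at month 1, ...]}.
    """
    cohorts = {}
    for loan in loans:
        cohorts.setdefault(loan["vintage"], []).append(loan)

    curves = {}
    for vintage, cohort in sorted(cohorts.items()):
        horizon = max(len(rec["dpd_by_month"]) for rec in cohort)
        curve = []
        for m in range(horizon):
            # Younger loans in the cohort may not be observed at month m yet.
            observed = [rec for rec in cohort if len(rec["dpd_by_month"]) > m]
            bad = sum(1 for rec in observed if rec["dpd_by_month"][m] >= dpd_threshold)
            curve.append(bad / len(observed))
        curves[vintage] = curve
    return curves
```

Plotting each cohort’s curve on a common month-on-book axis is what allows a lender to line its vintages up against an industry benchmark pool.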

Published: September 4, 2009 by Kelly Kent

By: Heather Grover

I’m often asked in various industry forums to give talks about, or opinions on, the latest fraud trends and fraud best practices. Let’s face it: fraudsters are students of their craft, and they continue to study the latest defenses and adapt to the controls that may be in place. You may be surprised, then, to learn that our clients’ top-of-mind issues are not only how to fight the latest fraud trends, but how to do so while maximizing automation, managing operational costs and preserving the customer experience -- all while meeting compliance requirements.

Many times, clients view these as separate goals that do not affect one another. In my opinion, not only can they be accomplished simultaneously, they can be considered causal. Let me explain. When fraud detection is treated as a goal of its own, automation is not considered as a potential way to improve it. By applying analytics, or basic fraud risk scores, clients can incorporate many different potential risk factors into a single calculation without combing through various data elements and reports. That calculation, or score, can predict multiple fraud types and risks with less effort than a human manually and subjectively reviewing specific results. With an analytic score, good customers can be positively verified in an automated fashion, while only those with the riskiest attributes are routed for manual review. This reserves expensive human resources and expertise for the riskiest consumers.

Compliance requirements can also mandate specific procedures, resulting in arduous manual review processes. Many requirements (Patriot Act, Red Flag, eSignature) mandate verification of identity through match results. Automated decisioning based on these results (or an analytic score) can automate this process, in turn reducing operational expense.

While the above may seem an oversimplification, I encourage you to consider how well you are addressing fraud risk management. How are you managing automation, operational costs and compliance while addressing fraud?
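As a toy illustration of score-based routing, the sketch below combines a few binary fraud signals into a single score and routes each application to auto-approval, manual review or decline. The signal names, weights and cutoffs are invented for the example; a production fraud score would be statistically derived, not hand-weighted.

```python
def route_application(signals, weights=None, review_cutoff=40, decline_cutoff=70):
    """Combine fraud risk signals into one score and route the application.

    signals: dict of boolean flags. Weights/cutoffs are illustrative only.
    """
    weights = weights or {
        "address_mismatch": 25,     # application address disagrees with file
        "ssn_recently_issued": 20,  # identity element looks synthetic
        "phone_invalid": 15,        # phone fails verification
        "velocity_flag": 30,        # many recent applications on this identity
    }
    score = sum(w for name, w in weights.items() if signals.get(name))

    if score >= decline_cutoff:
        return score, "decline"
    if score >= review_cutoff:
        return score, "manual_review"   # analysts see only the riskiest cases
    return score, "auto_approve"        # positively verified, fully automated
```

The point of the sketch is the operational one made above: most applicants fall below the review cutoff and never consume analyst time.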

Published: August 30, 2009 by Guest Contributor

By: Kari Michel

Bankruptcies continue to rise and are expected to exceed 1.4 million by the end of this year, according to American Bankruptcy Institute Executive Director Samuel J. Gerdano. Although the overall bankruptcy rate for a lender’s portfolio is small (about 1 percent), bankruptcies result in high-dollar losses for lenders. Bankruptcy losses as a percentage of total dollar losses are estimated to range from 45 percent for bankcard portfolios to 82 percent for credit unions. Additionally, collection activity is restricted by bankruptcy legislation. As a result, many lenders are using a bankruptcy score in conjunction with their new-applicant risk score to make better acquisition decisions. This dual score strategy is key to managing risk and the cost of credit.

Traditional risk scores are designed to predict risk (typically defined as 90 or more days past due). Although bankruptcies fall within this definition, their actual count is relatively small, which makes the characteristics typical of a bankruptcy harder to distinguish. In addition, a consumer who files bankruptcy was often in good standing beforehand and does not necessarily look like a typical risky consumer. By modeling bankrupt consumers separately, you can more accurately identify characteristics specific to bankruptcy. This matters because, as noted above, bankruptcies account for a significant portion of losses.

Bankruptcy scores provide added value when used with a risk score. A matrix approach is used to evaluate both scores and determine effective cutoff strategies. Evaluating applicants with both a risk score and a bankruptcy score can identify more potentially profitable applicants and more high-risk accounts.
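The matrix approach can be sketched as a simple lookup keyed on the two score bands. The bands, cutoffs and cell decisions below are illustrative assumptions (with higher scores meaning lower risk for both scores); real cutoff strategies are derived from portfolio analysis.

```python
def dual_score_decision(risk_score, bankruptcy_score):
    """Dual score (matrix) strategy: the decision depends on the combination
    of a traditional risk score and a bankruptcy score, not either alone.
    All bands and cutoffs here are invented for illustration."""
    risk_band = "high" if risk_score < 620 else "mid" if risk_score < 700 else "low"
    bk_band = "high" if bankruptcy_score < 300 else "low"

    matrix = {
        ("low", "low"):   "approve",
        ("low", "high"):  "review",   # good risk score, but elevated bankruptcy risk
        ("mid", "low"):   "approve",
        ("mid", "high"):  "decline",
        ("high", "low"):  "review",   # risky, but unlikely to file bankruptcy
        ("high", "high"): "decline",
    }
    return matrix[(risk_band, bk_band)]
```

The off-diagonal cells are where the strategy adds value: applicants a single score would approve or decline outright get a different decision once the second dimension is considered.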

Published: August 28, 2009 by Guest Contributor

By: Wendy Greenawalt

In my last blog post I discussed the value of leveraging optimization within your collections strategy. Next, I would like to discuss in detail the use of optimized decisions within the account management of an existing portfolio. Account management decisions range from determining which consumers to target with cross-sell or up-sell campaigns to line management decisions, where an organization is considering credit line increases or decreases.

Let’s first look at lines of credit and decisions related to credit line management. Uncollectible debt, delinquencies and charge-offs continue to rise across all line-of-credit products. In response, credit card and home equity lenders have begun aggressively reducing outstanding lines of credit. One analyst predicts that the credit card industry will reduce credit limits by $2 trillion by 2010; if that materializes, it would represent a 45 percent reduction in the credit currently available to consumers. This estimate illustrates the immediate reaction many lenders have taken to minimize loss exposure. However, lenders should also consider the long-term impacts on customer retention, brand loyalty and portfolio profitability before making any account management decision.

Optimization is a fundamental tool that can help lenders easily identify accounts that are high risk versus those that are profit drivers. In addition, optimization provides the precise action to take at the individual consumer level. For example, optimization can provide recommendations for:

• when to contact a consumer;
• how to contact a consumer; and
• to what level a credit line could be reduced or increased...

…while considering organizational/business objectives such as:

• profits/revenue/bad debt;
• retention of desirable consumers; and
• product limitations (volume/regional).
In my next few blogs I will discuss each of these variables in detail and the complexities that optimization can consider.  
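On a toy scale, optimized line management can be illustrated as a search over per-account actions that maximizes expected profit subject to a portfolio-level constraint. The profit model and the brute-force search below are illustrative assumptions only; production optimization engines use mathematical programming to handle millions of accounts and many more constraints.

```python
from itertools import product

def optimize_line_actions(accounts, actions=(-1000, 0, 1000), max_total_increase=1000):
    """Pick one line change per account to maximize expected profit, subject
    to a cap on total new credit extended. Accounts are dicts with a current
    "line" and an assumed default probability "p_default"."""

    def expected_profit(acct, delta):
        new_line = acct["line"] + delta
        revenue = 0.02 * new_line                   # interest/interchange proxy
        expected_loss = acct["p_default"] * 0.9 * new_line
        return revenue - expected_loss

    best_value, best_plan = float("-inf"), None
    for plan in product(actions, repeat=len(accounts)):
        total_increase = sum(d for d in plan if d > 0)
        if total_increase > max_total_increase:
            continue  # violates the portfolio-level exposure constraint
        value = sum(expected_profit(a, d) for a, d in zip(accounts, plan))
        if value > best_value:
            best_value, best_plan = value, plan
    return best_plan, best_value
```

Even at this scale, the shape of the answer matches the post: the optimizer increases the line for the profit driver and cuts it for the high-risk account, rather than applying one blanket reduction.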

Published: August 23, 2009 by Guest Contributor

By: Kari Michel

This blog completes my discussion on monitoring new account decisions with a final focus: scorecard monitoring and performance. It is imperative to validate acquisition scorecards regularly to measure how well a model distinguishes good accounts from bad accounts. With a sufficient number of aged accounts, performance charts can be used to:

• validate the predictive power of a credit scoring model;
• determine if the model effectively ranks risk; and
• identify the delinquency rate of recently booked accounts at various intervals above and below the primary cutoff score.

To summarize, successful lenders maximize their scoring investment by incorporating a number of best practices into their account acquisition processes:

1. They keep a close watch on their scores, policies and strategies to improve portfolio strength.
2. They create monthly reports covering population stability, decision management and scorecard performance.
3. They update their strategies to meet their organization’s profitability goals through sound acquisition strategies, scorecard monitoring and scorecard management.
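The interval view described above can be sketched as a simple performance chart: bad rate by score band, which should fall monotonically as scores rise if the model ranks risk effectively. The band edges below are illustrative.

```python
def performance_chart(accounts, band_edges=(600, 640, 680, 720)):
    """Bad rate per score band for booked accounts.

    accounts: dicts with "score" and a boolean "bad" performance flag.
    Effective rank-ordering shows bad rates declining band over band.
    """
    edges = (float("-inf"),) + tuple(band_edges) + (float("inf"),)
    bands = []
    for lo, hi in zip(edges, edges[1:]):
        group = [a for a in accounts if lo <= a["score"] < hi]
        bad = sum(1 for a in group if a["bad"])
        bands.append({
            "band": (lo, hi),
            "count": len(group),
            "bad_rate": bad / len(group) if group else None,  # None = no volume
        })
    return bands
```

Reading the rows just above and below the primary cutoff score shows the marginal delinquency a cutoff change would add or remove.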

Published: August 18, 2009 by Guest Contributor

By: Kari Michel

This blog continues my discussion of monitoring new account acquisition decisions, with a focus on decision management. Decision management reports provide the insight needed to make targeted decisions that are sound and profitable. These reports identify which lending decisions are consistent with scorecard recommendations, the effectiveness of overrides, and whether cutoffs should be adjusted. Decision management reports include:

• accept versus decline score distributions;
• override rates;
• override reason reports;
• overrides by loan officer; and
• decisions by loan officer.

Successful lending organizations review this type of information regularly to make better lending policy decisions. Proactive monitoring provides feedback on existing strategies, helps evaluate whether you are making the most effective use of your score(s), and identifies opportunities to improve portfolio profitability. In my next blog, I will discuss the last set of monitoring reports: scorecard performance.
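A minimal sketch of an override report drawn from the list above: the overall override rate, low-side versus high-side overrides, and a per-officer count. The record layout is assumed for illustration.

```python
def override_report(decisions):
    """Summarize overrides: cases where the final decision contradicts the
    scorecard recommendation. Each record:
    {"scorecard": "approve"|"decline", "final": "approve"|"decline", "officer": str}.
    """
    overrides = [d for d in decisions if d["final"] != d["scorecard"]]
    by_officer = {}
    for d in overrides:
        by_officer[d["officer"]] = by_officer.get(d["officer"], 0) + 1

    return {
        "override_rate": len(overrides) / len(decisions) if decisions else 0.0,
        # Low-side overrides (approving scorecard declines) typically get the
        # most scrutiny, since they add risk the model advised against.
        "low_side": sum(1 for d in overrides
                        if d["scorecard"] == "decline" and d["final"] == "approve"),
        "high_side": sum(1 for d in overrides
                         if d["scorecard"] == "approve" and d["final"] == "decline"),
        "by_officer": by_officer,
    }
```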

Published: August 6, 2009 by Guest Contributor

By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (divides predictive attributes into intervals) or continuous (variable is modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally what statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines the timeliness and complexity of putting it into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model is critical for successful implementation and model audits.

Scorecard monitoring needs to be put into place once the model is developed, implemented and put into use. It evaluates population stability, scorecard performance and decision management to ensure that the model performs as expected over time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies.

With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
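Of the scorecard performance statistics mentioned (KS, Gini coefficient, divergence), the KS statistic is the easiest to sketch: it is the maximum gap between the cumulative score distributions of bad and good accounts.

```python
def ks_statistic(scores_bad, scores_good):
    """Kolmogorov-Smirnov separation: max |CDF_bad(t) - CDF_good(t)| over
    all score thresholds t. 0 means no separation; 1 means the score
    perfectly separates bads from goods."""
    thresholds = sorted(set(scores_bad) | set(scores_good))
    best = 0.0
    for t in thresholds:
        cdf_bad = sum(1 for s in scores_bad if s <= t) / len(scores_bad)
        cdf_good = sum(1 for s in scores_good if s <= t) / len(scores_good)
        best = max(best, abs(cdf_bad - cdf_good))
    return best
```

On a validation sample, the development-time KS is compared against the hold-out or out-of-time KS to check for over-fitting.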

Published: August 4, 2009 by Guest Contributor


By: Wendy Greenawalt

When consulting with lenders, we are frequently asked which credit attributes are most predictive and valuable when developing models and scorecards. Because we receive this request often, we recently performed the arduous analysis required to determine whether there are material differences in the attribute makeup of a credit risk model depending on the portfolio to which it is applied.

The process we used to identify the most predictive attributes was a combination of art and science. Our data experts drew upon their extensive bureau data experience and knowledge obtained through engagements with clients across industries, and they applied an empirical process providing statistical analysis and validation of the credit attributes included. We then built credit risk models for a variety of portfolios, including bankcard, mortgage and auto, and compared the credit attributes included in each.

What we found is that some attributes are inherently predictive regardless of the portfolio for which the model is developed. However, when we took the analysis one step further, we identified significant differences in the account-level data when comparing models across portfolios. This pointed to differences not just in the behavior captured by the attributes, but in the mix of account designations included in the model. For example, an auto risk model might include a mix of attributes from all trades, auto, installment and personal finance trades, whereas a bankcard risk model may be composed mainly of bankcard, mortgage, student loan and all-trades attributes. Additionally, the attribute granularity included in the models may differ considerably, from specific derogatory and public record data to high-level account balance or utilization characteristics.

We concluded that it is a valuable exercise to carefully analyze the available data and consider all possible credit attribute options in the model-building process, since substantial incremental lift in model performance can be gained from accounts and behaviors that may not have been previously considered when assessing credit risk.
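One standard way to compare candidate attributes’ predictive value, in the spirit of the analysis described above, is Information Value (IV) computed over a binned attribute. This is a common industry technique, not necessarily the empirical process Experian used.

```python
from math import log

def information_value(attribute_bins):
    """Information Value of a binned credit attribute.

    attribute_bins: list of (n_good, n_bad) counts per bin. Each bin
    contributes (pct_good - pct_bad) * WoE, where WoE = ln(pct_good/pct_bad).
    Higher IV means the attribute separates goods from bads more strongly.
    """
    total_good = sum(g for g, b in attribute_bins)
    total_bad = sum(b for g, b in attribute_bins)
    iv = 0.0
    for good, bad in attribute_bins:
        if good == 0 or bad == 0:
            continue  # in practice sparse bins are merged, not skipped
        pct_good = good / total_good
        pct_bad = bad / total_bad
        iv += (pct_good - pct_bad) * log(pct_good / pct_bad)
    return iv
```

Ranking candidate attributes by IV within each portfolio is one quick way to see the portfolio-specific differences the post describes.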

Published: July 30, 2009 by Guest Contributor

By: Tracy Bremmer

Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour mixture into pan and bake for 35 minutes. Cool before serving.

Model development, whether it is a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitor)! This blog will cover the first three steps in creating your model.

Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud?

Data collection and preparation evaluates what data sources are available and how best to incorporate these data elements within the model build process. Dependent variables (what you are trying to predict) and the type of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection.

Segmentation analysis provides the analytical basis to determine the optimal population splits for a suite of models to maximize the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores built on an individual population can provide lift over building just one single score.

Join us for our next blog, where we will cover the next three stages of model development: scorecard development; implementation/documentation; and scorecard monitoring.

Published: July 30, 2009 by Guest Contributor

By: Kari Michel

In my last blog I gave an overview of monitoring reports for new account acquisition decisions, listing the three main categories that reports typically fall into: (1) population stability; (2) decision management; (3) scorecard performance. Today, I want to focus on population stability.

Applicant pools may change over time as a result of new marketing strategies, changes in product mix, pricing updates, competition, economic changes or a combination of these. Population stability reports identify acquisition trends and the degree to which the applicant pool has shifted over time, including the scorecard components driving the shift in custom credit scoring models. Population stability reports include:

• actual versus expected score distributions;
• actual versus expected scorecard characteristic distributions (available with custom models);
• mean applicant scores; and
• volumes, approval and booking rates.

These reports provide information to help monitor trends over time, rather than spikes from month to month. Understanding the trends allows one to be proactive in determining whether the shifts warrant changes to lending policies or cutoff scores. Population stability is only one area that needs to be monitored; in my next blog I will discuss decision management reports.
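The actual-versus-expected score distribution comparison described above is often summarized with a Population Stability Index (PSI). A minimal sketch, given matching lists of bin proportions:

```python
from math import log

def population_stability_index(expected, actual):
    """PSI between the expected (development-time) and actual (recent) score
    distributions. expected/actual: proportions per score bin, same bins in
    the same order. Commonly cited rules of thumb: below 0.10 is stable,
    0.10-0.25 warrants monitoring, above 0.25 signals a significant shift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        if e == 0 or a == 0:
            continue  # sparse bins are usually merged before computing PSI
        psi += (a - e) * log(a / e)
    return psi
```

Tracking PSI monthly, rather than reacting to a single spike, matches the trend-over-time monitoring the post recommends.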

Published: July 30, 2009 by Guest Contributor
