By: Tracy Bremmer In our last blog (July 30), we covered the first three stages of model development, which are necessary whether you are developing a custom or generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development. Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (predictive attributes divided into intervals) or continuous (each variable modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (a hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally which statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.). Lenders often assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good. Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines how quickly, and with what complexity, it can be put into practice. Models can be implemented within an in-house system, at a third-party processor, at a credit reporting agency, and so on. Accurate documentation outlining the specifications of the model is critical for successful implementation and model audits. Scorecard monitoring needs to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance, and decision management to ensure that the model is performing as expected over the course of time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies. With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
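The scorecard performance statistics mentioned above are straightforward to compute on a validation sample. Below is a minimal sketch of one common way to calculate KS and the Gini coefficient, assuming a pandas DataFrame named `validation` with a binary `bad` flag and a numeric `score` column (both hypothetical names); it is illustrative only, not the exact methodology described in this post.

```python
# Minimal sketch: KS and Gini on a hold-out (or out-of-time) validation sample.
# Assumes a DataFrame `validation` with columns `score` and `bad` (1 = bad);
# both names are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def ks_statistic(score: pd.Series, bad: pd.Series) -> float:
    """Maximum separation between cumulative good and bad score distributions."""
    order = np.argsort(score.values)
    bad_sorted = bad.values[order]
    cum_bad = np.cumsum(bad_sorted) / bad_sorted.sum()
    cum_good = np.cumsum(1 - bad_sorted) / (1 - bad_sorted).sum()
    return float(np.max(np.abs(cum_bad - cum_good)))

def gini_coefficient(score: pd.Series, bad: pd.Series) -> float:
    """Gini = 2 * AUC - 1; the score is negated because low scores mean high risk here."""
    return 2 * roc_auc_score(bad, -score) - 1

# Usage on a hypothetical validation sample:
# validation = pd.read_csv("holdout_sample.csv")
# print(ks_statistic(validation["score"], validation["bad"]))
# print(gini_coefficient(validation["score"], validation["bad"]))
```

A hold-out sample and an out-of-time sample can both be fed through the same functions; comparing the two results is one quick check for over-fitting.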
There were always questions around the likelihood that the August 1, 2009 deadline would stick. Well, the FTC has pushed the Red Flag Rules compliance deadline out to November 1, 2009 (from the previously extended August 1, 2009 deadline). This extension is in response to pressure from Congress – and, likely, from "lower risk" businesses questioning why they are covered under the Red Flag Rule in the first place (businesses such as healthcare-related companies, retailers, and small businesses). Keep in mind that the FTC's extension of Red Flag Guidelines enforcement does not apply to address discrepancies on credit profiles, and that those discrepancies are expected to be worked TODAY. Risk management strategies are key to your success. To view the entire press release, visit: http://www.ftc.gov/opa/2009/07/redflag.shtm
By: Wendy Greenawalt When consulting with lenders, we are frequently asked which credit attributes are most predictive and valuable when developing models and scorecards. Because we receive this request often, we recently decided to perform the arduous analysis required to determine whether there are material differences in the attribute makeup of a credit risk model based on the portfolio to which it is applied. The process we used to identify the most predictive attributes was a combination of art and science, for which our data experts drew upon their extensive credit bureau data experience and knowledge obtained through engagements with clients from all types of industries. In addition, they applied an empirical process that provided statistical analysis and validation of the credit attributes included. Next, we built credit risk models for a variety of portfolios, including bankcard, mortgage and auto, and compared the credit attributes included in each. What we found is that some attributes are inherently predictive regardless of the portfolio for which the model is being developed. However, when we took the analysis one step further, we identified that there can be significant differences in the account-level data when comparing different portfolio models. This discovery pointed to differences not just in the behavior captured with the attributes, but in the mix of account designations included in the model. For example, in an auto risk model we might see a mix of attributes from all trades, auto, installment and personal finance, as compared to a bankcard risk model, which may be comprised mainly of bankcard, mortgage, student loan and all trades. Additionally, the attribute granularity included in the models may be quite different, ranging from specific derogatory and public record data to high-level account balance or utilization characteristics. What we concluded is that it is a valuable exercise to carefully analyze available data and consider all the possible credit attribute options in the model-building process, since substantial incremental lift in model performance can be gained from accounts and behavior that may not have been previously considered when assessing credit risk.
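As a rough illustration of that kind of comparison, the sketch below fits a simple L1-regularized model per portfolio and lists which attributes each model retains. The portfolio names, attribute columns, and `bad` flag are hypothetical placeholders, not Experian's actual attribute set or methodology.

```python
# Illustrative sketch: compare which credit attributes survive in risk models
# built for different portfolios. Column and portfolio names are hypothetical.
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

ATTRIBUTES = ["all_trades_util", "bankcard_util", "auto_inquiries_6m",
              "mortgage_past_due", "months_since_derog", "installment_balance"]

def selected_attributes(df: pd.DataFrame) -> set:
    """Fit an L1-penalized model and return the attributes it keeps."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    selector = SelectFromModel(model).fit(df[ATTRIBUTES], df["bad"])
    return {attr for attr, keep in zip(ATTRIBUTES, selector.get_support()) if keep}

# portfolios = {"bankcard": bankcard_df, "auto": auto_df, "mortgage": mortgage_df}
# for name, df in portfolios.items():
#     print(name, selected_attributes(df))   # which attributes overlap, which differ?
```

Attributes kept in every portfolio's model correspond to the "inherently predictive" behaviors noted above, while the rest highlight the portfolio-specific differences in account mix and granularity.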
By: Tracy Bremmer Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour mixture into pan and bake for 35 minutes. Cool before serving. Model development, whether it is a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitor)! This blog will cover the first three steps in creating your model! Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud? Data collection and preparation evaluates which data sources are available and how best to incorporate these data elements within the model-build process. Dependent variables (what you are trying to predict) and the types of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection. Segmentation analysis provides the analytical basis to determine the optimal population splits for a suite of models to maximize the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores, each built on an individual sub-population, can provide lift over a single score, as sketched below. Join us for our next blog where we will cover the next three stages of model development: scorecard development; implementation/documentation; and scorecard monitoring.
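To make the segmentation question concrete, here is a minimal sketch (hypothetical column names, deliberately simple models) that compares the KS separation of one model built on the full development population against models built separately on each segment; the lift, if any, is the evidence for splitting the population.

```python
# Sketch: does segmenting the population lift predictive power?
# Assumes a development DataFrame `dev` with predictor columns, a binary `bad`
# flag, and a `has_mortgage` segmentation flag -- all hypothetical names.
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression

PREDICTORS = ["util", "inquiries_6m", "months_on_file"]

def model_ks(df: pd.DataFrame) -> float:
    """KS separation of a simple model's scores between goods and bads."""
    model = LogisticRegression().fit(df[PREDICTORS], df["bad"])
    score = model.predict_proba(df[PREDICTORS])[:, 1]
    bads = (df["bad"] == 1).values
    return ks_2samp(score[bads], score[~bads]).statistic

# overall_ks = model_ks(dev)
# segment_ks = {seg: model_ks(grp) for seg, grp in dev.groupby("has_mortgage")}
# print(overall_ks, segment_ks)   # lift = per-segment KS vs. the single-model KS
```

In practice the comparison would be made on validation data rather than the development sample, but the structure of the check is the same.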
By: Kari Michel In my last blog I gave an overview of monitoring reports for new account acquisition decisions, listing three main categories that reports typically fall into: (1) population stability; (2) decision management; (3) scorecard performance. Today, I want to focus on population stability. Applicant pools may change over time as a result of new marketing strategies, changes in product mix, pricing updates, competition, economic changes or a combination of these. Population stability reports identify acquisition trends and the degree to which the applicant pool has shifted over time, including, for custom credit scoring models, the scorecard components driving the shift. Population stability reports include:
• Actual versus expected score distribution
• Actual versus expected scorecard characteristics distributions (available with custom models)
• Mean applicant scores
• Volumes, approval and booking rates
These types of reports provide information to help monitor trends over time, rather than spikes from month to month. Understanding the trends allows one to be proactive in determining if the shifts warrant changes to lending policies or cut-off scores. Population stability is only one area that needs to be monitored; in my next blog I will discuss decision management reports.
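The “actual versus expected score distribution” comparison is commonly summarized with a population stability index (PSI). The sketch below shows one standard way to compute it from two score samples; the input names and ten-bin setup are hypothetical, and the 0.10/0.25 thresholds in the comment are industry rules of thumb rather than Experian guidance.

```python
# Sketch: population stability index (PSI) between an expected (development)
# and an actual (recent applicant) score distribution. Inputs are hypothetical.
import numpy as np

def psi(expected_scores, actual_scores, bins: int = 10) -> float:
    """PSI = sum((actual% - expected%) * ln(actual% / expected%)) over score bins."""
    edges = np.percentile(expected_scores, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf              # catch out-of-range scores
    expected_pct = np.histogram(expected_scores, edges)[0] / len(expected_scores)
    actual_pct = np.histogram(actual_scores, edges)[0] / len(actual_scores)
    expected_pct = np.clip(expected_pct, 1e-6, None)   # guard against empty bins
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Common rules of thumb: < 0.10 stable, 0.10-0.25 worth watching, > 0.25 significant shift.
# print(psi(development_sample["score"], this_quarter_applicants["score"]))
```

Tracking PSI monthly or quarterly, alongside mean scores and approval rates, is what turns a one-off comparison into the trend view described above.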
By: Wendy Greenawalt On any given day, US credit bureaus contain consumer trade data on approximately four billion trades. How that data is interpreted, how accounts are categorized, and how attributes, models and decisioning tools are built can and does change over time, because the data reported to the bureaus by lenders and servicers also changes. Over the last few years, new data elements have enabled organizations to create attributes to identify very specific consumer behavior. The challenge for organizations is identifying what reporting changes have occurred and the value that the new consumer data can bring to decisioning. For example, a new reporting standard introduced nearly a decade ago enabled lenders to report whether a trade was secured by money or by real property. Before the change, lenders would simply report the accounts as secured trades, making it nearly impossible to determine whether the account was a home equity line of credit or a secured credit card. Since then, lender reporting practices have changed and reports now clearly state that home equity lines of credit are secured by property, making it much easier to distinguish the two types of accounts from one another. By taking advantage of the most current credit bureau account data, lenders can create attributes to capture new account types. They can also capture information (such as past-due amounts, utilization, closed accounts, and derogatory information including foreclosures, charge-offs and collections) to make informed decisions across the customer life cycle.
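As a toy illustration, a derived attribute built on the secured-by-property flag described above might look like the sketch below; the field names and codes are hypothetical placeholders, not an actual bureau reporting layout.

```python
# Sketch: derive an account-type attribute from a hypothetical tradeline layout
# in which `secured_by` distinguishes property-secured from money-secured trades.
import pandas as pd

def classify_secured_trade(trade: pd.Series) -> str:
    """Label a secured trade as HELOC-like, secured card, or other secured."""
    if trade["secured_by"] == "REAL_PROPERTY" and trade["is_revolving"]:
        return "home_equity_line"
    if trade["secured_by"] == "MONEY" and trade["is_revolving"]:
        return "secured_credit_card"
    return "other_secured"

# trades["secured_type"] = trades.apply(classify_secured_trade, axis=1)
# Example attribute: number of open home equity lines per consumer.
# heloc_count = (trades[trades["secured_type"] == "home_equity_line"]
#                .groupby("consumer_id").size())
```

The same pattern extends to past-due amounts, utilization, and derogatory flags as the reported data becomes richer.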
Vintage analysis 101
The title of this edition, ‘The risk within the risk,’ is a testament to the amount of information that can be gleaned from an assessment of the performance of vintage analysis pools. Vintage analysis pools offer numerous perspectives of risk. They allow for a deep appreciation of the effects of loan maturation, and can also point toward the impact of external factors, such as changes in real estate prices, origination standards, and other macroeconomic factors, by highlighting measurable differences in vintage-to-vintage performance.
What is a vintage pool?
By the Experian definition, vintage pools are created by taking a sample of all consumers who originated loans in a specific period, perhaps a certain quarter, and tracking the performance of the same consumers and loans through the life of each loan. Vintage pools can be analyzed for various characteristics, but three of the most relevant are:
* Vintage delinquency, which allows for an understanding of the repayment trends within each pool;
* Payoff trends, which reflect the pace at which pools are being repaid; and
* Charge-off curves, which provide insights into the charge-off rates of each pool.
The credit grade of each borrower within a vintage pool is extremely important in understanding the vintage characteristics over time, and credit scores are based on the status of the borrower just before the new loan was originated. This process ensures that the new loan origination and the performance of the specific loan do not influence the borrower’s credit score. By using this method of pooling and scoring, each vintage segment contains the same group of loans over time, allowing for a valid comparison of vintage pools and the characteristics found within. Once vintage pools have been defined and created, the possibilities for this data are numerous... Read more about our analysis opportunities for vintage analysis and our recent findings.
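A minimal sketch of that kind of pooling is shown below: it groups loans by origination quarter and tracks a 30+ days-past-due rate by months on book. The column names (`orig_quarter`, `months_on_book`, `dpd30_plus`) are hypothetical, not a prescribed Experian layout.

```python
# Sketch: vintage delinquency curves from loan-month performance records with
# hypothetical columns: loan_id, orig_quarter, months_on_book, dpd30_plus (0/1).
import pandas as pd

def vintage_delinquency(perf: pd.DataFrame) -> pd.DataFrame:
    """Share of loans 30+ days past due, by vintage and months on book."""
    return (perf.groupby(["orig_quarter", "months_on_book"])["dpd30_plus"]
                .mean()
                .unstack("months_on_book"))    # rows = vintages, columns = loan age

# curves = vintage_delinquency(performance_records)
# Comparing, say, curves.loc["2006Q4"] with curves.loc["2008Q4"] shows how vintages
# originated under different standards and economic conditions mature differently.
```

Payoff and charge-off curves follow the same recipe, with the delinquency flag swapped for a paid-off or charged-off indicator.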
-- by Jeff Bernstein
So, here I am with my first contribution to Experian Decision Analytics’ collections blog, and what I am discussing has practically nothing to do with analytics. But it has everything to do with managing the opportunities to positively impact collections results and leveraging your investment in analytics and strategies, beginning with the most important weapon in your arsenal: collectors.
Yes, I know it’s a bit unconventional for a solutions and analytics company to talk about something other than models; but the difference between mediocre results and optimization rests with your collectors and your organization’s ability to manage customer interactions.
Let’s take a trip down memory lane and reminisce about one of the true landscape-changing paradigm shifts in collections in recent memory: the use of skill models to become payment of choice.
AT&T Universal Card was one of the first early adopters of a radical new approach toward managing an emerging Gen X debtor population during the early 1990s. Armed with fresh research into what influenced delinquent debtors to pay certain collectors while dodging others, they adopted what we called a “management systems” approach toward collections.
They taught their entire collections team a new set of skill models that stressed bridging skills between the collector and the customer, thus allowing the collector to interact in a more collaborative, non-aggressive manner. The new approach enabled collectors to more favorably influence customer behavior, creating payment solutions collaboratively and allowing AT&T to become the “payment of choice” among creditors competing for share of wallet.
A new set of skill metrics, which we now affectionately call our “dashboard,” was created to measure the effective use of the newly taught skill models, and collectors were empowered to own their own performance and to leverage their team leader for coaching and skills development. Team developers, the new name for front-line collection managers, were tasked with spending 40-50% or more of their time on developmental activities, using leadership skills in their coaching and development activities. The game plan was simple:
• Engage collectors with customer-focused skills that influence behavior and get the creditor paid sooner.
• Empower collectors to take on the responsibility for their own development.
• Make performance results visible top-to-bottom in the organization to stimulate competitiveness, leveraging our innate desire for recognition.
• Make leaders accountable for continuous performance improvement of individuals and teams.
It worked. AT&T Universal won the Malcolm Baldrige National Quality Award in 1992 for its efforts in “delighting the customer” while driving its delinquencies and charge-offs to superior levels. A new paradigm shift was unleashed and spread like wildfire across the industry, including many of the major credit card issuers, top-tier U.S. banks, and large retailers.
Why do I bring this little slice of history up in my first blog? I see many banking and financial services companies across the globe struggling with more complex customer situations and harder collections cases, with their attention naturally focused on tools, models, and technologies. As an industry, we are focused on early lifecycle treatment strategy, identifying current, non-delinquent customers who may be at risk for future default, and triaging them before they become delinquent.
Risk-based collections and segmentation is now a hot topic. Outsourcing and leveraging multiple, non-agent-based contact channels to reduce the pressure on collection resources is more important than ever. Optimization is getting top billing as the next “thing.”
What I don’t hear enough about is how organizations are engaged in improving the skills of collectors and executing the right management systems approach to extract the best performance possible from existing resources. In some ways, this may be lost in the chaos of our current economic climate. With all the focus on analytics, segmentation, strategy and technology, the opportunity to improve operational performance through skill building and leadership may have taken a back seat.
I’ve seen plenty of examples of organizations that have spent millions on analytical tools and technologies, improving portfolio risk strategy and the targeting of the right customers for treatment. I’ve seen the most advanced dialer, IVR, and other contact channel strategies used successfully to obtain the highest right-party contact rates at the lowest possible cost. Yet, with all of that focus and investment, I’ve seen these right-party contacts mismanaged by collectors who were not provided with the optimal coaching and skills.
With the enriched data available for decisioning, coupled with the amazing capabilities we have for real-time segmentation, strategy scripting, context-sensitive screens, and rules-based workflow management in our next-generation collections systems, we are at a crossroads in the evolution of collections.
Let’s not forget some of the “nuts and bolts” that drive operational performance and ensure success. Something old can be something new. Examine your internal processes aimed at producing the best possible skills at all collector levels and ensure that you are not missing the easiest opportunity to improve your results.
By: Tom Hannagan Some articles that I’ve come across recently have puzzled me. In those articles, authors use the terms “monetary base” and “money supply” synonymously, but those terms are actually very different. The monetary base (currency plus Fed deposits) is a much smaller number than the money supply (M1). The huge change in the “base,” which the Fed did affect by adding $1T or so to infuse a lot of quick liquidity into the financial system in late 2007/early 2008, does not necessarily impact M1 (which includes the base plus all bank demand deposits) all that much in the short term, and may impact it even less in the intermediate term if the Fed reduces its holdings of securities. Some are correct, of course, in positing that a rotation out of securities by the Fed will tend to put pressure on market rates. Some are equating the Fed’s 2007 liquidity moves with a major monetary policy change. When the capital markets froze due to liquidity and credit risks in August/September of 2007, monetary policy was not the immediate risk, or even a consideration. Without the liquidity injections in that timeframe, monetary policy would have become less than an academic consideration. As for tying the “constrained” bank lending (which was actually a slowdown in its growth) to bank reserves on account at the Fed: I don’t think banks’ Fed reserve balances were ever an issue for lending. Banks slowed down lending because the level of credit risk increased. Borrowers were defaulting. Bank deposit balances were actually increasing through the financial crisis. [See my Feb 26 and March 5 blogs.] So loan funding, at least from deposit sources, was not the problem for most banks. Of course, for a small number of banks that had major securities losses, capital was being lost and therefore not available to back increased lending. But demand deposit balances were growing. Some authors are linking bank reserves to the ability of banks to raise liabilities, which makes little sense. Banks’ respective abilities to gather demand deposits (insured by the FDIC, at no small expense to the banks) were always wide open, and their ability to borrow funds is much more a function of asset quality (or net asset value) than of their relatively small reserve balances at the Fed. These actions may result in high inflation levels and high interest rates, but if so it will be because of poor Fed decisions in the future, not because of the Fed’s actions of last year. It will also depend on whether the fiscal (deficit) actions of the government are: 1) economically productive and 2) tempered to a recovery, or not. I think that is a bigger macroeconomic risk than Fed monetary policy. In fact, the only way bank executives can wisely manage the entity over an extended timeframe is to be able to direct resources across all possibilities on a risk-adjusted basis. The question isn’t whether risk-based pricing is appropriate for all lines of business, but rather how it might or should be applied. For commercial lending into the middle and corporate markets, there is enough money at stake to warrant evaluating each loan and deposit, as well as the status of the client relationship, on an individual basis. This means some form of simulation modeling by relationship managers on new sales opportunities (including renewals), with the model having ready access to current data on all existing pieces of business within each relationship. [See my April 24 blog entry.]
This process also implies the ability to easily aggregate the risk-return status of a group of related clients and to show lenders how their portfolio of accounts is performing on a risk-adjusted basis. This type of model-based analysis needs to be flexible enough to handle differing loan structures, easy for a lender to use, and quick; the better models can perform such analysis in minutes. I’ve discussed the elements of such models in earlier posts. But with small business and consumer lending, there are other considerations that come into play. The principles of risk-based pricing are consistent across any loan or deposit. With small business lending, the process of selling, negotiating, underwriting and origination is significantly more streamlined and under some form of workflow control. With consumer lending, there are more regulations to take into account, and there are mass-marketing considerations driving the “sales” process. The agreement covers what the new owner wants now and may decide it wants in the future. This is a form of strategic business risk that comes with accepting the capital infusion from this particular source.
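As a rough illustration of the loan-level simulation idea carried over from the previous post, the sketch below computes a simple risk-adjusted return for a proposed loan from expected loss, funding cost, and allocated capital. The inputs, parameter values, and hurdle-rate comparison are hypothetical simplifications, not the models the author describes.

```python
# Sketch: simplified risk-adjusted pricing check for a single loan.
# All parameters are hypothetical illustrations, not a production pricing model.
from dataclasses import dataclass

@dataclass
class LoanProposal:
    balance: float          # outstanding commitment
    rate: float             # proposed annual interest rate
    funding_cost: float     # annual cost of funds
    operating_cost: float   # annual servicing/origination cost
    pd: float               # probability of default (annual)
    lgd: float              # loss given default
    capital_pct: float      # economic capital allocated against the balance

def risk_adjusted_return(loan: LoanProposal) -> float:
    """Approximate RAROC: risk-adjusted net income over allocated capital."""
    revenue = loan.balance * loan.rate
    expected_loss = loan.balance * loan.pd * loan.lgd
    net_income = (revenue - loan.balance * loan.funding_cost
                  - loan.operating_cost - expected_loss)
    return net_income / (loan.balance * loan.capital_pct)

# proposal = LoanProposal(balance=500_000, rate=0.065, funding_cost=0.03,
#                         operating_cost=2_500, pd=0.02, lgd=0.45, capital_pct=0.08)
# print(risk_adjusted_return(proposal))   # compare against a hurdle rate, e.g. 0.15
```

For a relationship manager, the same calculation would simply be repeated across every loan and deposit in the relationship and aggregated.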
In recent months, the topics of stress-testing and loss forecasting have been at the forefront of the international media and, more importantly, at the forefront of the minds of American banking executives. The increased involvement of the federal government in managing the balance sheets of the country’s largest banks has mixed implications for financial institutions in this country. On one hand, some banks have been building macroeconomic scenarios for years and have tried and tested methods for risk management and loss forecasting. On the other hand, in financial institutions where these practices were conducted in a less methodical manner, if at all, the scrutiny placed on capital adequacy forecasting has left many looking to quickly implement standards that will address regulatory concerns when their number is called. For those clients to whom this process is new, or for those who do not possess a methodology that would withstand the examination of federal inspectors, the question seems to be: where do we begin? I think that before you can understand where you’re going, you must first understand where you are and where you have been. In this case, that means having a detailed understanding of key industry and peer benchmarks and your relative position against those benchmarks. Even simple benchmarking exercises provide answers to some very important questions:
• What is my risk profile versus that of the industry?
• How does the composition of my portfolio differ from that of my peers?
• How do my delinquencies compare to those of my peers, and how has this position been changing?
A thorough understanding of one’s position in these challenging circumstances provides a more educated foundation upon which to build assessments of the future.
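Even a very simple benchmarking exercise can be mechanized. The sketch below compares a portfolio's monthly 30+ DPD rate against an industry series and tracks how the gap has been moving; the two input series and their source are hypothetical.

```python
# Sketch: simple peer/industry benchmarking of 30+ DPD delinquency rates.
# `my_portfolio` and `industry_benchmark` are hypothetical monthly rate series.
import pandas as pd

def delinquency_gap(my_portfolio: pd.Series, industry_benchmark: pd.Series) -> pd.DataFrame:
    """Side-by-side delinquency rates plus the portfolio's gap to the benchmark."""
    report = pd.DataFrame({"portfolio": my_portfolio, "industry": industry_benchmark})
    report["gap"] = report["portfolio"] - report["industry"]
    report["gap_trend_3m"] = report["gap"].rolling(3).mean()   # is the gap widening?
    return report

# print(delinquency_gap(my_30dpd_rates, peer_30dpd_rates).tail(12))
```

The output answers the last question above directly: where the portfolio sits relative to peers, and whether that position has been improving or deteriorating.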
By: Kari Michel Are you using scores to make new applicant decisions? Scoring models need to be monitored regularly to ensure a sound and successful lending program. Would you buy a car and run it for years without maintenance, yet expect it to run at peak performance? Of course not. Just like oil changes or tune-ups, there are several critical components that need to be addressed regarding your scoring models on a regular basis. Monitoring reports are essential for organizations to answer the following questions:
• Are we in compliance?
• How is our portfolio performing?
• Are we making the most effective use of our scores?
To understand how to improve your portfolio’s performance, you must have good monitoring reports. Typically, reports fall into one of three categories: (1) population stability, (2) decision management, (3) scorecard performance. Having the right information will allow you to monitor and validate your underwriting strategies and make adjustments when necessary. Additionally, that information will let you know whether your scorecards are still performing as expected. In my next blog, I will discuss the population stability report in more detail.
By: Tracy Bremmer It’s not really all about the credit score. Now don’t get me wrong, a credit score is a very important tool used in credit decision making; however, there’s so much more that lenders use to say “accept” or “decline.” Many lenders segment their customer/prospect base prior to ever using the score. They use credit-related attributes such as “has this consumer had a bankruptcy in the last two years?” or “do they have an existing mortgage account?” to segment consumers into risk-tier buckets. Lenders also evaluate information from the application, such as income or number of years at current residence. These types of application attributes help the lender gain insight that is not typically captured in the traditional risk score. Lenders who already have a relationship with a customer will look at their existing relationships with that customer prior to making a decision. They’ll look at things like payment history and current product mix to better understand whom best to cross-sell, up-sell, or, in today’s economy, down-sell. In addition, many lenders will run the applicant through some type of fraud database to ensure the person really is who they say they are. I like to think of the score as the center of the decision, with all of these other metrics as necessary inputs to the entire decision process. It is like going out for an ice cream sundae: you start with the vanilla, but you need all the mix-ins to make it complete.
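To make that concrete, a decision flow of this kind could be sketched as below; the thresholds, field names, and referral logic are hypothetical, purely to show how segmentation attributes, application data, and a fraud check wrap around the score.

```python
# Sketch: a decision that layers segmentation attributes, application data, and
# an identity check around the credit score. All names and cutoffs are hypothetical.
from typing import Dict

def credit_decision(applicant: Dict) -> str:
    """Return 'decline', 'refer', or 'accept' for a hypothetical applicant record."""
    # Segmentation attributes applied before the score is even considered
    if applicant["bankruptcy_last_24m"]:
        return "decline"
    # Application data a traditional risk score does not capture
    if applicant["stated_income"] < 20_000 or applicant["years_at_residence"] < 1:
        return "refer"
    # Identity/fraud screen
    if not applicant["identity_verified"]:
        return "refer"
    # The score sits at the center of the decision
    return "accept" if applicant["risk_score"] >= 680 else "decline"

# print(credit_decision({"bankruptcy_last_24m": False, "stated_income": 55_000,
#                        "years_at_residence": 4, "identity_verified": True,
#                        "risk_score": 702}))
```

The score is still the vanilla; the surrounding rules are the mix-ins.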
-- By Kari Michel What is your credit risk score? Is it 300, 700, 900 or something in between? In order to understand what it means, you need to know which score you are referencing. Lenders use many different scoring models to determine who qualifies for a loan and at what interest rate. For example, Experian has developed many scores, such as VantageScore®. Think of VantageScore® as just one of many credit scores available in the marketplace. While all credit risk models have the same purpose, to use credit information to assess risk, each credit model is unique in that each one has its own proprietary formula that combines and calculates various credit information from your credit report. Even if lenders used the same credit risk score, the interpretation of risk depends on the lender, and their lending policies and criteria may vary. Additionally, each credit risk model has its own score range as well. While the score range may be relatively similar to another score range, the meaning of the score may not necessarily be the same. For example, a 640 in one score may not mean the same thing or have the same credit risk as a 640 for another score. It is also possible for two different scores to represent the same level of risk. If you have a good credit score with one lender, you will likely have a good score with other lenders, even if the number is different.
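One way to see why a 640 on one scale is not a 640 on another is to match scores by population percentile. The sketch below does that with two hypothetical score distributions and ranges; the numbers are illustrative and do not represent any actual scoring model.

```python
# Sketch: the same risk level maps to different numbers on different score scales.
# The two simulated distributions and their ranges are hypothetical.
import numpy as np

def equivalent_score(score_a: float, scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """Score on scale B at the same population percentile as score_a on scale A."""
    percentile = (scores_a < score_a).mean() * 100
    return float(np.percentile(scores_b, percentile))

# rng = np.random.default_rng(0)
# scores_a = rng.normal(680, 60, 10_000).clip(300, 850)   # hypothetical 300-850 scale
# scores_b = rng.normal(720, 80, 10_000).clip(501, 990)   # hypothetical 501-990 scale
# print(equivalent_score(640, scores_a, scores_b))        # "640" lands elsewhere on scale B
```

Even this percentile mapping ignores differences in what each model actually measures, which is the larger point of the post.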
As I've suggested in previous postings, we've certainly expected more clarifying language from the Red Flags Rule drafting agencies. Well, here is some pretty good information in the form of another FAQ document created by the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the Federal Trade Commission (FTC). This is a great step forward in responding to many of the same Red Flag guidelines questions that we get from our clients, and I hope it's not the last one we see. You can access the document via any of the agency websites, but for quick reference, here is the FDIC version: http://www.fdic.gov/news/news/press/2009/pr09088.html