Latest Posts


The automotive loan market continued to improve, with lenders showing more willingness to lend outside of prime. In Q4 2011, average credit scores for new and used auto loans dropped when compared with Q4 2010, and the percentage of loans to customers with nonprime, subprime or deep-subprime credit scores increased:

- Average credit scores for new vehicle loans dropped six points, from 767 in Q4 2010 to 761 in Q4 2011.
- Average credit scores for used vehicle loans dropped nine points, from 679 in Q4 2010 to 670 in Q4 2011.
- New vehicle loans to nonprime, subprime and deep-subprime customers increased by 13.8 percent from Q4 2010 to Q4 2011.

View our recent Webinar on the Q4 2011 state of the automotive market.

Source: Experian Automotive's quarterly credit trend analysis. Download the quarterly studies and white papers.

Published: April 6, 2012 by Guest Contributor

Despite low demand and a shrinking pool of qualified candidates, loan growth priorities continue to rank high for most small-business lenders, both for small to midsize banks and large financial institutions. Between 2006 and 2010, overall loan applications were down 5 percent, while large financial institutions saw small-business loan applications rise 36.5 percent. Banks and credit unions with assets less than $500 million showed the most significant increase, of 65 percent. Read more of the blog series: "Getting back in the game – Generating small business applications." Source: Decision Analytics' Blog: Relationship and Transactional Lending Best Practices

Published: April 4, 2012 by Guest Contributor

Last month, I wrote about seeking ways to ensure growth without increasing risk. This month, I'll present a few approaches that use multiple scores to give a more complete view into a consumer's true profile.

Let's start with bankruptcy scores. You use a risk score to capture traditional risk, but bankruptcy behavior is significantly different from a consumer profile perspective. We've seen a tremendous amount of bankruptcy activity in the market. Despite the fact that filings were slightly lower than 2010 volume, bankruptcies remain a serious threat, with over 1.3 million consumer filings in 2011, a level projected to continue in 2012. Factoring in a bankruptcy score alongside a traditional risk score allows better visibility into consumers who may be "balance loading," but not necessarily going delinquent, on their accounts. By looking at both aspects of risk, layering scores can identify consumers who may look good from a traditional credit score perspective but are poised to file bankruptcy. This way, a lender can keep approval rates up while lowering the risk of overall dollar losses.

Layering scores can be used in other areas of the customer life cycle as well. For example, as new lending starts to heat up in markets like auto and bankcard, adding a next-generation response score to a risk score in your prospecting campaigns can translate into a very clear definition of the population you want to target. By combining a prospecting score with a risk score to find creditworthy consumers who are most likely to open, you help mitigate the traditional inverse relationship between open rates and creditworthiness. Target the population that is worth your precious prospecting resources.

Next time, we'll look at other analytics that help complete our view of consumer risk. In the meantime, let me know what scoring topics are on your mind.
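The score-layering idea above can be sketched in a few lines of code. This is a hypothetical illustration only: the function name, score ranges and cutoffs are made up for the example and are not Experian's actual models or thresholds.

```python
# Hypothetical sketch of layering a bankruptcy score on top of a
# traditional risk score. All cutoffs below are illustrative.

def layered_decision(risk_score: int, bankruptcy_score: int) -> str:
    """Combine two scores into a single credit decision.

    risk_score:       traditional risk score (higher = less risky)
    bankruptcy_score: bankruptcy-propensity score (higher = less likely to file)
    """
    if risk_score < 620:
        return "decline"   # fails the traditional risk cutoff outright
    if bankruptcy_score < 550:
        return "refer"     # passes on risk alone, but the bankruptcy
                           # layer flags the account for manual review
    return "approve"

# A consumer who looks good on the traditional score alone can still
# be caught by the bankruptcy layer before booking the account.
print(layered_decision(risk_score=700, bankruptcy_score=500))  # refer
print(layered_decision(risk_score=700, bankruptcy_score=700))  # approve
```

The same pattern extends to the prospecting case: swap the bankruptcy score for a response score and route only the approve-eligible, likely-to-open population into the campaign.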

Published: April 3, 2012 by Veronica Herrera

The strongest growth in new bankcard accounts is occurring in the near-prime and subprime segments of VantageScore® credit score tiers C, D and F. Year-over-year (Q1 2011 over Q1 2010) growth rates of 20 percent, 46 percent and 53 percent were observed for the respective tiers. Listen to our recent webinar featuring bankcard credit trends. Source: Experian-Oliver Wyman Market Intelligence Reports

Published: April 2, 2012 by Guest Contributor

The Consumer Financial Protection Bureau (CFPB) now has the ability to write and enforce 18 consumer protection laws that guide financial products and services. The new regulator has signaled the following issues as priorities:

- Clarity on how credit scores affect lender decisions: Beginning July 21, 2011, lenders were required to disclose the credit score that they used in all risk-based pricing notices and adverse action notices.
- Shorter and simpler consumer disclosure forms: One of the first priorities is to make the terms and conditions associated with purchasing a mortgage or applying for a credit card shorter and clearer.
- Enforcing the Fair Debt Collection Practices Act: The CFPB will enforce the Fair Debt Collection Practices Act and review current debt collector practices.

Learn more about the CFPB

Published: March 30, 2012 by Guest Contributor

Auditing provides the organization with assurance that all financial controls are in place to ensure that trust account funds are maintained, access to financial records at the vendor location is tightly controlled, customer data is secure, and the vendor is in full compliance with contractual requirements (i.e., minimum account servicing requirements for issues such as vendor actions following initial placement, ongoing contact efforts, remittance processing, settlement authorizations, etc.). From a best practices perspective, there are two basic auditing processes. The first is fairly common and involves an on-site audit.

Onsite Auditing

The onsite audit is required to examine financial records involving customer transactions, accounting processes, trust accounts and remittances, IT and security for handling sensitive customer information and records, and documentation of the processes used for customer communication, contact efforts and results. There are differences of opinion as to both the frequency of onsite audits and whether they should be announced or unannounced. Frequency is primarily a function of the size of the overall vendor relationship and the degree of financial risk associated with a default, typically involving the trust account and legal exposure to litigation and brand reputation risk. Most utilities should follow an annual onsite audit schedule due to the size of the portfolio and attendant financial exposure. While surprise audits have the advantage of reducing any opportunity for the vendor to alter records to cover up what would normally be audit violations, they do result in a more time-consuming onsite audit, as the vendor is unable to prepare and have documentation available. The use of substantial online documentation lessens this issue, but nonetheless, there is more work to do onsite when nothing has been captured for review in advance.
The other issue is that key managers from the vendor may not be available on the dates of a surprise audit. One way organizations resolve the question of surprise versus planned audits is to conduct one pre-planned, announced audit annually, plus a smaller-scale unannounced onsite audit each year at least two quarters apart from the planned audit. The use of remote auditing and monitoring lessens the requirements for onsite audits in most areas other than IT/security, financial and compliance.

Remote Auditing / Monitoring

Remote auditing and monitoring of vendors is a stronger tool for credit grantors to improve agency performance, in that it combines some of the same auditing checks for compliance and quality of collections with an added focus on performance. The best tool in the arsenal is the use of digital logging to capture the collection call between the vendor and customers. When digital logging technology first came into use, it was cost-prohibitive for collection agencies and not fully accepted by the industry due to its possible use in litigation against the agencies. Over the past decade, as deployment costs have fallen and agencies have become more attuned to customer satisfaction and full FDCPA compliance, more agencies have installed digital logging in their call centers. Most agencies that utilize digital logging capture every customer contact effort that results in either a right-party contact or a third-party contact for messages, etc. Many of those agencies will tag each of these voice records as to whether it was a right-party contact. The intent is to audit more frequently those calls that were right-party contacts, so that collector skills can be assessed in terms of how they managed the customer, their ability to create and sell a payment solution, and their ability to negotiate successfully with the customer.
Typically, an organization that is using digital logging will have an internal quality assurance team that creates a compliance audit process establishing a set number of right-party and non-right-party contacts per collector. These calls are typically evaluated using a scorecard approach and used for training purposes. The team might alter the frequency of monitoring, and thus the sample sizes, based upon the tenure of the collector, their performance levels, and the results of prior monitoring. Utilities and other credit grantors should have access to the entire inventory of raw digital log files for their projects and randomly audit them remotely. These should not be "assembled" or pre-selected by the vendor, so that there is no undue influence in deciding which calls or collectors to review. If the vendor maintains a separate scorecard-driven QA monitoring / compliance program, the credit grantor should have access to those calls and evaluations related to its own customers and contacts. The intent of the process is to listen both for the quality and effectiveness of the communication with the customer and to ensure compliance with the FDCPA and that the customer was treated appropriately and with respect. Typically, a credit grantor would establish a remote monitoring schedule where a set number of calls across the agent pool handling its assigned portfolio are monitored and evaluated with a simple checklist. Ideally, the credit grantor should be listening to a range of 100-150 right-party contacts per month, based upon overall inventory and activity, but typically no more than 1-2 percent of the total right-party contacts per month on its customer accounts. The credit grantor should provide monthly feedback to the agency regarding the monitoring of quality and compliance, noting any compliance or policy violations and any concerns relating to customer contact quality and performance.
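The sampling guideline above (roughly 100-150 right-party contacts per month, capped at about 1-2 percent of total right-party contacts) can be turned into a simple sizing helper. This is a sketch under those stated assumptions; the function name and default values are illustrative, not a prescribed standard.

```python
# Illustrative helper for sizing a monthly remote-monitoring sample of
# right-party contacts (RPCs): aim for roughly 100-150 calls, but never
# exceed about 2% of total RPCs for the month.

def monthly_sample_size(total_rpcs: int,
                        target_min: int = 100,
                        target_max: int = 150,
                        cap_pct: float = 0.02) -> int:
    cap = int(total_rpcs * cap_pct)
    # Stay within the target band, then apply the percentage cap last
    # so a small portfolio is never over-sampled.
    return min(max(target_min, min(target_max, cap)), cap)

print(monthly_sample_size(20_000))  # 150 (2% cap = 400, so the band applies)
print(monthly_sample_size(4_000))   # 80  (2% cap = 80 limits the sample)
```

For very small portfolios the cap dominates; for large ones the 150-call ceiling keeps the review workload manageable.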
Other aspects of the remote auditing that occurs each month should include ensuring that all servicing requirements are met. As an example, we would expect the agency to perform various external checks for bankruptcy, deceased status, etc., before initiating contact once the file has been received from the credit grantor as a placement. These database checks should be performed within the first 24 hours after placement, and then a letter of representation should be sent to those not flagged. The initial call to the customer should also occur within 24-48 hours of placement, subject to phone number availability. Skip trace processes should commence immediately on accounts without phone numbers. There are also requirements for how often accounts are attempted for contact, and for settlement authorizations. All of these requirements are verified through the remote monitoring that is performed.

Coming soon... Best practices for improving agency performance.

Published: March 29, 2012 by Guest Contributor

Even as interest rates remain at near-record lows, mortgage originations declined for the second quarter in a row in Q2 2011 to $268 billion, a 19 percent decline over the previous quarter. Refinance activity that spurred originations in 2010 has not been as prevalent this year. Listen to our recent Webinar on consumer credit trends and retail spending. Source: Experian-Oliver Wyman Market Intelligence Reports.  

Published: March 28, 2012 by Guest Contributor

By: Mike Horrocks

Henry Ford is credited with saying, "Coming together is a beginning. Keeping together is progress. Working together is success." This is so true with risk management, as you may consider bringing different business units, policies, etc., into a culture of enterprise risk management. Institutions that understand the concept of strength from unity are able to minimize risks at all levels and avoid exposure in unfamiliar areas.

So how can this apply in your organization? Is your risk management process united across all business lines, or are there potential chinks in your armor? Are you using different guidelines to manage risk as it comes in the door versus how you look at it once it is part of the portfolio, or are they closely unified in purpose?

Now don't get me wrong, I am not saying that blind cohesion is right for every risk management issue, but gaining efficiencies and consistencies can do wonders for your overall risk management process. Here are some great questions to help you evaluate where you are:

- Is there a well-understood risk management approach in place across the institution?
- How confident are you that risk management is a core competence of your institution?
- Does risk management run through the veins of the institution, or is it regarded as the domain of auditors and compliance?

A review of these questions may bring you closer to being one in purpose when it comes to your risk management processes. And while that oneness may not bring you Zen-like inner peace, it will bring your portfolio managers at least a little less stress.

Published: March 27, 2012 by Guest Contributor

VantageScore® Solutions LLC polled risk professionals about how they are measuring score performance, and 60 percent of respondents said they are now using metrics beyond the Kolmogorov-Smirnov (KS) statistic. One newer metric is score consistency, defined as the ability to provide near-identical risk assessments of a consumer across multiple credit reporting agencies. In other words, this means having confidence that when a consumer gets a 700 from one agency, he or she is likely to get a 700 from another agency. The other metric that risk managers referenced was stability, defined as the ability of a model to retain its predictive accuracy across an extended time frame.

Learn more about the VantageScore® credit score.

Source: VantageScore newsletter, April 2011. VantageScore® is owned by VantageScore Solutions, LLC.
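For readers unfamiliar with the baseline metric mentioned above, the KS statistic is simply the maximum separation between the cumulative score distributions of known "good" and known "bad" accounts; a higher KS means the score separates the two groups better. The sketch below uses tiny made-up score samples purely to show the mechanics.

```python
# Illustrative computation of the Kolmogorov-Smirnov (KS) statistic:
# the maximum gap between the cumulative distribution functions (CDFs)
# of scores for good vs. bad accounts. Sample scores are invented.

def ks_statistic(good_scores, bad_scores):
    thresholds = sorted(set(good_scores) | set(bad_scores))
    ks = 0.0
    for t in thresholds:
        cdf_good = sum(s <= t for s in good_scores) / len(good_scores)
        cdf_bad = sum(s <= t for s in bad_scores) / len(bad_scores)
        ks = max(ks, abs(cdf_bad - cdf_good))
    return ks

goods = [720, 750, 780, 800, 690, 760]
bads = [580, 610, 640, 700, 560, 630]
print(round(ks_statistic(goods, bads), 2))  # 0.83
```

Consistency and stability complement KS precisely because a model can separate goods from bads well on one bureau's data at one point in time, yet still drift across bureaus or across years.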

Published: March 26, 2012 by Guest Contributor

By: Joel Pruis

Some of you may be thinking, "Finally, we get to the meat of the matter." Yes, the decision strategies are extremely important when we talk about small business/business banking. Just remember how we got here, though; we first had to define:

- Who are we going to pursue in this market segment?
- How are we going to pursue this market segment (part 1 & part 2)?
- What are we going to require of the applicants to request the funds?

Without the above, we can create all the decision strategies we want, but their ultimate effectiveness will be severely limited, as they will not have a foundation for successful execution. First we are going to lay the foundation for how we will create the decision strategy. The next blog post (yes, there is one more!) will get into some more specifics. With that said, it is still important that we go through the basics of establishing the decision strategy.

Decision strategies based upon scorecards

Scorecards are not the same as investments, so we will not post the standard disclosure found in the financial reporting of public corporations and investment solicitations: "past performance is not an indication of future results." On the contrary, for scorecards, past performance is an indication of future results. Scorecards are saying that if all conditions remain the same, future results should follow past performance. This is the key. We need to fully understand what the expected results are to be for the portfolio originated using the scorecard. Therefore we need to understand the population of applications used to develop the scorecard; basically, the information that we had available to generate the scorecard. This ties directly to the information that we required of the applications to be submitted. As we understand the type of applications that we are taking from our client base, we can start to understand some expected results.
By analyzing what we have processed in the past, we can start to build a model for the expected results going forward; learn from the past and try not to repeat the mistakes we made. First we take a look at what we did approve and analyze the resulting performance of the portfolio. It is important to remember that we are not looking for the ultimate crystal ball, but rather a model that can work well to predict performance over the next 12 to 18 months. Those delinquencies and losses that take place 24, 36 or 48 months later should not and cannot be tied back to the information that was available at the time we originated the credit. We will talk about how to refresh the score and risk assessment in a later blog on portfolio management. As we see what was approved and demonstrated acceptable performance, we can look back at the applications we processed and see whether any applications that fit the acceptable profile were actually declined. If so, what were the reasons for the declinations? Do these reasons conflict with our findings based upon portfolio performance? If so, we may have found some additional volume of acceptable loans. I say "may" because statistics by themselves do not tell the whole story, so be cautious of blindly following the statistical data. My statistics professor in college drilled into us the principle that "correlation does not mean causation." Remember that the next time a study is featured on the news. The correlation may be interesting, but it does not necessarily mean that those factors "caused" the result. Just as important, challenge the results, but don't use outliers to disprove the results or the effectiveness of the models.
Once we have created the model and applied it to our typical application population, we can come up with some key metrics that we need to manage our decision strategies:

- Expected score distributions of the applications
- Expected approval percentage
- Expected override percentage
- Expected performance over the next 12-18 months

Expected score distributions

We build the models based upon what we expect to be the population of applications we process going forward. While we may target certain market segments, we cannot control the walk-in traffic, the referral volume or the businesses that will ultimately respond to our marketing efforts. Therefore we consider the normal application distribution and its characteristics, such as 1) score; 2) industry; 3) length of time in business; 4) sales size; etc. The importance of understanding and measuring the application/score distributions is demonstrated in the next few items.

Expected approval percentages

First we need to consider the approval percentage as an indication of what share of the business market we are extending credit to. Assuming we have a good representative sample of the business population in the applications we are processing, we need to determine what percentile of businesses will be our targeted market. Did our analysis show that we can accept the top 40 percent? 50 percent? Whatever the percentage, it is important that we continue to monitor our approval percentage to determine whether we are getting too conservative or too liberal in our decisioning. I typically counsel my clients that "just because your approval percentage is going up is not necessarily an improvement!" By itself, an increase in approval percentage is not necessarily good. I'm not saying that it is bad, just that when it goes up (or down!) you need to explain why. Was there a targeted marketing effort? Did you run into a short-term lucky streak? Or is it time to reassess the decision model and tighten up a bit?
Think about what happens in an economic expansion. More businesses are surviving (note I said surviving, not succeeding). Are more businesses meeting your minimum criteria? Has the overall population shifted up? If more businesses are qualifying but there has been no change in the industries targeted, we may need to increase our thresholds to maintain our targeted 50 percent of the market. Just because they met the standard criteria in the expansion does not mean they will survive a recession. "But Joel, the recession might be more than 18 months away, so we have a good client for at least 18 months, don't we?" I agree, but we have to remember that we built the model assuming all things remain constant. Therefore, if we are confident that the expansion will continue at the same pace ad infinitum, then go ahead and live with the increased approval percentage. I will challenge you that it is those applicants that "squeaked by" during the expansion that will make up the largest portion of the losses when the recession comes. I will also investigate the approval percentage when it goes down. Yes, you can make the claim that the scorecard is saying the risk is too great over the next 12-18 months, but again I will counter that if we continue to provide credit to the top 40-50 percent of all businesses, we are likely doing business with those clients that will survive and succeed when the expansion returns. Again, do the analysis of why the approval percentage declined.

Expected override percentage

While the approval percentage may fluctuate or stay the same, another area to be reviewed is the override. Overrides can be score overrides or decision overrides. A score override contradicts the decision that was recommended based upon the score and/or overall decision strategy. A decision override occurs when the market/field has approval authority and overturns the decision made by the central underwriting group.
Consequently you can have a score override, a decision override or both. Overrides can be an explanation for a change in approval percentages. While we anticipate a certain degree of overrides (say around 5 percent), should the overrides become too significant, we start to lose control of the expected outcomes of the portfolio performance. As such, we need to determine why the overrides have increased (or potentially decreased) and what impact the overrides have on the approval percentage. We will address some specifics around override management in a later blog. Suffice it to say, overrides will always be present, but we need to keep the volume of overrides within tolerances to be sure we can accurately assess future performance.

Expected performance over the next 12-18 months

The measure of expected performance is, at minimum, the expected probability/propensity of repayment. This may be labeled the bad rate or the probability of default (PD). In a nutshell, it is the probability that the credit facility will reach a certain level of delinquency over the next 12-18 months. Note that the base-level expected performance based upon score is not the expected "loss" on the account; that is a combination of the probability of default and the expected loss given default. For the purpose of this post we are talking about the probability of default, not the loss given default. For reinforcement: we are simply talking about the percentage of accounts that go 30, 60 or 90 days past due during the 12-18 months after origination.

So, bottom line: if we maintain the score distribution of the applications processed by the financial institution and maintain the approval percentage as well as the override percentage, we should be able to accurately assess the future performance of the newly originated portfolio.

Coming up next... A more tactical discussion of the decision strategy.

Published: March 23, 2012 by Guest Contributor

Lenders continued to increase their appetite for risk in Q2 2011, with new vehicle loans for customers with credit outside of prime increasing by 22.4 percent compared with the previous year. In Q2 2011, 22.29 percent of all new vehicle loans went to customers in the nonprime, subprime and deep-subprime categories, increasing from 18.21 percent in Q2 2010. The largest percentage increase in new car loans was in the category with the highest risk: deep subprime, which jumped 44.1 percent, moving from 1.48 percent of all new vehicle loans in Q2 2010 to 2.13 percent in Q2 2011. For more information on Experian Automotive's AutoCount® Risk Report, visit www.autocount.com Source: Automotive quarterly credit trends

Published: March 23, 2012 by Guest Contributor

Experian® QAS®, a leading provider of address verification software and services, recently released a new benchmark report on the data quality practices of top online retailers. The report revealed that 72 percent of the top 100 retailers are using some form of address verification during online checkout. This third annual benchmark report enables retailers to compare their online verification practices to those of industry leaders and provides tips for accurately capturing email addresses, a continuously growing data point for retailers. To find out how online retailers are utilizing contact data verification, download the complimentary report 2012 Address Verification Benchmark Report: The Top 100 Online Retailers. Source: Press release: Experian QAS Study Reveals Prevalence of Real-Time Address Verification Increasing Among Top Online Retailers.

Published: March 21, 2012 by Guest Contributor

A recent Experian credit trends study showcases the types of debts Americans have, the amounts they owe and the differences between generations. Nationally, the average debt in the United States is $78,030 and the average VantageScore® credit score is 751. The debt and VantageScore® credit score distribution for each group is listed below, with the 30 to 46 age group carrying the most debt and the youngest age group (19 to 29) carrying the least:

Age group     | Average debt | Average VantageScore® credit score
66 and older  | $38,043      | 829
47 to 65      | $101,951     | 782
30 to 46      | $111,121     | 718
19 to 29      | $34,765      | 672

Get your VantageScore® credit score here.

Source: To view the complete study, please click here. VantageScore® is owned by VantageScore Solutions, LLC.

Published: March 20, 2012 by Guest Contributor

In Q3 2011, $143 billion – or nearly 44 percent of the $327 billion in new mortgage originations – was generated by VantageScore® A tier consumers. This represents an increase of 35 percent for VantageScore A tier consumers when compared with originations for the quarter before ($106 billion, or 39 percent of total originations). Watch Experian's Webinar for a detailed look at the current state of strategic default in mortgage and an update on consumer credit trends from the Q4 2011 Experian-Oliver Wyman Market Intelligence Reports Source: Experian-Oliver Wyman Market Intelligence Reports. VantageScore® is owned by VantageScore Solutions, LLC.

Published: March 19, 2012 by Guest Contributor

In my last two posts on bankcard and auto originations, I provided evidence as to why lenders have reason to feel optimistic about their growth prospects in 2012. With real estate lending, however, the recovery, or lack thereof, looks like it may continue to struggle throughout the year.

At first glance, it would appear that the stars have aligned for a real estate turnaround. Interest rates are at or near all-time lows, housing prices are at post-bubble lows and people are going back to work, with the unemployment rate at a 3-year low just above 8 percent. However, mortgage originations and HELOC limits were at $327 billion and $20 billion for Q3 2011, respectively. Admittedly not all-time quarterly lows, but well off the levels of just a couple of years ago. And according to the Mortgage Bankers Association, 65 percent of the mortgage volume was from refinance activity.

So why the lull in real estate originations? Ironically, the same reasons I just mentioned that should drive a recovery.

Low interest rates – That is, for those who qualify. The most creditworthy, VantageScore® credit score A and B consumers made up nearly 77 percent of the $327 billion mortgage volume and 87 percent of the $20 billion HELOC volume in Q3 2011. While continuing to clean up their portfolios, lenders are adjusting their risk exposure accordingly.

Housing prices at multi-year lows – According to the S&P Case-Shiller index, housing prices were 4 percent lower at the end of 2011 than at the end of 2010 and at their lowest level since the real estate bubble. Prior to this report, many thought housing prices had stabilized, but the excess inventory of distressed properties continues to drive down prices, keeping potential buyers on the sidelines.

Unemployment rate at 3-year low – Sure, 8.3 percent sounds good now when you consider we were near 10 percent throughout 2010. But this is a far cry from the 4-5 percent rate we experienced just five years ago.
Many consumers continue to struggle, affecting their ability to make good on their debt obligations, including their mortgage (see "Housing prices at multi-year lows" above), in turn affecting their credit status (see "Low interest rates" above)... you get the picture. Ironic or not, the good news is that these same forces will drive the turnaround in real estate originations. Interest rates are projected to remain low for the foreseeable future, foreclosures and distressed inventory will eventually clear out, and the unemployment rate is headed in the right direction. The only missing ingredient needed to transform these variables from hurdles into growth factors is time.

Published: March 16, 2012 by Alan Ikemura
