By: Wendy Greenawalt

The economy has changed drastically in the last few years, and most organizations have had to reduce costs across their businesses to retain profits. Determining the appropriate cost-cutting measures requires careful consideration of trade-offs while weighing short- and long-term organizational priorities. Too often, cost-reduction decisions are driven by dynamic market conditions that demand quick decision-making. As a result, decisions are made without a sound understanding of their true impact on organizational objectives. Optimization, meaning the mathematical optimization of decisions, can be applied to virtually any business problem. Whether you are making decisions about outsourcing versus staffing, internal versus external project development, or cost-savings opportunities within a specific business unit, optimization can be applied. While some analytical requirements exist to obtain the highest business metric improvements, most organizations already have the data required to take full advantage of optimization technology. If you are using predictive models and credit attributes, and have multiple actions that can be taken on an individual consumer, then your organization can most likely benefit from optimized decision strategies. In my next few blogs, I will discuss how optimizing decisions can be used to create better strategies across an organization, whether your focus is marketing, risk, customer management or collections.
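To make the idea a little more concrete, here is a minimal sketch of decision optimization posed as a linear program. The segments, the profit and loss figures, and the loss budget are all hypothetical, and the formulation is deliberately simplified; it only illustrates the kind of constrained trade-off an optimization engine works through when choosing which accounts receive an action.

```python
# A minimal sketch of decision optimization as a linear program, using
# hypothetical segment-level figures (not Experian data or tooling).
import numpy as np
from scipy.optimize import linprog

# Three customer segments: expected profit and expected loss per account
# if the action (e.g., a credit line increase) is taken.
profit_per_account = np.array([120.0, 85.0, 40.0])    # assumed values
loss_per_account = np.array([15.0, 30.0, 55.0])        # assumed values
accounts_in_segment = np.array([10_000, 8_000, 5_000])

loss_budget = 400_000.0  # total expected loss the portfolio can absorb

# Decision variables: number of accounts treated in each segment.
# linprog minimizes, so negate profit to maximize it.
c = -profit_per_account
A_ub = [loss_per_account]           # total expected loss <= budget
b_ub = [loss_budget]
bounds = [(0, n) for n in accounts_in_segment]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Accounts to treat per segment:", np.round(res.x))
print("Expected incremental profit: $", round(-res.fun, 2))
```

In practice the objective, the constraints, and the eligible actions would come from your own models and policies; the value of the technique is that it finds the best feasible combination rather than the first acceptable one.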
By: Tom Hannagan

While waiting on the compilation of fourth quarter banking industry results, I thought it might be interesting to relate the commercial real estate (CRE) risk management position facing commercial banks as of the third quarter. CRE risk is an important consideration in enterprise risk management and for loan pricing and profitability.

The slowdown in the global economy has affected CRE credit risk through increased vacancy rates, halted development projects, and the loss of value affecting commercial properties. As CRE loans come up for renewal, many borrowers will find that they have equity deficits and that they are facing tightened credit standards. If a commercial property loan started life at 80 percent loan to value, and the property value has dropped 25 percent, the renewed loan balance will be down at least 25 percent, requiring a substantial net payoff from the borrower. This net cash payoff requirement would be tough to accomplish in good times and all but impossible for many borrowers in this economy. After all, the main reason for the decline in property value to begin with is its reduced cash flow performance.

Following the third quarter numbers, total U.S. commercial real estate debt is generally estimated at $3.4 to $3.5 trillion. Commercial banks owned just over half of that debt, or about $1.8 trillion, according to Federal Reserve and FDIC sources. The (possibly only) good news in that total is that commercial banks owned a relatively small share of the commercial mortgage-backed securities (CMBS) slice of CRE exposure. CMBS assets were 21 percent of total CRE credit, or $714 billion, but banks owned a total of $54 billion, which represented only 3 percent of total bank CRE assets. Unfortunately, the opposite is true for construction lending. U.S. banks, in total, had $486 to $534 billion (depending on the source) in construction and land loans, representing 27 percent to 30 percent of banks’ total CRE holdings.

The true credit risk management picture is much more revealing if we cut the numbers by bank size. According to Deutsche Bank research, the largest 97 banks (those with over $10 billion in total assets) had $14.8 trillion in total assets and $1.0 trillion of the banking industry’s CRE credits. This amounts to about 7 percent of the total assets for this group of larger banks. The 7,500 community banks, with aggregate assets of $2 trillion, had about $786 billion in CRE lending. This amounts to about 28 percent of total assets. That is roughly four times the level of exposure found in the larger banks. The 7 percent level of average credit risk exposure at the large bank group is less than their average level of equity or risk-based capital. For the banks under the $10 billion level, the 28 percent level of CRE exposure is almost three times their average equity position.

The riskiest portion of CRE lending is clearly the construction and land development loans. The subtotals in this area confirm where the cumulative risk lies. Again according to Deutsche Bank research, the largest 97 banks had $299 billion of the banking industry’s $534 billion in construction loans. Although this is 56 percent of total bank construction lending, it amounts to only 2 percent of this group’s total assets. The 7,500 community banks had aggregate construction loans of $235 billion. This amounts to about 8.5 percent of total assets. That is a bit over four times the level of exposure found in the larger banks.
The 2 percent level of construction credit risk exposure at the large bank group is one-fourth of their average level of common equity. At banks under the $10 billion level, the 8.5 percent level of CRE exposure, compared to total assets, is about the same as their average equity position.

According to Moody’s, banks have already taken about $90 billion in net loan losses on CRE assets through the third quarter of 2009. That means the industry has perhaps another $150 billion in write-offs coming, which would bring total CRE credit losses for the banking industry to $240 billion for this economic downturn. That would equate to 13.3 percent of the banking industry’s share of total CRE credit. With the decline in commercial property values ranging from 10 percent to 40 percent, a 13 percent loss is certainly not a worst-case scenario. Banks have ramped up their loss reserves, and although the numbers aren’t out yet, we know many banks used the fourth quarter of 2009 to further bolster their allowances for loan and lease losses (ALLL). The larger the ALLL, the safer the risk-based equity account.

Risk managers are aware of all of this, and banks are very actively developing their strategies to handle the refunding requirements and, at the same time, be in a position to explain to regulators and external auditors how they are protecting shareholders. But the numbers are very daunting, and not every bank will have enough net cash flow and risk equity to cover the inevitable losses.
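To put the renewal arithmetic from this post in concrete terms, here is a minimal sketch of the equity-gap calculation at renewal. The 80 percent loan-to-value limit and the 25 percent value decline are the figures discussed above; the property value and the assumption that the renewed loan is re-underwritten to the same LTV limit are mine, for illustration only.

```python
# A minimal sketch of the CRE renewal-gap arithmetic described above.
original_value = 1_000_000.0          # hypothetical property value at origination
max_ltv = 0.80                        # assumed LTV limit at origination and renewal
value_decline = 0.25                  # decline in property value from the post

original_loan = original_value * max_ltv             # 800,000
new_value = original_value * (1 - value_decline)     # 750,000
max_renewed_loan = new_value * max_ltv               # 600,000

required_paydown = original_loan - max_renewed_loan  # 200,000
print(f"Required net payoff at renewal: ${required_paydown:,.0f} "
      f"({required_paydown / original_loan:.0%} of the original balance)")
```

Under these assumptions the borrower must retire a quarter of the original balance in cash at renewal, which is exactly the squeeze the post describes.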
My last entry covered the benefits of consortium databases and industry collaboration in general as a proven and technologically feasible method for combating fraud across industries: they help minimize fraud losses. So, with some notable exceptions, why are so few industries and companies using fraud consortiums and known-fraud databases? In my experience, the reasons typically boil down to two things: reluctance to share data and the perception of ROI. I say "perception of ROI" because I firmly believe the ROI is there; in fact, it grows with the number of consortium participants.

First, reluctance to share data seems to stem from a few areas. One is concern for how that data will be used by other consortium members. This is usually addressed by requiring reciprocal data contribution from all members (the give-to-get model) as well as strict guidelines for acceptable use. In today’s climate of hypersensitivity, another concern – rightly so – is the stewardship of Personally Identifiable Information (PII). Given the potentially damaging effects of data breaches on consumers and businesses, smart companies are extremely cautious and careful when making decisions about safeguarding consumer information. So how does a data consortium deal with this? Firewalls, access control lists, encryption, and other modern security technologies provide the defenses necessary to protect information contributed to the consortium.

So, let’s assume we’ve overcome the obstacles to sharing one’s data. The other big hurdle to participation that I come across regularly is the old “what’s in it for me” question. Contributors want to be sure that they get out of it what they put into it. Nobody wants to be the only one, or the largest one, contributing records. In fact, this issue extends to intracompany consortiums as well. No line of business wants to be the sole sponsor, only to have other business units come late to the party and reap all the benefits on their dime. Whether within companies or across an industry, it’s clear that mutual funding, support, equitable operating rules, and clear communication of benefits – to contributors both big and small – are necessary for fraud consortiums to succeed. To get there, it’s going to take a lot more interest and participation from industry leaders. What would this look like? I think we’d see a large shift in companies’ fraud columns: from “Discovered” to “Attempted”. This shift would save time and money that could be passed back to legitimate customers. More participation would also enable consortiums to stay on top of changing technology and evolving consumer communication styles, such as email, text, mobile banking, and voice biometrics, to name a few.
By: Amanda Roth

Last week, we discussed why pricing with competition in mind is important to sound decision practices in loan pricing and profitability. Pricing too high for the market can obviously be detrimental to your organization. The other extreme can be just as dangerous. Pricing purely for your own profitability, regardless of what the competition is charging in your area, raises a few potential risk management issues.

For example, the statistics may say you can charge 5 percent in your “A” tier and still be profitable, but the competition is charging 7.5 percent for the same tier. You may be thinking that by offering 5 percent you will attract the “best of the best” to your organization. However, what your statistics may not be showing you is the risk outside of your current applicant base. If you significantly change the customers you are bringing in, does your risk increase as well, ultimately increasing the cost associated with each loan? Increased costs will reduce or even eliminate the profitability you had expected.

A second potential issue is setting expectations within the marketplace. Consumers generally understand that when interest rates change at the federal level, there will be changes at their local financial institution, and those changes are often very small. By undercutting your competition by such an extreme amount, you may find customers questioning any attempt to raise rates by more than 50 basis points, should you experience increased costs as a result of the earlier situation or any other factors. A safer strategy would be to charge between 6.5 percent and 7 percent, which allows you to attract some of the best customers, maintain stability within the market, and take advantage of additional profitability while it is available. This is a winning strategy for all, and an important consideration as you develop your portfolio risk management objectives.
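As a rough illustration of why the 5 percent offer can backfire, here is a back-of-the-envelope margin sketch. The funding, servicing, and loss-rate figures are assumptions chosen only to show the mechanics, not benchmarks; the point is that a shift in the applicant mix can erase the margin that the lower rate was supposed to protect.

```python
# A rough sketch of the pricing trade-off described above; all cost and
# loss-rate figures are assumptions for illustration only.
def profit_margin(rate, funding_cost, expected_loss, servicing_cost):
    """Annual profit per dollar lent, before capital charges."""
    return rate - funding_cost - expected_loss - servicing_cost

funding_cost = 0.02
servicing_cost = 0.01

# Pricing at 5% may pull in applicants from outside the historical "A" tier,
# raising the realized loss rate above what the statistics showed.
print(profit_margin(0.050, funding_cost, expected_loss=0.022,
                    servicing_cost=servicing_cost))   # roughly -0.2%
# Pricing at 6.75% with the loss rate the "A" tier statistics were built on.
print(profit_margin(0.0675, funding_cost, expected_loss=0.015,
                    servicing_cost=servicing_cost))   # roughly +2.25%
```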
There was a recent discussion among members of the Anti Fraud experts group on LinkedIn regarding collaboration among financial institutions to combat fraud. Most posters agreed on the benefits of such collaboration but were cynical when it came to anything of substance, such as a shared data network, getting off the ground. I happen to agree with some of the opinions on the primary challenges faced in getting cross-industry (or even single-industry!) cooperation to prevent both consumer and commercial fraud, those being: 1) sharing data and 2) return on investment.

Despite the challenges, there are some fraud prevention and “negative” file consortium databases available in the market as fraud prevention tools. They’re often used in conjunction with authentication products in an overall risk-based authentication / fraud deterrence strategy. Some are focused on the Demand Deposit Account (DDA) market, such as Fidelity’s DebitBureau, while others, like Experian’s own National Fraud Database, address a variety of markets. Early Warning Services has a database of both “account abuse” – aka DDA financial mismanagement – and fraud records. Still others, like Ethoca and the UK’s 192.com, seem focused on merchant data and online retailers. Regardless of the consortium, they share some common traits. Most:

- fall under Fair Credit Reporting Act regulation
- are used in the acquisition phase as part of the new account decision
- require contribution of data to access the shared data network

Given the seemingly general reluctance to participate in fraud consortiums, as evidenced by the group described above, how do we assess the value of these consortium databases? Well, for one, most U.S. banks and credit unions participate in and contribute customer behavior data to a consortium. It is safe to say, then, that the banking industry has recognized the value of collaboration and sharing data with each other, if not exclusively to minimize fraud losses then at least to manage potential risk at acquisition. I’m speaking here of the DDA financial mismanagement data used under the guiding principle of “past performance predicts future results”. Consortium data that includes confirmed fraud records makes the value of collaboration even clearer: a match to one of these records compels further investigation and a more cautious review of the transaction or decision. With this much to gain, why aren’t more companies and industries rushing to join or form a consortium? In my next post, I’ll explore the common objections to joining consortiums and what the future may look like.
As the economic environment changes on what feels like a daily basis, information about consumer credit trends and the future direction of credit becomes invaluable for planning and achieving strategic goals. I recently had the opportunity to speak with members of the collections industry about collections strategy and collections change management -- and discussed the use of business intelligence data in their industry. I was surprised at how little analysis was conducted in terms of anticipating strategic changes in the economic and credit factors that impact the collections business. Mostly, it seems that anecdotal information and media coverage are used to get ‘a feeling’ for the direction of the economy and thus the collections industry. Clearly, there are opportunities to understand these high-level changes in more detail. As a result, I wanted to review some business intelligence capabilities that Experian offers – and to expand on the opportunities I think exist for collections firms to leverage data and better inform their decisions (a brief illustration follows at the end of this post):

* Experian possesses the ability to capture the entire consumer credit perspective, allowing collections firms to understand trends that consider all consumer relationships.
* Within each loan type, insights are available by analyzing loan characteristics such as number of trades, balances, revolving credit limits, trade ages, and delinquency trends. These metrics can help define market sizes and relative delinquency levels, and identify segments where accounts are curing faster or more slowly, impacting collectability.
* Layering in geographic detail can reveal more granular segment trends, creating segments for both macro- and regional-level credit characteristics.
* Experian Business Intelligence has visibility into the type of financial institution, allowing for a market-by-market view of credit patterns and trends.
* Risk profiling by VantageScore can shed light on credit score trends, breaking down larger segments into smaller score-based segments and identifying pockets of opportunity and risk.

I’ll continue to consider the opportunities for collections firms to leverage business intelligence data in subsequent blogs, where I’ll also discuss the value of credit forecasting to the collections industry.
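As a rough illustration of the segment-level roll-up described in the list above, here is a minimal sketch using the pandas library on a made-up tradeline extract. The column names, score bands, and figures are assumptions for illustration, not Experian fields or data.

```python
# A minimal sketch of a segment-level delinquency roll-up on hypothetical data.
import pandas as pd

trades = pd.DataFrame({
    "region":        ["West", "West", "South", "South", "South"],
    "vantage_band":  ["601-700", "701-800", "601-700", "601-700", "701-800"],
    "balance":       [12_400, 8_900, 15_200, 9_800, 7_600],
    "days_past_due": [60, 0, 90, 30, 0],
})

summary = (
    trades.assign(delinquent=trades["days_past_due"] >= 30)
          .groupby(["region", "vantage_band"])
          .agg(total_balance=("balance", "sum"),
               delinquency_rate=("delinquent", "mean"),
               accounts=("balance", "size"))
)
print(summary)
```

The same pattern scales to full credit-bureau extracts: the grouping keys become the segments you want to track, and the aggregates become the market-size and delinquency measures that inform collections strategy.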
By: Amanda Roth

Doesn’t that sound strange: pricing WITH competition? We are familiar with the sayings of pricing for competition and pricing to be competitive, but did you ever think you would need to price with competition? When developing a risk-based pricing program, it is important to make sure you do not price against the competition at either extreme. Some clients decide they want to price lower than the competition regardless of how it impacts their profitability, while others price only for profitability without any regard for their competition. As we discussed last week, risk-based pricing is 80 percent statistics and 20 percent art -- and competition is part of the artistic portion.

Once you complete your profitability analysis (refer to the 12/28/2009 posting), you will often need to massage the final interest rate to be applied to loan applications. If the results of the analysis say your interest rate needs to be 8.0 percent in your “A” tier to guarantee profitability, but your competition is only charging 6.0 percent, there could be a problem if you go to market with that pricing strategy. You will probably see much of your application volume dry up, especially among the low-risk customers who can obtain the best rates a lender has to offer.

Creativity is the approach you must take to become more competitive while still maintaining profitability. It may be an approach of offering the 6.0 percent rate to the best 10 percent of your applicant base only, while charging slightly higher rates in your “D” and “E” tiers. Another option may be to look internally at processing efficiencies to determine if there is a way to decrease the overall cost associated with the decision process. Are there decision strategies in place that are creating a manual decision when more could be automated? Pricing higher than the market rate can be detrimental to any organization, so it is imperative to apply an artistic approach while maintaining the integrity of the statistical analysis. Join us next week as we continue this topic of pricing with competition, which is, again, an important consideration when developing a risk-based pricing program.
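One way to picture the "best 10 percent" idea is a quick blended-rate check across tiers. The tier mix and rates below are invented for illustration; the point is only that a deep discount for the very best applicants can be roughly offset by modest increases in the higher-risk tiers, leaving the blended rate close to what the profitability analysis called for.

```python
# A simplified sketch of re-balancing rates across tiers; tier shares and
# rates are assumptions, not recommendations.
tiers = {
    #             (share of volume, base rate from the profitability analysis)
    "A+ (best 10%)": (0.10, 0.080),
    "A":             (0.30, 0.080),
    "B":             (0.30, 0.095),
    "C":             (0.15, 0.110),
    "D":             (0.10, 0.130),
    "E":             (0.05, 0.150),
}

def blended_rate(rate_overrides=None):
    overrides = rate_overrides or {}
    return sum(share * overrides.get(tier, rate)
               for tier, (share, rate) in tiers.items())

print(f"Original blended rate: {blended_rate():.3%}")
# Match the market at 6.0% for the best 10%, recover it in the D and E tiers.
adjusted = {"A+ (best 10%)": 0.060, "D": 0.140, "E": 0.165}
print(f"Adjusted blended rate: {blended_rate(adjusted):.3%}")
```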
By: Ken Pruett

I thought it might be helpful to give an example of a recent performance monitoring engagement to show just how the performance monitoring process can help. The organization in question has been using Knowledge Based Authentication for several years. They issue retail credit cards through their online channel, an area that usually experiences a higher rate of fraud. The Knowledge Based Authentication product is used prior to credit being issued.

The performance monitoring process involved the organization providing us with a sample of approximately 120,000 records, of which some were good and some were bad. Analysis showed that they had a 25 percent referral rate -- but they were concerned about the number of frauds they were catching. They felt that too many frauds were getting through; they believed the fraud process was probably too lenient. Based on their input, we started a detailed analytic exercise with the intention, of course, of minimizing fraud losses. Our study found that, by changing several criteria items in the set-up, the organization was able to bring the tool more in line with expectations. By lowering the pass rate by only 9 percent, they increased their fraud find rate by 27 percent. This was much more in line with their goals for this process. In this situation, a score was being used, in combination with the customer's ability to answer questions, to determine the overall accept or refer decision. The change to the current set-up involved requiring customers to answer at least one more question in combination with certain scores. Although the change was minor in nature, it yielded fairly significant results.

Our next step in the engagement involved looking at the questions. Analysis showed that some questions should be eliminated due to poor performance: they were not really separating fraud, so removing them would benefit the overall process. We also determined that some questions performed very well, and we recommended that these questions carry a higher weight in the overall decision process. For example, a customer might be required to answer only two of the higher-weighted questions correctly, versus three of the lesser-performing questions. The key here is to keep pass rates up while still preventing fraud. Striking this delicate balance is the key objective. As you can see from this example, this is an ongoing process, but the value in that process is definitely worth the time and effort.
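For readers who like to see the arithmetic, here is a minimal sketch of the two headline metrics in this kind of review. The record counts are fabricated, chosen only so the before-and-after figures loosely echo the 25 percent referral rate and the pass-rate versus fraud-find trade-off described above; a real engagement would work from the organization's own tagged outcomes.

```python
# A minimal sketch of pass rate and fraud find rate on hypothetical counts.
def kba_metrics(passed_good, passed_fraud, referred_good, referred_fraud):
    total = passed_good + passed_fraud + referred_good + referred_fraud
    total_fraud = passed_fraud + referred_fraud
    pass_rate = (passed_good + passed_fraud) / total
    fraud_find_rate = referred_fraud / total_fraud   # frauds stopped for review
    return pass_rate, fraud_find_rate

before = kba_metrics(passed_good=88_000, passed_fraud=2_000,
                     referred_good=29_000, referred_fraud=1_000)
after = kba_metrics(passed_good=78_500, passed_fraud=1_200,
                    referred_good=38_500, referred_fraud=1_800)

for label, (pass_rate, find_rate) in (("before", before), ("after", after)):
    print(f"{label}: pass rate {pass_rate:.1%}, fraud find rate {find_rate:.1%}")
```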
We've recently discussed management of risk, collections strategy, credit attributes, and the like for the bank card, telco, and real estate markets. This blog will provide insights into the trends of the automotive finance market as of the third quarter of 2009.

In terms of credit quality, the market has been relatively steady in year-over-year comparisons. The subprime group saw the biggest change in risk distribution from 3Q08, with a -3.74 percent shift. Overall, balances have declined to just over $673 billion (-4 percent). In 3Q09, banks held the largest share of outstanding automotive balances at $241 billion (with captive auto next at $203 billion). Credit unions had the largest increase from 3Q08 (up $5 billion) and the finance/other group had the largest decrease in balances (down $23 billion).

How are automotive loans performing? Total 30- and 60-day delinquencies are still on the rise, but the rate of increase of 30-day delinquencies appears to be slowing. Prime plus consumers dominate new originations (66 percent), up by 10 percent. Lending criteria have tightened and, as a result, we see scores on both new and used vehicles continue to increase. For new-vehicle buyers, over 83 percent are Prime plus; for used-vehicle buyers, over 53 percent are Prime plus. The average credit score for new vehicles rose from 762 in 3Q08 to 775 in 3Q09, up 13 points. For used vehicles over the same period, it rose from 670 to 684, up 14 points.

Lastly, let’s take a look at how financing has changed from 3Q08 to 3Q09. The amounts financed and monthly payments have dropped year-over-year, as have the average term and average rate.

Source: State of the Automotive Finance Market, Third Quarter 2009 by Melinda Zabritski, director of Automotive Credit at Experian, and Experian-Oliver Wyman Market Intelligence Reports
By: Tom Hannagan

Apparently my last post on the role of risk management in the pricing of deposit services hit some nerve ends. That’s good. The industry needs its “nerve ends” tweaked after the dearth of effective risk management that contributed to the financial malaise of the last couple of years. Banks, or any business, cannot prosper by simply following their competitors’ marketing strategies and meeting or slightly undercutting their prices. The actions of competitors are an important piece of intelligence to consider, but not necessarily optimal for your bank to copy. One question I received was about the “how-to” behind risk-based pricing (RBP) of deposits. The answer has four parts. Let’s see.

First, because of the importance and size of the deposit business (yes, it’s a line of business) as a funding source, one needs to isolate the interest rate risk. This is done by transfer pricing, or in a sense, crediting the deposit balances for their marginal value as an offset to borrowing funds. This transfer price has nothing to do with the earnings credit rate used in account analysis – that is a merchandising device used to generate fee income. Fees resulting from account analysis, when not waived, affect the profitability of deposit services, but are not a risk element. Two things are critical to the transfer of funding credit: 1) the assumptions regarding the duration, or reliability, of the deposit balances and 2) the rate curve used to match the duration. Different types of deposits behave differently based on changes in rates paid. Checking account deposit funds tend to be very loyal or “sticky” - they don’t move around a lot (or easily) because of the rate paid, if any. At the other extreme, time deposits tend to be very rate-sensitive and can move (in or out) for small incremental gains. Savings, money market and NOW accounts are in between. Since deposits are an offset (ultimately) to marginal borrowing, just as loans might (ultimately) require marginal borrowing, we recommend using the same rate curve for both asset and liability transfer pricing. The money is the same thing on both sides of the balance sheet, and the rate curve used to fund a loan or credit a deposit should be the same. We believe this will help greatly to isolate IRR. It also seems fairer when explaining the concept to line management.

Secondly, although there is essentially no credit risk associated with deposits, there is operational risk. Deposits make up most of the liability side of the balance sheet and therefore the lion’s share of institutional funding. Deposits are also a major source of operational expense. The mitigated operational risks, such as physical security, backup processing arrangements, various kinds of insurance and catastrophe plans, are normal expenses of doing business and are included in a bank’s financial statements. These costs need to be broken down by deposit category to get a picture of the risk-adjusted operating expenses.

The third major consideration for analyzing risk-adjusted deposit profitability is its revenue contribution. Deposit-related fee income can be a very significant number and needs to be allocated to the particular deposit category that generates it. This is an important aspect of the return, along with the risk-adjusted funding value of the balances, and it will vary substantially by deposit type. Time deposits have essentially zero fee income, whereas checking accounts can produce significant revenues.

The fourth major consideration is capital.
There are unexpected losses associated with deposits that must be covered by risk-based capital, or equity. The unexpected losses include unmitigated operational risks, any error in transfer pricing the market risk, and business or strategic risk. Although the unexpected losses associated with deposit products are substantially smaller than those found in lending products, they need to be taken into account to have a fully risk-adjusted view. It is also necessary to be able to compare the risk-adjusted profit and profitability of services as diverse as those found within banking. Enterprise risk management needs to consider all of the lines of business, and all of the products of the organization, on a risk-adjusted performance basis. Otherwise it is impossible to decide on the allocation of resources, including precious capital. Without this risk management view of deposits (just as with loans) it is impossible to price the services in a completely knowledgeable fashion. Good entity governance, asset and liability posturing, and competent line-of-business management all require more and better risk-based profit considerations to be an important part of the intelligence used to optimally price deposits.
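Pulling the four parts together, here is a simplified sketch of a risk-adjusted profitability calculation for a single deposit category. Every rate and dollar figure is an assumption for illustration; the structure (matched-duration funding credit, operating expense, fee income, and a capital charge) is the point, not the numbers.

```python
# A simplified sketch of risk-adjusted deposit profitability; all inputs are
# hypothetical and would come from a bank's own FTP curve and cost studies.
def deposit_risk_adjusted_profit(balances, transfer_credit_rate, interest_paid_rate,
                                 fee_income, operating_expense,
                                 allocated_capital, hurdle_rate):
    funding_value = balances * (transfer_credit_rate - interest_paid_rate)
    pre_capital_profit = funding_value + fee_income - operating_expense
    capital_charge = allocated_capital * hurdle_rate
    return pre_capital_profit - capital_charge

# Hypothetical checking-account portfolio.
profit = deposit_risk_adjusted_profit(
    balances=50_000_000,          # average balances
    transfer_credit_rate=0.030,   # matched-duration credit from the transfer curve
    interest_paid_rate=0.002,     # rate actually paid on the accounts
    fee_income=900_000,           # account-analysis and service fees
    operating_expense=1_400_000,  # risk-adjusted operating cost for the category
    allocated_capital=1_000_000,  # capital held against operational/strategic risk
    hurdle_rate=0.12,
)
print(f"Risk-adjusted contribution: ${profit:,.0f}")
```

Running the same calculation by deposit category is what makes the comparison across checking, savings, money market and time deposits (and against the loan book) possible.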
Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance? If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics like how many records I have in my database with name and address populated. And how many addresses do I typically have for consumers? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do. With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and social security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me.

Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old; what are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the more sexy aspects of fraud detection (such as predictive scoring, out of wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
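To make the coverage, pin-rate, and match-rate ideas concrete, here is a minimal sketch against a tiny, made-up reference file. The records, field names, and exact-string matching are all simplifications for illustration; real identity matching is far fuzzier and more sophisticated than this.

```python
# A minimal sketch of coverage, pin rate, and name/address match rate on
# hypothetical data; all records and fields are fabricated.
reference_file = [
    {"name": "JANE DOE",  "address": "12 OAK ST", "ssn": "111-11-1111"},
    {"name": "JOHN ROE",  "address": None,        "ssn": "222-22-2222"},
    {"name": "AMY SMITH", "address": "9 ELM AVE", "ssn": "333-33-3333"},
]

inquiries = [
    {"name": "JANE DOE", "address": "12 OAK ST"},
    {"name": "JOHN ROE", "address": "48 PINE RD"},
    {"name": "BOB LEE",  "address": "77 MAPLE DR"},
]

# Coverage: how many reference records have both name and address populated.
coverage = sum(1 for r in reference_file if r["name"] and r["address"]) / len(reference_file)

# Pin rate: how often an inquiry can be located in the reference file at all.
def pinned(inq):
    return any(r["name"] == inq["name"] for r in reference_file)

pin_rate = sum(pinned(i) for i in inquiries) / len(inquiries)

# Match rate: how often the supplied name and address agree with the file.
match_rate = sum(
    1 for i in inquiries
    if any(r["name"] == i["name"] and r["address"] == i["address"] for r in reference_file)
) / len(inquiries)

print(f"Name+address coverage: {coverage:.0%}")
print(f"Pin rate:              {pin_rate:.0%}")
print(f"Name/address match:    {match_rate:.0%}")
```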
In a continuation of my previous entry, I’d like to take the concept of the first-mover and specifically discuss the relevance of this to the current bank card market. Here are some statistics to set the stage:

• Q2 2009 bankcard origination levels are now at 54 percent of Q2 2008 levels
• In Q2 2009, bankcard originations for subprime and deep-subprime were down 63 percent from Q2 2008
• New average limits for bank cards are down 19 percent in Q2 2009 from their peak in Q3 2008
• Total unused limits continued to decline in Q3 2009, decreasing by $100 billion in Q3 2009

Clearly, the bank card market is experiencing a decline in credit supply, along with deterioration of credit performance and problematic delinquency trends, and yet in order to grow, lenders are currently determining the timing and manner in which to increase their presence in this market. In the following points, I’ll review just a few of the opportunities and risks inherent in each approach that could dictate how this occurs.

Lender chooses to be a first-mover:

• Mining for gold – lenders currently have an opportunity to identify long-term profitable segments within larger segments of underserved consumers. Credit score trends show a number of lower-risk consumers falling to lower score tiers, and within this segment, there will be consumers who represent highly profitable relationships. Early movers have the opportunity to access these consumers with unrealized creditworthiness at their most receptive moment, and thus have the ability to achieve extraordinary profits in underserved segments.
• Low acquisition costs – the lack of new credit flowing into the market would indicate a lack of competitiveness in the bank card acquisitions space. As such, a first-mover would likely incur lower acquisition costs as consumers have fewer options and alternatives to consider.
• Adverse selection – given the high utilization rates of many consumers, lenders could face an abnormally high adverse selection issue, where a large number of the most risky consumers are likely to accept offers to access much-needed credit, creating risk management issues.
• Consumer loyalty – whether through switching costs or loyalty incentives, first-movers have an opportunity to achieve retention benefits from the development of new client relationships in a vacant competitive space.

Lender chooses to be a secondary or late-mover:

• Reduced risk by allowing the first-mover to experience growing pains before entry – the implementation of new acquisition and risk-based pricing management techniques under new bank card legislation will not be perfected immediately. Second-movers will be able to read and react to the responses to first-movers’ strategies (measuring delinquency levels in new subprime segments) and refine their pricing and policy approaches.
• One of the most common first-mover advantages is the presence of switching costs for the customer. With minimal switching costs in place in the bank card industry, dealing with an incumbent is not a significant hurdle: second-movers would be able to steal market share with relative ease.
• Cherry-picked opportunities – as noted above, many previously attractive consumers will have been engaged by the first-mover, challenging the second-mover to find the remaining attractive segments within the market.
For instance, economic deterioration has resulted in short-term joblessness for some consumers who might otherwise be strong credit risks once their capacity to repay returns. Once these consumers are mined by the first-mover, the second-mover will likely incur greater costs to acquire them. Whether lenders choose to be first to market, or follow as a second-mover, there are profitable opportunities and risk management challenges associated with each strategy. Academics and bloggers continue to debate the merits of each [1], but it is ultimately the lenders of today that will provide the proof.

[1] http://www.fastcompany.com/magazine/38/cdu.html
By: Ken Pruett

The use of Knowledge Based Authentication (KBA), or out of wallet questions, continues to grow. For many companies, this solution is used as one of the primary means of fraud prevention. Selecting the proper tool often involves a fairly significant due diligence process to evaluate various offerings before choosing the right partner and solution; companies just want to make sure they make the right choice. I am often surprised, then, that a large percentage of customers simply turn these tools on and never evaluate or even validate their ongoing performance. Performance monitoring is a way to make sure you are getting the most out of the product you are using for fraud prevention.

This exercise is really designed to take an analytical look at what you are doing today when it comes to Knowledge Based Authentication. There are a variety of benefits that most customers experience after undergoing this fraud analytics exercise. The first is simply validating that the tool is working properly. Some questions to ponder: Are enough frauds being identified? Is the manual review rate in line with what was expected? In almost every one of these engagements I have worked on, there were areas that were not in line with what the customer was hoping to achieve. Many had no idea that they were not getting the expected results.

Taking this one step further, changes can also be made to improve upon what is already in place. For example, you can evaluate how well each question is performing. The analysis can show you which questions are doing the best job at predicting fraud. Using better-performing questions can allow you to find more fraud while referring fewer applications for manual review. This is a great way to optimize how you use the tool.

In most organizations there is increased pressure to make sure that every dollar spent is bringing value to the organization. Performance monitoring is a great way to show the value that your KBA tool is bringing to the organization. The exercise can also be used to show how you are proactively managing your fraud prevention process, by demonstrating how well you are optimizing your use of the tool today while addressing emerging fraud trends. The key message is to continuously measure the performance of the KBA tool you are using. An exercise like performance monitoring can provide great insight on a quarterly basis. This will allow you to get the most out of your product and help you keep up with a variety of emerging fraud trends. Doing nothing is really not an option in today’s ever-changing environment.
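As a rough sketch of what question-level evaluation can look like, the snippet below compares correct-answer rates for good and fraudulent applicants by question. The question names and counts are fabricated purely to illustrate the comparison; a real review would use the organization's own tagged outcomes and more robust separation measures.

```python
# A rough sketch of question-level performance review on fabricated counts:
# a wider gap between good and fraud correct-answer rates suggests a question
# that separates fraud better.
question_stats = {
    # question id: (good correct, good total, fraud correct, fraud total)
    "prior_street_name": (9_200, 10_000, 150, 1_000),
    "auto_loan_lender":  (8_700, 10_000, 600, 1_000),
    "monthly_payment":   (6_100, 10_000, 550, 1_000),
}

for question, (gc, gt, fc, ft) in question_stats.items():
    good_rate, fraud_rate = gc / gt, fc / ft
    separation = good_rate - fraud_rate   # bigger gap = better discriminator
    print(f"{question:18s} good {good_rate:.0%}  fraud {fraud_rate:.0%}  "
          f"separation {separation:+.0%}")
```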
By: Amanda Roth

The reality of risk-based pricing is that there is no single "end-all, be-all" way of determining what pricing should be applied to your applicants. The truth is that statistics will only get you so far. They may get you 80 percent of the final answer, but to whom is 80 percent acceptable? The other 20 percent must also be addressed. I am specifically referring to those factors that are outside of your control. For example, does your competition's pricing impact your ability to price loans? Have you thought about how loyal-customer discounts or incentives may contribute to the success or demise of your program? Do you have a sensitive population that may have a significant reaction to any risk-based pricing changes? These questions must be addressed for sound pricing and risk management. Over the next few weeks, we will look at each of these questions in more detail, along with tips on how to apply the answers in your organization. As the new year is often a time of reflection and change, I would encourage you to let me know what experiences you may be having in your own programs. I would love to include your thoughts and ideas in this blog.
To calculate the expected business benefits of making an improvement to your decisioning strategies, you must first identify and prioritize the key metrics you are trying to positively impact. For example, if one of your key business objectives is improved enterprise risk management, then some of the key metrics you seek to impact, in order to effectively address changes in credit score trends, could include reducing net credit losses through improved credit risk modeling and scorecard monitoring. Assessing credit risk is a key element of enterprise risk management and can be addressed as part of your application risk management processes, as well as in other decisioning strategies applied at different points in the customer lifecycle.

In working with our clients, Experian has identified 15 key metrics that can be positively impacted through optimizing decisions. As you review the list of metrics below, you should identify those that are most important to your organization.

• Approval rates
• Booking or activation rates
• Revenue
• Customer net present value
• 30/60/90-day delinquencies
• Average charge-off amount
• Average recovery amount
• Manual review rates
• Annual application volume
• Charge-offs (bad debt & fraud)
• Average cost per dollar collected
• Average amount collected
• Annual recoveries
• Regulatory compliance
• Churn or attrition

Based on Experian’s extensive experience working with clients around the world to achieve positive business results through optimizing decisions, you can expect between a 10 percent and 15 percent improvement in any of these metrics through the improved use of data, analytics and decision management software. The initial high-level business benefit calculation, therefore, is quite important and straightforward. As an example, assume your current approval rate for vehicle loans is 65 percent, the average value of an approved application is $200 and your volume is 75,000 applications per year. Keeping all else equal, a 10 percent improvement in your approval rate (from 65 percent to roughly 72 percent) would lift the annual value of approved applications to about $10.7 million ($200 x 75,000 x .65 x 1.1), an incremental gain of nearly $1 million each year. To prioritize your business improvement efforts, you’ll want to calculate expected business benefits across a number of key metrics and then focus on those that will deliver the greatest value to your organization.
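Here is a minimal sketch of that high-level benefit calculation using the vehicle-loan figures from the example above. The 10 percent uplift is the low end of the quoted range, and the helper function is purely illustrative.

```python
# A minimal sketch of the approval-rate benefit calculation described above.
def approval_uplift_value(avg_value, annual_volume, approval_rate, uplift):
    baseline = avg_value * annual_volume * approval_rate
    improved = baseline * (1 + uplift)
    return baseline, improved, improved - baseline

baseline, improved, gain = approval_uplift_value(
    avg_value=200, annual_volume=75_000, approval_rate=0.65, uplift=0.10)

print(f"Baseline annual value: ${baseline:,.0f}")
print(f"Improved annual value: ${improved:,.0f}")   # roughly $10.7 million
print(f"Incremental value:     ${gain:,.0f}")
```

Repeating the same calculation for each metric you shortlisted (delinquencies, recoveries, review rates, and so on) gives you a quick, comparable view of where an improved decisioning strategy would deliver the most value.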