
Recently, the Commerce Department reported that consumer spending levels continued to rise in February, increasing for the fifth straight month *, while flat income levels drove savings levels lower. At the same time, media outlets such as Fox Business reported that the consumer “shopping cart” ** showed price increases for the fourth straight month. Somewhat in opposition to this market trend, the Q4 2009 Experian-Oliver Wyman Market Intelligence Reports reveal that the average level of credit card debt per consumer decreased overall, with an increase in only one score band. In the Q4 reports, the score band that demonstrated balance increases was VantageScore® credit score A – the super-prime consumer – whose average balance went up $30 to $1,739. In this time of economic challenge and pressure on household incomes, it’s interesting to see that lower-scoring consumers display the characteristics of improved credit management and deleveraging, while at the same time, consumers with credit scores in the low-risk tiers may be showing signs of increased expenses and deteriorating savings. Recent delinquency trends support the view that low-risk consumers are deteriorating in performance for some product vintages. Even more interestingly, Chris Low, Chief Economist at FTN Financial in New York, was quoted as saying, “I guess the big takeaway is that consumers are comfortably consuming again. We have positive numbers five months in a row since October, which I guess is a good sign.” I suggest that more analysis needs to be applied to the details of these figures to determine whether consumers really are ‘comfortable’ with their spending, or whether this is just a broad assumption that masks the uncomfortable realities that lie within.

By: Ken Pruett

I want to touch a bit on some of the third-party fraud scenarios that are often top of mind with our customers: identity theft, synthetic identities, and account takeover.

Identity Theft

Identity theft usually occurs during the acquisition stage of the customer life cycle. Simply put, identity theft is the use of stolen identity information to fraudulently open a new account. These accounts do not have to be credit card related; for example, there are instances of people using others’ identities to open wireless phone and utilities accounts. Recent fraud trends show this type of fraud is on the rise again after decreasing over the past several years. A recent Experian study found that people with better credit scores are more likely to have their identity stolen than those with very poor credit scores. It does seem logical that fraudsters would opt to steal an identity from someone with higher credit limits and available purchasing power. This type of fraud gets the majority of media attention because it is the consumer who is often the victim (as opposed to a major corporation). Fraud changes over time, and recent findings show that looking at data from a historical perspective is a good way to help prevent identity theft. For example, if you see a phone number being used by multiple parties, this could be an indicator of a fraud ring in action. Using these types of data elements can make your fraud models much more predictive and reduce your fraud referral rates.

Synthetic Identities

Synthetic identities are another acquisition fraud problem. This fraud is similar to identity theft, but the information used is fictitious in nature. The fraud perpetrator may take pieces of information from a variety of parties to create a new identity. Trade lines may be purchased from companies that act as middlemen between consumers with good credit and perpetrators creating new identities. This strategy allows the fraud perpetrator to quickly create a fictitious identity that looks like a real person with an active and good credit history. Most of the trade lines will be for authorized users only. The perpetrator opens up a variety of accounts in a short period of time using the trade lines. When creditors try to collect, they can’t find the account owners because they never existed. As Heather Grover mentioned in her blog, this fraud has leveled off in some areas and even decreased in others, but it is probably still worth keeping an eye on. One particular concern is that these identities are sometimes used for bust-out fraud. The best approach to predicting this type of fraud is using strong fraud models that incorporate a variety of non-credit and credit variables in the model development process. These models look beyond the basic validation and verification of identity elements (such as name, address, and social security number) by leveraging additional attributes associated with a holistic identity -- such as inconsistent use of those identity elements.

Account Takeover

Another type of fraud, which occurs during the account management period of the customer life cycle, is account takeover fraud. This type of fraud occurs when an individual uses a variety of methods to take over another individual’s account. This may be accomplished by changing online passwords, changing an address, or even being added as an authorized user to a credit card.
Some customers have tools in place to try to prevent this, but social networking sites are making it easier to obtain personal information about many consumers. For example, a person may be asked to provide the name of his or her high school as the answer to a challenge question before gaining access to a banking account. Today, this piece of information is often readily available on social networking sites, making it easier for fraud perpetrators to defeat these types of tools. It may be more useful to use out-of-wallet, or knowledge-based, authentication and challenge tools that dynamically generate questions based on credit or public record data to avoid this type of fraud.
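To make the historical-data approach above concrete, here is a minimal sketch of the kind of velocity check that flags a phone number shared across multiple identities. The records, names, and threshold are invented for illustration; a production system would use fuzzy identity matching and empirically tuned cutoffs.

```python
from collections import defaultdict

# Toy application records: (application_id, applicant_name, phone).
# All values are invented for illustration.
applications = [
    ("A1", "Jane Smith", "555-0100"),
    ("A2", "John Doe",   "555-0100"),
    ("A3", "Mary Major", "555-0100"),
    ("A4", "Bob Jones",  "555-0199"),
]

# Group the distinct applicant names seen with each phone number.
names_by_phone = defaultdict(set)
for app_id, name, phone in applications:
    names_by_phone[phone].add(name)

# Flag any phone number shared by several distinct identities --
# a possible indicator of a fraud ring, per the discussion above.
RING_THRESHOLD = 3  # illustrative cutoff, not a recommended value
for phone, names in names_by_phone.items():
    if len(names) >= RING_THRESHOLD:
        print(f"Review: phone {phone} used by {len(names)} identities: {sorted(names)}")
```

In practice, signals like this would feed a fraud model as attributes rather than trigger reviews directly, which is how they improve predictiveness without inflating referral rates.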

By: Wendy Greenawalt

In my last few blogs, I have discussed how optimization can be leveraged to make improved decisions across an organization while considering the impact that optimizing decisions has on organizational profits, costs or other business metrics. In this entry, I would like to discuss how optimization is used to improve decisions at the point of acquisition while minimizing costs. Determining the right account terms at inception is increasingly important due to recent regulatory legislation such as the Credit Card Act. Doing so plays a role in assessing credit risk, managing relationships, and increasing share of wallet. These regulations have established guidelines specific to consumer age, verification of income, teaser rates and interest rate increases. Complying with these regulations will require changes to existing processes and the creation of new toolsets to ensure organizations adhere to the guidelines. These new regulations will increase not only the costs associated with obtaining new customers, but also affect long-term revenue and value, as changes in account terms will have to be carefully considered. The cost of on-boarding and servicing individual accounts continues to escalate while internal resources remain flat. Because of this, organizations of all sizes are looking for ways to improve efficiency and decisions while minimizing costs. Optimizing decisions is an ideal solution to this problem. Optimized strategy trees (trees that optimize decisioning strategies) can be easily implemented into current processes to ensure lending decisions adhere to organizational revenue, growth or cost objectives as well as regulatory requirements. Optimized strategy trees enable organizations to create executable strategies that provide ongoing decisions based upon optimization conducted at a consumer level. Optimized strategy trees outperform manually created trees because they are built using sophisticated mathematical analysis and ensure organizational objectives are adhered to. In addition, an organization can quantify the expected ROI of decisioning strategies and validate those strategies -- before implementation. This type of data is not available without the use of a sophisticated optimization software application. By implementing optimized strategy trees, organizations can minimize the volume of accounts that must be manually reviewed, which results in lower resource costs. In addition, account terms are determined based on organizational priorities, leading to increased revenue, retention and profitability.
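To make the idea of an executable strategy tree concrete, here is a minimal sketch of what one might look like once deployed as decision logic. The splits, terms, and regulatory gate below are invented assumptions; in practice, the tree structure and values would be produced by optimization software at the consumer level rather than hand-written.

```python
def assign_account_terms(score: int, verified_income: float, age: int) -> dict:
    """Illustrative strategy tree mapping an applicant segment to account terms.

    Every split point and term here is invented for illustration; an
    optimized tree would derive them from consumer-level optimization.
    """
    # Regulatory gate first (e.g., ability-to-pay checks for young applicants).
    if age < 21 and verified_income <= 0:
        return {"decision": "decline", "reason": "no verified ability to pay"}

    if score >= 740:
        return {"decision": "approve", "limit": 10000, "apr": 0.129}
    if score >= 680:
        if verified_income >= 50000:
            return {"decision": "approve", "limit": 5000, "apr": 0.169}
        return {"decision": "approve", "limit": 2500, "apr": 0.189}
    if score >= 620:
        # Only this slice is routed to analysts, keeping review volumes low.
        return {"decision": "manual_review"}
    return {"decision": "decline", "reason": "credit risk"}

print(assign_account_terms(score=705, verified_income=62000, age=34))
```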

By: Tom Hannagan

An autonomic movement describes an action or response that occurs without conscious control. This, I fear, may be occurring at many banks right now with regard to their risk-based pricing and profit picture, for several reasons.

First, the credit risk profile of existing customers is subject to continuous change over time. This was always true to some extent. But, as we’ve seen in the latest economic recession, there can be a sizeable risk-level migration if enough stress is applied. It is most obvious in the case of delinquencies and defaults, but it is also occurring with customers who have performing loans. The question is: how well are we keeping up with the behind-the-scenes changes in risk ratings and score ranges? The changes in the relative risk levels of our clients are affecting our risk-based profit picture -- and required capital allocation -- without conscious action on our part.

Second, the credit risk profile of collateral categories is also subject to change over time. Again, this is not exactly news. But, as we’ve seen in the latest real estate meltdown and the dynamics affecting the valuation of financial instruments, to name two, there can be huge changes in valuation and loss ratios. And this occurs without making one new loan. These changes in relative loss-given-default levels are affecting our risk-based expected loss levels, risk-adjusted profit and capital allocation in a rather autonomic manner.

Third, aside from changes in the risk profiles of customers and collateral types, the bank’s credit policy may change. The risk management analysis of expected credit losses is continuously (we presume) under examination and refinement by internal credit risk staff. It is certainly getting unprecedented attention from external regulators and auditors. These policy changes need to be reflected in the foundation logic of risk-based pricing and profit models. And that’s just in the world of credit risk.

Fourth, there can also be changes in our operating cost structure, including mitigated operational risks, and in product volumes that affect the allocation of risk-based non-interest expense to product groups and, eventually, to clients. Although it isn’t the fault of our clients that our cost structure is changing, for better or worse, we nonetheless expect them to bear the burden of these expenses based on the services we provide to them. Such changes need to be updated in the risk-based profit calculations.

Finally, there is the market risk piece of risk management. It is possible, if not likely, that our ALCO policies have changed due to lessons from the liquidity crisis of 2008 or the other macroeconomic events of the last two years. Deposit funds may be more highly valued, for instance. There may also be some rotation in assets away from lending. Or the level of reliance on equity capital may have materially changed. In any event, we are experiencing historically low levels for the price of risk-free (treasury rate curve) funding, which affects the required spread and return on all other securities, including our fully-at-risk equity capital. These changes are occurring apart from customer transactions, but they definitely affect the risk-based profit picture of each existing loan or deposit account and, therefore, every customer relationship.
If any, let alone all, of the above changes are not reflected in our risk-based performance analysis and reporting, and in the pricing of new or renewed services to our customers, then I believe we are involved in autonomic changes in risk-based profitability.
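To see how these behind-the-scenes migrations can move profit with no new transactions, consider a minimal sketch of a simplified risk-adjusted profit calculation. Every input -- the cost figures, capital treatment, and risk parameters -- is an invented assumption, not a representation of any bank's actual model.

```python
def risk_adjusted_profit(balance, rate, funding_rate, pd, lgd,
                         operating_cost, capital_ratio, capital_hurdle):
    """Simplified single-loan risk-adjusted profit (illustrative only).

    expected_loss = PD * LGD * balance; the capital charge applies an
    assumed allocation ratio and hurdle rate. Real models are far richer.
    """
    net_interest = (rate - funding_rate) * balance
    expected_loss = pd * lgd * balance
    capital_charge = capital_ratio * balance * capital_hurdle
    return net_interest - expected_loss - operating_cost - capital_charge

# The same loan, before and after an unnoticed risk-grade migration:
before = risk_adjusted_profit(100_000, 0.065, 0.020, pd=0.010, lgd=0.35,
                              operating_cost=600, capital_ratio=0.08,
                              capital_hurdle=0.12)
after = risk_adjusted_profit(100_000, 0.065, 0.020, pd=0.025, lgd=0.45,
                             operating_cost=600, capital_ratio=0.10,
                             capital_hurdle=0.12)
print(f"profit before migration: {before:,.0f}; after: {after:,.0f}")
```

Nothing about the loan's contractual terms changed between the two calls, yet the risk-adjusted profit falls substantially -- the autonomic effect described above.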

There seem to be two viewpoints in the market today about Knowledge Based Authentication (KBA): one positive, one negative. Depending on the corner you choose, you probably view it as either a tool to help reduce identity theft and minimize fraud losses, or a deficiency in the management of risk and the root of all evil. The opinions on both sides are pretty strong, and biases “for” and “against” run pretty deep. One of the biggest challenges in discussing Knowledge Based Authentication as part of an organization’s identity theft prevention program is the perpetual confusion between dynamic out-of-wallet questions and static “secret” questions. At this point, most people in the industry agree that static secret questions offer little consumer protection. Answers are easily guessed or easily researched, and if the questions are preference based (like “what is your favorite book?”) there is a good chance the consumer will fail the authentication session because they forgot the answers or the answers changed over time. Dynamic Knowledge Based Authentication, on the other hand, presents questions that were not selected by the consumer. Questions are generated from information known about the consumer – concerning things the true consumer would know and a fraudster most likely wouldn’t. The questions posed during Knowledge Based Authentication sessions aren’t designed to “trick” anyone but a fraudster, though a best-in-class product should offer a number of features and options. These may allow for flexible configuration of the product and deployment at multiple points of the consumer life cycle without impacting the consumer experience. The two are as different as night and day. Do those who consider “secret questions” to be Knowledge Based Authentication consider the password portion of the user name and password process to be KBA as well? If you want to hold to strict logic and definition, one could argue that a password meets the definition of Knowledge Based Authentication, but common sense and practical use cause us to differentiate it, which is exactly what we should do with secret questions – differentiate them from true KBA. KBA can provide strong authentication or be a part of a multifactor authentication environment without a negative impact on the consumer experience. So, for the record, when we say KBA we mean dynamic, out-of-wallet questions, the kind that are generated “on the fly” and delivered to a consumer via “pop quiz” in a real-time environment; and we think this kind of KBA does work. As part of a risk management strategy, KBA has a place within the authentication framework as a component of risk-based authentication… and risk-based authentication is what it is really all about.
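For a concrete sense of what “generated on the fly” means, here is a minimal sketch of dynamic question generation from a consumer record. The question templates, data fields, and decoy answers are all invented; real KBA products draw on credit and public-record data and handle question selection, decoy generation, and scoring far more rigorously.

```python
import random

# Invented decoy answers, keyed by data field. Illustration only.
DECOY_POOL = {
    "prior_street": ["Oak St", "Maple Ave", "Hill Rd", "Lake Dr"],
    "auto_lender":  ["First Auto Finance", "Metro Credit", "Union Bank", "CarCo"],
    "county":       ["Clark", "Jefferson", "Madison", "Franklin"],
}

TEMPLATES = {
    "prior_street": "Which of these streets have you lived on?",
    "auto_lender":  "Which lender holds or held your auto loan?",
    "county":       "In which county was your home purchased?",
}

def generate_oow_questions(consumer_record: dict, num_questions: int = 2) -> list:
    """Build multiple-choice questions from data the true consumer would know."""
    questions = []
    for field, text in TEMPLATES.items():
        if field not in consumer_record:
            continue
        choices = random.sample(DECOY_POOL[field], 3) + [consumer_record[field]]
        random.shuffle(choices)
        questions.append({"question": text, "choices": choices,
                          "answer": consumer_record[field]})
    return random.sample(questions, min(num_questions, len(questions)))

record = {"prior_street": "Elm St", "auto_lender": "Acme Bank", "county": "Orange"}
for q in generate_oow_questions(record):
    print(q["question"], q["choices"])
```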

Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world, but if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built. How do you measure data performance? If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics, like how many records I have in my database with name and address populated. And how many addresses do I typically have for consumers? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do. With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and social security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me. Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old: what are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the more sexy aspects of fraud detection (such as predictive scoring, out-of-wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
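As a minimal sketch of the metrics described above, the snippet below computes name/address coverage, pin rate, and match rate given a pin over toy records. The field names and the naive exact matching are assumptions for illustration; production pinning relies on fuzzy, multi-key matching logic.

```python
def data_quality_metrics(records: list, inquiries: list) -> dict:
    """Illustrative coverage, pin-rate, and match-rate metrics.

    `records` is the reference database; `inquiries` are incoming
    name/address/SSN lookups.
    """
    coverage = sum(1 for r in records
                   if r.get("name") and r.get("address")) / len(records)

    by_ssn = {r["ssn"]: r for r in records if r.get("ssn")}
    pinned = matched = 0
    for q in inquiries:
        hit = by_ssn.get(q.get("ssn"))
        if hit is None:
            continue  # could not pin (locate) the consumer at all
        pinned += 1
        if hit.get("name") == q.get("name") and hit.get("address") == q.get("address"):
            matched += 1

    return {"name_address_coverage": coverage,
            "pin_rate": pinned / len(inquiries),
            "match_rate_given_pin": matched / pinned if pinned else 0.0}

records = [{"ssn": "123", "name": "Jane Smith", "address": "1 Elm St"}]
inquiries = [{"ssn": "123", "name": "Jane Smith", "address": "1 Elm St"},
             {"ssn": "999", "name": "John Doe", "address": "2 Oak Ave"}]
print(data_quality_metrics(records, inquiries))
```

Stale data shows up directly in these numbers: an address that is two years out of date depresses the match rate even though the consumer is real and pinnable.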

In a continuation of my previous entry, I’d like to take the concept of the first-mover and specifically discuss its relevance to the current bank card market. Here are some statistics to set the stage:

• Q2 2009 bankcard origination levels are now at 54 percent of Q2 2008 levels
• In Q2 2009, bankcard originations for subprime and deep-subprime were down 63 percent from Q2 2008
• New average limits for bank cards are down 19 percent in Q2 2009 from their peak in Q3 2008
• Total unused limits continued to decline in Q3 2009, decreasing by $100 billion

Clearly, the bank card market is experiencing a decline in credit supply, along with deterioration of credit performance and problematic delinquency trends, and yet, in order to grow, lenders are currently determining the timing and manner in which to increase their presence in this market. In the following points, I’ll review just a few of the opportunities and risks inherent in each approach that could dictate how this occurs.

Lender chooses to be a first-mover:

• Mining for gold – Lenders currently have an opportunity to identify long-term profitable segments within larger segments of underserved consumers. Credit score trends show a number of lower-risk consumers falling to lower score tiers, and within this segment there will be consumers who represent highly profitable relationships. Early movers have the opportunity to access these consumers with unrealized creditworthiness at their most receptive moment, and thus have the ability to achieve extraordinary profits in underserved segments.
• Low acquisition costs – The lack of new credit flowing into the market indicates a lack of competitiveness in the bank card acquisitions space. As such, a first-mover would likely incur lower acquisition costs, as consumers have fewer options and alternatives to consider.
• Adverse selection – Given the high utilization rates of many consumers, lenders could face an abnormally high adverse selection issue, where a large number of the riskiest consumers are likely to accept offers to access much-needed credit – creating risk management issues.
• Consumer loyalty – Whether through switching costs or loyalty incentives, first-movers have an opportunity to achieve retention benefits from the development of new client relationships in a vacant competitive space.

Lender chooses to be a secondary or late-mover:

• Reduced risk – Allowing the first-mover to experience growing pains before entry reduces risk: the implementation of new acquisition and risk-based pricing management techniques under new bank card legislation will not be perfected immediately. Second-movers will be able to read and react to the responses to first-movers’ strategies (measuring delinquency levels in new subprime segments) and refine their pricing and policy approaches.
• Minimal switching costs – One of the most common first-mover advantages is the presence of switching costs for the customer. With minimal switching costs in place in the bank card industry, second-movers facing an incumbent are not deterred by significant switching-cost barriers – they would be able to steal market share with relative ease.
• Cherry-picked opportunities – As noted above, many previously attractive consumers will already have been engaged by the first-mover, challenging the second-mover to find the remaining attractive segments within the market.
For instance, economic deterioration has resulted in short-term joblessness for some consumers who might otherwise be strong credit risks, given the return of their capacity to repay. Once these consumers are mined by the first-mover, the second-mover will likely incur greater costs to acquire them. Whether lenders choose to be first to market or follow as a second-mover, there are profitable opportunities and risk management challenges associated with each strategy. Academics and bloggers continue to debate the merits of each (1), but it is ultimately the lenders of today that will provide the proof.

(1) http://www.fastcompany.com/magazine/38/cdu.html

By: Ken Pruett

The use of Knowledge Based Authentication (KBA), or out-of-wallet questions, continues to grow. For many companies, this solution is used as one of the primary means of fraud prevention. The selection of the proper tool often involves a fairly significant due diligence process to evaluate various offerings before choosing the right partner and solution; companies want to make sure they make the right choice. I am often surprised, then, that a large percentage of customers just turn these tools on and never evaluate or even validate ongoing performance. Performance monitoring is a way to make sure you are getting the most out of the product you are using for fraud prevention. This exercise is designed to take an analytical look at what you are doing today with Knowledge Based Authentication. There are a variety of benefits that most customers experience after undergoing this fraud analytics exercise. The first is simply validating that the tool is working properly. Some questions to ponder include: Are enough frauds being identified? Is the manual review rate in line with what was expected? In almost every one of these engagements I have worked on, there were areas that were not in line with what the customer was hoping to achieve, and many customers had no idea that they were not getting the expected results. Taking this one step further, changes can also be made to improve upon what is already in place. For example, you can evaluate how well each question is performing. The analysis can show you which questions are doing the best job of predicting fraud. Using the better-performing questions can allow you to find more fraud while referring fewer applications for manual review. This is a great way to optimize how you use the tool. In most organizations there is increased pressure to make sure that every dollar spent is bringing value to the organization. Performance monitoring is a great way to show the value that your KBA tool is bringing to the organization. The exercise can also be used to show how you are proactively managing your fraud prevention process: you accomplish this by showing how well you are optimizing your use of the tool today while addressing emerging fraud trends. The key message is to continuously measure the performance of the KBA tool you are using. An exercise like performance monitoring can provide you with great insight on a quarterly basis. This will allow you to get the most out of your product and help you keep up with a variety of emerging fraud trends. Doing nothing is really not an option in today’s ever-changing environment.
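As a minimal sketch of the question-level analysis described above, the snippet below computes, for each question, how often genuine consumers pass it and how often fraudsters fail it, using invented session outcomes. Real performance monitoring would work from tagged production data at much larger volumes.

```python
from collections import defaultdict

# Toy KBA session outcomes: (question_id, answered_correctly, was_fraud).
sessions = [
    ("Q1", True, False), ("Q1", False, True), ("Q1", True, False),
    ("Q2", True, False), ("Q2", True, True),  ("Q2", True, False),
]

stats = defaultdict(lambda: {"good_pass": 0, "good": 0, "fraud_fail": 0, "fraud": 0})
for qid, correct, fraud in sessions:
    s = stats[qid]
    if fraud:
        s["fraud"] += 1
        s["fraud_fail"] += int(not correct)
    else:
        s["good"] += 1
        s["good_pass"] += int(correct)

for qid, s in sorted(stats.items()):
    good_pass = s["good_pass"] / s["good"] if s["good"] else 0.0
    fraud_fail = s["fraud_fail"] / s["fraud"] if s["fraud"] else 0.0
    # The best questions pass genuine consumers AND trip up fraudsters.
    print(f"{qid}: genuine pass rate {good_pass:.0%}, fraud fail rate {fraud_fail:.0%}")
```

Retiring questions that fraudsters answer correctly (like Q2 above) is exactly the kind of adjustment that finds more fraud while referring fewer applications for review.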

By: Amanda Roth

The reality of risk-based pricing is that there is no one “end-all, be-all” way of determining what pricing should be applied to your applicants. The truth is that statistics will only get you so far. They may get you 80 percent of the final answer, but to whom is 80 percent acceptable? The other 20 percent must also be addressed. I am specifically referring to those factors that are outside of your control. For example, does your competition’s pricing impact your ability to price loans? Have you thought about how loyal-customer discounts or incentives may contribute to the success or demise of your program? Do you have a sensitive population that may have a significant reaction to any risk-based pricing changes? These questions must be addressed for sound pricing and risk management. Over the next few weeks, we will look at each of these questions in more detail, along with tips on how to apply them in your organization. As the new year is often a time of reflection and change, I would encourage you to let me know what experiences you may be having in your own programs. I would love to include your thoughts and ideas in this blog.

I’ve recently been hearing a lot about how bankcard lenders are reacting to changes in legislation, and recent statistics clearly show that lenders have reduced bankcard acquisitions as they retune acquisition and account management strategies for their bankcard portfolios. At this point, there appears to be a wide-scale reset of how lenders approach the market, and one of the main questions that needs to be answered pertains to market-entry timing: should a lender be the first to re-enter the market in a significant manner, or is it better to wait and see how things develop before executing new credit strategies? I will dedicate my next two blogs to defining these approaches and discussing them with regard to the current bankcard market. Based on common academic frameworks, today’s lenders have the option of choosing one of two routes: becoming a first-mover, or taking the role of a secondary or late mover. Each of these roles possesses certain advantages, and corresponding risks, that will dictate lenders’ strategic choices. The first-mover advantage is defined as “A sometimes insurmountable advantage gained by the first significant company to move into a new market.” (1) Although often confused with being first to market, first-mover advantage is more commonly attributed to the first firm to enter a market substantially. The belief is that the first-mover stands to gain competitive advantages through technology, economies of scale and other avenues that result from this entry strategy. In the case of the bankcard market, current trends suggest that segments of subprime and deep-subprime consumers are currently underserved, and thus I would consider the first lender to target these customers with significant resources to have ‘first-mover’ characteristics. The second-mover to a market can also have certain advantages: the second-mover can review and assess the decisions of the first-mover and develop a strategy to take advantage of opportunities not seized by the first-mover. As well, it can learn from the mistakes of the first-mover and respond without having to incur the cost of experiential learning, while possessing superior market intelligence. So, being a first-mover and being a second-mover can each have its advantages and pitfalls. In my next contribution, I’ll address these issues as they pertain to lenders considering their loan origination strategies for the bankcard market.

(1) http://www.marketingterms.com/dictionary/first_mover_advtanage

In a previous blog, we shared ideas for expanding the “gain” side of the equation to create a successful ROI for adopting new fraud best practices. In this post, we’ll look more closely at the “cost” side of the ROI equation.

The cost of the investment – The costs of fraud analytics and the tools that support fraud best practices go beyond the fees charged by the solution provider. While the marketplace is aware of these costs, they often aren’t accounted for by solution providers. Achieving consensus on an ROI to move forward with new technology requires both parties to account for these costs. A more robust ROI should account for these areas:

• Labor costs – If a tool increases fraud referral rates, the associated review costs must be taken into account.
• Integration costs – Many organizations have strict requirements for recovering integration costs. This can place an additional burden on a successful ROI.
• Contractual obligations – As customers look to reduce the cost of other tools, they must be mindful of any obligations to use those tools.
• Opportunity costs – Organizations do need to account for the potential impact of their fraud best practices on good customers. Barring a true champion/challenger evaluation, a good way to do this is to remain as neutral as possible with respect to the total number of fraud alerts generated by new fraud tools compared to the legacy process.

As you can see, the challenge of creating a compelling ROI can be much more complicated than the basic equation suggests. It is critical in many industries to begin exploring ways to augment the ROI equation. This will ensure that our industries evolve and thrive without becoming complacent or unable to stay on top of dynamic fraud trends.
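As a minimal sketch of that augmented equation, the function below nets the gain against all of the cost categories listed above rather than provider fees alone. Every dollar figure is a hypothetical placeholder, not a benchmark.

```python
def augmented_fraud_roi(fraud_losses_avoided, solution_fees, labor_cost,
                        integration_cost, contract_penalties, opportunity_cost):
    """Augmented ROI: (gain - total cost) / total cost, with all cost
    categories included -- illustrative only."""
    total_cost = (solution_fees + labor_cost + integration_cost
                  + contract_penalties + opportunity_cost)
    return (fraud_losses_avoided - total_cost) / total_cost

roi = augmented_fraud_roi(
    fraud_losses_avoided=1_200_000,  # annual gain from the new tool
    solution_fees=300_000,           # provider fees
    labor_cost=150_000,              # added referral-review staffing
    integration_cost=100_000,        # amortized build cost
    contract_penalties=50_000,       # obligations on displaced tools
    opportunity_cost=80_000,         # estimated impact on good customers
)
print(f"augmented ROI: {roi:.0%}")
```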

By: Heather Grover

In my previous entry, I covered how fraud prevention affects the operational side of new DDA account opening. To give a complete picture, we need to consider fraud best practices and their impact on the customer experience. As mentioned earlier, the branch continues to be a highly utilized channel and is the place for “customized service.” In addition, for retail banks the branch continues to be the consumer’s first point of contact, so fraud detection is paramount in deciding IF we should initiate a relationship with the consumer. Traditional thinking has been that DDA accounts are secured by deposits, so little risk management policy is applied. The reality is that the DDA account can be a fraud portal into the organization’s many products. Bank consolidations and lower application volumes are driving increased competition at the branch – increased demand exists to cross-sell consumers at the point of new account opening. As a result, banks are moving many fraud checks to the front end of the process: know-your-customer and Red Flag guideline checks are done sooner in the process, in a consolidated and streamlined fashion. This is to minimize fraud losses and meet compliance in a single step, so that new account holders are processed through the system as quickly as possible. Another recent trend is the streamlining of a two-day batch fraud check process to provide account holders with an immediate and final decision. The casualty of a longer process could be a consumer who walks out of your branch with a checkbook in hand – only to be contacted the next day and told that his or her account has been shut down. By addressing this process, not only will the customer experience be improved, with increased retention, but operational costs will also be reduced. Finally, relying on documentary evidence for ID verification can be viewed by some consumers as onerous and lengthy. Use of knowledge based authentication can provide more robust authentication while giving assurance of the consumer’s identity. The key is to use a solution that can authenticate “thin file” consumers opening DDA accounts. This means your out-of-wallet questions need to rely on multiple data sources – not just credit. Interactive questions can give your account holders peace of mind that you are doing everything possible to protect their identity – which builds the customer relationship…and your brand.

By: Heather Grover

In past client and industry talks, I’ve discussed the increasing importance of retail branches to the growth strategy of the bank. Branches are the most utilized channel of the bank, and they tend to be the primary tool for relationship expansion. Given the face-to-face nature, the branch has historically been viewed as a relatively low-risk channel needing little (if any) identity verification – there is less use of robust risk-based authentication or out-of-wallet questions. However, a now well-established fraud best practice is proper identity verification and fraud prevention at the point of DDA account opening. In the current environment of declining credit application volumes and approvals across the enterprise, there is an increased focus on organic growth through deposits. Proper vetting during DDA account opening helps bring your retail process closer in line with the rest of your organization’s identity theft prevention program. It also provides assurance and confidence that the customer can then be cross-sold and up-sold to other products. A key industry challenge is that many of the current tools used in DDA are less mature than those in other areas of the organization. We see few clients in retail that are using advanced fraud analytics or fraud models to minimize fraud – and even fewer clients using them to automate manual processes – even though more than 90 percent of DDA accounts are opened manually. A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud? Many providers offer solutions that can do both, which can help minimize the number of steps required to process a new account.
2. Is the solution real-time? To the extent that you can provide your new account holders with an immediate and final decision, you’ll spend less time and effort finalizing the decision after they leave the branch.
3. Does the solution provide detailed data for manual review? This can help save valuable analyst time and provider costs by limiting the need to do additional searches.

In my next post, we’ll discuss how fraud prevention in DDA impacts the customer experience.

By: Amanda Roth

The final level of validation for your risk-based pricing program is to validate for profitability. Not only will this analysis build on the two previous analyses, but it will also factor in the cost of making a loan based on the risk associated with each applicant. Many organizations do not complete this crucial step; therefore, they may have applicants grouped together correctly but still find themselves unprofitable. The premise of risk-based pricing is that we are pricing to cover the cost associated with an applicant. If an applicant has a higher probability of delinquency, we can assume there will be additional collection costs, reporting costs, and servicing costs associated with keeping this applicant in good standing. We must understand what these costs may be, though, before we can price accordingly. Information of this type can be difficult to determine depending on the resources available to your organization. If you aren’t able to determine the exact amount of time and costs associated with different loans at different risk levels, there are industry best practices that can be applied. Of primary importance is to factor in the cost to originate, service and terminate a loan based on varying risk levels. This is the only true way to validate that your pricing program is working to provide profitability to your loan portfolio.
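Where exact cost data isn’t available, even a simple break-even calculation along these lines can anchor the validation. The sketch below folds funding, origination/servicing/termination costs, and expected credit loss into a minimum rate; all input values are invented assumptions, not benchmarks.

```python
def breakeven_rate(funding_rate, annual_servicing_cost, origination_cost,
                   termination_cost, pd, lgd, balance, term_years):
    """Illustrative minimum annual rate covering funding, operating costs,
    and expected loss (PD * LGD * balance). Real pricing models go deeper."""
    annual_expected_loss = pd * lgd * balance
    annual_overhead = (annual_servicing_cost
                       + (origination_cost + termination_cost) / term_years)
    return funding_rate + (annual_expected_loss + annual_overhead) / balance

# The same loan priced for a low-risk and a high-risk applicant:
low = breakeven_rate(0.02, 60, 250, 50, pd=0.01, lgd=0.40,
                     balance=20_000, term_years=5)
high = breakeven_rate(0.02, 140, 250, 50, pd=0.08, lgd=0.40,
                      balance=20_000, term_years=5)
print(f"break-even rate, low risk: {low:.2%}; high risk: {high:.2%}")
```

Any price below the high-risk break-even rate means that tier is being subsidized by the rest of the portfolio, which is precisely what this level of validation is meant to expose.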

By: Amanda Roth

To refine your risk-based pricing another level, it is important to analyze where your tiers are set and determine whether they are set appropriately. (We find many regulators and examiners are looking for this next level of analysis.) This analysis begins with the results of the scoring model validation. Not only will the distributions from that analysis determine whether the score can distinguish between good and delinquent accounts, but they will also highlight which score ranges have similar delinquency rates, allowing you to group your tiers together appropriately. After all, you do not want applicants with a 1 percent chance of delinquency priced the same as someone with an 8 percent chance of delinquency. By reviewing the interval delinquency rates as well as the odds ratios, you should be able to determine where a significant enough difference occurs to warrant different pricing. You will increase the opportunity for portfolio profitability through this analysis, as you are reducing the likelihood that higher-risk applicants receive lower pricing. As expected, the overall risk management of the portfolio will improve when a proper risk-based pricing program is developed. In my next post, we will look at the final level of validation, which provides insight into pricing for profitability.
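As a minimal sketch of that analysis, the snippet below computes interval delinquency rates and good:bad odds by score band from invented counts. Adjacent bands with similar numbers are candidates to share a pricing tier, while a clear jump warrants a tier break.

```python
# Illustrative score bands with invented good/bad account counts.
bands = [
    # (score_range, goods, bads)
    ("620-659", 800, 64),
    ("660-699", 1500, 60),
    ("700-739", 2600, 52),
    ("740-779", 3100, 31),
    ("780+",    3900, 20),
]

for score_range, goods, bads in bands:
    delinquency_rate = bads / (goods + bads)   # interval delinquency rate
    odds = goods / bads                        # good:bad odds
    print(f"{score_range}: delinquency {delinquency_rate:.1%}, odds {odds:.0f}:1")
```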