
In my previous two blogs, I introduced the definition of strategic default and compared and contrasted that population with other types of consumers with mortgage delinquency. I also reviewed a few key characteristics that distinguish strategic defaulters as a distinct population. Although I've mentioned that segmenting this group is important, I would like to discuss specifically the value of segmentation as it applies to loan modification programs and the selection of candidates for modification.

How should loan modification strategies be differentiated for this population? By definition, strategic defaulters are more likely to take advantage of loan modification programs. They are committed to making the most personally lucrative financial decisions, so the opportunity to have a loan modified, extending their "free" occupancy, can be highly appealing. Given the adverse selection issue at play with these consumers, lenders need to design loan modification programs that limit abuse and essentially screen out strategic defaulters from the population.

The objective of lenders when creating loan modification programs should be to identify consumers who show the characteristics of the cash flow managers in our study. These consumers often show signs of distress similar to those of strategic defaulters, but differentiate themselves by exhibiting a willingness to pay that the strategic defaulter, by definition, does not. So, how can a lender make this identification? Although these groups share similar characteristics at times, lenders should reconsider their loan modification decisioning algorithms and modify their loan modification offers to screen out strategic defaulters. In fact, they could even develop programs, such as equity-sharing arrangements, whereby the strategic defaulter could be persuaded to remain committed to the mortgage.
In the end, strategic defaulters will not self-identify through lower credit score trends, elevated bank credit risk or prior bankruptcies, so lenders must create processes to identify them among their peers. For more detailed analysis, lenders could also extend the Experian-Oliver Wyman study further, integrating additional attributes such as current LTV and product type to refine their segments and identify strategic defaulters within their individual portfolios.

Published: December 14, 2009 by Kelly Kent

--by Andrew Gulledge

General configuration issues

Question selection - In addition to choosing questions that generally have a high percentage correct and good fraud separation, consider excluding any questions that would clearly not fit your consumer population. Don't get too trigger-happy, however, or you'll see a spike in your "failure to generate questions" rate.

Number of questions - Many people use three or four out-of-wallet questions in a Knowledge Based Authentication session, but some use more or fewer, based on their business needs. In general, more questions make for a stricter authentication session, but might detract from the customer experience. They may also create longer handling times in a call center environment. Furthermore, it is harder to generate a lot of questions for some consumers, including thin-file types. Fewer Knowledge Based Authentication questions can be less invasive for the consumer, but limit the fraud detection value of the KBA process.

Multiple choice - One advantage of this answer format is that it relies on recognition memory rather than recall memory, which is easier for the consumer. Another advantage is that it generally avoids complications associated with minor numerical errors, typos, date formatting errors and text scrubbing requirements. A disadvantage of multiple choice, however, is that it can make educated guessing (and potentially gaming) easier for fraudsters.

Fill in the blank - This format is a good fit for some KBA questions, but less so for others. A simple numeric answer works well with fill in the blank (some small variance can be allowed where appropriate), but longer text strings can present complications. While undoubtedly difficult for a fraudster to guess, for example, most consumers would not know the full, official and correctly spelled name of the company to which they make their monthly auto payment.
Numeric fill in the blank questions are also good candidates for KBA in an IVR environment, where consumers can use their phone’s keypad to enter the answers.  

Published: December 14, 2009 by Guest Contributor

A recent New York Times (1) article outlined the latest release of consumer credit data by the Federal Reserve, indicating that Americans borrowed less for the ninth straight month in October. Nested within the statistics released by the Federal Reserve were metrics around reduced revolving credit demand and comments about how "Americans are borrowing less as they try to replenish depleted investments." While this may be true, I tend to believe that macro-level statements do not fully explain the differences between consumer experiences that influence relationship management choices in the current economic environment.

To expand on this, I think a closer look at consumers at opposite ends of the credit risk spectrum tells a very interesting story. In fact, recent bank card usage and delinquency data suggest that there are at least a couple of distinct patterns within the overall trend of reduced revolving credit demand:

• First, although it is true that overall revolving credit balances are decreasing, this macro-level trend is not consistent with the detail we see at the consumer level. Despite a reduction in open credit card accounts and overall industry balances, individual balances are up; that is to say, although there are fewer cards out there, those consumers who do have them are carrying higher balances.

• Secondly, there are significant differences between the most and least risky consumers when it comes to changes in balances. For instance, consumers who fall into the least risky VantageScore® tiers, Tier A and Tier B, show only 12 percent and 4 percent year-over-year balance increases in Q3 2009, respectively. Contrast that with VantageScore F consumers, the most risky tier, whose average balances increased more than 28 percent over the same period.
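The tier-level comparison above is simple percent-change arithmetic on average balances. A quick sketch of the calculation (the balance figures below are invented for illustration, not the actual study data):

```python
def yoy_change(prior, current):
    """Year-over-year percent change in average balance."""
    return (current - prior) / prior * 100.0

# Hypothetical average balances per VantageScore tier: (Q3 2008, Q3 2009)
tiers = {
    "A": (5000.0, 5600.0),
    "F": (4000.0, 5120.0),
}
changes = {tier: yoy_change(prior, current) for tier, (prior, current) in tiers.items()}
# With these made-up inputs, Tier A works out to 12% and Tier F to 28%,
# matching the shape of the trend described above.
```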
So, although the industry-level trend holds true, the challenges facing the "average" consumer in America are not average at all; they are unique and specific to each consumer, and continue to illustrate the challenge in assessing consumers' credit card risk in the current credit environment.

(1) http://www.nytimes.com/2009/12/08/business/economy/08econ.html

Published: December 10, 2009 by Kelly Kent

In my last blog, I discussed the presence of strategic defaulters and outlined the definitions used to identify these consumers, as well as other pools of consumers within the mortgage population that are currently showing some measure of mortgage repayment distress. In this post, I will focus on the characteristics of strategic defaulters, drilling deeper into the details behind the population and how one might begin to recognize them within it.

What characteristics differentiate strategic defaulters? Early in the mortgage delinquency stage, strategic defaulters and cash flow managers look quite similar: both are delinquent on their mortgage, but are not going bad on any other trades. Despite their similarities, it is important to segment these groups, since strategic defaulters are far more likely to charge off and far less likely to cure than cash flow managers. Given the need to distinguish between these two segments, here are a few key measures that can be used to define each population.

Origination VantageScore® credit score
• Despite lower overall default rates, prime and super-prime consumers are more likely to be strategic defaulters.

Origination mortgage balance
• Consumers with higher mortgage balances at origination are more likely to be strategic defaulters. We conclude this is a result of being further underwater on their real estate property than lower-balance consumers.

Number of mortgages
• Consumers with multiple first mortgages show a higher incidence of strategic default. This trend represents consumers with investment properties making strategic repayment decisions on those investments (although the majority of defaults still occur where the consumer has only one first mortgage).

Home equity line performance
• Strategic defaulters are more likely to remain current on home equity lines until mortgage delinquency occurs, potentially a result of drawing down the HELOC as much as possible before becoming delinquent on the mortgage.

Clearly, there are several attributes that identify strategic defaulters and can assist in differentiating them from cash flow managers. The ability to distinguish between these two populations is extremely valuable in account management, collections and loan modification, which is my next topic.

Source: Experian-Oliver Wyman Market Intelligence Reports; Understanding strategic default in mortgage topical study/webinar, August 2009.
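As a rough illustration of how measures like these could be combined into a screen, here is a toy rule-based sketch. The field names and thresholds are assumptions made up for illustration, not values from the Experian-Oliver Wyman study:

```python
def classify_mortgage_delinquent(acct):
    """Flag a mortgage-delinquent-but-otherwise-current account as a likely
    strategic defaulter or a cash flow manager, using the attribute types
    described above. All cutoffs are hypothetical."""
    signals = 0
    if acct["origination_vantage_score"] >= 740:       # prime/super-prime at origination
        signals += 1
    if acct["origination_balance"] >= 400_000:         # high-balance loans more likely underwater
        signals += 1
    if acct["first_mortgage_count"] > 1:               # investment-property behavior
        signals += 1
    if acct["heloc_current_until_mortgage_dq"]:        # drew down the HELOC before going delinquent
        signals += 1
    return "strategic defaulter" if signals >= 3 else "cash flow manager"
```

In practice these rules would be replaced by a model fit on portfolio data; the point is only that the two segments separate on observable attributes.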

Published: December 10, 2009 by Kelly Kent

I have already commented on "secret questions" as the root of all evil when considering tools to reduce identity theft and minimize fraud losses. No, I'm not quite ready to jump off that soapbox... not just yet, not when we're deep into the season of holiday deals, steals and fraud. The answers to secret questions are easily guessed, easily researched or easily forgotten. Is this the kind of security you want standing between your account and a fraudster during the busiest shopping time of the year?

There is plenty of research demonstrating that fraud rates spike during the holiday season. There is also plenty of research demonstrating that fraudsters perpetrate account takeover by changing the PIN, address or e-mail address of an account: activities that could be considered risky behavior in decisioning strategies. So, what is the best approach to identity theft red flags and fraud account management? A risk-based authentication approach, of course!

Knowledge Based Authentication (KBA) provides strong authentication and can be part of a multifactor authentication environment without a negative impact on the consumer experience, if the purpose is explained to the consumer. Let's say a fraudster is trying to change the PIN or e-mail address of an account. When one of these risky behaviors is initiated, a Knowledge Based Authentication session begins, and to help minimize fraud, the action is prevented if the KBA session is failed. Using this same logic, it is possible to apply a risk-based authentication approach to overall account management at many points of the lifecycle:

• Account funding
• Account information change (PIN, e-mail, address, etc.)
• Transfers or wires
• Requests for line/limit increase
• Payments
• Unusual account activity
• Authentication before engaging with a fraud alert representative

Depending on the risk management strategy, additional methods may be combined with KBA, such as IVR or out-of-band authentication and follow-up contact via e-mail, telephone or postal mail. Of course, all of this ties in with what we would consider to be a comprehensive Red Flag Rules program. Risk-based authentication, as part of a fraud account management strategy, is one of the best ways we know to ensure that customers aren't left singing, "On the first day of Christmas, the fraudster stole from me..."
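The trigger logic described above can be sketched in a few lines. The event names and the `kba_session` stub are illustrative assumptions, not a vendor API:

```python
# Risky account-management events that should trigger a KBA session before
# the action is allowed to proceed. Low-risk events pass through untouched.
RISKY_EVENTS = {
    "account_funding", "info_change", "transfer_or_wire",
    "limit_increase_request", "payment", "unusual_activity",
    "fraud_alert_contact",
}

def handle_event(event, kba_session):
    """kba_session is a callable returning True if the consumer passes KBA.
    The risky action is blocked when the KBA session is failed."""
    if event not in RISKY_EVENTS:
        return "allowed"
    return "allowed" if kba_session() else "blocked"
```

A real deployment would layer fraud scores and overrides on top of this, but the core pattern is simply: risky event, then authenticate, then gate the action on the result.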

Published: December 7, 2009 by Guest Contributor

--by Andrew Gulledge

Where does Knowledge Based Authentication fit into my decisioning strategy?

Knowledge Based Authentication can fit into various parts of your authentication process. Some folks choose to put every consumer through KBA, while others send only their riskier transactions through the out-of-wallet questions. Some people use Knowledge Based Authentication to feed a manual review process, while others treat a KBA failure as a hard decline. Uses for KBA are as sundry and varied as the questions themselves.

Decision matrix - As discussed by prior bloggers, a well-engineered fraud score can provide considerable lift to any fraud risk strategy. When possible, it is a good idea to combine both score and questions in the decisioning process. This can be done with a matrixed approach, where you are more lenient on the questions if the applicant has a good fraud score, and more lenient on the score if the applicant did well on the questions. In a decision matrix, a set decision code is placed within each cell, based on fraud risk.

Decision overrides - These provide a nice complement to your standard fraud decisioning strategy. Different fraud solution vendors provide different indicators or flags with which decisioning rules can be created. For example, you might decide to fail a consumer who provides a Social Security number that is recorded as belonging to a deceased person. These rules can provide additional lift to the standard decisioning strategy, whether it is used with Knowledge Based Authentication questions alone, questions and score, etc. Overrides can work as both auto-pass and auto-fail.
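A minimal sketch of a decision matrix with an auto-fail override might look like the following. The score cutoff, the decisions in each cell and the deceased-SSN flag are invented for illustration, not a prescribed configuration:

```python
# Matrix cells: (fraud-score band, KBA result) -> decision code.
# Note the leniency in both directions: a good score softens a KBA failure,
# and a KBA pass softens a risky score.
DECISION_MATRIX = {
    ("low_risk",  "pass"): "approve",
    ("low_risk",  "fail"): "review",
    ("high_risk", "pass"): "review",
    ("high_risk", "fail"): "decline",
}

def decide(fraud_score, kba_passed, ssn_deceased=False):
    """Apply auto-fail overrides first, then look up the matrix cell."""
    if ssn_deceased:                 # override: decline regardless of the matrix
        return "decline"
    band = "low_risk" if fraud_score >= 600 else "high_risk"
    return DECISION_MATRIX[(band, "pass" if kba_passed else "fail")]
```

Auto-pass overrides would slot in the same way, as checks evaluated before the matrix lookup.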

Published: December 7, 2009 by Guest Contributor

By: Wendy Greenawalt

In my last blog on optimization, we discussed how optimized strategies can improve collections. In this blog, I would like to discuss how optimization can bring value to decisions related to mortgage delinquency and modification.

Over the last few years, mortgage lenders have seen a sharp increase in the number of mortgage account delinquencies and a dramatic change in consumer mortgage payment trends. Specifically, lenders have seen a shift away from consumers paying their mortgage obligation first, with other debts allowed to go delinquent. This shift in borrower behavior appears unlikely to change anytime soon, and therefore lenders must make smarter account management decisions for mortgage accounts. Adding to this issue, property values continue to decline in many areas, and lenders must now identify whether a consumer is a strategic defaulter, a candidate for loan modification or simply a consumer affected by the economic downturn. Many loans that were modified at the beginning of the mortgage crisis have since become delinquent and have ultimately been foreclosed upon by the lender.

Making decisions about collection actions for mortgage accounts is increasingly complex, but optimization can assist lenders in identifying the ideal collection treatment for each consumer, while taking into account organizational goals such as minimizing losses, maximizing internal resources and retaining the most valuable consumers. Optimization can assist with these difficult decisions by utilizing a mathematical algorithm that assesses all possible options available and selects the ideal decision for each consumer based on organizational goals and constraints. This technology can be implemented within current decisioning processes, whether in real time or batch processing, and can provide substantial lift in performance over business-as-usual techniques.
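As a toy illustration of the idea, the sketch below enumerates every assignment of treatments to a handful of accounts and keeps the one that maximizes expected recovery within a collector-hours constraint. Real optimization engines use far more scalable algorithms, and all treatments, rates and hours here are invented:

```python
from itertools import product

# Hypothetical treatments: expected recovery rate, collector hours required.
TREATMENTS = {
    "self_cure":  (0.10, 0.0),
    "letter":     (0.20, 0.5),
    "phone_call": (0.35, 2.0),
    "loan_mod":   (0.50, 5.0),
}

def optimize(balances, max_hours):
    """Pick one treatment per account to maximize total expected recovery,
    subject to the available collector hours (brute-force enumeration)."""
    best, best_value = None, -1.0
    for assignment in product(TREATMENTS, repeat=len(balances)):
        hours = sum(TREATMENTS[t][1] for t in assignment)
        if hours > max_hours:                      # constraint: internal resources
            continue
        value = sum(bal * TREATMENTS[t][0] for bal, t in zip(balances, assignment))
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value
```

Even this tiny example shows the optimizer's behavior: it spends the scarce collector hours on the account where they buy the most recovery, rather than treating every account identically.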

Published: December 7, 2009 by Guest Contributor

For the past couple of years, the deterioration of the real estate market and the economy as a whole has been widely reported as a national and international crisis. Several significant events have contributed to this situation: 401(k) plans have fallen, homeowners have simply abandoned their now under-valued properties, and the federal government has raced to save the banking and automotive sectors. While the perspective of most is that this is a national decline, this is clearly a situation where the real story is in the details. A closer look reveals that while there are places that have experienced serious real estate and employment issues (California, Florida, Michigan, etc.), there are also areas (Texas, for example) that did not experience the same deterioration.

Flash forward to November 2009. With signs of recovery seemingly beginning to appear on the horizon, there appears to be a great deal of variability between areas that seem poised for recovery and those that are continuing down the slope of decline. Interestingly, though, this time the list of usual suspects is changing. In a recent article posted to CNN.com (i), Julianne Pepitone observes that many cities that topped the foreclosure lists a year ago have since shown stabilization, while at the same time other cities have regressed. A related article (ii) outlines a growing list of cities that, not long ago, considered themselves immune from the problems being experienced in other parts of the country. Previous economic success stories are now being identified as economic laggards, experiencing the same pains only a year or two later.

So, is there a lesson to be taken from this? From a business intelligence perspective, the lesson is that generalized reporting and forecasting capabilities are not going to be successful in managing risk. Risk management and forecasting techniques will need to be developed around specific macro- and micro-economic changes. They will also need to incorporate a number of economic scenarios to properly reflect the range of possible future outcomes. Moving forward, it will be vital to understand the differences in unemployment between Dallas and Houston, and between regions that rely on automotive manufacturing and those with hi-tech jobs. These differences will directly impact the performance of lenders' specific footprints, as this year's "Best Place to Live" according to Money.CNN.com can quickly become next year's foreclosure capital.

(i) http://money.cnn.com/2009/10/28/real_estate/foreclosures_worst_cities/index.htm?postversion=2009102811
(ii) http://money.cnn.com/galleries/2009/real_estate/0910/gallery.foreclosures_worst_cities/2.html

Published: November 30, 2009 by Kelly Kent

By: Wendy Greenawalt

Optimization has become a buzzword in the financial services marketplace, but some organizations still fail to realize all the possible business applications for it. As credit card lenders scramble to comply with the pending credit card legislation, optimization can be a quick and easily implemented solution that fits into current processes and ensures compliance with the new regulations.

Specifically, lenders will now be under strict guidelines regarding when an APR can be changed on an existing account, and the specific circumstances under which the account must return to its original terms. Optimization can easily handle these constraints and identify which accounts should be modified based on historical account information and existing organizational policies. APR account changes can require a great deal of internal resources to implement and monitor for ongoing performance. Implementing an optimized strategy tree within an existing account management strategy allows an organization to easily identify consumer-level decisions, while monitoring accounts through ongoing batch processing.

New delivery options are now available for lenders to receive optimized strategies for decisions related to:

• Account acquisition
• Customer management
• Collections

Organizations that are not currently utilizing this technology within their processes should investigate the new delivery options. Recent research suggests optimizing decisions can provide an improvement of 7 to 16 percent over current processes.
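A strategy tree for APR decisions under rules like these can be sketched as a simple decision function. The 60-day and six-payment triggers below only mirror the general shape of the new rules and are illustrative assumptions, not legal guidance:

```python
def apr_decision(acct):
    """A toy strategy-tree node for one account in a batch run:
    reprice only after serious delinquency, and restore the original
    terms once the cardholder has demonstrably cured."""
    if acct["days_past_due"] >= 60:
        return "raise_penalty_apr"       # repricing permitted after serious delinquency
    if acct["on_penalty_apr"] and acct["consecutive_on_time_payments"] >= 6:
        return "restore_original_apr"    # account must return to original terms
    return "no_change"
```

In an optimized strategy tree, the branch points and actions would be chosen by the optimizer against organizational goals; the structure of the consumer-level decision is the same.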

Published: November 30, 2009 by Guest Contributor

--by Jeff Bernstein

In the current economic environment, many lenders and issuers across the globe are struggling to manage the volume of cases coming into collections. The challenge is that by the time these new cases reach collections in the early phases of delinquency, the borrower is already in distress and the opportunity for a good outcome is diminished.

One of the real "hot" items on the list of emerging best practices and innovative changes in collections is the concept of early lifecycle treatment strategy. Essentially, this means the treatment of current, non-delinquent borrowers who are exhibiting higher-risk characteristics, or who are at risk of future default at higher-than-average levels. The challenge is how to identify these customers for early intervention and triage in the collections strategy process.

One often-overlooked tool is the use of maturation curves to identify vintages within a portfolio that are performing worse than average. A maturation curve shows how long it takes, from origination, for a vintage or segment of the portfolio to reach a normalized rate of delinquency. Let's assume that you are launching a new credit product into the marketplace. You begin to book new loans under the program in the current month. Beyond that month, you monitor all new loans that were originated during that initial time frame, which we can identify as a "vintage" of the portfolio. Each month's originations are a separate vintage, and we can track the performance of each vintage over time. How many months will it take before the "portfolio" of loans booked in that initial month reaches a normal level of delinquency, given the credit quality of the portfolio and its borrowers, typical collections servicing, delinquency reporting standards and the passage of time?

The answer certainly depends on those factors, and can be graphed as in Exhibit 1 (a chart of 90+ delinquency rates by months on book, with one line per vintage).

In Exhibit 1, we examine different vintages, beginning with loans originated in Q2 2002 and continuing through Q2 2008. The purpose of the analysis is to identify those vintages that have a steeper slope towards delinquency, also known as a delinquency maturation curve. The X-axis represents the timeline in months from origination; the Y-axis represents the 90+ delinquency rate expressed as a percentage of balances in the portfolio. Vintages with a steeper slope reach a normalized level of delinquency sooner and could, in fact, show a trend line suggesting that they will overshoot the expected delinquency rate for the portfolio based upon its credit quality standards.

So how do we use the maturation curve as a tool? In my next blog, I will discuss how to use maturation curves to identify trends across various portfolios, and how to differentiate collections issues from originations or lifecycle risk management opportunities.
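The vintage calculation behind a chart like Exhibit 1 can be sketched as follows. The loan-level input format is an assumption for illustration:

```python
from collections import defaultdict

def maturation_curves(loans):
    """loans: list of dicts with 'vintage', 'balance' and 'dq90_by_month'
    (a mapping of month-on-book -> True if the loan was 90+ days past due
    in that month). Returns {vintage: {month: 90+ DPD rate as % of balances}},
    i.e. one curve per vintage, ready to plot against months on book."""
    totals = defaultdict(float)                       # total balance per vintage
    dq = defaultdict(lambda: defaultdict(float))      # 90+ DPD balance per vintage/month
    months = defaultdict(set)                         # observed months per vintage
    for loan in loans:
        totals[loan["vintage"]] += loan["balance"]
        for month, is_dq in loan["dq90_by_month"].items():
            months[loan["vintage"]].add(month)
            if is_dq:
                dq[loan["vintage"]][month] += loan["balance"]
    return {v: {m: 100.0 * dq[v][m] / totals[v] for m in sorted(months[v])}
            for v in totals}
```

A vintage whose curve rises faster than its peers at the same months on book is the early-warning signal described above.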

Published: November 23, 2009 by Guest Contributor

In my last post I discussed the problem with confusing what I would call "real" Knowledge Based Authentication (KBA) with secret questions. However, I don't think that's where the market focus should be. Instead of looking at Knowledge Based Authentication as it is today, we should be looking toward the future, and the future starts with risk-based authentication.

If you're like most people, right about now you're wondering exactly what I mean by risk-based authentication, how it differs from Knowledge Based Authentication, and how we got from point A to point B. It is actually pretty simple. Knowledge Based Authentication is one factor in a risk-based authentication fraud prevention strategy. A risk-based authentication approach doesn't rely on questions and answers alone, but instead utilizes fraud models that include Knowledge Based Authentication performance as part of the fraud analytics to improve fraud detection performance. With a risk-based authentication approach, decisioning strategies are more robust and should include many factors, including the results from scoring models.

That isn't to say that Knowledge Based Authentication isn't an important part of a risk-based approach. It is. Knowledge Based Authentication is a necessity because it has gained consumer acceptance. Without some form of Knowledge Based Authentication, consumers question an organization's commitment to security and data protection. Most importantly, consumers now view Knowledge Based Authentication as a tool for their protection; it has become a bellwether to consumers.

As the bellwether, Knowledge Based Authentication has been the perfect vehicle to introduce new and more complex authentication methods to consumers, without them even knowing it. KBA has allowed us to familiarize consumers with out-of-band authentication and IVR, and I have little doubt that it will be one of the tools to play a part in the introduction of voice biometrics to help prevent consumer fraud.

Is it always appropriate to present questions to every consumer? No, but that's where a true risk-based approach comes into play. Is Knowledge Based Authentication always a valuable component of a risk-based authentication tool to minimize fraud losses as part of an overall approach to fraud best practices? Absolutely; always. DING!

Published: November 23, 2009 by Guest Contributor

--by Andrew Gulledge

Definition and examples

Knowledge Based Authentication (KBA) is when you ask a consumer questions to which only they should know the answer. It is designed to prevent identity theft and other kinds of third-party fraud. Examples of Knowledge Based Authentication (also known as out-of-wallet) questions include "What is your monthly car payment?" or "What are the last four digits of your cell number?" KBA, and its associated fraud analytics, is an important part of your fraud best practices strategy.

What makes a good KBA question?

High percentage correct - A good Knowledge Based Authentication question will be easy for the real consumer to answer. Thus, we tend to shy away from questions for which a high percentage of consumers give the wrong answer. Using too many of these questions will contribute to false positives in your authentication process (i.e., failing a good consumer). False positives can be costly to a business, either by losing a good customer outright or by overloading your manual review queue (putting pressure on call centers, mailers, etc.).

High fraud separation - It is appropriate to make an exception, however, if a question with a low percentage correct tends to show good fraud detection. (After all, most people use a handful of KBA questions during an authentication session, so you can leave a little room for error.) Look at the fraudsters who successfully get through your authentication process and see which questions they got right and which they got wrong. The Knowledge Based Authentication questions that are your best fraud detectors will have a lower percentage correct in your fraud population than in the overall population. This difference is called fraud separation, and it is a measure of a question's capacity to catch the bad guys.

High question generability - A good Knowledge Based Authentication question will also be generable for a high percentage of consumers. It's admirable to beat your chest and say your KBA tool offers 150 different questions, but it's a much better idea to generate a full (and diverse) question set for over 99 percent of your consumers. Some KBA vendors tout a high number of questions, but some of these can only be generated for one or two percent of the population (if that). And while it's nice to be able to ask for a consumer's SCUBA certification number, this kind of question is not likely to have much effect on your overall production.
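Fraud separation, as defined above, is just the gap between a question's percent correct in the overall population and in the known-fraud population. A sketch with made-up response data:

```python
def pct_correct(responses):
    """responses: list of booleans, True if the question was answered correctly."""
    return 100.0 * sum(responses) / len(responses)

def fraud_separation(overall_responses, fraud_responses):
    """Percent correct overall minus percent correct among known frauds.
    Higher separation = a better fraud detector, since frauds miss it
    more often than the general population does."""
    return pct_correct(overall_responses) - pct_correct(fraud_responses)
```

A question answered correctly by 90 percent of everyone but only 40 percent of confirmed frauds has a separation of 50 points, and may be worth keeping despite a modest overall percent correct.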

Published: November 23, 2009 by Guest Contributor

By: Tom Hannagan Understanding RORAC and RAROC I was hoping someone would ask about these risk management terms…and someone did. The obvious answer is that the “A” and the “O” are reversed. But, there’s more to it than that. First, let’s see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point towards both RORAC and RAROC. ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets.  This used to be the most popular way of comparing banks to each other and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although this tends to be those at smaller banks. ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have “assets” for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, lets just say it’s too late to read about this general topic. 
The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other.  This also may apply even for banks to be compared to other types of businesses. The asset-independency of ROE can also allow a bank to compare internal product lines to each other. Perhaps most importantly, this permits looking at the comparative profitability of lines of business that are almost complete opposites, like lending versus deposit services. This includes risk-based pricing considerations. This would be difficult, if even possible, using ROA. ROE also tells us how effectively a bank (or any business) is using shareholders equity. Many observers prefer ROE, since equity represents the owners’ interest in the business. As we have all learned anew in the past two years, their equity investment is fully at-risk. Equity holders are paid last, compared to other sources of funds supporting the bank. Shareholders are the last in line if the going gets rough. So, equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization. So, why bother with RORAC or RAROC? In short, it is to take risks more fully into the process of risk management within the institution. ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (loan loss provision). 
The big risk management elements missing in general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL is covered by the Equity account, or the solvency of the bank becomes an issue. RORAC is Net Income divided by Allocated Capital. RORAC doesn’t add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this, by moving beyond just book or average Equity, by allocating capital, or equity, differentially to various lines of business and even specific products and clients. This, in turn, makes it possible to move towards risk-based pricing at the relationship management level as well as portfolio risk management.  This equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So, it’s a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And, many of us do. RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. RAROC, like RORAC, also takes into account the risk of unexpected loss by allocating capital, or equity, differentially to various lines of business and even specific products and clients. So, RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric or ratio of profitability and is an ultimate goal of modern risk management. 
So RORAC is a big step in the right direction, and RAROC is the full step in the management of risk; RORAC can be a useful stepping stone toward RAROC. RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level and broken down for any and all lines of business within the organization. From there, it can be broken down further to the product level and the client relationship level, and summarized by lender portfolio or various market segments. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as on operational or marketing prowess.
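The RORAC-to-RAROC progression can be sketched numerically. All figures below are invented for illustration, assuming capital is allocated in proportion to each line's unexpected-loss estimate and that the RAROC risk adjustment deducts an unmitigated market-risk cost from income:

```python
def rorac(net_income, allocated_capital):
    """RORAC: general-ledger net income over risk-based allocated capital."""
    return net_income / allocated_capital

def raroc(net_income, market_risk_cost, allocated_capital):
    """RAROC: also risk-adjusts the numerator by deducting the cost of
    unmitigated market risk embedded in the line's assets and liabilities."""
    return (net_income - market_risk_cost) / allocated_capital

# Two hypothetical lines of business. Book equity might split 50/50, but
# capital is allocated unevenly, reflecting each line's unexpected-loss risk.
lending  = {"net_income": 10.0, "allocated_capital": 70.0, "market_risk_cost": 2.0}
deposits = {"net_income": 4.0,  "allocated_capital": 30.0, "market_risk_cost": 0.5}

for name, line in (("lending", lending), ("deposits", deposits)):
    print(name,
          round(rorac(line["net_income"], line["allocated_capital"]), 4),
          round(raroc(line["net_income"], line["market_risk_cost"],
                      line["allocated_capital"]), 4))
```

In this made-up example, lending looks better on RORAC, but once its larger unmitigated market-risk cost is deducted, deposits come out ahead on RAROC. That rank reversal is exactly why risk-adjusting both numerator and denominator matters.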

Published: November 19, 2009 by Guest Contributor

Round 1 – Pick your corner

There seem to be two viewpoints in the market today about Knowledge Based Authentication (KBA): one positive, one negative. Depending on the corner you choose, you probably view it either as a tool to help reduce identity theft and minimize fraud losses, or as a deficiency in the management of risk and the root of all evil. The opinions on both sides are pretty strong, and biases "for" and "against" run pretty deep.

One of the biggest challenges in discussing Knowledge Based Authentication as part of an organization's identity theft prevention program is the perpetual confusion between dynamic out-of-wallet questions and static "secret" questions. At this point, most people in the industry agree that static secret questions offer little consumer protection. Answers are easily guessed or easily researched, and if the questions are preference-based (like "what is your favorite book?") there is a good chance the consumer will fail the authentication session because they forgot the answers, or because the answers changed over time.

Dynamic Knowledge Based Authentication, on the other hand, presents questions that the consumer did not select. Questions are generated from information known about the consumer: things the true consumer would know and a fraudster most likely wouldn't. The questions posed during Knowledge Based Authentication sessions aren't designed to "trick" anyone but a fraudster, though a best-in-class product should offer a number of features and options. These may allow for flexible configuration of the product and deployment at multiple points of the consumer life cycle without impacting the consumer experience. The two are as different as night and day. Do those who consider "secret questions" to be Knowledge Based Authentication also consider the password portion of the user name and password process to be KBA?
If you want to hold to strict logic and definition, one could argue that a password meets the definition of Knowledge Based Authentication, but common sense and practical use cause us to differentiate it, which is exactly what we should do with secret questions: differentiate them from true KBA. KBA can provide strong authentication, or be part of a multifactor authentication environment, without a negative impact on the consumer experience. So, for the record, when we say KBA we mean dynamic, out-of-wallet questions: the kind that are generated "on the fly" and delivered to the consumer via "pop quiz" in a real-time environment. And we think this kind of KBA does work. As part of a risk management strategy, KBA has a place within the authentication framework as a component of risk-based authentication... and risk-based authentication is what it is really all about.
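To illustrate the distinction, a dynamic KBA engine builds its quiz on the fly from data known about the consumer, rather than from consumer-chosen secrets. A toy sketch follows; the record fields, question templates, decoy answers and scoring are all invented for illustration and do not reflect any real product:

```python
import random

# Hypothetical consumer record: facts the true consumer would know
# but a fraudster most likely would not.
record = {
    "prior_street": "Maple Ave",
    "auto_lender": "First Example Bank",
    "county": "Travis",
}

# Question templates keyed to record fields, each with fabricated decoys.
templates = {
    "prior_street": ("Which street have you previously lived on?",
                     ["Oak St", "Pine Rd", "Elm Blvd"]),
    "auto_lender": ("Which lender financed your most recent auto loan?",
                    ["Acme Credit", "Zenith Auto Finance", "None of the above"]),
    "county": ("In which county is your current address?",
               ["Harris", "Dallas", "Bexar"]),
}

def build_quiz(record, templates, seed=0):
    """Generate an out-of-wallet quiz on the fly: one question per field,
    with the true answer shuffled in among the decoys."""
    rng = random.Random(seed)
    quiz = []
    for field, (question, decoys) in templates.items():
        choices = decoys + [record[field]]
        rng.shuffle(choices)
        quiz.append({"question": question, "choices": choices,
                     "answer": record[field]})
    return quiz

def score(quiz, responses):
    """Count correct responses; a pass threshold would gate authentication."""
    return sum(1 for q, r in zip(quiz, responses) if r == q["answer"])

quiz = build_quiz(record, templates)
print(score(quiz, [q["answer"] for q in quiz]))  # the true consumer scores 3
```

Nothing here was selected or memorized by the consumer in advance, which is the night-and-day difference from static secret questions.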

Published: November 16, 2009 by Guest Contributor

Many compliance regulations, such as the Red Flags Rule, the USA PATRIOT Act, and ESIGN, require specific identity elements to be verified and specific high-risk conditions to be detected. However, there is still much variance in how individual institutions reconcile referrals generated from the detection of high-risk conditions and/or the absence of identity element verification. With this in mind, risk-based authentication (defined in this context as "the holistic assessment of a consumer and transaction with the end goal of applying the right authentication and decisioning treatment at the right time") offers institutions a viable strategy for balancing the following competing forces and pressures:

• Compliance – the need to ensure each transaction is approved only when compliance requirements are met;
• Approval rates – the need to meet business goals in the booking of new accounts and the facilitation of existing account transactions;
• Risk mitigation – the need to minimize fraud exposure at the account and transaction level.

A flexibly designed risk-based authentication strategy incorporates a robust breadth of data assets, detailed results, granular information, targeted analytics and automated decisioning. This allows an institution to strike a harmonious balance (or at least something close to it) between the need to remain compliant, the need to approve the vast majority of applications or customer transactions and, oh yeah, the need to minimize fraud and credit risk exposure. Sole reliance on a binary assessment of the presence or absence of high-risk conditions and identity element verifications will, more often than not, create an operational process that is overburdened by manual referral queues, and leave an unnecessarily large proportion of viable consumers unable to be serviced by your business.
Use of analytically sound risk assessments and objective and consistent decisioning strategies will provide opportunities to calibrate your process to meet today’s pressures and adjust to tomorrow’s as well.  
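A risk-based strategy of the kind described above can be expressed as a graduated decision function rather than a binary pass/fail gate. The thresholds, flag counts and treatment names below are hypothetical, chosen only to show the shape of such a rule:

```python
def decide(identity_verified: bool, high_risk_flags: int, risk_score: float) -> str:
    """Map identity verification results, counts of detected high-risk
    conditions, and an analytic risk score (0.0-1.0) to a treatment,
    instead of routing every exception to a manual referral queue."""
    if high_risk_flags >= 2 or risk_score >= 0.90:
        return "decline"      # clear compliance or fraud risk
    if not identity_verified or high_risk_flags == 1 or risk_score >= 0.60:
        return "step-up"      # e.g. targeted KBA or a document check
    return "approve"          # low risk: frictionless approval

print(decide(True, 0, 0.10))   # approve
print(decide(True, 1, 0.10))   # step-up
print(decide(False, 0, 0.95))  # decline
```

Because the middle tier resolves automatically through step-up authentication, only a calibrated slice of volume ever reaches a human referral queue, which is the operational relief the post argues for.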

Published: November 16, 2009 by Keir Breitenfeld
