Latest Posts


I’ve recently been hearing a lot about how bankcard lenders are reacting to changes in legislation, and recent statistics clearly show that lenders have reduced bankcard acquisitions as they retune acquisition and account management strategies for their bankcard portfolios. At this point, there appears to be a wide-scale reset of how lenders approach the market, and one of the main questions that needs to be answered pertains to market-entry timing: Should a lender be the first to re-enter the market in a significant manner, or is it better to wait and see how things develop before executing new credit strategies? I will dedicate my next two blogs to defining these approaches and discussing them with regard to the current bankcard market.

Based on common academic frameworks, today’s lenders have the option of choosing one of two routes: becoming a first-mover, or taking the role of a secondary or late mover. Each of these roles carries certain advantages and corresponding risks that will dictate a lender's strategic choices.

The first-mover advantage is defined as “a sometimes insurmountable advantage gained by the first significant company to move into a new market.” (1) Although often confused with being first-to-market, first-mover advantage is more commonly attributed to the first firm to enter a market substantially. The belief is that the first mover stands to gain competitive advantages through technology, economies of scale and other avenues that result from this entry strategy. In the case of the bankcard market, current trends suggest that segments of subprime and deep-subprime consumers are currently underserved, so I would consider the first lender to target these customers with significant resources to have ‘first-mover’ characteristics.

The second-mover to a market can also have certain advantages: the second-mover can review and assess the decisions of the first-mover and develop a strategy to take advantage of opportunities not seized by the first-mover. It can also learn from the first-mover's mistakes and respond without incurring the cost of experiential learning, leaving it with superior market intelligence.

So, being a first-mover or a second-mover can each have its advantages and pitfalls. In my next contribution, I’ll address these issues as they pertain to lenders considering their loan origination strategies for the bankcard market.

(1) http://www.marketingterms.com/dictionary/first_mover_advtanage

Published: January 14, 2010 by Kelly Kent

Conducting a validation on historical data is a good way to evaluate fraud models; however, fraud best practices dictate that a proper validation uses properly defined fraud tags. Before you can determine whether a fraud model or fraud analytics tool would have helped minimize fraud losses, you need to know what you are looking for. Many organizations have difficulty differentiating credit losses from fraud losses; usually, fraud losses end up lumped in with credit losses. When this happens, the analysis either has too few “known frauds” to create a business case for change, or it includes such a large target population of credit losses that the results are poor. By planning carefully, you can avoid this pitfall and ensure that your validation gives you the best chance to improve your business and minimize fraud losses.

As a fraud best practice for validations, consider using a target population that errs on the side of including credit losses; however, be sure to include additional variables in your sample that will allow you and your fraud analytics provider to apply various segmentations to the results. Suggested elements to include in your sample are: delinquency status, first delinquency date, date of last valid payment, date of last bad payment and an indicator of whether the account was reviewed for fraud prior to booking. Starting with a larger population, and giving yourself the flexibility to narrow the target later, will help you see the full value of the solutions you evaluate and reduce the likelihood of having to redo the analysis.
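As an illustration only, here is a minimal sketch (in Python with pandas, using hypothetical column names that your own loss data would replace) of how a validation sample might carry those extra segmentation variables so the target population can be narrowed after the fact:

```python
import pandas as pd

# Hypothetical column names; actual loss data will differ.
SEGMENTATION_FIELDS = [
    "delinquency_status",
    "first_delinquency_date",
    "last_valid_payment_date",
    "last_bad_payment_date",
    "reviewed_for_fraud_pre_booking",
]

def build_validation_sample(losses: pd.DataFrame) -> pd.DataFrame:
    """Start broad: include credit losses alongside tagged frauds,
    but keep the fields needed to segment the results later."""
    required = ["account_id", "loss_amount", "fraud_tag"] + SEGMENTATION_FIELDS
    return losses[required].copy()

def segment_results(sample: pd.DataFrame) -> pd.DataFrame:
    """Example segmentation: compare accounts that were reviewed for
    fraud before booking against those that were not."""
    return (
        sample.groupby("reviewed_for_fraud_pre_booking")["loss_amount"]
        .agg(["count", "sum"])
    )
```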

Published: January 13, 2010 by Chris Ryan

By: Tom Hannagan

This blog has often discussed many aspects of risk-adjusted pricing for loans. Loans, with their inherent credit risk, certainly deserve a lot of attention when it comes to risk management in banking. But that doesn’t mean you should ignore the risk management implications found in the other product lines. Enterprise risk management needs to consider all of the lines of business, and all of the products, of the organization. This includes the deposit services arena.

Deposits make up roughly 65 percent to 75 percent of the liability side of the balance sheet for most financial institutions, representing the lion’s share of their funding source. This is a major source of operational expense and also represents most of the bank’s interest expense. The deposit activity has operational risk, and this large funding source plays a huge role in market risk, including both interest rate risk and liquidity risk. It stands to reason that such risks would be considered when pricing deposit services. Unfortunately, that is not always the case. Okay, to be honest, it’s too rarely the case.

This raises serious entity governance questions. How can such a large operational undertaking, notwithstanding the criticality of the funding implications, not be subjected to risk-based pricing considerations? We have seen warnings already that the current low interest rate environment will not last forever. When the economy improves and rates head upwards, banks need to understand the bottom-line profit implications. Deposit rate sensitivity across the various deposit types is a huge portion of the impact on net interest income. Risk-based pricing of these services should be considered before committing to provide them.

Even without the credit risk implications found on the loan side of the balance sheet, there is still plenty of operational and market risk impact that needs to be taken into account from the liability side. When risk management is not considered and mitigated as part of the day-to-day management of the deposit line of business, the bank is leaving these risks completely to chance. This unmitigated risk increases the portion of overall risk that is considered “unexpected” in nature and thereby increases the equity capital required to support the bank.
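To make the rate-sensitivity point concrete, here is a rough, hypothetical sketch (invented balances and pass-through assumptions, not figures from the post) of how the interest expense impact of a rate rise might be estimated across deposit types:

```python
# Hypothetical balances and repricing "betas" (the share of a market rate
# move that is passed through to each deposit type). All figures are
# illustrative assumptions, not industry benchmarks.
deposits = {
    #  type            (balance,      beta)
    "checking":        (400_000_000, 0.10),
    "savings":         (300_000_000, 0.35),
    "money_market":    (200_000_000, 0.60),
    "time_deposits":   (100_000_000, 0.90),
}

rate_increase = 0.02  # a 200 bp rise in market rates

added_interest_expense = sum(
    balance * beta * rate_increase for balance, beta in deposits.values()
)

# Every dollar of added deposit interest expense reduces net interest
# income dollar for dollar unless asset yields reprice just as quickly.
print(f"Estimated added annual interest expense: ${added_interest_expense:,.0f}")
```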

Published: January 12, 2010 by Guest Contributor

In a previous blog, we shared ideas for expanding the “gain” to create a successful ROI for adopting new fraud best practices. In this post, we’ll look more closely at the “cost” side of the ROI equation.

The cost of the investment: The costs of fraud analytics and tools that support fraud best practices go beyond the fees charged by the solution provider. While the marketplace is aware of these costs, they often aren’t considered by the solution providers. Achieving consensus on an ROI to move forward with new technology requires both parties to account for these costs. A more robust ROI should account for these areas:

• Labor costs: If a tool increases fraud referral rates, those costs must be taken into account.
• Integration costs: Many organizations have strict requirements for recovering integration costs. This can place an additional burden on a successful ROI.
• Contractual obligations: As customers look to reduce the cost of other tools, they must be mindful of any obligations to use those tools.
• Opportunity costs: Organizations do need to account for the potential impact of their fraud best practices on good customers. Barring a true champion/challenger evaluation, a good way to do this is to remain as neutral as possible with respect to the total number of fraud alerts generated using new fraud tools compared to the legacy process.

As you can see (and as the sketch below illustrates), the challenge of creating a compelling ROI can be much more complicated than the basic equation suggests. It is critical in many industries to begin exploring ways to augment the ROI equation. This will ensure that our industries evolve and thrive without becoming complacent or unable to stay on top of dynamic fraud trends.
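As a purely illustrative sketch (the figures and parameter names are assumptions, not from the post), an “augmented” ROI calculation might enumerate the cost categories above explicitly rather than relying on provider fees alone:

```python
def augmented_roi(gain: float, provider_fees: float, labor: float,
                  integration: float, contractual: float,
                  opportunity: float) -> float:
    """ROI = (gain - cost) / cost, with cost expanded beyond provider fees."""
    total_cost = provider_fees + labor + integration + contractual + opportunity
    return (gain - total_cost) / total_cost

# Hypothetical annualized figures for a new fraud tool.
roi = augmented_roi(
    gain=1_200_000,        # avoided fraud losses plus other measured savings
    provider_fees=300_000,
    labor=150_000,         # added referral-review workload
    integration=100_000,
    contractual=50_000,    # remaining obligations on legacy tools
    opportunity=75_000,    # friction imposed on good customers
)
print(f"Augmented ROI: {roi:.1%}")
```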

Published: January 11, 2010 by Chris Ryan

By: Wendy Greenawalt

Given the current volatile market conditions and rising unemployment rates, no industry is immune from delinquent accounts. However, recent reports have shown a shift in consumer trends and attitudes related to cellular phones. For many consumers, a cell phone is an essential tool for business and personal use, and staying connected is a very high priority. Given this, many consumers pay their cellular bill before other obligations, even if they represent a poor bank credit risk. Even so, cellular providers are not immune from delinquent accounts or from having to determine the right course of action to improve collection rates. By applying optimization technology to account collection decisions, cellular providers can ensure that all variables are considered across the multiple contact options available.

Unlike other types of services, cellular providers have numerous options available in an attempt to collect on outstanding accounts. This, however, poses other challenges, because collectors must determine the ideal method and timing of collection attempts while retaining the consumers who will be profitable in the long term. Optimized decisions can consider all contact methods, such as text, inbound/outbound calls, disconnect, service limitation, timing and diversion of calls. At the same time, providers must consider constraints such as likelihood of curing, historical consumer behavior (such as credit score trends) and resource costs and limitations. Since the cellular industry is one of the most competitive businesses, it is imperative that providers take advantage of every tool that can improve collection decisions and drive revenue and retention. An optimized strategy tree can be easily implemented into current collection processes and provide significant improvement over current processes.
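As a minimal, hypothetical sketch of the decisioning idea (not the optimization engine the post refers to), the assignment of one contact action per account under a resource constraint can be written as a small integer program; here it uses the open-source PuLP library, with invented balances, costs and cure probabilities:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

accounts = {"A1": 120.0, "A2": 80.0, "A3": 200.0}            # balance at risk
actions = {"text": 0.5, "call": 4.0, "limit_service": 1.0}   # cost per attempt
cure_prob = {  # assumed probability that the action cures the account
    ("A1", "text"): 0.10, ("A1", "call"): 0.30, ("A1", "limit_service"): 0.20,
    ("A2", "text"): 0.15, ("A2", "call"): 0.25, ("A2", "limit_service"): 0.10,
    ("A3", "text"): 0.05, ("A3", "call"): 0.35, ("A3", "limit_service"): 0.25,
}

prob = LpProblem("collection_actions", LpMaximize)
x = {(a, t): LpVariable(f"x_{a}_{t}", cat="Binary") for a in accounts for t in actions}

# Objective: expected recovered balance minus contact cost.
prob += lpSum(x[a, t] * (cure_prob[a, t] * accounts[a] - actions[t])
              for a in accounts for t in actions)

# At most one action per account; limited outbound call capacity.
for a in accounts:
    prob += lpSum(x[a, t] for t in actions) <= 1
prob += lpSum(x[a, "call"] for a in accounts) <= 2

prob.solve()
chosen = [(a, t) for (a, t) in x if x[a, t].value() == 1]
print(chosen)
```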

Published: January 7, 2010 by Guest Contributor

A recent article in the Boston Globe talked about the lack of incentive for banks to perform wide-scale real estate loan modifications, due to the lack of profitability for lenders in the current government-led program structure. The article cited a recent study by the Boston Federal Reserve that noted up to 45 percent of borrowers who receive loan modifications end up in arrears again afterwards. On the other hand, around 30 percent of borrowers cured without any external support from lenders, leading them to believe that the cost and effort required to modify delinquent loans is not a profitable, or even necessary, proposition. Adding to this, one of the study’s authors was quoted as saying, “a lot of people you give assistance to would default either way or won’t default either way.”

The problem that lenders face is that although they know certain borrowers are prone to re-default, or to cure without much assistance, there has been little information available to distinguish these consumers from each other. Segmenting these customers is the key to creating a profitable loan modification process, since identifying each type of consumer in advance allows lenders to treat each borrower in the most efficient and profitable manner.

In considering possible solutions, the opportunity exists to leverage the power of credit data and credit attributes to create models that can profile the behaviors lenders need to isolate. Although the rapid changes in the economy have left many lenders without precedent behavior on which to model, the recent trend of consumers who re-default is beginning to provide lenders with correlated credit attributes to include in their models. Credit attributes were used in a recent study on strategic defaulters in the Experian-Oliver Wyman Market Intelligence Reports, and these attributes created defined segments that can assist lenders with implementing profitable loan modification policies and decisioning strategies.

Published: January 6, 2010 by Kelly Kent

By definition, “Return on Investment” is simple:

ROI = (The gain from an investment - The cost of the investment) / The cost of the investment

With such a simple definition, why do companies that develop fraud analytics and their customers have difficulty agreeing to move forward with new fraud models and tools? I believe the answer lies in the definition of the factors that make up the ROI equation.

“The gain from an investment”: When it comes to fraud, most vendors and customers want to focus on minimizing fraud losses. But what happens when fraud losses are not large enough to drive change? To adopt new technology, it’s necessary for the industry to expand its view of the “gain.” One way to expand the “gain” is to identify other types of savings and opportunities that aren’t currently measured as fraud losses. These include:

Cost of other tools - Data returned by fraud tools can be used to resolve Red Flag compliance discrepancies and help fraud analysts manage high-risk accounts. By making better use of this information, downstream costs can be avoided.

Other types of “bad” - Organizations are beginning to look at the similarities between fraud and credit losses. Rather than identifying a fraud trend and searching for a tool to address it, some industry leaders are taking a different approach: let the fraud tool identify the high-risk accounts, and then see what types of behavior exist in that population. This approach helps organizations create the business case for constant improvement and also helps them validate the way in which they currently categorize losses.

To increase cross-sell opportunities - Focus on the “good” populations. False positives aren’t just filtered out of the fraud review work flow; they are routed into other work flows where relationships can be expanded.
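As a small illustration of the formula and the broadened “gain” described above (all figures are invented for the example), the same ROI calculation changes materially once the other categories of savings are counted:

```python
def roi(gain: float, cost: float) -> float:
    """Return on Investment: (gain - cost) / cost."""
    return (gain - cost) / cost

# Hypothetical figures: the "gain" is broadened beyond avoided fraud losses
# to include the other categories of savings and opportunity noted above.
expanded_gain = (
    500_000    # avoided fraud losses
    + 120_000  # downstream costs avoided by reusing fraud-tool data (e.g., Red Flag work)
    + 80_000   # credit losses caught by fraud-style high-risk screening
    + 60_000   # incremental revenue from cross-selling cleared "good" customers
)
print(f"Narrow ROI:   {roi(500_000, 400_000):.1%}")
print(f"Expanded ROI: {roi(expanded_gain, 400_000):.1%}")
```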

Published: January 4, 2010 by Chris Ryan

By: Heather Grover

In my previous entry, I covered how fraud prevention affects the operational side of new DDA account opening. To give a complete picture, we need to consider fraud best practices and their impact on the customer experience. As mentioned earlier, the branch continues to be a highly utilized channel and is the place for “customized service.” In addition, for retail banks the branch continues to be the consumer's first point of contact, so fraud detection is paramount in determining IF we should initiate a relationship with the consumer.

Traditional thinking has been that DDA accounts are secured by deposits, so little risk management policy is applied. The reality is that the DDA account can be a fraud portal into the organization’s many products. Bank consolidations and lower application volumes are driving increased competition at the branch, and increased demand exists to cross-sell consumers at the point of new account opening. As a result, banks are moving many fraud checks to the front end of the process: know-your-customer and Red Flag guideline checks are done sooner in the process, in a consolidated and streamlined fashion. The goal is to minimize fraud losses and meet compliance in a single step, so that new account holders are processed through the system as quickly as possible.

Another recent trend is streamlining the two-day batch fraud check process to provide account holders with an immediate and final decision. The casualty of a longer process could be a consumer who walks out of your branch with a checkbook in hand, only to be contacted the next day and told that his or her account has been shut down. By addressing this process, not only will the customer experience improve, with increased retention, but operational costs will also be reduced.

Finally, relying on documentary evidence for ID verification can be viewed by some consumers as onerous and lengthy. Use of knowledge based authentication can provide more robust authentication while giving assurance of the consumer’s identity. The key is to use a solution that can authenticate “thin file” consumers opening DDA accounts. This means your out-of-wallet questions need to rely on multiple data sources, not just credit. Interactive questions can give your account holders peace of mind that you are doing everything possible to protect their identity, which builds the customer relationship…and your brand.

Published: January 4, 2010 by Guest Contributor

By: Heather Grover

In past client and industry talks, I’ve discussed the increasing importance of retail branches to the growth strategy of the bank. Branches are the most utilized channel of the bank, and they tend to be the primary tool for relationship expansion. Given the face-to-face nature of the channel, the branch historically has been viewed as relatively low risk, needing little (if any) identity verification, and there is less use of robust risk-based authentication or out-of-wallet questions. However, a now well-established fraud best practice is proper identity verification and fraud prevention at the point of DDA account opening.

In the current environment of declining credit application volumes and approvals across the enterprise, there is an increased focus on organic growth through deposits. Doing proper vetting during DDA account openings helps bring your retail process closer in line with the rest of your organization’s identity theft prevention program. It also provides assurance and confidence that the customer can now be cross-sold and up-sold to other products.

A key industry challenge is that many of the current tools used in DDA are less mature than those in other areas of the organization. We see few clients in retail that are using advanced fraud analytics or fraud models to minimize fraud, and even fewer clients using them to automate manual processes, even though more than 90 percent of DDA accounts are opened manually.

A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud? Many providers offer solutions that can do both, which can help minimize the number of steps required to process a new account.

2. Is the solution real-time? To the extent that you can provide your new account holders with an immediate and final decision, the less time and effort you’ll spend finalizing the decision after they leave the branch.

3. Does the solution provide detailed data for manual review? This can help save valuable analyst time and provider costs by limiting the need to do additional searches.

In my next post, we’ll discuss how fraud prevention in DDA impacts the customer experience.

Published: December 30, 2009 by Guest Contributor

By: Amanda Roth

The final level of validation for your risk-based pricing program is to validate for profitability. Not only will this analysis build on the two previous analyses, but it will factor in the cost of making a loan based on the risk associated with that applicant. Many organizations do not complete this crucial step. As a result, they may have applicants grouped together correctly but still find themselves unprofitable.

The premise of risk-based pricing is that we are pricing to cover the cost associated with an applicant. If an applicant has a higher probability of delinquency, we can assume there will be additional collection costs, reporting costs and servicing costs associated with keeping this applicant in good standing. We must understand what these costs may be, though, before we can price accordingly. Information of this type can be difficult to determine, depending on the resources available to your organization. If you aren’t able to determine the exact amount of time and costs associated with loans at different risk levels, there are industry best practices that can be applied.

Of primary importance is to factor in the cost to originate, service and terminate a loan based on varying risk levels. This is the only true way to validate that your pricing program is working to provide profitability to your loan portfolio.
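As a purely illustrative sketch with invented numbers (not the author's methodology), a profitability check at a given risk tier might compare the rate charged against the all-in cost of originating, servicing and terminating the loan plus expected credit losses:

```python
def required_rate(funding_cost: float, expected_loss_rate: float,
                  origination_cost: float, servicing_cost: float,
                  termination_cost: float, avg_balance: float,
                  target_margin: float) -> float:
    """Annualized rate needed to cover all costs at a given risk tier.
    One-time origination/termination costs are spread over an assumed
    one-year average life for simplicity."""
    operating_rate = (origination_cost + servicing_cost + termination_cost) / avg_balance
    return funding_cost + expected_loss_rate + operating_rate + target_margin

# Hypothetical figures for a higher-risk tier.
rate = required_rate(
    funding_cost=0.02,        # cost of funds
    expected_loss_rate=0.04,  # higher delinquency => higher loss and collection cost
    origination_cost=150.0,
    servicing_cost=90.0,      # includes extra collection/reporting effort
    termination_cost=60.0,
    avg_balance=10_000.0,
    target_margin=0.015,
)
print(f"Break-even-plus-margin rate for this tier: {rate:.2%}")
```

If the rate actually being charged in that tier falls below this figure, the tier is priced below its risk-adjusted cost even when applicants are grouped correctly.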

Published: December 28, 2009 by Guest Contributor

The definition of account management authentication is: keep your customers happy, but don’t lose sight of fraud risks and the effective tools to combat those risks. In my previous posting, I discussed some unique fraud risks facing institutions during the account management phase of their customer lifecycles. As a follow-up, I want to review a couple of effective tools that allow you to efficiently minimize fraud losses post-application.

Knowledge Based Authentication (KBA) — This process involves the use of challenge/response questions beyond "secret" or "traditional" internally derived questions (such as mother's maiden name or last transaction amount). This tool allows for measurably effective use of questions based on more broad-reaching data (credit and noncredit) and consistent delivery of those questions without subjective question creation and grading by call center agents. KBA questions sourced from information not easily accessible by call center agents or fraudsters provide an additional layer of security that is harder to defeat through social engineering. From a process efficiency standpoint, automated KBA can also shorten online sessions for consumers and reduce call times, as agents spend less time self-selecting questions, self-grading responses and subjectively determining next steps. Delivery of KBA questions via consumer-facing online platforms or via interactive voice response (IVR) systems can further reduce operational costs, since the entire KBA process can be accommodated without call center agent involvement.

Negative file and fraud database — Performing checks against known fraudulent and abuse records affords institutions an opportunity, in batch or real time, to check elements such as address, phone and SSN for prior fraudulent use or victimization. These checks are a critical element in supplementing traditional consumer authentication processes, particularly in account management procedures where consumer and/or account information may have been compromised. Transaction requests such as address or phone changes to an account are particularly low-hanging fruit as far as running negative file checks is concerned.
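As a minimal, hypothetical sketch of the negative-file idea (the record layout and routing rule are assumptions for illustration, not a description of any vendor database), an account change request might be screened like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NegativeRecord:
    element_type: str   # "address", "phone", or "ssn"
    value: str

# Known fraudulent or abuse records (illustrative entries only).
negative_file = {
    NegativeRecord("phone", "555-0142"),
    NegativeRecord("address", "12 ELM ST APT 4"),
}

def check_change_request(element_type: str, new_value: str) -> str:
    """Flag address/phone/SSN changes that match known fraud records."""
    if NegativeRecord(element_type, new_value.upper()) in negative_file:
        return "refer"   # route to manual fraud review before applying the change
    return "allow"

print(check_change_request("phone", "555-0142"))      # refer
print(check_change_request("address", "9 OAK AVE"))   # allow
```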

Published: December 28, 2009 by Keir Breitenfeld

By: Andrew Gulledge

Intelligent use of features

Question ordering: You want some degree of randomization in the questions included in each session. If a fraudster (posing as you) comes through Knowledge Based Authentication for two or three sessions, wouldn’t you want them to answer new questions each time? At the same time, you want to use the questions that perform better more often. One way to achieve both is to group the questions into categories and use a fixed category ordering (with the better-performing categories higher up in the batting line-up); then, within each category, the question selection is randomized. This way, you can generally use the better questions more, but at the same time make it difficult to come through Knowledge Based Authentication twice and get the same questions presented back to you. (You can also force all new questions in subsequent sessions with a question exclusion strategy, but this can be restrictive and make the “failure to generate questions” rate spike.)

Question weighting: Since we know some questions outperform others, both in terms of percentage correct and in terms of fraud separation, it is generally a good idea to weight the questions with points based on these performance metrics. Weighting can help squeeze additional fraud detection out of your Knowledge Based Authentication tool. It also provides considerable flexibility in your decisioning, since the question is no longer just “how many questions were answered correctly” but rather “what percentage of points were obtained.”

Usage limits: You should only allow a consumer to come through the Knowledge Based Authentication process a certain number of times before getting an auto-fail decision. This can take the form of x number of uses allowable within y number of hours/days/etc.

Time-out limit: You should not allow fraudsters to research the questions in the middle of a Knowledge Based Authentication session. The real consumer should know the answers off the top of their head. In a web environment, five minutes should be plenty of time to answer three to five questions. A call center environment should allow more time, since some people can be a bit chatty on the phone.
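As an illustrative sketch only (the categories, questions, weights and threshold are invented), the ordering and weighting ideas above might look like this in code: categories are served in a fixed, best-first order, questions within a category are randomized, and the decision is based on the percentage of points obtained.

```python
import random

# Categories listed in order of performance (best first); each question
# carries a point weight reflecting its fraud-separation power.
QUESTION_BANK = {
    "property_history": [("Q1", 3), ("Q2", 3), ("Q3", 2)],
    "vehicle_history":  [("Q4", 2), ("Q5", 2)],
    "prior_addresses":  [("Q6", 1), ("Q7", 1)],
}

def select_questions(per_category: int = 1) -> list[tuple[str, int]]:
    """Fixed category order, randomized pick within each category."""
    selected = []
    for category in QUESTION_BANK:              # dict preserves insertion order
        pool = list(QUESTION_BANK[category])
        random.shuffle(pool)
        selected.extend(pool[:per_category])
    return selected

def score_session(answers: dict[str, bool], asked: list[tuple[str, int]],
                  pass_threshold: float = 0.70) -> str:
    """Decision on percentage of points earned, not count of correct answers."""
    total = sum(weight for _, weight in asked)
    earned = sum(weight for qid, weight in asked if answers.get(qid, False))
    return "pass" if earned / total >= pass_threshold else "fail"

asked = select_questions()
print(score_session({qid: True for qid, _ in asked}, asked))  # pass
```

A usage-limit or time-out rule would sit around this session logic, auto-failing the attempt before any questions are scored.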

Published: December 22, 2009 by Guest Contributor

Account management fraud risks: I “think” I know who I’m dealing with…

The risk of fraudulent account activity does not cease once an application has been processed, even with the most robust authentication products and tools available. A few market dynamics are contributing to increased fraud risk to existing accounts:

- The credit crunch is impacting the bad guys too! Think it’s hard to get approved for a credit account these days? The same tightened lending practices good consumers now face are also keeping fraudsters out of the “application approval” process. While that may be a good thing in general, it has caused a migratory shift from application fraud to account takeover fraud.

- Existing and viable accounts are now much more appealing to fraudsters given a shortage of application fraud opportunities, as financial institutions have reduced solicitation volume.

A few other interesting challenges affect an institution’s ability to minimize fraud losses related to existing accounts:

- Social engineering — The "human element" is inherent in a call center environment and critical from a customer experience perspective. This factor offers fraudsters the opportunity to manipulate representatives to either gain unauthorized access to accounts or, at the very least, collect consumer and account information that may help them perpetrate fraud later.

- Automatic Number Identification (ANI) spoofing — This technology allows a caller to alter the displayed number from which he or she is calling to a falsely portrayed number. It's difficult, if not impossible, to find a legitimate use for this technology. However, fraudsters find this capability quite useful as they try to circumvent what was once a very effective method of positively authenticating a consumer based on a "good" or known incoming phone number. With ANI spoofing in play, many call centers are now unable to confidently rely on this once cost-effective and impactful method of authenticating consumers.

Published: December 21, 2009 by Keir Breitenfeld

By: Amanda Roth

To refine your risk-based pricing another level, it is important to analyze where your tiers are set and determine whether they are set appropriately. (We find many of the regulators/examiners are looking for this next level of analysis.)

This analysis begins with the results of the scoring model validation. Not only will the distributions from that analysis determine whether the score can separate good and delinquent accounts, but they will also highlight which score ranges have similar delinquency rates, allowing you to group your tiers appropriately. After all, you do not want applicants with a 1 percent chance of delinquency priced the same as someone with an 8 percent chance of delinquency. By reviewing the interval delinquency rates as well as the odds ratios, you should be able to determine where a significant enough difference occurs to warrant different pricing.

You will increase the opportunity for portfolio profitability through this analysis, as you reduce the likelihood that higher-risk applicants receive lower pricing. As expected, the overall risk management of the portfolio will improve when a proper risk-based pricing program is developed.

In my next post we will look at the final level of validation, which provides insight into pricing for profitability.
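As an illustration only (hypothetical score bands and account counts), the interval bad rates and odds that drive tier boundaries might be tabulated like this, with large jumps between adjacent bands marking natural tier breaks:

```python
# Hypothetical counts of good and delinquent accounts by score band.
bands = [
    ("300-579", 400, 80),
    ("580-619", 900, 90),
    ("620-659", 1500, 75),
    ("660-699", 2200, 44),
    ("700+",    3000, 30),
]

print(f"{'band':>8} {'bad rate':>9} {'odds':>8}")
for band, goods, bads in bands:
    bad_rate = bads / (goods + bads)
    odds = goods / bads            # good:bad odds ratio
    print(f"{band:>8} {bad_rate:>9.1%} {odds:>6.1f}:1")

# Adjacent bands with similar bad rates and odds are candidates to share
# a pricing tier; a sharp jump marks a tier boundary.
```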

Published: December 18, 2009 by Guest Contributor

By: Amanda Roth

As discussed earlier, the validation of a risk-based pricing program can mean several different things. Let’s break these options down.

The first option is to complete a validation of the scoring model being used to set the pricing for your program. This is the most basic validation of the program and does not guarantee any insight into loan profitability expectations. A validation of this nature will help you determine whether the score being used is actually helping to assess the risk level of an applicant.

This analysis is completed by using a snapshot of new booked loans received during a period of time, usually 18–24 months prior to the current period. It is extremely important to view only the new booked loans taken during that time period and the score they received at the time of application. By maintaining this specific population only, you will ensure the analysis is truly indicative of the predictive power of your score at the time you make the decision and apply the recommended risk-based pricing. By analyzing the distribution of good accounts versus delinquent accounts, you can determine whether the score is truly able to separate these groups. Without acceptable separation, it would be difficult to make any decisions based on the score models, especially risk-based pricing.

Although beneficial in determining whether you are using the appropriate scoring models for pricing, this analysis does not provide insight into whether your risk-based pricing program is set up correctly. Please join me next time to take a look at another option for this analysis.
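As a minimal sketch with simulated data (the post does not prescribe a specific statistic), one common way to quantify the separation between good and delinquent score distributions on such a snapshot is the Kolmogorov-Smirnov (KS) statistic, the maximum gap between the two cumulative distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
good_scores = rng.normal(700, 40, 5_000)   # application scores of good accounts
bad_scores = rng.normal(640, 45, 400)      # application scores of delinquent accounts

thresholds = np.sort(np.concatenate([good_scores, bad_scores]))
cdf_good = np.searchsorted(np.sort(good_scores), thresholds, side="right") / good_scores.size
cdf_bad = np.searchsorted(np.sort(bad_scores), thresholds, side="right") / bad_scores.size

ks = np.max(np.abs(cdf_bad - cdf_good))
print(f"KS separation: {ks:.2f}")  # low values suggest the score separates the groups poorly
```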

Published: December 18, 2009 by Guest Contributor
