
By: Wendy Greenawalt

Financial institutions have placed very little focus on portfolio growth over the last few years. Recent market updates have provided little guidance on the future of the marketplace, but there seems to be a consensus that the US economic recovery will be slow compared to previous recessions. The latest economic indicators show that slow employment growth, continued property value fluctuations and lower consumer confidence will continue to influence the demand for and issuance of new credit. The positive aspect is that most analysts agree these indicators will improve over the next 12 to 24 months. Given this, lenders should start thinking about updating acquisition strategies now and consider new tools that can help them reach their short- and long-term portfolio growth goals.

Most financial institutions have experienced high account delinquency levels in the past few years, and these delinquencies have had a major impact on consumer credit scores. The bad news is that the pool of qualified candidates continues to shrink, so competition for the best consumers will only increase over the next few years. Identifying target populations and improving response and booking rates will be a challenge for some time, so marketers must create smarter, more tailored offers to remain competitive and strategically grow their portfolios. Recently, new scores have been created that estimate consumer income and debt ratios when combined with consumer credit data. This data can be very valuable and, when combined with optimization (optimizing decisions), can support robust acquisition strategies.

Specifically, optimization allows an organization to define product offerings, contact methods, timing and known consumer preferences, as well as organizational goals such as response rates, consumer-level profitability and product-specific growth metrics, within a software application. The optimization software then uses a proven mathematical technique to identify the ideal product offering and timing to meet or exceed the defined organizational goals. The consumer-level decisions can then be executed via normal channels such as mail, email or call centers. Not only does optimization software reduce campaign development time, it also allows marketers to quantify the effectiveness of marketing campaigns – before execution. Today, optimization technology makes decision analytics accessible to organizations of almost any size and can provide an improvement over business-as-usual decisioning strategies. If your organization is looking for new tools to incorporate into existing acquisition processes, I would encourage you to consider optimization and the value it can bring to your organization.
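To make the idea concrete, here is a minimal sketch, not Experian's software, of the kind of consumer-level assignment problem such tools solve: maximize expected profit across consumers and offers, subject to a contact budget and at most one offer per consumer. The profit figures and budget below are hypothetical, and scipy's general-purpose linear-programming solver simply stands in for a proprietary optimization engine.

```python
# Toy sketch of consumer-level offer optimization as a linear program.
# Profits and the contact budget are made-up illustrative numbers.
import numpy as np
from scipy.optimize import linprog

expected_profit = np.array([   # rows = consumers, cols = offers (hypothetical)
    [12.0,  9.5,  4.0],
    [ 3.0, 11.0,  7.5],
    [ 8.0,  2.0,  6.0],
    [10.0, 10.5,  1.0],
])
n_consumers, n_offers = expected_profit.shape
contact_budget = 3                       # at most 3 offers can be mailed in total

# Decision variable x[i, k] = fraction of consumer i assigned offer k (LP relaxation).
c = -expected_profit.ravel()             # linprog minimizes, so negate profit

# Each consumer receives at most one offer.
A_consumer = np.zeros((n_consumers, n_consumers * n_offers))
for i in range(n_consumers):
    A_consumer[i, i * n_offers:(i + 1) * n_offers] = 1.0

# Total contacts cannot exceed the campaign budget.
A_budget = np.ones((1, n_consumers * n_offers))

A_ub = np.vstack([A_consumer, A_budget])
b_ub = np.concatenate([np.ones(n_consumers), [contact_budget]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
assignment = res.x.reshape(n_consumers, n_offers).round(2)
print("Optimal expected profit:", -res.fun)
print("Offer assignment (consumer x offer):\n", assignment)
```

In practice the variables would also encode channel, timing and cost constraints, but the structure, an objective plus consumer-level constraints, is the same.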

Published: April 1, 2010 by Guest Contributor

There seem to be two viewpoints in the market today about Knowledge Based Authentication (KBA): one positive, one negative. Depending on the corner you choose, you probably view it as either a tool to help reduce identity theft and minimize fraud losses, or a deficiency in the management of risk and the root of all evil. The opinions on both sides are pretty strong, and biases “for” and “against” run pretty deep.

One of the biggest challenges in discussing Knowledge Based Authentication as part of an organization’s identity theft prevention program is the perpetual confusion between dynamic out-of-wallet questions and static “secret” questions. At this point, most people in the industry agree that static secret questions offer little consumer protection. Answers are easily guessed or easily researched, and if the questions are preference based (like “what is your favorite book?”) there is a good chance the consumer will fail the authentication session because they forgot the answers or the answers changed over time.

Dynamic Knowledge Based Authentication, on the other hand, presents questions that were not selected by the consumer. Questions are generated from information known about the consumer – concerning things the true consumer would know and a fraudster most likely wouldn’t. The questions posed during Knowledge Based Authentication sessions aren’t designed to “trick” anyone but a fraudster, though a best-in-class product should offer a number of features and options. These may allow for flexible configuration of the product and deployment at multiple points of the consumer life cycle without impacting the consumer experience.

The two are as different as night and day. Do those who consider “secret questions” to be Knowledge Based Authentication consider the password portion of the user name and password process to be KBA as well? If you want to hold to strict logic and definition, one could argue that a password meets the definition of Knowledge Based Authentication, but common sense and practical use cause us to differentiate it, which is exactly what we should do with secret questions – differentiate them from true KBA. KBA can provide strong authentication or be a part of a multifactor authentication environment without a negative impact on the consumer experience.

So, for the record, when we say KBA we mean dynamic, out-of-wallet questions, the kind that are generated “on the fly” and delivered to a consumer via “pop quiz” in a real-time environment; and we think this kind of KBA does work. As part of a risk management strategy, KBA has a place within the authentication framework as a component of risk-based authentication… and risk-based authentication is what it is really all about.
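For readers less familiar with the mechanics, here is a deliberately simplified sketch of how a dynamic out-of-wallet question could be assembled on the fly from data already on file. The record fields, question wording and decoy answers are all hypothetical and do not represent any particular vendor's product logic.

```python
# Minimal illustration of generating a dynamic out-of-wallet question from data
# already on file about the consumer. Field names and values are hypothetical.
import random

consumer_record = {
    "prior_street_name": "Maple Ave",   # e.g., from an older address on file
}

def build_question(record: dict) -> dict:
    """Build one multiple-choice question with plausible decoy answers."""
    correct = record["prior_street_name"]
    decoys = ["Oak St", "Birch Rd", "Cedar Ln"]      # hypothetical distractors
    options = decoys + [correct]
    random.shuffle(options)
    options.append("None of the above")
    return {
        "prompt": "Which of the following streets have you previously lived on?",
        "choices": options,
        "answer": correct,
    }

question = build_question(consumer_record)
print(question["prompt"])
for i, choice in enumerate(question["choices"], start=1):
    print(f"  {i}. {choice}")
```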

Published: March 5, 2010 by Guest Contributor

When a client is selecting questions to use, Knowledge Based Authentication is always about the underlying data – or at least it should be. The strength of Knowledge Based Authentication questions will depend, in large part, on the strength of the data and how reliable it is. After all, if you are going to depend on Knowledge Based Authentication for part of your risk management and decisioning strategy, the data had better be accurate.

I’ve heard it said within the industry that clients only want a system that works and have no interest in where the data originates. Personally, I think that opinion is wrong. I think it is closer to the truth to say there are those who would prefer that clients didn’t know where the data that supports their fraud models and Knowledge Based Authentication questions originates; and I think those people “encourage” clients not to ask. It isn’t a secret that many within the industry use public record data as the primary source for their Knowledge Based Authentication products, but what’s important to consider is just how accessible that public record information is. Think about that for a minute. If a vendor can build questions on public record data, can a fraudster find the answers in public record data via an online search?

Using Knowledge Based Authentication for fraud account management is a delicate balance between customer experience/relationship management and risk management. Because it is so important, we believe in research – reading the research of well-known and respected groups like Pew, Tower, Javelin, etc. and doing our own. Based on our research, I know consumers prefer questions that are appropriate and relevant to their activity. In other words, if the consumer is engaged in a credit-granting activity, it may be less appropriate to ask questions centered on personal associations and relatives. Questions should be difficult for the fraudster, but not difficult for the true consumer or perceived by them as inappropriate or intrusive. Additionally, I think questions should be applicable to many clients and many consumers. The question set should use a mix of data sources: public, proprietary, non-credit, credit (if permissible purpose exists) and innovative.

Is it appropriate to have in-depth data discussions with clients about each data source? Debatable. Is it appropriate to ensure that each client has an understanding of the questions they ask as part of Knowledge Based Authentication and where the data that supports those questions originates? Absolutely.

Published: March 2, 2010 by Guest Contributor

By: Kari Michel

What is Basel II? Basel II is the international convergence of Capital Measurement and Capital Standards. It is a revised framework and the second iteration of an international standard of laws. The purpose of Basel II is to create an international standard that banking regulators can use when creating regulations about how much capital banks need to put aside to guard against the types of financial and operational risk banks face. Basel II ultimately implements standards to assist in maintaining a healthy financial system.

The business challenge
The framework for Basel II compels supervisors to ensure that banks implement credit rating techniques that represent their particular risk profile. In addition to the calculation of the risk inputs, Probability of Default (PD), Loss Given Default (LGD) and Exposure at Default (EAD), the final Basel accord includes the “use test” requirement: a firm must use its advanced approach more widely in its business, not merely for the calculation of regulatory capital. Therefore, many financial institutions are required to make considerable changes in their approach to risk management (i.e., infrastructure, systems, processes, data requirements). Experian is a leading provider of risk management solutions -- products and services for the new Basel Capital Accord (Basel II). Experian’s approach includes consultancy, software, and analytics tailored to meet the lender’s Basel II requirements.
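As a point of reference (not part of the original post), the three risk inputs named above combine in the standard expected-loss relationship, EL = PD x LGD x EAD. A minimal illustration with made-up figures:

```python
# Expected loss from the three Basel II risk inputs (illustrative numbers only).
def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """EL = PD x LGD x EAD."""
    return pd * lgd * ead

# A hypothetical exposure: 2% probability of default, 45% loss given default,
# $100,000 exposure at default.
print(expected_loss(pd=0.02, lgd=0.45, ead=100_000))   # -> 900.0
```

The regulatory capital calculation itself layers additional formulas and supervisory parameters on top of these inputs; the point here is simply how PD, LGD and EAD fit together.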

Published: February 26, 2010 by Guest Contributor

A recent January 29, 2010 article in the Wall Street Journal* discussing the repurchasing of loans by banks from Fannie Mae and Freddie Mac included a simple, yet compelling statement that I feel is worth further analysis. The article stated that "while growth in subprime defaults is slowing, defaults on prime loans are accelerating." I think this statement might come as a surprise to some who feel that there is some amount of credit risk and economic immunity for prime and super-prime consumers – many of whom are highly sought-after in today’s credit market. To support this statement, I reference a few statistics from the Experian-Oliver Wyman Market Intelligence Reports:

• From Q1 2007 to Q1 2008, 30+ DPD mortgage delinquency rates for VantageScore® credit score A and B consumers remained flat (actually down 2%), while near-prime, subprime, and deep-subprime consumers experienced an increase of over 36% in 30+ rates.

• From Q4 2008 to Q4 2009, 30+ DPD mortgage delinquency rates for VantageScore® credit score A and B consumers increased by 42%, whereas consumers in the lower VantageScore® credit score tiers saw their 30+ DPD rate increase by only 23% in the same period.

Clearly, whether through economic or some other form of impact, repayment practices of prime and super-prime consumers have been changing as of late, and this is translating to higher delinquency rates. The call-to-action for lenders, in their financial risk management and credit risk modeling efforts, is increased attentiveness in assessing credit risk beyond just a credit score... whether this be using a combination of scores, or adding Premier Attributes into lending models, in order to fully assess each consumer’s risk profile.

* http://online.wsj.com/article/SB10001424052748704343104575033543886200942.html

Published: February 23, 2010 by Kelly Kent

By: Wendy Greenawalt

Marketing is typically one of the largest expenses for an organization and it is also a priority for reaching short- and long-term growth objectives. With the current economic environment continuing to be unpredictable, many organizations have reduced budgets and are focusing more on risk management and recovery activities. However, in the coming year, we expect to see improvements in the economy and organizations renewing their focus on portfolio growth. We expect that marketing campaign budgets will continue to be much lower than those allocated before the mortgage meltdown, but organizations will still be looking for gains in efficiency and responsiveness to meet business objectives.

Creating optimized marketing strategies is quick and easy when leveraging optimization technology, and those strategies free your internal resources to focus on more strategic issues. Whether your objective is to increase organizational or customer-level profit, grow specific product lines or maximize internal resources, optimization can easily identify the right solution while adhering to key business objectives. The advanced software now available enables an organization to compare multiple campaign options simultaneously while analyzing the impact of modifications to revenue, response or other business metrics. Specifically, very detailed product offer information, contact channels, timing, and letter costs from multiple vendors -- and consumer preferences -- can all be incorporated into an optimization solution. Once these are defined, the mathematical algorithm considers every combination of the variables, which can number in the thousands, at the consumer level to determine the optimal treatment that maximizes organizational goals within the stated constraints.

In addition, by incorporating optimized decisions into marketing strategies, marketers can execute campaigns in a much shorter timeframe, allowing an organization to capitalize on changing market conditions and consumer behaviors. To illustrate the benefit of optimization: an Experian bankcard client was able to reduce the analytical time to launch programs from seven days to 90 minutes while improving net present value. In my next blog, we will discuss how organizations can cut costs when acquiring new accounts.
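To illustrate the "compare multiple campaign options" idea in the simplest possible terms, here is a toy scenario comparison. The channels, response rates, revenues and costs are invented for illustration; real optimization software would search the treatment space at the consumer level rather than compare a handful of fixed scenarios.

```python
# Toy comparison of campaign scenarios under different contact strategies
# (hypothetical response rates, revenues, and costs).
scenarios = {
    "aggressive_mail": {"contacts": 500_000, "response_rate": 0.021,
                        "revenue_per_response": 180.0, "cost_per_contact": 0.55},
    "targeted_email":  {"contacts": 900_000, "response_rate": 0.009,
                        "revenue_per_response": 150.0, "cost_per_contact": 0.04},
    "mixed_channel":   {"contacts": 650_000, "response_rate": 0.016,
                        "revenue_per_response": 165.0, "cost_per_contact": 0.30},
}

def project(s: dict) -> dict:
    """Project responses and net value for one campaign scenario."""
    responses = s["contacts"] * s["response_rate"]
    revenue = responses * s["revenue_per_response"]
    cost = s["contacts"] * s["cost_per_contact"]
    return {"responses": round(responses), "net_value": round(revenue - cost, 2)}

for name, scenario in scenarios.items():
    print(name, project(scenario))
```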

Published: February 22, 2010 by Guest Contributor

By: Wendy Greenawalt

The economy has changed drastically in the last few years and most organizations have had to reduce costs across their businesses to retain profits. Determining the appropriate cost-cutting measures requires careful consideration of trade-offs while quantifying the short- and long-term organizational priorities. Too often, cost reduction decisions are driven by dynamic market conditions, which mandate quick decision-making. As a result, decisions are made without a sound understanding of the true impact on organizational objectives.

Optimization (optimizing decisions) can be used for virtually any business problem and provides decisions based on complex mathematics. Therefore, whether you are making decisions related to outsourcing versus staffing, internal versus external project development or specific business unit cost savings opportunities, optimization can be applied. While some analytical requirements exist to obtain the highest business metric improvements, most organizations already have the data required to take full advantage of optimization technology. If you are using predictive models and credit attributes, and have multiple actions that can be taken on an individual consumer, then most likely your organization can benefit from optimized decision strategies. In my next few blogs, I will discuss how optimization can be used to create better strategies across an organization, whether your focus is marketing, risk, customer management or collections.

Published: February 19, 2010 by Guest Contributor

My last entry covered the benefits of consortium databases and industry collaboration in general as a proven and technologically feasible method for combating fraud across industries. They help minimize fraud losses. So – with some notable exceptions – why are so few industries and companies using fraud consortiums and known fraud databases? In my experience, the reasons typically boil down to two things: reluctance to share data and perception of ROI. I say "perception of ROI" because I firmly believe the ROI is there – in fact it grows with the number of consortium participants.

First, reluctance to share data seems to stem from a few areas. One is concern for how that data will be used by other consortium members. This is usually addressed through compelling reciprocation of data contribution by all members (the give to get model) as well as strict guidelines for acceptable use. In today’s climate of hypersensitivity, another concern – rightly so – is the stewardship of Personally Identifiable Information (PII). Given the potentially damaging effects of data breaches to consumers and businesses, smart companies are extremely cautious and careful when making decisions about safeguarding consumer information. So how does a data consortium deal with this? Firewalls, access control lists, encryption, and other modern security technologies provide the defenses necessary to facilitate protection of information contributed to the consortium.

So, let’s assume we’ve overcome the obstacles to sharing one’s data. The other big hurdle to participation that I come across regularly is the old “what’s in it for me” question. Contributors want to be sure that they get out of it what they put into it. Nobody wants to be the only one, or the largest one, contributing records. In fact, this issue extends to intracompany consortiums as well. No line of business wants to be the sole sponsor just to have other business units come late to the party and reap all the benefits on their dime. Whether within companies or across an industry, it’s obvious that mutual funding, support, equitable operating rules, and clear communication of benefits – to those contributors both big and small – is necessary for fraud consortiums to succeed.

To get there, it’s going to take a lot more interest and participation from industry leaders. What would this look like? I think we’d see a large shift in companies’ fraud columns: from “Discovered” to “Attempted”. This shift would save time and money that could be passed back to the legitimate customers. More participation would also enable consortiums to stay on top of changing technology and evolving consumer communication styles, such as email, text, mobile banking, and voice biometrics to name a few.

Published: February 8, 2010 by Matt Ehrlich

There was a recent discussion among members of the Anti Fraud experts group on LinkedIn regarding collaboration among financial institutions to combat fraud. Most posters agreed on the benefits of such collaboration but were cynical when it came to anything of substance, such as a shared data network, getting off the ground. I happen to agree with some of the opinions on the primary challenges faced in getting cross-industry (or even single-industry!) cooperation to prevent both consumer and commercial fraud, those being: 1) sharing data and 2) return on investment.

Despite the challenges, there are some fraud prevention and “negative” file consortium databases available in the market as fraud prevention tools. They’re often used in conjunction with authentication products in an overall risk-based authentication / fraud deterrence strategy. Some are focused on the Demand Deposit Account (DDA) market, such as Fidelity’s DebitBureau, while others, like Experian’s own National Fraud Database, address a variety of markets. Early Warning Services has a database of both “account abuse” – aka DDA financial mismanagement – and fraud records. Still others, like Ethoca and the UK’s 192.com, seem focused on merchant data and online retailers.

Regardless of the consortium, they share some common traits. Most:

- fall under Fair Credit Reporting Act regulation
- are used in the acquisition phase as part of the new account decision
- require contribution of data to access the shared data network

Given the seemingly general reluctance to participate in fraud consortiums, as evidenced by the group described above, how do we assess value in these consortium databases? Well, for one, most U.S. banks and credit unions participate in and contribute customer behavior data to a consortium. Safe to say, then, that the banking industry has recognized the value of collaboration and sharing data with each other – if not exclusively to minimize fraud losses, then at least to manage potential risk at acquisition. I’m speaking here of the DDA financial mismanagement data used under the guiding principle of “past performance predicts future results”. Consortium data that includes confirmed fraud records makes the value of collaboration even more clear: a match to one of these records compels further investigation and a more cautious review of the transaction or decision. With this much to gain, why aren’t more companies and industries rushing to join or form a consortium? In my next post, I’ll explore the common objections to joining consortiums and what the future may look like.

Published: February 5, 2010 by Matt Ehrlich

As the economic environment changes on what feels like a daily basis, the importance of having information about consumer credit trends and the future direction of credit becomes invaluable for planning and achieving strategic goals. I recently had the opportunity to speak with members of the collections industry about collections strategy and collections change management -- and discussed the use of business intelligence data in their industry. I was surprised at how little analysis was conducted in terms of anticipating strategic changes in economic and credit factors that impact the collections business. Mostly, it seems like anecdotal information and media coverage is used to get ‘a feeling’ for the direction of the economy and thus the collections industry. Clearly, there are opportunities to understand these high-level changes in more detail. As a result, I wanted to review some business intelligence capabilities that Experian offers – and to expand on the opportunities I think exist for collections firms to leverage data and better inform their decisions:

* Experian possesses the ability to capture the entire consumer credit perspective, allowing collections firms to understand trends that consider all consumer relationships.

* Within each loan type, insights are available by analyzing loan characteristics such as number of trades, balances, revolving credit limits, trade ages, and delinquency trends. These metrics can help define market sizes and relative delinquency levels, and identify segments where accounts are curing faster or more slowly, impacting collectability.

* Layering in geographic detail can reveal more granular segment trends, creating segments for both macro- and regional-level credit characteristics.

* Experian Business Intelligence has visibility into the type of financial institution, allowing for a market-by-market view of credit patterns and trends.

* Risk profiling by VantageScore can shed light on credit score trends, breaking down larger segments into smaller score-based segments and identifying pockets of opportunity and risk.

I’ll continue to consider the opportunities for collections firms to leverage business intelligence data in subsequent blogs, where I’ll also discuss the value of credit forecasting to the collections industry.
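To show the shape of the segment-level roll-up the bullets above describe, here is a deliberately small, hypothetical example: account-level records grouped by region and score band to produce balance totals and 30+ DPD delinquency rates. The column names and figures are invented for illustration and do not represent Experian data.

```python
# Illustrative roll-up of delinquency rates by region and score band
# (pandas; column names and figures are hypothetical).
import pandas as pd

accounts = pd.DataFrame({
    "region":      ["West", "West", "South", "South", "Northeast", "Northeast"],
    "score_band":  ["prime", "subprime", "prime", "subprime", "prime", "subprime"],
    "balance":     [5200, 3100, 4800, 2900, 6100, 3500],
    "dpd_30_plus": [0, 1, 0, 1, 0, 0],     # 1 = 30+ days past due
})

summary = (
    accounts
    .groupby(["region", "score_band"])
    .agg(total_balance=("balance", "sum"),
         delinquency_rate=("dpd_30_plus", "mean"))
    .reset_index()
)
print(summary)
```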

Published: February 1, 2010 by Kelly Kent

By: Ken Pruett

I thought it might be helpful to give an example of a recent performance monitoring engagement to show just how the performance monitoring process can help. The organization to which I'm referring has been using Knowledge Based Authentication for several years. They are issuing retail credit cards for their online channel, an area that usually experiences a higher rate of fraud. The Knowledge Based Authentication product is used before credit is issued.

The performance monitoring process involved the organization providing us with a sample of approximately 120,000 records, some of which were good and some bad. Analysis showed that they had a 25 percent referral rate, but they were concerned about the number of frauds they were catching. They felt that too many frauds were getting through; they believed the fraud process was probably too lenient. Based on their input, we started a detailed analytic exercise with the intention, of course, of minimizing fraud losses. Our study found that, by changing several criteria items in the set-up, the organization was able to get the tool more in line with expectations. By lowering the pass rate by only 9 percent, they increased their fraud find rate by 27 percent. This was much more in line with their goals for this process.

In this situation, a score was being used, in combination with the customer's ability to answer questions, to determine the overall accept or refer decision. The change to the current set-up involved requiring customers to answer at least one more question in combination with certain scores. Although the change was minor in nature, it yielded fairly significant results.

Our next step in the engagement involved looking at the questions. Analysis showed that some questions should be eliminated due to poor performance. They were not really separating fraud, so removing them would be beneficial to the overall process. We also determined that some questions performed very well, and we recommended that these questions carry a higher weight in the overall decision process. For example, a customer might be required to answer only two of the higher-weighted questions correctly, versus three of the lesser-performing questions. The key here is to keep pass rates up while still preventing fraud; striking this delicate balance is the key objective.

As you can see from this example, this is an ongoing process, but the value in that process is definitely worth the time and effort.
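The accept/refer logic described above can be pictured with a toy decision rule that combines a score band with weighted question results. The thresholds, weights and field names below are hypothetical and are not the client's actual configuration; the point is only how score and question performance interact in one decision.

```python
# Toy accept/refer rule combining a fraud score band with weighted KBA results.
# Thresholds and weights are hypothetical, not the product's actual logic.
def kba_decision(score: int, correct_strong: int, correct_weak: int) -> str:
    """Return 'accept' or 'refer' for one authentication session."""
    if score >= 700:
        # Stronger score: two strong questions (or three weaker ones) suffice.
        required_strong, required_weak = 2, 3
    else:
        # Weaker score: demand more correct answers before accepting.
        required_strong, required_weak = 3, 4

    if correct_strong >= required_strong or correct_weak >= required_weak:
        return "accept"
    return "refer"

print(kba_decision(score=720, correct_strong=2, correct_weak=1))   # accept
print(kba_decision(score=640, correct_strong=2, correct_weak=2))   # refer
```

Tuning such a rule is exactly the trade-off the post describes: each change moves both the pass rate and the fraud find rate, and monitoring quantifies that trade-off before it goes live.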

Published: January 29, 2010 by Guest Contributor

We've recently discussed management of risk, collections strategy, credit attributes, and the like for the bank card, telco, and real estate markets. This blog will provide insights into the trends of the automotive finance market as of third quarter 2009.

In terms of credit quality, the market has been relatively steady in year-over-year comparisons. The subprime group saw the biggest change in risk distribution from 3Q08, with a -3.74 percent shift. Overall, balances have declined to just over $673 billion (-4 percent). In 3Q09, banks held the largest total of outstanding automotive balances at $241 billion, with captive auto next at $203 billion. Credit unions had the largest increase from 3Q08 (up $5 billion) and the finance/other group had the largest decrease in balances (down $23 billion).

How are automotive loans performing? Total 30- and 60-day delinquencies are still on the rise, but the rate of increase of 30-day delinquencies appears to be slowing. New originations are dominated by the Prime plus market (66 percent), up by 10 percent. Lending criteria have tightened and, as a result, we see scores on both new and used vehicles continue to increase. For new-vehicle buyers, over 83 percent are Prime plus; for used-vehicle buyers, over 53 percent are Prime plus. The average credit score for new vehicles rose from 762 in 3Q08 to 775 in 3Q09, up 13 points; for used vehicles in the same period, from 670 to 684, up 14 points.

Lastly, let's take a look at how financing has changed from 3Q08 to 3Q09. The financed amounts and monthly payments have dropped year-over-year, as have the average term and average rate.

Source: State of the Automotive Finance Market, Third Quarter 2009 by Melinda Zabritski, director of Automotive Credit at Experian, and Experian-Oliver Wyman Market Intelligence Reports

Published: January 29, 2010 by Guest Contributor

Meat and potatoes

Data are the meat and potatoes of fraud detection. You can have the brightest and most capable statistical modeling team in the world. But if they have crappy data, they will build crappy models. Fraud prevention models, predictive scores, and decisioning strategies in general are only as good as the data upon which they are built.

How do you measure data performance?

If a key part of my fraud risk strategy deals with the ability to match a name with an address, for example, then I am going to be interested in overall coverage and match rate statistics. I will want to know basic metrics like how many records I have in my database with name and address populated. And how many addresses do I typically have for consumers? Just one, or many? I will want to know how often, on average, we are able to match a name with an address. It doesn’t do much good to tell you your name and address don’t match when, in reality, they do.

With any fraud product, I will definitely want to know how often we can locate the consumer in the first place. If you send me a name, address, and social security number, what is the likelihood that I will be able to find that particular consumer in my database? This process of finding a consumer based on certain input data (such as name and address) is called pinning. If you have incomplete or stale data, your pin rate will undoubtedly suffer. And my fraud tool isn’t much good if I don’t recognize many of the people you are sending me.

Data need to be fresh. Old and out-of-date information will hurt your strategies, often punishing good consumers. Let’s say I moved one year ago, but your address data are two years old; what are the chances that you are going to be able to match my name and address? Stale data are yucky.

Quality Data = WIN

It is all too easy to focus on the more sexy aspects of fraud detection (such as predictive scoring, out-of-wallet questions, red flag rules, etc.) while ignoring the foundation upon which all of these strategies are built.
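To make the coverage and pin-rate idea tangible, here is a toy calculation over a handful of made-up records. Real pinning runs against a reference database rather than a flag on the record, so treat the fields and logic below as a simplified stand-in.

```python
# Toy calculation of coverage and pin-rate style metrics
# (records and field names are made up for illustration).
records = [
    {"name": "A. Smith",  "address": "12 Elm St",  "found_in_reference_db": True},
    {"name": "B. Jones",  "address": None,         "found_in_reference_db": True},
    {"name": "C. Nguyen", "address": "9 Oak Ave",  "found_in_reference_db": False},
    {"name": "D. Patel",  "address": "3 Pine Rd",  "found_in_reference_db": True},
]

total = len(records)
name_addr_populated = sum(1 for r in records if r["name"] and r["address"])
pinned = sum(1 for r in records
             if r["name"] and r["address"] and r["found_in_reference_db"])

print(f"Coverage (name + address populated): {name_addr_populated / total:.0%}")
print(f"Pin rate (consumer located): {pinned / total:.0%}")
```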

Published: January 20, 2010 by Guest Contributor

In a continuation of my previous entry, I’d like to take the concept of the first-mover and specifically discuss the relevance of this to the current bank card market. Here are some statistics to set the stage:

• Q2 2009 bankcard origination levels are now at 54 percent of Q2 2008 levels
• In Q2 2009, bankcard originations for subprime and deep-subprime were down 63 percent from Q2 2008
• New average limits for bank cards are down 19 percent in Q2 2009 from their peak in Q3 2008
• Total unused limits continued to decline in Q3 2009, decreasing by $100 billion in Q3 2009

Clearly, the bank card market is experiencing a decline in credit supply, along with deterioration of credit performance and problematic delinquency trends, and yet, in order to grow, lenders are currently determining the timing and manner in which to increase their presence in this market. In the following points, I’ll review just a few of the opportunities and risks inherent in each approach that could dictate how this occurs.

Lender chooses to be a first-mover:

• Mining for gold – lenders currently have an opportunity to identify long-term profitable segments within larger segments of underserved consumers. Credit score trends show a number of lower-risk consumers falling to lower score tiers, and within this segment there will be consumers who represent highly profitable relationships. Early movers have the opportunity to access these consumers with unrealized creditworthiness at their most receptive moment, and thus have the ability to achieve extraordinary profits in underserved segments.
• Low acquisition costs – the lack of new credit flowing into the market would indicate a lack of competitiveness in the bank card acquisitions space. As such, a first-mover would likely incur lower acquisition costs as consumers have fewer options and alternatives to consider.
• Adverse selection – given the high utilization rates of many consumers, lenders could face an abnormally high adverse selection issue, where a large number of the most risky consumers are likely to accept offers to access much-needed credit, creating risk management issues.
• Consumer loyalty – whether through switching costs or loyalty incentives, first-movers have an opportunity to achieve retention benefits from the development of new client relationships in a vacant competitive space.

Lender chooses to be a secondary or late-mover:

• Reduced risk – by allowing the first-mover to experience growing pains before entry. The implementation of new acquisition and risk-based pricing management techniques under new bank card legislation will not be perfected immediately. Second-movers will be able to read and react to the responses to first-movers’ strategies (measuring delinquency levels in new subprime segments) and refine their pricing and policy approaches.
• Minimal switching costs – one of the most common first-mover advantages is the presence of switching costs for the customer. With minimal switching costs in place in the bank card industry, dealing with an incumbent is not a significant obstacle; second-movers would be able to steal market share with relative ease.
• Cherry-picked opportunities – as noted above, many previously attractive consumers will have been engaged by the first-mover, challenging the second-mover to find the remaining attractive segments within the market. For instance, economic deterioration has resulted in short-term joblessness for some consumers who might be strong credit risks, given the return of capacity to repay. Once these consumers are mined by the first-mover, the second-mover will likely incur greater costs to acquire these clients.

Whether lenders choose to be first to market or follow as a second-mover, there are profitable opportunities and risk management challenges associated with each strategy. Academics and bloggers continue to debate the merits of each [1], but it is ultimately the lenders of today that will provide the proof.

[1] http://www.fastcompany.com/magazine/38/cdu.html

Published: January 18, 2010 by Kelly Kent
