By: Kari Michel Lenders want to find new customers through more informed credit risk decisions and to use new types of data relationships to cross-sell. The strategic goals of any company are to gain more customers and revenue while reducing costs on both the operating side and the credit loss side. One way to meet these goals is to improve operating efficiency in creating and managing credit attributes, the building blocks of how lenders make customer decisions. Lenders face many challenges in leveraging data from multiple credit and non-credit sources (e.g., credit bureaus) and in maintaining data attributes across multiple systems. Furthermore, a lack of access to raw data makes it difficult to create effective, predictive attributes. Simply managing the discrepancies between specifications and code can become a very time-consuming effort, and maintaining a common set of attributes used across many types of scorecards and decisions is often difficult. As a result, there is a heavy reliance on external people and technical resources to find the right tools to pull the data sources and attributes together. In an ideal situation, a lender should be able to easily access raw data elements across multiple sources and aggregate the data into meaningful attributes. Experian offers these capabilities through its Attribute Toolbox product, allowing one or more systems to access a common set of standard analytics. A set of highly predictive attributes, Premier Attributes, is also available and offers a much more effective solution for managing standard attributes across an enterprise. With these tools, lenders can decrease maintenance costs by quickly integrating data and analytics into their existing business architecture to make profitable decisions.
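The aggregation step is easy to picture with a toy example. Below is a minimal sketch of turning raw tradeline records into per-consumer summary attributes; the field names and attribute definitions are illustrative assumptions, not Experian's actual Attribute Toolbox API.

```python
from collections import defaultdict

# Toy raw tradelines from two hypothetical sources (field names are illustrative).
tradelines = [
    {"consumer_id": 1, "source": "bureau_a", "type": "revolving", "balance": 2500, "limit": 5000, "dpd": 0},
    {"consumer_id": 1, "source": "bureau_b", "type": "installment", "balance": 12000, "limit": 12000, "dpd": 30},
    {"consumer_id": 2, "source": "bureau_a", "type": "revolving", "balance": 400, "limit": 8000, "dpd": 0},
]

def build_attributes(trades):
    """Aggregate raw tradelines into per-consumer attributes."""
    by_consumer = defaultdict(list)
    for t in trades:
        by_consumer[t["consumer_id"]].append(t)
    attrs = {}
    for cid, ts in by_consumer.items():
        revolving = [t for t in ts if t["type"] == "revolving"]
        attrs[cid] = {
            "num_trades": len(ts),
            "total_balance": sum(t["balance"] for t in ts),
            # Revolving utilization: balances over limits, guarded against zero limits.
            "revolving_utilization": (
                sum(t["balance"] for t in revolving) / max(sum(t["limit"] for t in revolving), 1)
            ),
            "worst_dpd": max(t["dpd"] for t in ts),
        }
    return attrs

print(build_attributes(tradelines))
```

The point of the sketch is simply that once raw elements from multiple sources land in one common structure, the same attribute definitions can feed any scorecard or decision system that consumes them.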
By: Tom Hannagan An autonomic movement is an action or response that occurs without conscious control. This, I fear, may be happening at many banks right now in their risk-based pricing and profit picture, for several reasons.

First, the credit risk profile of existing customers is subject to continuous change over time. This was always true to some extent, but as we've seen in the latest economic recession, there can be sizeable risk-level migration if enough stress is applied. It is most obvious in the case of delinquencies and defaults, but it is also occurring among customers with performing loans. The question is: how well are we keeping up with the behind-the-scenes changes in risk ratings and score ranges? The changes in our clients' relative risk levels are affecting our risk-based profit picture -- and required capital allocation -- without conscious action on our part.

Second, the credit risk profile of collateral categories is also subject to change over time. Again, this is not exactly news. But, as we've seen in the latest real estate meltdown and in the dynamics affecting the valuation of financial instruments, to name two, there can be huge changes in valuation and loss ratios. And this occurs without making one new loan. These changes in relative loss-given-default levels are affecting our risk-based expected loss levels, risk-adjusted profit and capital allocation in a rather autonomic manner.

Third, aside from changes in the risk profiles of customers and collateral types, the bank's credit policy may change. The risk management analysis of expected credit losses is continuously (we presume) under examination and refinement by internal credit risk staff, and it is certainly getting unprecedented attention from external regulators and auditors. These policy changes need to be reflected in the foundation logic of risk-based pricing and profit models. And that's just in the world of credit risk.

Fourth, there can also be changes in our operating cost structure, including mitigated operational risks, and in product volumes, which affect the allocation of risk-based non-interest expense to product groups and eventually to clients. Although it isn't our clients' fault that our cost structure is changing, for better or worse, we nonetheless expect them to bear these expenses based on the services we provide to them. Such changes need to be updated in the risk-based profit calculations.

Finally, there is the market risk piece of risk management. It is possible, if not likely, that our ALCO policies have changed due to lessons from the liquidity crisis of 2008 or the other macroeconomic events of the last two years. Deposit funds may be more highly valued, for instance. There may also be some rotation in assets away from lending. Or the level of reliance on equity capital may have materially changed. In any event, we are experiencing historically low levels for the price of risk-free (Treasury rate curve) funding, which affects the required spread and return on all other securities, including our fully-at-risk equity capital. These changes are occurring apart from customer transactions, but they definitely affect the risk-based profit picture of every existing loan or deposit account and, therefore, every customer relationship.
If any, let alone all, of these changes are not reflected in our risk-based performance analysis and reporting, and in the pricing of new or renewed services to our customers, then I believe we are experiencing autonomic changes in risk-based profitability.
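To make the "autonomic" drift concrete, here is a minimal sketch of a risk-adjusted profitability calculation for a single loan. The structure (net spread minus expected loss and operating cost, divided by allocated capital) is a standard RAROC-style formulation; every input value below is hypothetical.

```python
def risk_adjusted_return(balance, rate, funding_rate, operating_cost,
                         pd, lgd, capital_ratio, capital_rate):
    """RAROC-style return for one loan; all inputs are hypothetical."""
    net_interest = balance * (rate - funding_rate)
    expected_loss = balance * pd * lgd          # credit risk drives this term
    capital = balance * capital_ratio           # equity allocated to this exposure
    earnings = net_interest - operating_cost - expected_loss + capital * capital_rate
    return earnings / capital

# Same loan, no new transaction -- but the borrower's risk rating migrates (PD doubles):
before = risk_adjusted_return(1_000_000, 0.065, 0.030, 4_000, 0.010, 0.40, 0.08, 0.02)
after  = risk_adjusted_return(1_000_000, 0.065, 0.030, 4_000, 0.020, 0.40, 0.08, 0.02)
print(f"RAROC before migration: {before:.1%}, after: {after:.1%}")
```

Nothing about the loan's contract changed, yet the risk-adjusted return fell by five percentage points. That is the autonomic effect: unless PD, LGD, cost and capital inputs are refreshed, the reported profit picture silently diverges from reality.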
By: Wendy Greenawalt In my last few blogs, I have discussed how decision optimization can be leveraged across an organization while considering the impact those decisions have on organizational profits, costs or other business metrics. In this entry, I would like to discuss how this strategy can be used to optimize decisions at the point of acquisition while minimizing costs. Determining the right account terms at inception is increasingly important due to recent regulatory legislation such as the Credit CARD Act. These regulations have established guidelines specific to consumer age, verification of income, teaser rates and interest rate increases. Complying with them will require changes to existing processes and the creation of new toolsets to ensure organizations adhere to the guidelines. These new regulations will not only increase the costs of acquiring new customers but also affect long-term revenue and value, since changes in account terms will have to be carefully considered. The cost of on-boarding and servicing individual accounts continues to escalate while internal resources remain flat. As a result, organizations of all sizes are looking for ways to improve efficiency and decision quality while minimizing costs. Optimization is an ideal solution to this problem. Optimized strategy trees can be easily implemented into current processes and ensure lending decisions adhere to organizational revenue, growth or cost objectives as well as regulatory requirements. They enable organizations to create executable strategies that deliver ongoing decisions based on optimization conducted at the consumer level, and they outperform manually created trees because they are built using sophisticated mathematical analysis that keeps organizational objectives in view. In addition, an organization can quantify the expected ROI of a given strategy and validate strategies before implementation; this type of insight is not available without a sophisticated optimization software application. By implementing optimized strategy trees, organizations can minimize the volume of accounts that must be manually reviewed, which lowers resource costs. In addition, account terms are determined based on organizational priorities, leading to increased revenue, retention and profitability.
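An optimized strategy tree, once built, is just an executable set of branching rules. The sketch below is a hypothetical example only: in practice the splits and terms are derived by the optimization engine from consumer-level mathematics, and the thresholds, APRs and limits here are invented for illustration.

```python
def assign_account_terms(score, verified_income, age):
    """Hypothetical strategy tree assigning account terms at acquisition.

    Splits and terms are illustrative; an optimization engine would derive
    them from consumer-level objective and constraint analysis.
    """
    # CARD Act-style guardrails come first: age and income verification.
    if age < 21 or verified_income is None:
        return {"decision": "manual_review"}
    if score >= 740:
        return {"decision": "approve", "apr": 0.129, "limit": 10_000}
    if score >= 680:
        if verified_income >= 50_000:
            return {"decision": "approve", "apr": 0.159, "limit": 6_000}
        return {"decision": "approve", "apr": 0.179, "limit": 3_000}
    return {"decision": "decline"}

print(assign_account_terms(score=700, verified_income=62_000, age=34))
```

Because the regulatory checks sit at the top of the tree, every downstream terms decision is compliant by construction, and only the residual cases route to costly manual review.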
There seem to be two viewpoints in the market today about Knowledge Based Authentication (KBA): one positive, one negative. Depending on the corner you choose, you probably view it either as a tool to help reduce identity theft and minimize fraud losses, or as a deficiency in the management of risk and the root of all evil. The opinions on both sides are pretty strong, and biases “for” and “against” run pretty deep. One of the biggest challenges in discussing Knowledge Based Authentication as part of an organization’s identity theft prevention program is the perpetual confusion between dynamic out-of-wallet questions and static “secret” questions. At this point, most people in the industry agree that static secret questions offer little consumer protection. Answers are easily guessed or easily researched, and if the questions are preference based (like “what is your favorite book?”) there is a good chance the consumer will fail the authentication session because they forgot the answers or the answers changed over time. Dynamic Knowledge Based Authentication, on the other hand, presents questions that were not selected by the consumer. Questions are generated from information known about the consumer – things the true consumer would know and a fraudster most likely wouldn’t. The questions posed during Knowledge Based Authentication sessions aren’t designed to “trick” anyone but a fraudster, though a best-in-class product should offer a number of features and options that allow for flexible configuration and deployment at multiple points of the consumer life cycle without impacting the consumer experience. The two are as different as night and day. Do those who consider “secret questions” to be Knowledge Based Authentication also consider the password portion of the user name and password process to be KBA? If you hold to strict logic and definition, one could argue that a password meets the definition of Knowledge Based Authentication, but common sense and practical use cause us to differentiate it, which is exactly what we should do with secret questions: differentiate them from true KBA. KBA can provide strong authentication or be part of a multifactor authentication environment without a negative impact on the consumer experience. So, for the record, when we say KBA we mean dynamic, out-of-wallet questions, the kind that are generated “on the fly” and delivered to a consumer via “pop quiz” in a real-time environment; and we think this kind of KBA does work. As part of a risk management strategy, KBA has a place within the authentication framework as a component of risk-based authentication… and risk-based authentication is what it is really all about.
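As a component of risk-based authentication, dynamic KBA is typically invoked as a step-up only when the risk signal warrants it. The sketch below shows one hypothetical decision flow; the thresholds, function names and sample questions are all assumptions for illustration, not any vendor's actual product or API.

```python
import random

def generate_out_of_wallet_quiz(consumer_record):
    """Hypothetical stand-in for a dynamic KBA engine: questions are built
    on the fly from data about the consumer, never chosen by the consumer."""
    questions = [
        ("Which county have you previously lived in?", consumer_record["prior_county"]),
        ("In what year was your auto loan opened?", consumer_record["auto_loan_year"]),
    ]
    return random.sample(questions, k=2)

def authenticate(risk_score, consumer_record, answer_provider):
    # Low risk: let the session through without friction.
    if risk_score < 30:
        return "pass"
    # High risk: a quiz alone should not rescue the session.
    if risk_score > 80:
        return "refer_to_manual_review"
    # Middle band: step up to a real-time, dynamically generated quiz.
    quiz = generate_out_of_wallet_quiz(consumer_record)
    correct = sum(1 for q, expected in quiz if answer_provider(q) == expected)
    return "pass" if correct == len(quiz) else "fail"

record = {"prior_county": "Orange", "auto_loan_year": 2006}
print(authenticate(55, record, lambda q: "Orange"))  # misses the loan-year question -> "fail"
```

The design point is that most genuine consumers never see a quiz at all, which is how KBA avoids degrading the consumer experience while still challenging the risky middle band.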
When a client is selecting questions to use, Knowledge Based Authentication is always about the underlying data – or at least it should be. The strength of Knowledge Based Authentication questions will depend, in large part, on the strength and reliability of the data. After all, if you are going to depend on Knowledge Based Authentication for part of your risk management and decisioning strategy, the data had better be accurate. I’ve heard it said within the industry that clients only want a system that works and have no interest in where the data originates. Personally, I think that opinion is wrong. I think it is closer to the truth to say there are those who would prefer that clients didn’t know where the data that supports their fraud models and Knowledge Based Authentication questions originates; and I think those people “encourage” clients not to ask. It isn’t a secret that many within the industry use public record data as the primary source for their Knowledge Based Authentication products, but what’s important to consider is just how accessible that public record information is. Think about that for a minute. If a vendor can build questions on public record data, can a fraudster find the answers in public record data via an online search? Using Knowledge Based Authentication in fraud and account management requires a delicate balance between customer experience/relationship management and risk management. Because it is so important, we believe in research – reading the research of well-known and respected groups like Pew, Tower, Javelin, etc., and doing our own research. Based on our research, I know consumers prefer questions that are appropriate and relevant to their activity. In other words, if the consumer is engaged in a credit-granting activity, it may be less appropriate to ask questions centered on personal associations and relatives. Questions should be difficult for the fraudster, but not difficult for, or perceived as inappropriate or intrusive by, the true consumer. Additionally, I think questions should be applicable to many clients and many consumers. The question set should use a mix of data sources: public, proprietary, non-credit, credit (if permissible purpose exists) and innovative. Is it appropriate to have in-depth data discussions with clients about each data source? Debatable. Is it appropriate to ensure that each client understands the questions they ask as part of Knowledge Based Authentication and where the data that supports those questions originates? Absolutely.
By: Kari Michel What is Basel II? Basel II is the International Convergence of Capital Measurement and Capital Standards: a revised framework, the second iteration of an international banking standard. The purpose of Basel II is to create an international standard that banking regulators can use when creating regulations about how much capital banks need to set aside to guard against the types of financial and operational risk banks face. Basel II ultimately implements standards to assist in maintaining a healthy financial system.

The business challenge: The Basel II framework compels supervisors to ensure that banks implement credit rating techniques that represent their particular risk profile. Beyond the calculation of the risk inputs -- Probability of Default (PD), Loss Given Default (LGD) and Exposure at Default (EAD) -- the final accord includes the “use test”: the requirement that a firm use its advanced approach widely in its business, not merely for the calculation of regulatory capital. Therefore, many financial institutions are required to make considerable changes in their approach to risk management (i.e., infrastructure, systems, processes, data requirements). Experian is a leading provider of risk management solutions -- products and services for the new Basel Capital Accord (Basel II). Experian’s approach includes consultancy, software and analytics tailored to meet the lender’s Basel II requirements.
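These three inputs combine in the accord's standard expected-loss relationship, EL = PD × LGD × EAD. A one-line illustration with hypothetical values:

```python
def expected_loss(pd, lgd, ead):
    """Basel expected loss: probability of default x loss given default x exposure at default."""
    return pd * lgd * ead

# Hypothetical exposure: 2% PD, 45% LGD, $500,000 EAD -> $4,500 expected loss.
print(expected_loss(pd=0.02, lgd=0.45, ead=500_000))
```

The "use test" then asks whether these same PD, LGD and EAD estimates genuinely drive day-to-day pricing, limit-setting and provisioning, rather than living only in the regulatory capital calculation.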
A January 29, 2010, article in the Wall Street Journal* discussing banks’ repurchases of loans from Fannie Mae and Freddie Mac included a simple yet compelling statement that I feel is worth further analysis. The article stated that "while growth in subprime defaults is slowing, defaults on prime loans are accelerating." I think this statement might come as a surprise to some who feel that prime and super-prime consumers – many of whom are highly sought-after in today’s credit market – enjoy some amount of credit risk and economic immunity. To support this statement, I reference a few statistics from the Experian-Oliver Wyman Market Intelligence Reports:

• From Q1 2007 to Q1 2008, 30+ DPD mortgage delinquency rates for VantageScore® credit score A and B consumers remained flat (actually down 2 percent), while near-prime, subprime, and deep-subprime consumers experienced an increase of over 36 percent in 30+ rates.

• From Q4 2008 to Q4 2009, 30+ DPD mortgage delinquency rates for VantageScore® credit score A and B consumers increased by 42 percent, whereas consumers in the lower VantageScore® credit score tiers saw their 30+ DPD rates increase by only 23 percent in the same period.

Clearly, whether through economic or some other form of impact, the repayment practices of prime and super-prime consumers have been changing of late, and this is translating to higher delinquency rates. The call to action for lenders, in their financial risk management and credit risk modeling efforts, is increased attentiveness in assessing credit risk beyond just a credit score – whether by using a combination of scores or by adding Premier Attributes to lending models – in order to fully assess each consumer’s risk profile.

* http://online.wsj.com/article/SB10001424052748704343104575033543886200942.html
By: Wendy Greenawalt Marketing is typically one of the largest expenses for an organization, and it is also a priority for reaching short- and long-term growth objectives. With the current economic environment continuing to be unpredictable, many organizations have reduced budgets and are focusing more on risk management and recovery activities. However, in the coming year, we expect to see improvements in the economy and organizations renewing their focus on portfolio growth. We expect that marketing campaign budgets will continue to be much lower than those allocated before the mortgage meltdown, but organizations will still be looking for gains in efficiency and responsiveness to meet business objectives. Creating optimized marketing strategies is quick and easy when leveraging optimization technology, freeing your internal resources to focus on more strategic issues. Whether your objective is to increase organizational or customer-level profit, grow specific product lines or maximize internal resources, optimization can identify the right solution while adhering to key business objectives. The advanced software now available enables an organization to compare multiple campaign options simultaneously while analyzing the impact of modifications on revenue, response or other business metrics. Specifically, very detailed product offer information, contact channels, timing, letter costs from multiple vendors and consumer preferences can all be incorporated into an optimization solution. Once these are defined, the algorithm evaluates every combination of the variables, which can number in the thousands, at the consumer level to determine the optimal treatment given organizational goals and constraints. In addition, by incorporating optimized decisions into marketing strategies, marketers can execute campaigns in a much shorter timeframe, allowing an organization to capitalize on changing market conditions and consumer behaviors. To illustrate the benefit of optimization: an Experian bankcard client was able to reduce the analytical time to launch programs from seven days to 90 minutes while improving net present value. In my next blog, we will discuss how organizations can cut costs when acquiring new accounts.
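At a toy scale, the mechanics are easy to demonstrate. The sketch below brute-forces every offer combination for a handful of consumers to maximize expected value under a campaign budget; production solvers use far more efficient mathematics, and every value here is hypothetical.

```python
from itertools import product

# Hypothetical consumers: expected value of each treatment, per consumer.
consumers = [
    {"none": 0, "mail": 12.0, "email": 9.5},
    {"none": 0, "mail": 7.0,  "email": 8.0},
    {"none": 0, "mail": 15.0, "email": 6.0},
]
cost = {"none": 0.0, "mail": 0.60, "email": 0.05}
budget = 0.70  # total contact budget for the campaign

best_value, best_plan = float("-inf"), None
# Evaluate every combination of treatments across consumers (fine for toy sizes).
for plan in product(cost, repeat=len(consumers)):
    total_cost = sum(cost[t] for t in plan)
    if total_cost > budget:
        continue  # infeasible: violates the budget constraint
    value = sum(c[t] for c, t in zip(consumers, plan))
    if value > best_value:
        best_value, best_plan = value, plan

print(best_plan, best_value)
```

Note the answer is not the greedy one: the single expensive mail piece goes to the consumer where it adds the most value, and the cheap channel covers the rest, which is exactly the kind of consumer-level trade-off the post describes.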
By: Wendy Greenawalt The economy has changed drastically in the last few years, and most organizations have had to reduce costs across their businesses to retain profits. Determining the appropriate cost-cutting measures requires careful consideration of trade-offs while quantifying short- and long-term organizational priorities. Too often, cost reduction decisions are driven by dynamic market conditions that mandate quick decision-making, so decisions are made without a sound understanding of the true impact on organizational objectives. Optimization can be used for virtually any business problem and provides decisions based on complex mathematics. Whether you are making decisions related to outsourcing versus staffing, internal versus external project development or specific business-unit cost savings opportunities, optimization can be applied. While some analytical requirements exist to obtain the highest business-metric improvements, most organizations already have the data required to take full advantage of optimization technology. If you are using predictive models and credit attributes and have multiple actions that can be taken on an individual consumer, then your organization can most likely benefit from decision optimization. In my next few blogs, I will discuss how optimization can be used to create better strategies across an organization, whether your focus is marketing, risk, customer management or collections.
By: Tom Hannagan While waiting on the compilation of fourth quarter banking industry results, I thought it might be interesting to review the commercial real estate (CRE) risk management position facing commercial banks as of the third quarter. CRE risk is an important consideration in enterprise risk management and for loan pricing and profitability. The slowdown in the global economy has affected CRE credit risk through increased vacancy rates, halted development projects and the loss of value affecting commercial properties. As CRE loans come up for renewal, many borrowers will find that they have equity deficits and that they face tightened credit standards. If a commercial property loan started life at 80 percent loan-to-value and the property value has dropped 25 percent, the renewed loan balance will be down at least 25 percent, requiring a substantial net payoff from the borrower. This net cash payoff requirement would be tough to accomplish in good times and all but impossible for many borrowers in this economy. After all, the main reason for the decline in property value to begin with is its reduced cash flow performance.

Following the third quarter numbers, total U.S. commercial real estate debt is generally estimated at $3.4 to $3.5 trillion. Commercial banks owned just over half of that debt, about $1.8 trillion, according to Federal Reserve and FDIC sources. The (possibly only) good news in that total is that commercial banks owned a relatively small share of the commercial mortgage-backed securities (CMBS) slice of CRE exposure. CMBS assets were 21 percent of total CRE credit, or $714 billion, but banks owned a total of $54 billion, which represented only 3 percent of total bank CRE assets. Unfortunately, the opposite is true for construction lending. U.S. banks, in total, had $486 to $534 billion (depending on the source) in construction and land loans, representing 27 to 30 percent of banks’ total CRE holdings.

The true credit risk management picture is much more revealing if we cut the numbers by bank size. According to Deutsche Bank research, the largest 97 banks (those with over $10 billion in total assets) had $14.8 trillion in total assets and $1.0 trillion of the banking industry’s CRE credits. This amounts to about 7 percent of the total assets for this group of larger banks. The 7,500 community banks, with aggregate assets of $2 trillion, had about $786 billion in CRE lending. This amounts to about 28 percent of total assets, roughly four times the level of exposure found in the larger banks. The 7 percent average credit risk exposure at the large bank group is less than their average level of equity or risk-based capital. For the banks under the $10 billion level, the 28 percent level of CRE exposure is almost three times their average equity position.

The riskiest portion of CRE lending is clearly construction and land development loans, and the subtotals in this area confirm where the cumulative risk lies. Again according to Deutsche Bank research, the largest 97 banks had $299 billion of the banking industry’s $534 billion in construction loans. Although this is 56 percent of total bank construction lending, it amounts to only 2 percent of this group’s total assets. The 7,500 community banks had aggregate construction loans of $235 billion, or about 8.5 percent of total assets, a bit over four times the level of exposure found in the larger banks.
The 2 percent level of construction credit risk exposure at the large bank group is one-fourth of their average level of common equity. At banks under the $10 billion level, the 8.5 percent level of construction exposure, compared to total assets, is about the same as their average equity position. According to Moody’s, banks have already taken about $90 billion in net loan losses on CRE assets through the third quarter of 2009. That means the industry has perhaps another $150 billion in write-offs coming, which would total $240 billion in CRE credit losses for the banking industry from this economic downturn, equating to 13.3 percent of the banking industry’s share of total CRE credit. With the decline in commercial property values ranging from 10 percent to 40 percent, a 13 percent loss is certainly not a worst-case scenario. Banks have ramped up their loss reserves, and although the numbers aren’t out yet, we know many banks used the fourth quarter of 2009 to further bolster their allowances for loan and lease losses (ALLL). The larger the ALLL, the safer the risk-based equity account. Risk managers are aware of all of this, and banks are very actively developing strategies to handle the refinancing requirements while positioning themselves to explain to regulators and external auditors how they are protecting shareholders. But the numbers are very daunting, and not every bank will have enough net cash flow and risk equity to cover the inevitable losses.
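The loss arithmetic behind that 13.3 percent figure is worth laying out explicitly, using only the numbers cited above:

```python
# Figures cited above, in $billions.
bank_cre_total   = 1_800   # banks' share of total CRE credit
losses_taken     = 90      # net CRE loan losses through Q3 2009 (Moody's)
losses_remaining = 150     # estimated write-offs still to come

total_losses = losses_taken + losses_remaining          # 240
loss_rate = total_losses / bank_cre_total               # ~13.3% of bank CRE credit
print(f"Projected cycle losses: ${total_losses}B = {loss_rate:.1%} of bank CRE exposure")
```

Set against property value declines of 10 to 40 percent, a 13.3 percent cumulative loss rate sits well inside the plausible range, which is what makes the "not a worst case" caveat so sobering.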
My last entry covered the benefits of consortium databases and industry collaboration in general as a proven and technologically feasible method for combating fraud across industries: they help minimize fraud losses. So, with some notable exceptions, why are so few industries and companies using fraud consortiums and known-fraud databases? In my experience, the reasons typically boil down to two things: reluctance to share data and perception of ROI. I say "perception of ROI" because I firmly believe the ROI is there; in fact, it grows with the number of consortium participants. First, reluctance to share data seems to stem from a few areas. One is concern over how that data will be used by other consortium members. This is usually addressed by compelling reciprocal data contribution from all members (the give-to-get model) as well as strict guidelines for acceptable use. In today’s climate of hypersensitivity, another concern – rightly so – is the stewardship of Personally Identifiable Information (PII). Given the potentially damaging effects of data breaches on consumers and businesses, smart companies are extremely cautious when making decisions about safeguarding consumer information. So how does a data consortium deal with this? Firewalls, access control lists, encryption and other modern security technologies provide the defenses necessary to protect information contributed to the consortium. So, let’s assume we’ve overcome the obstacles to sharing one’s data. The other big hurdle to participation that I come across regularly is the old “what’s in it for me” question. Contributors want to be sure that they get out of it what they put into it. Nobody wants to be the only one, or the largest one, contributing records. In fact, this issue extends to intracompany consortiums as well: no line of business wants to be the sole sponsor, only to have other business units come late to the party and reap all the benefits on their dime. Whether within companies or across an industry, it’s obvious that mutual funding, support, equitable operating rules and clear communication of benefits to contributors both big and small are necessary for fraud consortiums to succeed. To get there, it’s going to take a lot more interest and participation from industry leaders. What would this look like? I think we’d see a large shift in companies’ fraud columns from “discovered” to “attempted”, a shift that would save time and money that could be passed back to legitimate customers. More participation would also enable consortiums to stay on top of changing technology and evolving consumer communication styles, such as email, text, mobile banking and voice biometrics, to name a few.
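On the PII-stewardship point, one common design pattern is for members to contribute and match on salted hashes of normalized identifiers, so raw PII never enters the shared network. The sketch below illustrates that pattern under that assumption; it is not a description of any particular consortium's implementation.

```python
import hashlib

CONSORTIUM_SALT = b"shared-secret-rotated-by-the-consortium"  # illustrative only

def fingerprint(name: str, ssn: str) -> str:
    """Normalize identifiers, then hash them so raw PII is never shared."""
    normalized = f"{name.strip().lower()}|{ssn.replace('-', '')}".encode()
    return hashlib.sha256(CONSORTIUM_SALT + normalized).hexdigest()

# Members contribute fingerprints of confirmed fraud records...
known_fraud = {fingerprint("Pat Smith", "123-45-6789")}

# ...and query the shared set at application time.
hit = fingerprint("pat smith", "123456789") in known_fraud
print("Match -> escalate for investigation" if hit else "No match")
```

Because each member sees only hashes, a breach of the shared store exposes no directly usable consumer identifiers, which addresses a large part of the data-sharing objection.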
By: Amanda Roth Last week, we discussed why pricing relative to the competition is important for sound loan pricing and profitability decisions. Pricing too far above the market can obviously be detrimental to your organization, but the other extreme can be just as dangerous. Pricing purely for your own profitability, regardless of what the competition is charging in your area, carries a few potential risk management issues. For example, suppose the statistics say you can charge 5 percent in your “A” tier and still be profitable, but the competition is charging 7.5 percent for the same tier. You may be thinking that by offering 5 percent you will attract the “best of the best” to your organization. However, what your statistics may not be showing you is the risk outside of your historical applicant base. If you significantly change the customers you are bringing in, does your risk increase as well, ultimately increasing the cost associated with each loan? Increased costs will reduce or even eliminate the profitability you had expected. A second potential issue is setting expectations within the marketplace. Consumers generally understand that when interest rates change at the federal level, there will be changes at their local financial institution, and those changes are often very small. By undercutting your competition by such an extreme amount, you may lead customers to question any attempt to raise rates by more than 50 basis points, should you experience increased costs as a result of the situation above or any other factors. A safer strategy would be to charge between 6.5 and 7 percent, which allows you to attract some of the best customers, ensure stability within the market and capture additional profitability while it is available. This is a winning strategy for all, and an important consideration as you develop your portfolio risk management objectives.
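The trade-off shows up clearly in a back-of-the-envelope calculation. The sketch below compares the two pricing choices under entirely hypothetical funding, servicing and loss assumptions, including the possibility that an aggressively low rate pulls in applicants from outside your historical base and raises loss rates:

```python
def margin(rate, funding_cost, servicing_cost, loss_rate):
    """Per-dollar annual margin on an 'A'-tier loan; all inputs hypothetical."""
    return rate - funding_cost - servicing_cost - loss_rate

# Price at 5% assuming your historical 'A'-tier loss rate holds...
print(f"5.00% with historical losses:     {margin(0.050, 0.025, 0.010, 0.010):.2%}")
# ...but if the low rate shifts your applicant mix and losses rise:
print(f"5.00% with shifted applicant mix: {margin(0.050, 0.025, 0.010, 0.020):.2%}")
# A 6.75% price keeps a cushion even if losses drift upward:
print(f"6.75% with shifted applicant mix: {margin(0.0675, 0.025, 0.010, 0.020):.2%}")
```

Under these made-up assumptions, the 5 percent price turns a thin positive margin negative once the applicant mix shifts, while the middle-of-the-road price retains a cushion, which is exactly the point of the post.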
There was a recent discussion among members of the Anti Fraud experts group on LinkedIn regarding collaboration among financial institutions to combat fraud. Most posters agreed on the benefits of such collaboration but were cynical about anything of substance, such as a shared data network, getting off the ground. I happen to agree with some of the opinions on the primary challenges of getting cross-industry (or even single-industry!) cooperation to prevent both consumer and commercial fraud, those being: 1) sharing data and 2) return on investment. Despite the challenges, there are some fraud prevention and “negative file” consortium databases available in the market as fraud prevention tools. They’re often used in conjunction with authentication products in an overall risk-based authentication / fraud deterrence strategy. Some are focused on the Demand Deposit Account (DDA) market, such as Fidelity’s DebitBureau, while others, like Experian’s own National Fraud Database, address a variety of markets. Early Warning Services has a database of both “account abuse” (aka DDA financial mismanagement) and fraud records. Still others, like Ethoca and the UK’s 192.com, focus on merchant data and online retailers. Regardless of the consortium, they share some common traits. Most:

- fall under Fair Credit Reporting Act regulation
- are used in the acquisition phase as part of the new account decision
- require contribution of data to access the shared data network

Given the seemingly general reluctance to participate in fraud consortiums, as evidenced by the group described above, how do we assess the value of these consortium databases? Well, for one, most U.S. banks and credit unions participate in and contribute customer behavior data to a consortium. It is safe to say, then, that the banking industry has recognized the value of collaboration and sharing data, if not exclusively to minimize fraud losses then at least to manage potential risk at acquisition. I’m speaking here of the DDA financial mismanagement data used under the guiding principle that past performance predicts future results. Consortium data that includes confirmed fraud records makes the value of collaboration even clearer: a match to one of these records compels further investigation and a more cautious review of the transaction or decision. With this much to gain, why aren’t more companies and industries rushing to join or form a consortium? In my next post, I’ll explore the common objections to joining consortiums and what the future may look like.
As the economic environment changes on what feels like a daily basis, information about consumer credit trends and the future direction of credit becomes invaluable for planning and achieving strategic goals. I recently had the opportunity to speak with members of the collections industry about collections strategy and collections change management, and discussed the use of business intelligence data in their industry. I was surprised at how little analysis was conducted to anticipate strategic changes in the economic and credit factors that impact the collections business. Mostly, it seems that anecdotal information and media coverage are used to get ‘a feeling’ for the direction of the economy and thus the collections industry. Clearly, there are opportunities to understand these high-level changes in more detail, so I wanted to review some business intelligence capabilities that Experian offers and expand on the opportunities I think exist for collections firms to leverage data and better inform their decisions:

* Experian can capture the entire consumer credit perspective, allowing collections firms to understand trends that consider all consumer relationships.

* Within each loan type, insights are available by analyzing loan characteristics such as number of trades, balances, revolving credit limits, trade ages and delinquency trends. These metrics can help define market sizes and relative delinquency levels, and identify segments where accounts are curing faster or more slowly, impacting collectability.

* Layering in geographic detail can reveal more granular trends, creating segments for both macro- and regional-level credit characteristics.

* Experian Business Intelligence has visibility into the type of financial institution, allowing for a market-by-market view of credit patterns and trends.

* Risk profiling by VantageScore® can shed light on credit score trends, breaking larger segments into smaller score-based segments and identifying pockets of opportunity and risk.

I’ll continue to consider the opportunities for collections firms to leverage business intelligence data in subsequent blogs, where I’ll also discuss the value of credit forecasting to the collections industry.
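As a concrete illustration of the segmentation ideas above, here is a minimal pandas sketch computing 30+ DPD rates by region and score band from hypothetical loan-level data; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical loan-level records (all values invented for illustration).
loans = pd.DataFrame({
    "region":     ["West", "West", "South", "South", "South", "West"],
    "score_band": ["prime", "subprime", "prime", "subprime", "subprime", "prime"],
    "balance":    [12000, 8000, 15000, 6000, 9000, 20000],
    "dpd30_plus": [0, 1, 0, 1, 0, 0],   # 1 = 30+ days past due
})

# Segment-level view: account counts, market size and delinquency rate per segment.
segments = loans.groupby(["region", "score_band"]).agg(
    accounts=("balance", "size"),
    total_balance=("balance", "sum"),
    dpd30_rate=("dpd30_plus", "mean"),
)
print(segments)
```

Even this toy cut shows the idea: once delinquency is viewed by region and score band rather than in aggregate, pockets of rising or curing risk become visible and collections effort can be pointed at them.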