By: Tom Hannagan

Understanding RORAC and RAROC

I was hoping someone would ask about these risk management terms…and someone did. The obvious answer is that the “A” and the “O” are reversed. But there’s more to it than that. First, let’s see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point towards both RORAC and RAROC.

ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets. This used to be the most popular way of comparing banks to each other, and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although this tends to be those at smaller banks.

ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and at all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have “assets” for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, let’s just say it’s too late to read about this general topic.

The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other. It may even allow banks to be compared to other types of businesses. The asset-independency of ROE also allows a bank to compare internal product lines to each other. Perhaps most importantly, it permits comparing the profitability of lines of business that are almost complete opposites, like lending versus deposit services, including risk-based pricing considerations. This would be difficult, if even possible, using ROA.

ROE also tells us how effectively a bank (or any business) is using shareholders’ equity. Many observers prefer ROE, since equity represents the owners’ interest in the business. As we have all learned anew in the past two years, their equity investment is fully at risk. Equity holders are paid last, compared to other sources of funds supporting the bank. Shareholders are the last in line if the going gets rough. So, equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization.

So, why bother with RORAC or RAROC? In short, to bring risk more fully into the measurement of profit performance within the institution.
ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (loan loss provision). The big risk management elements missing from general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL is covered by the Equity account, or the solvency of the bank becomes an issue.

RORAC is Net Income divided by Allocated Capital. RORAC doesn’t add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this by moving beyond just book or average Equity and allocating capital, or equity, differentially to various lines of business and even specific products and clients. This, in turn, makes it possible to move towards risk-based pricing at the relationship management level as well as portfolio risk management. This equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So, it’s a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And many of us do.

RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. RAROC, like RORAC, also takes into account the risk of unexpected loss by allocating capital, or equity, differentially to various lines of business and even specific products and clients. So, RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric or ratio of profitability, and is an ultimate goal of modern risk management. RORAC is a big step in the right direction; RAROC is the full step, and RORAC can be a useful step towards it.

RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level. It can also be broken down for any and all lines of business within the organization. Thence, it can be further broken down to the product level, the client relationship level, and summarized by lender portfolio or various market segments. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as on operational or marketing prowess.
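To make the three ratios concrete, here is a minimal sketch of the calculations for a single line of business. All figures are hypothetical, and the market-risk charge and capital allocation shown are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical figures for one line of business; illustrative only.
net_income = 12.0         # general ledger net income ($MM)
book_equity = 100.0       # book equity ($MM)
allocated_capital = 80.0  # capital allocated for this line's relative UL ($MM)
market_risk_charge = 2.0  # charge for unmitigated embedded market risk ($MM)

roe = net_income / book_equity                                 # 12.0%
rorac = net_income / allocated_capital                         # 15.0%
raroc = (net_income - market_risk_charge) / allocated_capital  # 12.5%

print(f"ROE:   {roe:.1%}")
print(f"RORAC: {rorac:.1%}")
print(f"RAROC: {raroc:.1%}")
```

Note how the same general ledger earnings look different once capital is allocated on relative unexpected loss, and different again once the numerator is charged for unmitigated market risk.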

Published: November 19, 2009 by Guest Contributor

By: Kari Michel

The U.S. government and mortgage lenders have developed various loan modification programs to help homeowners better manage their mortgage debt so that they can meet their monthly payment obligations. Given these new programs, what is the impact to the consumer’s score? Do consumer scores drop more if they work with their lenders to get their mortgage loan restructured, or if they file for bankruptcy?

A study conducted by VantageScore® Solutions* reveals that a delinquency on a mortgage has a greater impact on the consumer’s score than a loan modification. Bankruptcy, short sale, and foreclosure have the greatest impact on a score. A bankruptcy can negatively impact a consumer for a minimum of seven years, with a potential score decrease of 365 points. With a loan modification, however, consumers can rehabilitate their scores to an acceptable risk level within nine months, provided they bring all of their delinquent accounts to current status. Loan modifications themselves have little impact on the consumer’s credit score: the influence can range from a 20-point decrease to a 30-point increase.

Lenders should proactively seek out a mortgage loan modification before consumers experience severe delinquency in their credit files. The restructured mortgage should leave the consumer with sufficient cash availability to bring any other delinquent debts to current status. Whenever possible, bankruptcy should be avoided, because it has the greatest consequences for the lender and the consumer.

*For more detailed information on this study, Credit Scoring and Mortgage Modifications: What lenders need to know, please click on this link to access an archived file of a recent webinar: http://register.sourcemediaconferences.com/click/clickReg.cfm?URLID=5258

Published: November 16, 2009 by Guest Contributor

The value of a good decision can be $150 or more in customer net present value, while a bad decision can cost you $1,000 or more. For example, acquiring a new and profitable customer by making good prospecting, approval and pricing decisions may generate $150 or much more in customer net present value, and help you increase net interest margin and other key metrics. Meanwhile, a bad decision (such as approving a fraudulent applicant or inappropriately extending credit that ultimately results in a charge-off) can cost you $1,000 or more.

Why is risk management decisioning important? This issue is critical because an average-sized financial institution or telecom carrier makes as many as eight million customer decisions each year (more than 20,000 per day!). Very large financial institutions make as many as 50 billion customer decisions annually. At these volumes, even a small 10-to-15 percent improvement in the quality of customer life cycle decisions can generate substantial business benefit.

Experian recommends that clients examine the types of decisioning strategies they leverage across the customer life cycle, from prospecting and acquisition to customer management and collections. By examining each type of decision, you can identify those opportunities for improvement that will deliver the greatest return on investment by leveraging credit risk attributes, credit risk modeling, predictive analytics and decision-management software.
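A back-of-the-envelope sketch of that arithmetic, using the per-decision figures cited above; the 5 percent baseline bad-decision rate and the 15 percent improvement are assumptions chosen purely for illustration.

```python
# Illustrative only: the baseline bad-decision rate and the size of the
# improvement are assumptions; the per-decision values are from the post.
DECISIONS_PER_YEAR = 8_000_000  # a mid-sized institution
GOOD_NPV = 150.0                # NPV generated by a good decision
BAD_NPV = -1_000.0              # cost of a bad decision

def annual_value(bad_rate: float) -> float:
    """Total annual NPV given the share of decisions that turn out bad."""
    return DECISIONS_PER_YEAR * ((1 - bad_rate) * GOOD_NPV + bad_rate * BAD_NPV)

baseline = annual_value(0.05)         # assume 5% of decisions are bad
improved = annual_value(0.05 * 0.85)  # 15% fewer bad decisions

print(f"Annual benefit of the improvement: ${improved - baseline:,.0f}")
```

Even under these modest assumptions, trimming the bad-decision rate by 15 percent is worth tens of millions of dollars a year.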

Published: November 13, 2009 by Roger Ahern

Well, here we are nearly at the beginning of November; the Red Flags Rule has been with us for nearly two years, and the FTC’s November 1, 2009 enforcement date is upon us as well (I know I’ve said that before). There is little value in me chatting about the core requirements of the Red Flags Rule at this point. Instead, I’d like to shed some light on what we are seeing and hearing these days from our clients and industry experts related to this initiative.

Red Flags Rule responses from clients

1. Most clients have a solid written and operational Identity Theft Prevention Program in place that arguably meets their interpretation of the Red Flags Rule requirements.

2. Most of those programs, however, create a boat-load of referrals, due to the address mismatches generated in their processes and the requirement to do something with them.

3. Most clients are now focusing on ways to reduce the number of referrals generated, and on procedures to clear the remaining referrals in a cost-effective and automated manner…of course, while preventing fraud and staying compliant with the Red Flags Rule.

In 2008, a key focus at Experian was to help educate the market around the Red Flags Rule concepts and requirements. The concentration in 2009 has nearly fully shifted to assisting the market in creating risk-based authentication programs: programs that leverage holistic views of a consumer and flexible tools that are pointed at a consumer based on that person’s authentication and risk profile, within an overall decisioning strategy that balances risk, compliance, and resource constraints.

Spirit of the Red Flags Rule

The spirit of the Red Flags Rule is to ensure all covered institutions are employing basic identity theft prevention procedures (a pretty good idea). I believe most of these institutions (even those that had very robust programs in place years before the rule was introduced) can appreciate a requirement that brings all institutions up to speed. It is now, however, a matter of managing process within the realities of, and costs associated with, manpower, IT resources, and customer experience sensitivities.

Published: November 2, 2009 by Keir Breitenfeld

Recent findings on vintage analysis

Source: Experian-Oliver Wyman Market Intelligence Reports

Analyzing recent trends from vintages published in the Experian-Oliver Wyman Market Intelligence Reports, there are numerous insights that can be gleaned from just a cursory review of the results.

Mortgage vintage analysis trends

As noted in an earlier posting, recent mortgage vintage analyses show a broad range of behaviors between more recent vintages and older, more established vintages that were originated before the significant run-up of housing prices seen in the middle of the decade. The 30+ delinquency levels for mortgage vintages from 2005, 2006, and 2007 approach, and in two cases exceed, 10 percent of trades in the last 12 months of performance, and have spiked from historical trends beginning almost immediately after origination. On the other end of the spectrum, the vintages from 2002 and 2003 have barely approached or exceeded 5 percent for the last 6 or 7 years.

Bankcard vintage analysis trends

As one would expect, the 30+ delinquency trends demonstrated within bankcard vintage analysis are vastly different from the trends of mortgage vintages. Firstly, card delinquencies show a clear seasonal trend, with a more consistent yearly pattern evident in all vintages, resulting from the revolving structure of the product. The most interesting trends within the card vintages show that the more recent vintages, 2005 to 2008, display higher 30+ delinquency levels, especially the Q2 2007 vintage, which is far and away the underperformer of the group.

Within each vintage pool, an analysis can extend into the risk distribution and details of the portfolio and further segment the pool by credit score, specifically the VantageScore® credit score. Consider, for example, the highest-scoring segment, where the loans in each pool were made only to the most creditworthy customers at the time of origination. The noticeable trend is that while these consumers were largely resistant to deteriorating economic conditions, each vintage segment has seen a spike in the most recent 9-12 months. Given that these consumers tend to have the highest limits and lowest utilization of any VantageScore® credit score band, this trend encourages further account management consideration and raises flags about overall bankcard performance in coming months.

Even a basic review of vintage analysis pools, and the subsequent analysis opportunities that result from this data, can be extremely useful. Vintage analysis can add a new perspective to risk management, supplementing more established analysis techniques and further enhancing the ability to see the risk within the risk.

Purchase a complete picture of consumer credit trends from Experian’s database of over 230 million consumers with the Market Intelligence Brief.

Published: November 2, 2009 by Kelly Kent

By: Kennis Wong

In Part 1 of Generic fraud score, we emphasized the importance of a risk-based approach when it comes to fraud detection. Here are some further questions you may want to consider.

What is the performance window? When a model is built, it has a defined performance window. That means the score is predicting a certain outcome within that time period. For example, a traditional risk score may be predicting accounts that will deteriorate within twenty-four months. That score may not perform well if your population typically worsens in two months. This question is particularly important when it relates to scoring your population. For example, if a bust-out score has a performance window of three months, and you score your accounts at the time of acquisition, it would only catch accounts that are busting out within the next three months. As a result, you should score your accounts during periodic account reviews, in addition to the time of acquisition, to ensure you catch all bust-outs.

Which accounts should I score? While it’s typical for creditors to use a fraud score on every applicant at the time of acquisition, they may not score all their accounts during review. For example, they may exclude inactive accounts or older accounts, assuming a long history means less likelihood of fraud. This mistake may be expensive. Typical bust-out behavior is for fraudsters to apply for cards well before they intend to bust out, sometimes forty-eight months or more. So just when you think they are good and profitable customers, they can strike and leave you with serious injury. The recommended approach, then, is to score your entire portfolio during account review, and to make sure your fraud database is updated and accurate.

How often do I validate the score? The answer is very often -- this may be monthly or quarterly. You want to understand whether the score is working for you: do your actual results match the volume and risk projections? Shifts in your score distribution will almost certainly occur over time. To meet your objectives over the long run, continue to monitor the score and adjust cutoffs.
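On that last point, one common way to quantify score-distribution shift is the population stability index (PSI). A minimal sketch follows; the score bands, distributions, and the 0.25 rule of thumb are illustrative conventions, not a mandated methodology.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index across score bands:
    sum of (actual% - expected%) * ln(actual% / expected%)."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Share of scored accounts falling in each score band (hypothetical)
at_validation = [0.10, 0.20, 0.30, 0.25, 0.15]
this_quarter  = [0.14, 0.22, 0.28, 0.22, 0.14]

value = psi(at_validation, this_quarter)
print(f"PSI = {value:.3f}")  # common rule of thumb: > 0.25 signals a major shift
```

A rising PSI does not say the score is wrong, only that the population has moved; it is the trigger to re-examine cutoffs and projections.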

Published: October 12, 2009 by Guest Contributor

When reviewing offers for prospective clients, lenders often deal with a significant amount of missing information in assessing the outcomes of lending decisions, such as:

• Why did a consumer accept an offer with a competitor?
• What were the differentiating factors between other offers and my offer?
• What happened to the consumers we declined? Did they perform as expected, or better than anticipated?

While lenders can easily understand the implications of the loans they have offered and booked with consumers, they often have little information about two important groups of consumers:

1. Lost leads: consumers to whom they made an offer but did not book.
2. Proxy performance: consumers to whom financing was not offered, but who found financing elsewhere.

Performing a lost lead analysis on the applications approved and declined can provide considerable insight into the outcomes and credit performance of consumers that were not added to the lender’s portfolio. Lost lead analysis can also help answer key questions for each of these groups:

• How many of these consumers accepted credit elsewhere?
• What were the credit attributes and characteristics of the consumers we’re not booking?
• Were these loans booked by one of my peers or another type of lender?
• What were the terms and conditions of these offers?
• What was the performance of the loans booked elsewhere?

Within each of these groups, further analysis can be conducted to provide lenders with actionable feedback on the implications of their lending policies, possibly identifying opportunities for changes to better fulfill lending objectives. Some key questions can be answered with this information: Are competitors offering longer repayment terms? Are peers offering lower interest rates to the same consumers? Are peers accepting lower-scoring consumers to increase market share?

The results of a lost lead analysis can either confirm that the competitive marketplace is behaving in a manner that matches the lender’s perspective, or shine a light on aspects of the market where policy changes may lead to superior results. In either circumstance, the information provided is invaluable in making the best decisions in today’s highly sensitive lending environment.
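As a sketch of what a first pass at such an analysis might look like, the snippet below compares booked loans against lost leads on a few of the dimensions listed above. The field names and records are hypothetical; in practice, the lost-lead records would come from a credit bureau match of declined and unbooked applicants.

```python
from statistics import mean

# Hypothetical applicant records; real data would come from a bureau match.
applications = [
    {"booked": True,  "apr": 6.9, "term_months": 60, "score": 720},
    {"booked": True,  "apr": 7.4, "term_months": 60, "score": 705},
    {"booked": False, "apr": 5.9, "term_months": 72, "score": 715},  # lost lead
    {"booked": False, "apr": 6.1, "term_months": 72, "score": 690},  # lost lead
]

def summarize(group):
    """Average rate, term, and score for a group of applications."""
    return {
        "avg_apr": mean(a["apr"] for a in group),
        "avg_term": mean(a["term_months"] for a in group),
        "avg_score": mean(a["score"] for a in group),
    }

booked = summarize([a for a in applications if a["booked"]])
lost = summarize([a for a in applications if not a["booked"]])

print("Booked:    ", booked)
print("Lost leads:", lost)  # e.g., are competitors winning on rate or term?
```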

Published: October 11, 2009 by Kelly Kent

-- by Heather Grover

I’m often asked in various industry forums to give talks about, or opinions on, the latest fraud trends and fraud best practices. Let’s face it: fraudsters are students of their craft, and they continue to study the latest defenses and adapt to controls that may be in place. You may be surprised, then, to learn that our clients’ top-of-mind issues are not only how to fight the latest fraud trends, but how they can do so while maximizing use of automation, managing operational costs, and preserving customer experience -- all while meeting compliance requirements.

Many times, clients view these as separate goals that do not affect one another. Not only can they be accomplished simultaneously but, in my opinion, they can be considered causal. Let me explain. When fraud detection is treated as a goal in isolation, automation is not considered as a potential way to improve it. By applying analytics, or basic fraud risk scores, clients can easily incorporate many different potential risk factors into a single calculation without combing through various data elements and reports. This calculation, or score, can predict multiple fraud types and risks with less effort than a human manually and subjectively reviewing specific results. Through an analytic score, good customers can be positively verified in an automated fashion, while only those with the riskiest attributes are routed for manual review. This allows expensive human resources and expertise to be reserved for only the riskiest consumers.

Compliance requirements can also mandate specific procedures, resulting in arduous manual review processes. Many requirements (Patriot Act, Red Flag, eSignature) mandate verification of identity through match results. Automated decisioning based on these results (or an analytic score) can automate this process -- in turn, reducing operational expense.

While the above may seem an oversimplification, I encourage you to consider how well you are addressing financial risk management. How are you managing automation, operational costs, and compliance -- while addressing fraud?
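A minimal sketch of the routing idea described above: several risk factors collapsed into one score, with manual review reserved for the riskiest cases. The factors, weights, and thresholds here are invented for illustration, not a real scoring model.

```python
# Weights and thresholds are illustrative assumptions, not a real model.
def fraud_risk_score(address_mismatch: bool, ssn_issue: bool,
                     velocity_hits: int, phone_verified: bool) -> int:
    """Collapse several risk factors into a single 0-100 score."""
    score = 0
    score += 30 if address_mismatch else 0
    score += 35 if ssn_issue else 0
    score += min(velocity_hits, 3) * 10   # repeat applications in a short window
    score -= 15 if phone_verified else 0
    return max(0, min(100, score))

def route(score: int) -> str:
    if score < 20:
        return "auto-verify"    # positively verified, no analyst touch
    if score < 50:
        return "step-up"        # automated additional checks
    return "manual-review"      # expensive analysts see only these

applicant = fraud_risk_score(address_mismatch=True, ssn_issue=False,
                             velocity_hits=1, phone_verified=True)
print(applicant, route(applicant))  # 25 step-up
```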

Published: August 30, 2009 by Guest Contributor

By: Kari Michel

This blog completes my discussion on monitoring new account decisions with a final focus: scorecard monitoring and performance. It is imperative to validate acquisition scorecards regularly to measure how well a model is able to distinguish good accounts from bad accounts. With a sufficient number of aged accounts, performance charts can be used to:

• Validate the predictive power of a credit scoring model;
• Determine if the model effectively ranks risk; and
• Identify the delinquency rate of recently booked accounts at various intervals above and below the primary cutoff score.

To summarize, successful lenders maximize their scoring investment by incorporating a number of best practices into their account acquisition processes:

1. They keep a close watch on their scores, policies, and strategies to improve portfolio strength.
2. They create monthly reports to look at population stability, decision management, scoring models and scorecard performance.
3. They update their strategies to meet their organization’s profitability goals through sound acquisition strategies, scorecard monitoring and scorecard management.
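As a sketch of the rank-ordering check in the second bullet, the snippet below computes bad rates by score band on aged accounts; the bands and counts are hypothetical, and a full validation would also include statistics such as KS or Gini on a properly aged sample.

```python
# Hypothetical aged-account counts per score band; illustrative only.
bands = [
    ("300-579", 400, 72),   # (band, accounts, bads)
    ("580-659", 900, 81),
    ("660-719", 1500, 60),
    ("720-850", 2200, 33),
]

print(f"{'Band':>8} {'Accounts':>9} {'Bad rate':>9}")
prior = None
for band, n, bad in bands:
    rate = bad / n
    print(f"{band:>8} {n:9d} {rate:9.1%}")
    # If the model ranks risk, bad rates should fall as scores rise.
    if prior is not None and rate > prior:
        print(f"  Rank-order break at {band}: investigate the scorecard")
    prior = rate
```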

Published: August 18, 2009 by Guest Contributor

There are a lot of areas covered in your comment: efficiency; credit quality (the human side, or character, in an impersonal environment); and policy adherence. We define efficiency and effectiveness using these metrics:

• Turnaround time from application submission to decision;
• Resulting delinquencies based upon type of underwriting (centralized vs. decentralized);
• Production levels between centralized and decentralized; and
• Performance of the portfolio based upon type of underwriting.

Due to the nature of Experian’s technology, we are able to capture start and stop times of the typical activities related to loan origination. After analyzing the data from 160+ financial institutions of all sizes, Experian publishes an annual small business benchmark report that documents loan origination process efficiencies and inefficiencies, benchmarking these as industry standards.

Turnaround time

From the benchmark report, we’ve seen that institutions that are centralized have consistently had a turnaround time that is half that of those with decentralized environments. Interestingly, turnaround time is also much faster for the larger institutions than for smaller ones. This is surprising, because smaller community banks tend to promote the close relationships they have with their clients and their communities; yet, when it comes to actually making a loan decision, it tends to take longer. In addition to speed, another aspect of turnaround is consistency. We can all think of situations where we were able to beat the stated turnaround times of the larger or the centralized institutions. Unfortunately, these tend to be isolated instances versus the consistent performance that is delivered in the centralized environment.

Resulting delinquencies and portfolio performance, based upon type of underwriting

Again referring to the annual small business lending benchmark report, delinquencies in a centralized environment are 50% of those in a decentralized environment. I have worked with a number of institutions that allow the loan officer/relationship manager to “reverse the decision” made by a centralized underwriting group. The thinking is that the human aspect is otherwise missing in centralized underwriting. When the data is collected, though, the incremental business/portfolio that is approved by the loan officer (who is close to the client and knows the human side) is not profitable from a credit quality perspective. Specifically, this incremental portfolio typically has a net charge-off rate that exceeds the net interest margin -- and this is before we even consider the non-interest expense incurred. Your choice: is the incremental business critical to your success…or could you more fruitfully direct your relationship officer’s attention elsewhere?

Production levels between centralized and decentralized

Not to beat a dead horse, but the multiple of two comes into play here too. As one looks at the throughput of each role (data entry, underwriter, relationship manager/lender), the production levels of a centralized environment are typically double those of a decentralized one.

It’s clear that the data point to the efficiency and effectiveness of a centralized environment.
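To make the charge-off point concrete, here is a quick hypothetical. The rates below are invented, but they show how an incremental portfolio whose net charge-off rate exceeds the net interest margin loses money before a dollar of non-interest expense is counted.

```python
# All figures are hypothetical, for illustration only.
incremental_portfolio = 10_000_000  # balances approved via decision reversals
net_interest_margin = 0.045         # 4.5% NIM
net_charge_off_rate = 0.055         # 5.5% NCO, exceeding the NIM

pre_expense = incremental_portfolio * (net_interest_margin - net_charge_off_rate)
print(f"Pre-expense contribution: ${pre_expense:,.0f}")  # -$100,000
# ...and non-interest expense (origination, servicing) only deepens the loss.
```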

Published: August 7, 2009 by Guest Contributor

Put yourself in the shoes of your collections team. The year ahead is challenging. Workloads are increasing as consumer debt escalates, and collectors are working tiring, stressful shifts talking to people who don't want to talk about their debts. What kind of incentives can improve your collections performance and, at the same time, create a well-motivated and productive team?

Introduction

Financial incentives have long been a popular method to help boost staff performance. These rewards usually relate to the achievement of certain goals: personal, team, organizational, or a combination of all three. A well-constructed incentive plan will increase staff morale and loyalty, as well as making a valuable difference to the bottom line. It can help ensure you are managing a team that is running at full speed and capability during these busy, turbulent times. However, collections managers can also implement alternative, non-monetary incentive programs that can boost staff commitment and effectiveness. This series of postings identifies cash and non-cash alternatives that can help build and maintain a motivated team.

Getting Started

Before introducing a new incentive plan, clearly explain your objectives to the team. If your main goal is to maximize profitability, boost morale by letting your team know they are a major source of profit. Their understanding of how individual performance relates to the business will deepen their commitment to the program once it begins. To help you decide what to include in the incentive plan, you must first understand what drives your team. This should be ascertained by conducting regular performance appraisals, call monitoring, attitude surveys and informal conversations. Your staff will likely tell you that increased status and recognition, higher pay, better working conditions and improved benefits would increase both morale and performance. We can look into incentives that address these requirements individually, but let's begin with the most obvious: money.

Money is a powerful motivator

The current economic climate guarantees that money is more important to your team members than ever; they want to be financially rewarded for their efforts. In this industry, collectors work individually, so it is wise to target them individually when using financial incentives. Comparing individuals can also achieve higher performance levels, because the cachet of being 'top dog' is a real motivator for some people. Our advice is to begin by targeting staff in three familiar areas, and to ensure from the start that your collections system delivers the depth and granularity of management information needed to support your incentive program.

I would like to thank the Experian collections experts who contributed to this four-part series. The rest of the series will be posted soon!

Published: August 6, 2009 by Guest Contributor

By: Tracy Bremmer

In our last blog, we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (predictive attributes divided into intervals) or continuous (each variable modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally what statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines the timeliness and complexity of putting it into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model will be critical for successful implementation and model audits.

Scorecard monitoring will need to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance, and decision management to ensure that the model is performing as expected over the course of time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies. With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
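As a sketch of the binned-versus-continuous distinction, the snippet below bins one predictive attribute and assigns scorecard points per interval. The cut points and point values are invented; a real development would derive them from weight-of-evidence analysis on the modeling sample.

```python
import bisect

# Hypothetical binning of one attribute (e.g., months on file).
# Cut points and points-per-bin are illustrative, not derived values.
CUTS = [6, 24, 60]        # bin edges: <6, 6-23, 24-59, 60+
POINTS = [5, 15, 30, 45]  # scorecard points per bin
MISSING_POINTS = 10       # missing values ("false zeros") handled explicitly

def binned_points(months_on_file):
    if months_on_file is None:  # missing value gets its own treatment
        return MISSING_POINTS
    return POINTS[bisect.bisect_right(CUTS, months_on_file)]

for value in [None, 3, 12, 36, 120]:
    print(value, "->", binned_points(value))
```

A continuous treatment, by contrast, would keep the variable in its raw (or transformed) form and let the regression estimate a single coefficient for it.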

Published: July 30, 2009 by Guest Contributor

Vintage analysis 101

The title of this edition, ‘The risk within the risk’, is a testament to the amount of information that can be gleaned from an assessment of the performances of vintage analysis pools. Vintage analysis pools offer numerous perspectives of risk. They allow for a deep appreciation of the effects of loan maturation, and can also point toward the impact of external factors, such as changes in real estate prices, origination standards, and other macroeconomic factors, by highlighting measurable differences in vintage-to-vintage performance.

What is a vintage pool?

By the Experian definition, vintage pools are created by taking a sample of all consumers who originated loans in a specific period, perhaps a certain quarter, and tracking the performance of the same consumers and loans through the life of each loan. Vintage pools can be analyzed for various characteristics, but three of the most relevant are:

• Vintage delinquency, which allows for an understanding of the repayment trends within each pool;
• Payoff trends, which reflect the pace at which pools are being repaid; and
• Charge-off curves, which provide insights into the charge-off rates of each pool.

The credit grade of each borrower within a vintage pool is extremely important in understanding the vintage characteristics over time, and credit scores are based on the status of the borrower just before the new loan was originated. This process ensures that the new loan origination and the performance of the specific loan do not influence the borrower’s credit score. By using this method of pooling and scoring, each vintage segment contains the same group of loans over time, allowing for a valid comparison of vintage pools and the characteristics found within.

Once vintage pools have been defined and created, the possibilities for this data are numerous... Read more about our analysis opportunities for vintage analysis and our recent findings on vintage analysis.
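A minimal sketch of how a vintage delinquency curve could be assembled from loan-level snapshots. The data layout and the 30+ days-past-due definition of delinquency are assumptions for illustration, not Experian's production methodology.

```python
from collections import defaultdict

# Hypothetical loan-month snapshots: (vintage, months_on_book, days_past_due)
snapshots = [
    ("2006Q2", 6, 0), ("2006Q2", 6, 45), ("2006Q2", 12, 60), ("2006Q2", 12, 0),
    ("2003Q2", 6, 0), ("2003Q2", 6, 0),  ("2003Q2", 12, 0),  ("2003Q2", 12, 0),
]

# 30+ delinquency rate by vintage and months on book
totals = defaultdict(int)
delinquent = defaultdict(int)
for vintage, mob, dpd in snapshots:
    totals[(vintage, mob)] += 1
    if dpd >= 30:
        delinquent[(vintage, mob)] += 1

for key in sorted(totals):
    vintage, mob = key
    rate = delinquent[key] / totals[key]
    print(f"{vintage} @ {mob:2d} months on book: {rate:.0%} 30+ DPD")
```

Plotting each vintage's rate against months on book, rather than against calendar date, is what makes maturation effects and vintage-to-vintage differences visible.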

Published: July 13, 2009 by Kelly Kent

-- by Jeff Bernstein

So, here I am with my first contribution to Experian Decision Analytics’ collections blog, and what I am discussing has practically nothing to do with analytics. But it has everything to do with managing the opportunities to positively impact collections results and leveraging your investment in analytics and strategies, beginning with the most important weapon in your arsenal: collectors.

Yes, I know it’s a bit unconventional for a solutions and analytics company to talk about something other than models; but the difference between mediocre results and optimization rests with your collectors and your organization’s ability to manage customer interactions.

Let’s take a trip down memory lane and reminisce about one of the true landscape-changing paradigm shifts in collections in recent memory: the use of skill models to become payment of choice.

AT&T Universal Card was one of the first early adopters of a radical new approach towards managing an emerging Gen X debtor population during the early 1990s. Armed with fresh research into what influenced delinquent debtors to pay certain collectors while dodging others, they adopted what we called a “management systems” approach towards collections.

They taught their entire collections team a new set of skill models that stressed bridging skills between the collector and the customer, thus allowing the collector to interact in a more collaborative, non-aggressive manner. The new approach enabled collectors to more favorably influence customer behavior, creating payment solutions collaboratively and allowing AT&T to become “payment of choice” among other creditors competing for share of wallet.

A new set of skill metrics, which we now affectionately call our “dashboard,” was created to measure the effective use of the newly taught skill models, and collectors were empowered to own their own performance and to leverage their team leader for coaching and skills development. Team developers, the new name for front-line collection managers, were tasked with spending 40-50% or more of their time on developmental activities, using leadership skills in their coaching and development activities. The game plan was simple:

• Engage collectors with customer-focused skills that influence behavior and get you paid sooner.
• Empower collectors to take on the responsibility for their own development.
• Make performance results visible top-to-bottom in the organization to stimulate competitiveness, leveraging our innate desire for recognition.
• Make leaders accountable for continuous performance improvement of individuals and teams.

It worked. AT&T Universal won the Malcolm Baldrige National Quality Award in 1992 for its efforts in “delighting the customer” while driving its delinquencies and charge-offs to superior levels. A new paradigm shift was unleashed and spread like wildfire across the industry, including many of the major credit card issuers, top-tier U.S. banks, and large retailers.

Why do I bring this little slice of history up in my first blog? I see many banking and financial services companies across the globe struggle with more complex customer situations and harder collections cases -- with their attention naturally focused on tools, models, and technologies. As an industry, we are focused on early lifecycle treatment strategy: identifying current, non-delinquent customers who may be at risk for future default, and triaging them before they become delinquent.
Risk-based collections and segmentation is now a hot topic. Outsourcing and leveraging multiple, non-agent-based contact channels to reduce the pressures on collection resources is more important than ever. Optimization is getting top billing as the next “thing.”

What I don’t hear enough about is how organizations are engaged in improving the skills of collectors, and executing the right management systems approach to extract the best possible performance from existing resources. In some ways, this may be lost in the chaos of our current economic climate. With all the focus on analytics, segmentation, strategy and technology, the opportunity to improve operational performance through skill building and leadership may have taken a back seat.

I’ve seen plenty of examples of organizations that have spent millions on analytical tools and technologies, improving portfolio risk strategy and the targeting of the right customers for treatment. I’ve seen the most advanced dialer, IVR, and other contact channel strategies used successfully to obtain the highest right-party contact rates at the lowest possible cost. Yet, with all of that focus and investment, I’ve seen these right-party contacts mismanaged by collectors who were not provided with the optimal coaching and skills.

With the enriched data available for decisioning, coupled with the amazing capabilities we have for real-time segmentation, strategy scripting, context-sensitive screens, and rules-based workflow management in our next generation collections systems, we are at a crossroads in the evolution of collections. Let’s not forget the “nuts and bolts” that drive operational performance and ensure success.

Something old can be something new. Examine your internal processes aimed at producing the best possible skills at all collector levels, and ensure that you are not missing the easiest opportunity to improve your results.

Published: July 13, 2009 by Guest Contributor

By: Tom Hannagan

Some articles that I’ve come across recently have puzzled me. In those articles, authors use the terms “monetary base” and “money supply” synonymously -- but those terms are actually very different. The monetary base (currency plus Fed deposits) is a much smaller number than the money supply (M1). The huge change in the “base,” which the Fed did affect by adding $1T or so to infuse a lot of quick liquidity into the financial system in late 2007/early 2008, does not necessarily impact M1 (which includes the base plus all bank demand deposits) all that much in the short term, and may impact it even less in the intermediate term if the Fed reduces its holdings of securities. Some are correct, of course, in positing that a rotation out of securities by the Fed will tend to put pressure on market rates.

Some are equating the 2007 liquidity moves of the Fed with a major monetary policy change. When the capital markets froze due to liquidity and credit risks in August/September of 2007, monetary policy was not the immediate risk, or even a consideration. Without the liquidity injections in that timeframe, monetary policy would have become less than an academic consideration.

As for tying the “constrained” bank lending (which was actually a slowdown in its growth) to bank reserves on account at the Fed: I don’t think banks’ Fed reserve balances were ever an issue for lending. Banks slowed down lending because the level of credit risk increased. Borrowers were defaulting. Bank deposit balances were actually increasing through the financial crisis. [See my Feb 26 and March 5 blogs.] So, loan funding, at least from deposit sources, was not the problem for most banks. Of course, for a small number of banks that had major securities losses, capital was being lost and therefore not available to back increased lending. But demand deposit balances were growing. Some authors are linking bank reserves to the ability of banks to raise liabilities, which makes little sense. Banks’ ability to gather demand deposits (insured by the FDIC, at no small expense to the banks) was always wide open, and their ability to borrow funds is much more a function of asset quality (or net asset value) than of their relatively small reserve balances at the Fed.

These actions may result in high inflation levels and high interest rates -- but if so, it will be because of poor Fed decisions in the future, not because of the Fed’s action of last year. It will also depend on whether the fiscal (deficit) actions of the government are: 1) economically productive and 2) tempered to a recovery, or not. I think that is a bigger macro-economic risk than Fed monetary policy.

In fact, the only way bank executives can wisely manage the entity over an extended timeframe is to be able to direct resources across all possibilities on a risk-adjusted basis. The question isn’t whether risk-based pricing is appropriate for all lines of business, but rather how it might, or should, be applied. For commercial lending into the middle and corporate markets, there is enough money at stake to warrant evaluating each loan and deposit, as well as the status of the client relationship, on an individual basis. This means some form of simulation modeling by relationship managers on new sales opportunities (including renewals), with the model having ready access to current data on all existing pieces of business within each relationship. [See my April 24 blog entry.]
This process also implies the ability to easily aggregate the risk-return status of a group of related clients and to show lenders how their portfolio of accounts is performing on a risk-adjusted basis. This type of model-based analysis needs to be flexible enough to handle differing loan structures, easy for a lender to use, and quick. The better models can perform such analysis in minutes. I’ve discussed the elements of such models in earlier posts.

With small business and consumer lending, other considerations come into play. The principles of risk-based pricing are consistent across any loan or deposit. With small business lending, the process of selling, negotiating, underwriting and origination is significantly more streamlined and under some form of workflow control. With consumer lending, there are more regulations to take into account, and there are mass marketing considerations driving the “sales” process.

An agreement covers what the new owner wants now and what it may decide it wants in the future. This is a form of strategic business risk that comes with accepting a capital infusion from this particular source.

Published: June 30, 2009 by Guest Contributor
