All posts by Guest Contributor


-- by Jeff Bernstein

So, here I am with my first contribution to Experian Decision Analytics’ collections blog, and what I am discussing has practically nothing to do with analytics. But it has everything to do with managing the opportunities to positively impact collections results and leveraging your investment in analytics and strategies, beginning with the most important weapon in your arsenal: collectors.

Yes, I know it’s a bit unconventional for a solutions and analytics company to talk about something other than models, but the difference between mediocre results and optimization rests with your collectors and your organization’s ability to manage customer interactions.

Let’s take a trip down memory lane and reminisce about one of the true landscape-changing paradigm shifts in collections in recent memory: the use of skill models to become payment of choice.

AT&T Universal Card was one of the first adopters of a radical new approach to managing an emerging Gen X debtor population during the early 1990s. Armed with fresh research into what influenced delinquent debtors to pay certain collectors while dodging others, they adopted what we called a “management systems” approach to collections.

They taught their entire collections team a new set of skill models that stressed bridging skills between the collector and the customer, allowing the collector to interact in a more collaborative, non-aggressive manner. The new approach enabled collectors to more favorably influence customer behavior, creating payment solutions collaboratively and allowing AT&T to become “payment of choice” among the other creditors competing for share of wallet.

A new set of skill metrics, which we now affectionately call our “dashboard,” was created to measure the effective use of the newly taught skill models, and collectors were empowered to own their performance -- and to leverage their team leader for coaching and skills development.
Team developers, the new name for front-line collection managers, were tasked with spending 40-50 percent or more of their time on developmental activities, using leadership skills in their coaching and development work. The game plan was simple:

• Engage collectors with customer-focused skills that influence behavior and get payment sooner.
• Empower collectors to take on responsibility for their own development.
• Make performance results visible top-to-bottom in the organization to stimulate competitiveness, leveraging our innate desire for recognition.
• Make leaders accountable for continuous performance improvement of individuals and teams.

It worked. AT&T Universal won the Malcolm Baldrige National Quality Award in 1992 for its efforts in “delighting the customer” while driving its delinquencies and charge-offs to superior levels. A new paradigm shift was unleashed and spread like wildfire across the industry, including many of the major credit card issuers, top-tier U.S. banks and large retailers.

Why do I bring up this little slice of history in my first blog?

I see many banking and financial services companies across the globe struggling with more complex customer situations and harder collections cases -- with their attention naturally focused on tools, models and technologies. As an industry, we are focused on early-lifecycle treatment strategy: identifying current, non-delinquent customers who may be at risk of future default and triaging them before they become delinquent. Risk-based collections and segmentation is now a hot topic. Outsourcing and leveraging multiple non-agent contact channels to reduce the pressure on collection resources is more important than ever.
Optimization is getting top billing as the next “thing.” What I don’t hear enough about is how organizations are improving the skills of collectors and executing the right management systems approach to extract the best possible performance from existing resources. In some ways, this may be lost in the chaos of our current economic climate. With all the focus on analytics, segmentation, strategy and technology, the opportunity to improve operational performance through skill building and leadership may have taken a back seat.

I’ve seen plenty of examples of organizations that have spent millions on analytical tools and technologies, improving portfolio risk strategy and the targeting of the right customers for treatment. I’ve seen the most advanced dialer, IVR and other contact-channel strategies used successfully to obtain the highest right-party contact rates at the lowest possible cost. Yet, with all of that focus and investment, I’ve seen those right-party contacts mismanaged by collectors who were not given the optimal coaching and skills.

With the enriched data available for decisioning, coupled with the capabilities we now have for real-time segmentation, strategy scripting, context-sensitive screens and rules-based workflow management in our next-generation collections systems, we are at a crossroads in the evolution of collections. Let’s not forget the “nuts and bolts” that drive operational performance and ensure success.

Something old can be something new. Examine your internal processes aimed at producing the best possible skills at all collector levels, and ensure that you are not missing the easiest opportunity to improve your results.

Published: July 13, 2009 by Guest Contributor

By: Tom Hannagan

Some articles that I’ve come across recently have puzzled me. Their authors use the terms “monetary base” and “money supply” synonymously, but those terms are actually very different. The monetary base (currency plus Fed deposits) is a much smaller number than the money supply (M1). The huge change in the base, which the Fed did effect by adding $1 trillion or so to infuse quick liquidity into the financial system in late 2007 and early 2008, does not necessarily impact M1 (which includes the base plus all bank demand deposits) all that much in the short term, and may impact it even less in the intermediate term if the Fed reduces its holdings of securities. Some are correct, of course, in positing that a rotation out of securities by the Fed will tend to put pressure on market rates.

Some are equating the Fed’s 2007 liquidity moves with a major monetary policy change. When the capital markets froze due to liquidity and credit risks in August and September of 2007, monetary policy was not the immediate risk, or even a consideration. Without the liquidity injections in that timeframe, monetary policy would have become less than an academic consideration.

Some authors tie the “constrained” bank lending (which was actually a slowdown in its growth) to bank reserves on account at the Fed. I don’t think banks’ Fed reserve balances were ever an issue for lending. Banks slowed down lending because the level of credit risk increased. Borrowers were defaulting. Bank deposit balances were actually increasing through the financial crisis. [See my Feb 26 and March 5 blogs.] So loan funding, at least from deposit sources, was not the problem for most banks. Of course, for a small number of banks that had major securities losses, capital was being lost and therefore not available to back increased lending. But demand deposit balances were growing. Some authors link bank reserves to the ability of banks to raise liabilities, which makes little sense.
Banks’ respective abilities to gather demand deposits (insured by the FDIC, at no small expense to the banks) were always wide open, and their ability to borrow funds is much more a function of asset quality (or net asset value) than of their relatively small reserve balances at the Fed.

These actions may result in high inflation levels and high interest rates, but that will be because of poor Fed decisions in the future, not because of the Fed’s actions of last year. It will also depend on whether the fiscal (deficit) actions of the government are: 1) economically productive and 2) tempered to a recovery, or not. I think that is a bigger macro-economic risk than Fed monetary policy.

In fact, the only way bank executives can wisely manage the entity over an extended timeframe is to be able to direct resources across all possibilities on a risk-adjusted basis. The question isn’t whether risk-based pricing is appropriate for all lines of business, but rather how it might or should be applied. For commercial lending into the middle and corporate markets, there is enough money at stake to warrant evaluating each loan and deposit, as well as the status of the client relationship, on an individual basis. This means some form of simulation modeling by relationship managers on new sales opportunities (including renewals), with the model having ready access to current data on all existing pieces of business with each relationship. [See my April 24 blog entry.] This process also implies the ability to easily aggregate the risk-return status of a group of related clients and to show lenders how their portfolio of accounts is performing on a risk-adjusted basis. This type of model-based analysis needs to be flexible enough to handle differing loan structures, easy for a lender to use, and quick. The better models can perform such analysis in minutes. I’ve discussed the elements of such models in earlier posts.
But with small business and consumer lending, other considerations come into play. The principles of risk-based pricing are consistent across any loan or deposit. With small business lending, the process of selling, negotiating, underwriting and origination is significantly more streamlined and under some form of workflow control. With consumer lending, there are more regulations to take into account, and mass marketing considerations drive the “sales” process. The agreement covers what the new owner wants now and whatever it may decide it wants in the future. This is a form of strategic business risk that comes with accepting the capital infusion from this particular source.

Published: June 30, 2009 by Guest Contributor

By: Kari Michel

Are you using scores to make new applicant decisions? Scoring models need to be monitored regularly to ensure a sound and successful lending program. Would you buy a car and run it for years without maintenance -- and expect it to run at peak performance? Of course not. Just like oil changes or tune-ups, several critical components of your scoring models need to be addressed on a regular basis. Monitoring reports are essential for organizations to answer the following questions:

• Are we in compliance?
• How is our portfolio performing?
• Are we making the most effective use of our scores?

To understand how to improve your portfolio performance, you must have good monitoring reports. Typically, reports fall into one of three categories: (1) population stability, (2) decision management and (3) scorecard performance. Having the right information will allow you to monitor and validate your underwriting strategies and make adjustments when necessary. It will also tell you whether your scorecards are still performing as expected. In my next blog, I will discuss the population stability report in more detail.
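As a concrete taste of the population-stability category mentioned above, here is a minimal sketch of the Population Stability Index (PSI), a metric commonly used in such monitoring reports to detect drift between the score distribution of the development sample and that of recent applicants. The bins, percentages and thresholds below are illustrative assumptions, not the layout of any particular vendor's report.

```python
import math

def psi(expected_pct, actual_pct):
    """Population Stability Index across score bins.

    expected_pct / actual_pct: fraction of applicants falling in each
    score bin for the development sample vs. the recent population.
    A common rule of thumb: PSI < 0.10 is stable, 0.10-0.25 suggests
    some shift, and > 0.25 warrants investigation.
    """
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

# Development-sample vs. recent score distributions over four bins
expected = [0.25, 0.25, 0.25, 0.25]
actual = [0.20, 0.24, 0.26, 0.30]

result = round(psi(expected, actual), 4)
print(result)  # a small PSI: the applicant mix has shifted only mildly
```

A real report would break the population into ten or more bins and track the index month over month, but the calculation itself stays this simple.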

Published: June 30, 2009 by Guest Contributor

By: Tracy Bremmer

It’s not really all about the credit score. Now don’t get me wrong: a credit score is a very important tool in credit decision making; however, there’s much more that lenders use to say “accept” or “decline.” Many lenders segment their customer/prospect base before ever using the score. They use credit-related attributes such as “has this consumer had a bankruptcy in the last two years?” or “does the consumer have an existing mortgage account?” to segment consumers into risk-tier buckets. Lenders also evaluate information from the application, such as income or number of years at the current residence. These application attributes give the lender insight that is not typically captured in a traditional risk score. Lenders who already have a relationship with a customer will look at their existing relationships with that customer before making a decision. They’ll look at things like payment history and current product mix to better understand whom to cross-sell, up-sell or, in today’s economy, down-sell. In addition, many lenders will run the applicant through some type of fraud database to ensure the person really is who they say they are. I like to think of the score as the center of the decision, with all of these other metrics as necessary inputs to the entire decision process. It’s like going out for an ice cream sundae: you start with the vanilla, but you need all the mix-ins to make it complete.

Published: June 21, 2009 by Guest Contributor

-- By Kari Michel

What is your credit risk score? Is it 300, 700, 900 or something in between? To understand what it means, you need to know which score you are referencing. Lenders use many different scoring models to determine who qualifies for a loan and at what interest rate. Experian, for example, has developed many scores, such as VantageScore®; think of VantageScore® as just one of many credit scores available in the marketplace.

While all credit risk models have the same purpose, to use credit information to assess risk, each model is unique: each has its own proprietary formula that combines and weighs various pieces of information from your credit report. Even if lenders used the same credit risk score, the interpretation of risk depends on the lender, and their lending policies and criteria may vary. Additionally, each credit risk model has its own score range, and even when two ranges look similar, the meaning of a given number may not be the same. A 640 in one score may not represent the same credit risk as a 640 in another, and it is also possible for two different numbers from two different scores to represent the same level of risk. If you have a good credit score with one lender, you will likely have a good score with other lenders, even if the number is different.
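One way to see why the same level of risk can map to different numbers is to imagine two models that simply place a borrower's relative risk standing on different numeric ranges. Real scoring formulas are proprietary and far from linear; the mapping below is purely illustrative (the 501-990 range matches VantageScore's original scale, and 300-850 is another widely used range).

```python
def to_scale(risk_percentile, lo, hi):
    """Map a borrower's risk standing (0.0 = riskiest, 1.0 = least
    risky) onto a score range. Illustrative linear mapping only --
    actual credit models use proprietary, nonlinear formulas."""
    return round(lo + risk_percentile * (hi - lo))

p = 0.68  # one borrower, one underlying risk standing
score_a = to_scale(p, 501, 990)  # a 501-990 range model
score_b = to_scale(p, 300, 850)  # a 300-850 range model
print(score_a, score_b)
```

The two numbers differ by more than 150 points, yet by construction they describe exactly the same borrower, which is the post's point about comparing scores across models.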

Published: June 16, 2009 by Guest Contributor

Back during World War I, the concept of “triage” was first introduced to the battlefield. Faced with massive casualties and limited medical resources, a system was developed to identify and select those who most needed treatment and who would best respond to it. Some casualties were tagged as terminal and received no aid; others, with minimal injuries, were also passed over. Instead, medical staff focused their attention on those who required their services in order to be saved: the ones who needed, and would respond to, appropriate treatment.

Our clients realize that the collections battlefield of today requires a similar approach. They have limited resources to face a mounting wave of delinquencies and charge-offs, and they realize that they can’t throw bodies at the problem. They need to work smarter and use data and decisioning more effectively to help them survive this collections efficiency battle. Some accounts will never “cure” no matter what you do. Others will self-cure with minimal or no active effort. Taking the right actions on the right accounts, with the right resources, at the right time is best accomplished with advanced segmentation that employs behavioral scoring, bureau-based scores and other relevant account data. The actual data and scores that should be used depend on the situation and account status; there is no one-size-fits-all approach.

Published: May 29, 2009 by Guest Contributor

How is your financial institution/organization working to improve your collections work stream? What are some of your keys to collections efficiency? What tools do you use to manage your collections workflow?

Published: May 29, 2009 by Guest Contributor

In addition to behavioral models, collections and account management groups need the ability to implement collections workflow strategies in order to effectively handle and process accounts, particularly when the optimization of resources is a priority. While behavioral models evaluate and measure the likelihood that an account will become delinquent or result in a loss, strategies are the specific actions taken, based on the score prediction and on other key information available when those actions are appropriate. Identifying high-risk accounts, for example, may result in strategies designed to accelerate collections management activity and execute more aggressive actions. Identifying low-risk accounts, on the other hand, can help determine when to take advantage of cost-saving actions and focus on customer retention programs. Effective strategies also address how to handle accounts that fall between the high- and low-risk extremes, as well as accounts in special categories such as first payment defaults, recently delinquent accounts, and unique customer or product segments.

To accommodate lenders with systems that cannot support either behavioral scorecards or strategies, Experian developed a powerful service bureau solution, Portfolio Management Package (PMP). To use the service, lenders send Experian customer master file data on a daily basis. Experian processes the data through the Portfolio Management Package system, which includes calculating Fast Start behavior scores and identifying special-handling accounts, and electronically delivers the recommended strategies and action codes within hours. Scoring and strategy parameters can be easily changed, as can portfolio segmentation, special-handling options and scorecard selections. PMP also supports champion/challenger testing to help users learn which strategies are most effective. Comprehensive report suites provide the critical information lenders need to design strategies and to evaluate and compare the performance of those strategies.
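The high-risk/low-risk/special-handling routing described above can be sketched as a simple rules layer on top of a behavior score. The thresholds, track names and account fields below are hypothetical illustrations, not PMP's actual parameters or the Fast Start score scale.

```python
def assign_strategy(account):
    """Route an account to a treatment track based on its behavior
    score and status flags. All thresholds and track names here are
    illustrative assumptions."""
    if account.get("first_payment_default"):
        return "special_handling"          # special category overrides score
    score = account["behavior_score"]
    if score < 500:
        return "accelerated_collections"   # high risk: aggressive outreach
    if score >= 700:
        return "self_cure_watch"           # low risk: minimal-cost contact
    return "standard_queue"                # mid risk: normal workflow

accounts = [
    {"behavior_score": 450},
    {"behavior_score": 720},
    {"behavior_score": 610, "first_payment_default": True},
]
tracks = [assign_strategy(a) for a in accounts]
print(tracks)
```

A production system would express these rules in a strategy table so analysts can change cutoffs, and champion/challenger splits, without touching code.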

Published: May 22, 2009 by Guest Contributor

Optimization is a very broad and commonly used term today, and its exact interpretation is typically driven by one’s industry experience and exposure to modern analytical tools. Webster defines optimize as “to make as perfect, effective or functional as possible.” In the risk and collections world, when we want to make our strategies as perfect as technology will allow, we need to turn to advanced mathematical engineering. More than just scoring and behavioral trending, the most powerful optimization tools leverage all available data and consider business constraints in addition to behavioral propensities for collections efficiency and collections management.

A good example of how this can be leveraged in collections is letter strategies. The cost of mailing letters is often a significant portion of the collections operational budget. After the initial letter required by the Fair Debt Collection Practices Act (FDCPA) has been sent, the question immediately becomes: “What is the best use of lettering dollars to maximize return?” With optimization technology, we can leverage historical response data while also considering factors such as the cost of each letter, the performance of each letter variation and departmental budget constraints, weighing the alternatives to determine the best possible action for each individual customer. In short, cutting-edge mathematical optimization technology answers the question: “Where is the point of diminishing returns between collections treatment effectiveness and efficiency/cost?”
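A toy version of this trade-off can be sketched as a greedy allocation: score each customer/letter pairing by its expected net return (response probability times expected payment, minus mailing cost), then fund the best picks until the budget runs out. Real optimization engines solve this with mathematical programming over many more constraints; every figure and field name below is an illustrative assumption.

```python
def plan_letters(customers, letters, budget):
    """Greedy sketch of a letter-budget allocation.

    customers: dicts with an id, an outstanding balance, and an assumed
    response probability per letter variant. letters: variant -> cost.
    Picks the net-best variant per customer, then funds the most
    valuable mailings first until the budget is exhausted.
    """
    picks = []
    for cust in customers:
        net, name, cost = max(
            ((cust["resp"][n] * cust["balance"] - c, n, c)
             for n, c in letters.items()),
        )
        if net > 0:                      # only mail when expected net return is positive
            picks.append((net, cust["id"], name, cost))
    picks.sort(reverse=True)             # most valuable mailings first
    plan, spent = [], 0.0
    for net, cid, name, cost in picks:
        if spent + cost <= budget:
            plan.append((cid, name))
            spent += cost
    return plan

letters = {"soft": 0.50, "firm": 0.75}   # variant -> mailing cost ($)
customers = [
    {"id": 1, "balance": 200.0, "resp": {"soft": 0.02, "firm": 0.03}},
    {"id": 2, "balance": 50.0, "resp": {"soft": 0.01, "firm": 0.01}},
]
plan = plan_letters(customers, letters, budget=1.00)
print(plan)  # only the mailings worth their postage get funded
```

Note how customer 2 receives no letter at all: at those assumed response rates, no variant covers its own cost, which is exactly the "point of diminishing return" the post describes.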

Published: May 14, 2009 by Guest Contributor

Currently, financial institutions focus on the existing customer base and prioritize collections to recover more cash, and to do it faster. There is also a need to invest in strategic projects with limited budgets in order to generate benefits in the very short term, and to rationalize existing strategies and processes while ensuring that optimal decisions are made at each client contact point. To meet today’s challenging conditions, financial institutions are increasingly performing business reviews with the goal of evaluating needs and opportunities to maximize the value created in their portfolios. Business reviews assess an organization’s capacity to leverage existing opportunities, as well as identify any additional capabilities that might be necessary to realize the increased benefits. An effective business review covers the following four phases:

Problem definition: Establish and qualify the key objectives of the organization, the most relevant issues to address, the constraints of the solution and the criteria for success, and summarize how value management fits into the company’s corporate and business unit strategies.

Benchmark against leading practice: Strategies, processes, tools, knowledge and people have to be measured using a review toolset tailored to the organization’s strategic objectives.

Define the opportunities and create the roadmap: The elements required to implement the opportunities and migrate to best practice should be scheduled in a phased strategic roadmap that includes an implementation plan for the proposed actions.

Achieve the benefits: An ROI-focused approach, founded on experience in peer organizations, will allow analysis of the cost-benefits of the recommended investments and quantify the potential savings and additional revenue generated. Continuous fine-tuning (absorbing the impact of market changes, looking for the next competitive edge and proactively challenging solution boundaries) will ensure the benefits are fully achieved.

Today’s blog is an extract of an article written by Burak Kilicoglu, an Experian Global Consultant. To read the entire article in the April edition of Experian Decision Analytics’ global newsletter e-news, please follow the link below: http://www.experian-da.com/news/enews_0903/Story2.html

Published: May 14, 2009 by Guest Contributor

By: Tom Hannagan

As I prepare to travel to the Baker Hill Solution Summit next week, I thought I would revisit the ideas behind risk-based loan pricing.

Risk-Adjusted Loan Pricing: The Major Parts

I have referred to risk-adjusted commercial loan pricing (or the lack of it) in previous posts. At times I’ve commented on aspects of risk-based pricing and risk-based bank performance measurement, but I haven’t discussed what risk-based pricing is in a comprehensive manner. Perhaps I can begin to do that now, and in my next posts.

Risk-based pricing analysis is a product-level microcosm of risk-based bank performance. You begin by looking at the financial implications of a product sale from a cost accounting perspective. This means calculating the revenues associated with a loan, including the interest income and any fee-based income. These revenues need to be spread over the life of the loan, taking into account the amortization characteristics of the balance (or average usage for a line of credit). To save effort (and to support good client relationship management), we often download the balance and rate information for existing loans from a bank’s loan accounting system.

To risk-adjust the interest income, you need to apply a cost of funds that has the same implied market risk characteristics as the loan balance. This is not the bank’s actual cost of funds, for several reasons. Most importantly, there is usually no automatic risk-based matching between the manner in which the bank makes loans and the term characteristics of its deposits and/or borrowing. Once we establish a cost-of-funds approach that removes interest rate risk from the loan, we subtract the risk-adjusted interest expense from the revenues to arrive at risk-adjusted net interest income, or our risk-adjusted gross margin.

We then subtract two types of costs. The first is the administrative or overhead expense associated with the product.
Our best practice is to derive an approach to operating expense breakdowns that takes into account all of the bank’s non-interest expenses. This is a “full absorption” method of cost accounting. We want to know the marginal cost of doing business, but if we applied only the marginal cost to all loans, a large portion of real-life expenses wouldn’t be covered by the resulting pricing, and the bank’s profits may suffer. We fully understand the argument for marginal cost coverage, but we have seen the unfortunate end result when too many sales priced at this lower cost factor hurt a bank’s bottom line. Administrative cost does not normally require additional risk adjustment, as any risk-based operational expenses and costs of mitigating operational risk are already included in the bank’s general ledger of non-interest expenses.

The second expense subtracted from net interest income is credit risk cost. This is not the same as the bank’s provision expense, and it is certainly not the same as the loss provision in any one accounting period. The credit risk cost for pricing purposes should be risk-adjusted based on both product type (usually loan collateral category) and the bank’s risk rating for the loan in question. This metric combines the relative probability of default for the borrower with the loss given default for the loan type in question. We usually annualize the expected loss numbers by taking into account a multi-year history and a one- or two-year projection of net loan losses. These losses are broken down by loan type and risk rating based on the bank’s actual distribution of loan balances. The risk costs by risk rating are then created using an up-sloping curve similar in shape to an industry default experience curve. This ensures a realistic differentiation of losses by risk rating. Many banks have loss curves that are too flat, resulting in little or no price differentiation based on credit quality.
This leads to poor risk-based performance metrics and, ultimately, to poor overall financial performance. The loss expense curves are fine-tuned so that, over a period of years, the total credit risk costs applied to the entire portfolio cover the bank’s average annual expected loss experience. By subtracting the operating expenses and credit risk cost from risk-adjusted net interest income, we arrive at risk-adjusted pre-tax income. In my next post, we’ll expand this discussion to risk-adjusted net income, capital allocation for unexpected loss and profit ratio considerations.
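The income walk described in these posts (revenues, minus a term-matched cost of funds, minus fully absorbed operating expense, minus expected credit loss) can be sketched in a few lines. Every rate, and the PD/LGD pairing, is an illustrative assumption, not any bank's actual cost factors or loss curve.

```python
def risk_adjusted_pretax(loan):
    """Sketch of the risk-adjusted pre-tax income walk. All rates are
    annualized fractions of the loan balance; figures are illustrative
    assumptions only."""
    bal = loan["balance"]
    revenue = bal * loan["rate"] + loan["fees"]       # interest + fee income
    funding = bal * loan["matched_funds_rate"]        # term-matched cost of funds
    nii = revenue - funding                           # risk-adjusted net interest income
    opex = bal * loan["opex_rate"]                    # fully absorbed overhead
    credit_cost = bal * loan["pd"] * loan["lgd"]      # expected loss = PD x LGD
    return nii - opex - credit_cost

loan = {
    "balance": 1_000_000,
    "rate": 0.065,                # 6.5% coupon
    "fees": 5_000,
    "matched_funds_rate": 0.035,  # removes interest rate risk from the margin
    "opex_rate": 0.010,
    "pd": 0.02,                   # probability of default for this risk rating
    "lgd": 0.40,                  # loss given default for this collateral type
}
income = risk_adjusted_pretax(loan)
print(income)
```

Note how a worse risk rating (a higher assumed PD on the up-sloping loss curve) flows straight through to a lower pre-tax income, which is exactly why flat loss curves produce little price differentiation by credit quality.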

Published: April 24, 2009 by Guest Contributor

1. Portfolio Management – You should really focus on this topic in 2009. With many institutions already streamlining the origination process, portfolio management is the logical next step. While the foundation is based in credit quality, portfolio management is not just for the credit side.

2. Review of Data (aka “Getting Behind the Numbers”) – We are not talking about scorecard validation; that’s another subject. This is more general. Traditional commercial lending rarely maintains a sophisticated database on its clients, and even when it does, it rarely analyzes the data.

3. Lowering Costs of Origination – Always a shoo-in for a goal in any year! But how does an institution make meaningful and marked improvements in reducing its costs of origination?

4. Scorecard Validation – Getting more specific with the review of data. Discuss the basic components of the validation process and what your institution can do to best prepare itself for analyzing the results of a validation. Whether it is an interim validation or a full-sized one, put together the right steps to ensure your institution derives the maximum benefit from its scorecard.

5. Turnaround Times (Response to Client) – Rebuild the origination process: better, stronger and faster. No, we aren’t talking about bionics here, nor about how you can manipulate the metrics to report a faster turnaround time. We are talking about what you can do, from the loan applicant’s perspective, to improve turnaround time.

6. Training – Where are all the training programs? Send in all the training programs! Worry, because they are not here. (Replace training programs with clowns and we might have an oldies song.) Can’t find the right people with the right talent in the marketplace?

7. Application Volume/Marketing/Relationship Management – You can design and execute the most efficient origination and portfolio management processes. But without addressing client and application volume, what good are they?

8. Pricing/Yield on Portfolio – “We compete on service, not price.” We’ve heard this over and over again. In reality, the sales side always resorts to price as the final differentiator. Utilizing standardization and consistency can streamline your process and drive improved yields on your portfolio.

9. Management Metrics – How do I know that I am going in the right direction? Strategize, implement, execute, measure and repeat. Learn how to set your targets to provide meaningful bottom-line results.

10. Operational Risk Management – Different from credit risk and its management, operational risk management deals with what an institution should do to make sure it is not exposed to operational risk in the portfolio. Items totally within the control of the institution, if not executed properly, can cause significant loss.

What do you think? As the end of April approaches, are these still hot topics in your financial institution?

Published: April 24, 2009 by Guest Contributor

The debate continues in the banking industry: do we push loan authority to the field, or do we centralize it (particularly for small business loans)? A common argument for sending loan authority to the field is improved turnaround time for the applicant. In reality, however, centralized loan authority provides a decision almost twice as fast as a decentralized approach. The statistics supporting this come from the Small Business Benchmark Study, created and published by Baker Hill, a part of Experian, for the past five years.

Based upon the 2008 Small Business Benchmark Study, institutions with assets of $20 billion to $100 billion used only centralized underwriting and provided decisions within 2.5 days on average. In contrast, the next closest category ($2 billion to $20 billion in assets) took 4.4 days. If we consider only the time it takes to make a decision (meaning all the needed information is in hand), the same disparity exists: the largest banks, using solely centralized underwriting, took 0.8 days to make a decision, while the next tier ($2 billion to $20 billion) took an average of 1.5 days. The drop in centralized underwriting usage between these two tiers was just 15 percentage points: the $20 billion to $100 billion banks had 100 percent usage of centralized underwriting, while the $2 billion to $20 billion tier dropped only to 85 percent. Eighty-five percent is still strong usage, but the difference has a significant impact on turnaround time.

The most perplexing issue is that smaller community banks consistently tell me they feel their competitive advantages are that they can respond faster and know their clients better than bigger, impersonal banks. Based upon the stats, I am not seeing this competitive advantage supported by reality.
What is particularly confusing is that the small community banks, which are supposed to be closest to the client, take twice as long overall from application receipt to decision -- and almost three times as long on decision time alone when you compare the $20 billion to $100 billion category (0.8 days) to the $500 million to $2 billion category (2.2 days). As you can see, centralized underwriting works. It is consistent, and it provides improved customer service, improved throughput, increased efficiency and improved credit quality compared to the decentralized approach. In future blogs, I will address the credit quality component.

Published: April 24, 2009 by Guest Contributor

The way in which you communicate with your customers really does impact the effectiveness of your collections operation. At the heart of any collections management operation is the quality of the correspondence and, in particular, the tone of voice adopted with the debtor. In short, what you say is important, but how you say it has a critical impact on its effectiveness. To help guide best practice in this area and provide points to consider when designing and implementing customer letters within a collections strategy, Experian commissioned a study to explore how consumers react to the words used to communicate with them about their debt.

Key findings:

- An appropriate tone, clear detail of the consequences and a conciliatory approach are effective in the early phases of collection
- Fees and charges, and negative impacts on credit ratings, were key motivators to pay
- Charges applied to an account for issuing a letter are disliked and likely to encourage many to contact the organisation to express their frustration
- After three months, a strong emphasis on serious action is appropriate, including reference to legal action or debt collection agency involvement
- Support should be offered, wherever possible, to aid those in difficulty
- Letters should avoid an informal or patronising tone
- Lengthy letters have a low impact and are often not fully read, resulting in important messages being missed
- Use of red to highlight and focus on a specific point is effective
- Use of red to highlight more than one point is counter-effective

To download the entire paper* and view other best practice briefings, follow the link below to the global Experian Decision Analytics collections briefing papers page: http://www.experian-da.com/resources/briefingpapers.html

* Secure download account required. You can sign up for one today - FREE.

Published: April 24, 2009 by Guest Contributor

2007 and 2008 saw a rapid change in consumer behavior, and it is no surprise to most collections professionals that the existing collections scoring models and strategies are not working as well as they used to. These tools and collections workflow practices were mostly built from historical behavioral and credit data, and they assume that consumers will continue to behave as they have in the past. We all know that this is no longer the case; one example is the prioritization of debt and repayment patterns. It had been assumed, and validated for decades, that consumers would let their credit card lines go before an auto loan, and that the mortgage obligation would be the last trade left standing before bankruptcy. Today, that is certainly not the case, and other significant behavioral shifts are contributing to today's weak business models.

There are at least three compelling reasons to believe now is the right time for updates:

1. Most of the consumer behavioral shift appears to be over for collections. While economic recovery will take many years, more radical changes in the economy are unlikely. Most experts are calling for a housing bottom sometime in 2009, and there are already signs of hope on Wall Street.

2. What is built now shouldn't be obsolete next year. A slow economic recovery probably means the life of new models will be fairly long, and most consumers won't be able to improve their credit and collections scores anytime soon. Even after financial recovery (which at this point is not likely over the short term for many who are already in trouble), it can take two to seven years of responsible payment history before a risk assessment improves.

3. We now have the data with which to make the updates. It takes six to 12 months of stability to accumulate sufficient data for proper analysis, and so far 2009 hasn't seen much behavioral volatility. Whether you build or buy, the process takes a while, so if you still need a few more months of history, it will be in hand when needed, provided the projects are kicked off soon.

Published: April 24, 2009 by Guest Contributor
