In today’s age of digital transformation, consumers have easy access to a variety of innovative financial products and services. From lending to payments to wealth management and more, there is no shortage of financial products gaining popularity with consumers. But one market segment in particular – unsecured personal loans – has grown exceptionally fast.

According to a recent Experian study, personal loan originations have increased 97% over the past four years, with the fintech share rapidly increasing from 22.4% to 49.4% of total loans originated. Arguably, the rapid acceleration in personal loans is heavily driven by the rise in digital-first lending options, which have grown in popularity thanks to fintech challengers. Fintechs have earned their position in the market by leveraging data, advanced analytics and technology to disrupt existing financial models. Meanwhile, traditional financial institutions (FIs) have taken notice and are beginning to adopt some of the same methods and alternative credit approaches.

With this evolution of technology fused with financial services, how are fintechs faring against traditional FIs? The infographic below uncovers industry trends and key metrics in unsecured personal installment loans.

Still curious? Click here to download our latest eBook, which further uncovers emerging trends in personal loans through side-by-side comparisons of fintech and traditional FI market share, portfolio composition, customer profiles and more.

Published: September 17, 2019 by Brittany Peterson

Earlier this year, the Consumer Financial Protection Bureau (CFPB) issued a Notice of Proposed Rulemaking (NPRM) to implement the Fair Debt Collection Practices Act (FDCPA). The proposal, which goes into deliberation in September and won’t be finalized until sometime after that, would provide consumers with clear-cut protections against harassment by debt collectors and straightforward options to address or dispute debts. Additionally, the NPRM would set strict limits on the number of calls debt collectors may place to reach consumers weekly, as well as clarify how collectors may communicate lawfully using technologies developed after the FDCPA’s passage in 1977.

So, what does this mean for collectors? The compliance conundrum is ever present, especially in the debt collection industry. Debt collectors are expected to continuously adapt to changing regulations, forcing them to spend time, energy and resources on maintaining compliance. As the most recent onslaught of developments and proposed rules has been pushed out to the financial community, compliance professionals are once again working to implement changes. According to the Federal Register, here are some key ways the new regulation would affect debt collection:

Limited to seven calls: Debt collectors would be limited to attempting to reach consumers by phone about a specific debt no more than seven times per week.
Ability to unsubscribe: Consumers who do not wish to be contacted via newer technologies, including voicemails, emails and text messages, must be given the option to opt out of future communications.
Use of newer technologies: Newer communication technologies, such as emails and text messages, may be used in debt collection, with certain limitations to protect consumer privacy.
Required disclosures: Debt collectors would be obligated to send consumers a disclosure with certain information about the debt and related consumer protections.
Limited contact: Consumers would be able to limit the ways debt collectors contact them, for example at a specific telephone number, while they are at work or during certain hours.

Now that you know the details, how can you prepare? At Experian, we understand the importance of an effective collections strategy. Our debt collection solutions automate and moderate dialogues and negotiations between consumers and collectors, making it easier for collection agencies to reach consumers while staying compliant.

Powerful locating solution: Locate past-due consumers more accurately, efficiently and effectively. TrueTraceSM adds value to each contact by increasing your right-party contact rate.
Exclusive contact information: Mitigate your compliance risk with a seamless and unparalleled solution. With Phone Number IDTM, you can identify who a phone is registered to, the phone type, carrier and the activation date.

If you aren’t ready for the new CFPB regulation, what are you waiting for? Learn more

Note: Click here for an update on the CFPB's proposal.
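The seven-call cap described above is, at its core, a per-debt rolling-window counter. The sketch below shows one hypothetical way a collections platform might enforce it; the class, method names and window semantics are illustrative assumptions, not a description of any Experian product, and the final rule's counting details may differ.

```python
from collections import defaultdict
from datetime import date, timedelta

WEEKLY_CALL_CAP = 7  # proposed limit on call attempts per debt per week


class CallAttemptLog:
    """Tracks outbound call attempts per (consumer, debt) pair."""

    def __init__(self):
        # (consumer_id, debt_id) -> list of dates an attempt was made
        self._attempts = defaultdict(list)

    def may_call(self, consumer_id, debt_id, today):
        """Return True if another attempt would stay under the cap
        for the rolling 7-day window ending today."""
        window_start = today - timedelta(days=6)
        recent = [d for d in self._attempts[(consumer_id, debt_id)]
                  if d >= window_start]
        return len(recent) < WEEKLY_CALL_CAP

    def record_attempt(self, consumer_id, debt_id, today):
        """Log an attempt so future checks see it."""
        self._attempts[(consumer_id, debt_id)].append(today)
```

Under this sketch, an eighth attempt on the same debt within the window is blocked, while attempts about a different debt, or after the window rolls past the earlier calls, are still allowed.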

Published: August 19, 2019 by Laura Burrows

It’s been over 10 years since the start of the Great Recession. However, its widespread effects are still felt today. While the country has rebounded in many ways, its economic damage continues to influence consumers. Discover the Great Recession’s impact across generations in the infographic below.

Americans of all ages have felt the effects of the Great Recession, making it imperative to begin recession-proofing and better prepare for the next economic downturn. There are several steps your organization can take to become recession resistant and help your customers overcome personal financial difficulties. Are you ready should the next recession hit? Get started today

Published: July 22, 2019 by Laura Burrows

You can do everything possible to prepare for the unexpected. But similar to how any first-time parent feels… you might need some help. Call in the grandparents! Experian has extensive expertise and a long history in the industry, but unlike your traditional grandparents, Experian continuously innovates, researches trends and validates best practices in fraud and identity verification. That’s why we explored two prominent fraud reports, Javelin’s 2019 Identity Fraud Study: Fraudsters Seek New Targets and Victims Bear the Brunt and Experian’s 2019 Global Identity and Fraud Report — Consumer trust: Building meaningful relationships online, to help you identify and respond to new trends surrounding fraud. What we found – and what you need to know – is that there are trends, technology and tactics that can both help and hinder your fraud-prevention efforts.

Consider the many digital channels available today. A full 91 percent of consumers transacted online in 2018. This presents a great opportunity for businesses to serve and develop relationships with customers. It presents a great opportunity for fraudsters as well – almost half of consumers have experienced a fraudulent online event. Because the threat of fraud is not dampening customers’ willingness to transact online, businesses are responsible for adapting and evolving, not only to protect their customers but to secure their bottom line. This becomes increasingly important as fraudsters continue to target and expose vulnerabilities across inexperienced lines of business.

Or consider passwords. Research has shown that both businesses and consumers have greater confidence in biometrics, but neither is ready to stop using passwords. The continued reliance on traditional authentication methods is a delicate balance among security, trust and convenience. Passwords provide both authentication and consumer confidence in the online experience. They also add friction to the user experience – and sometimes aggravation when passwords are forgotten. Advanced methods, like physical and behavioral biometrics and device intelligence, are gaining confidence among both businesses and consumers. But a completely frictionless authentication experience can leave consumers doubting the safety of their transaction.

As you respond and adapt to our ever-evolving world, we encourage you to build and strengthen trusted relationships with your customers through transparency. Consumers know that businesses are collecting data about them. When a business is transparent about the use of that data, digital trust and consumer confidence soar. Through a stronger relationship, customers are more willing to accept friction and need fewer visible signs of security.

Learn more about these and other trends, technology and tactics that can help and hinder your authentication efforts in our new e-book, Upcoming fraud trends and how to combat them.

Published: July 11, 2019 by Guest Contributor

Alex Lintner, Group President at Experian, recently had the chance to sit down with Peter Renton, creator of the Lend Academy Podcast, to discuss alternative credit data,1 UltraFICO, Experian Boost and expanding the credit universe. Lintner spoke about why Experian is determined to be the leader in bringing alternative credit data to the forefront of the lending marketplace to drive greater access to credit for consumers. “To move the tens of millions of ‘invisible’ or ‘thin file’ consumers into the financial mainstream will take innovation, and alternative data is one of the ways which we can do that,” said Lintner.

Many U.S. consumers do not have a credit history or enough record of borrowing to establish a credit score, making it difficult for them to obtain credit from mainstream financial institutions. To ease access to credit for these consumers, financial institutions have sought ways to both extend and improve the methods by which they evaluate borrowers’ risk. By leveraging machine learning and alternative data products, like Experian BoostTM, lenders can get a more complete view into a consumer’s creditworthiness, allowing them to make better decisions and consumers to more easily access financial opportunities.

Highlights include:
The impact of Experian Boost on consumers’ credit scores
Experian’s take on the state of the American consumer today
Leveraging machine learning in the development of credit scores
Expanding the marketable universe

Listen now Learn more about alternative credit data

1When we refer to “Alternative Credit Data,” this refers to the use of alternative data and its appropriate use in consumer credit lending decisions, as regulated by the Fair Credit Reporting Act. Hence, the term “Expanded FCRA Data” may also apply in this instance and both can be used interchangeably.

Published: July 1, 2019 by Laura Burrows

Financial institutions preparing for the launch of the Financial Accounting Standards Board’s (FASB) new current expected credit loss model, or CECL, may have concerns when it comes to preparedness, implications and overall impact. Gavin Harding, Experian’s Senior Business Consultant, and Jose Tagunicar, Director of Product Management, tackled some of the tough questions posed by the new accounting standard. Check out what they had to say:

Q: How can financial institutions begin the CECL transition process?
JT: To prepare for the CECL transition process, companies should conduct an operational readiness review, which includes:
Analyzing your data for existing gaps.
Determining important milestones and preparing for implementation with a detailed roadmap.
Running different loss methods to compare results. Once losses are calculated, you’ll want to select the best methodology based on your portfolio.

Q: What is required to comply with CECL?
GH: Complying with CECL may require financial institutions to gather, store and calculate more data than before. To satisfy CECL requirements, financial institutions will need to focus on end-to-end data management, determine estimation approaches that will produce reasonable and supportable forecasts, and automate their technology and platforms. Additionally, well-documented CECL estimations will require integrated workflows and incremental governance.

Q: What should organizations look for in a partner that assists in measuring expected credit losses under CECL?
GH: It’s expected that many financial institutions will use third-party vendors to help them implement CECL. Third-party solutions can help institutions prepare for the organizational and operational implications by developing an effective data strategy plan and quantifying the impact of various forecasted conditions. The right third-party partner will deliver an integrated framework that empowers clients to optimize their data, enhance their modeling expertise and ensure that policies and procedures supporting model governance comply with regulations.

Q: What is CECL’s impact on financial institutions? How does the impact differ for credit unions and smaller lenders (if at all)?
GH: CECL will have a significant effect on financial institutions’ accounting, modeling and forecasting. It also heavily impacts their allowance for credit losses and financial statements. Financial institutions must educate their investors and shareholders about how CECL-driven disclosure and reporting changes could potentially alter their bottom line. CECL’s requirements entail data that most credit unions and smaller lenders haven’t been actively storing and saving, leaving them with historical data that may not have been recorded or will be inaccessible when it’s needed for a CECL calculation.

Q: How can Experian help with CECL compliance?
JT: At Experian, we have one simple goal in mind when it comes to CECL compliance: how can we make it easier for our clients? Our Ascend CECL ForecasterTM, in partnership with Oliver Wyman, allows our clients to create CECL forecasts in a fraction of the time it normally takes, using a simple, configurable application that accurately predicts expected losses. The Ascend CECL Forecaster enables you to:
Fulfill data requirements: We don’t ask you to gather, prepare or submit any data. The application is built on Experian’s extensive historical data, delivered via the Ascend Technology PlatformTM, economic data from Oxford Economics, and the auto and home valuation data needed to generate CECL forecasts for each unsecured and secured lending product in your portfolio.
Leverage innovative technology: The application uses advanced machine learning models built on 15 years of industry-leading credit data using high-quality Oliver Wyman loan-level models.
Simplify processes: One of the biggest challenges our clients face is the amount of time and analytical effort it takes to create one CECL forecast, much less several that can be compared for optimal results. With the Ascend CECL Forecaster, creating a forecast is a simple process that can be delivered quickly and accurately.

Q: What are immediate next steps?
JT: As mentioned, complying with CECL may require you to gather, store and calculate more data than before. Therefore, it’s important that companies act now to better prepare. Immediate next steps include:
Establishing your loss forecast methodology: CECL will require a new methodology, making it essential to take advantage of advanced statistical techniques and third-party solutions.
Making additional reserves available: It’s imperative to understand how CECL impacts both revenue and profit. According to some estimates, banks will need to increase their reserves by up to 50% to comply with CECL requirements.
Preparing your board and investors: Make sure key stakeholders are aware of the potential costs and profit impacts that these changes will have on your bottom line.

Speak with an expert
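The forecasting concepts in this Q&A can be made concrete with a toy calculation. To be clear, this is not the Ascend CECL Forecaster’s methodology: the function below, its straight-line amortization and its constant annual default probability are simplifying assumptions chosen only to show how default probability, loss severity and exposure combine into a lifetime expected-loss estimate.

```python
def lifetime_expected_loss(balance, annual_pd, lgd, term_years, discount_rate):
    """Toy lifetime expected-credit-loss estimate for a single loan.

    Assumes a straight-line amortizing balance, a constant annual
    probability of default (annual_pd) and a fixed loss-given-default
    (lgd). Real CECL models use far richer, segment-level inputs.
    """
    ecl = 0.0
    survival = 1.0  # probability the loan has not yet defaulted
    for year in range(term_years):
        # remaining exposure at the start of the year
        exposure = balance * (1 - year / term_years)
        # expected loss this year, discounted back to today
        ecl += (survival * annual_pd * lgd * exposure
                / (1 + discount_rate) ** (year + 1))
        survival *= (1 - annual_pd)
    return ecl
```

With a $10,000 four-year loan, a 2 percent annual default probability and a 45 percent loss-given-default, the undiscounted estimate comes to roughly $220, which illustrates why reserve levels are so sensitive to the forecast inputs.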

Published: June 12, 2019 by Laura Burrows

You’ve Got Mail! Probably a lot of it. Birthday cards from Mom, a graduation announcement from your third cousin’s kid whose name you can’t remember and a postcard from your dentist reminding you you’re overdue for a cleaning. Adding to your pile are the nearly 850 pieces of unsolicited mail Americans receive annually, according to Reader’s Digest. Many of these are pre-approval offers or invitations to apply for credit cards or personal loans. While many of these offers are getting to the right mailbox, they’re hitting a changing consumer at the wrong time. The digital revolution, along with the proliferation and availability of technology, has empowered consumers. They now have access not only to an abundance of choices but also to a litany of new tools and channels, which results in them making faster, sometimes subconscious, decisions.

Three Months Too Late
The need to consistently stay in front of customers and prospects with the right message at the right time has caused a shortening of campaign cycles across industries. However, for some financial institutions, the customer acquisition process can take up to 120 days! While this timeframe is extreme, customer prospecting can still take around 45-60 days for most financial institutions and includes:
Bureau processing: Regularly takes 10-15 days, depending on the number of data sources and each time they are requested from a bureau.
Data aggregation: Typically takes anywhere from 20-30 days.
Targeting and selection: Generally takes two to five days.
Processing and campaign deployment: Usually takes anywhere from three days, if the firm handles it internally, up to 10 days if an outside company handles the mailing.
That means for many firms, the data their customer acquisition campaigns are based on is at least 60 days old. Often, they are now dealing with a completely different consumer. With new card originations up 20% year-over-year in 2019 alone, it’s likely consumers have moved on, perhaps to one of your competitors.

A Better Way
It’s time financial institutions make the move to a more modern form of prospecting and targeting that leverages the power of cloud technology, machine learning and artificial intelligence to accelerate and improve the marketing process. Financial marketing systems of the future will allow for advanced segmentation and targeting, dynamic campaign design and immediate deployment, all based on the freshest data (no more than 24-48 hours old). These systems will allow firms to do ongoing analytics and modeling so their campaign testing and learning results can immediately influence next-cycle decisions. Your customers are changing. Isn’t it time the way you market to them changes as well?

Published: May 29, 2019 by Jesse Hoggard

Be warned. I’m a Philadelphia sports fan, and even after 13 months, I still relish the only Super Bowl victory I’ve ever known as a fan. Having spent more than two decades in fraud prevention, I find that Super Bowl LII is coalescing in my mind with fraud prevention and lessons in defense more and more. Let me explain:

It’s fourth-and-goal from the one-yard line. With less than a minute on the clock in the first half, the Eagles lead, 15 to 12. The easy option is to kick the field goal, take the three points and come back with a six-point advantage. Instead of sending out the kicking squad, the Eagles offense stays on the field to go for a touchdown. Broadcaster Cris Collinsworth memorably says, “Are they really going to go for this? You have to take the three!” On the other side are the New England Patriots, winners of two of the last three Super Bowls. Love them or hate them, the Patriots under coach Bill Belichick are more likely than any team in league history to prevent the Eagles from scoring at this moment.

After the offense sets up, quarterback Nick Foles walks away from his position in the backfield to shout instructions to his offensive line. The Patriots are licking their chops. The play starts, and the ball is snapped — not to Foles as everyone expects, but to running back Corey Clement. Clement takes two steps to his left and tosses the ball to tight end Trey Burton, who’s running in the opposite direction. Meanwhile, Foles pauses as if he’s not part of the play, then trots lazily toward the end zone. Burton lobs a pass over pursuing defenders into Foles’ outstretched hands. This is the “Philly Special” — touchdown!

Let me break this down: A third-string rookie running back takes the snap and makes a perfect toss — on the run — to an undrafted tight end. The tight end, who hasn’t thrown a pass in a game since college, then throws a touchdown pass to a backup quarterback who hasn’t caught a ball in any athletic event since he played basketball in high school. A play that had never been run by the Eagles, led by a coach who was criticized as the worst in pro football just a year before, is perfectly executed under the biggest spotlight against the most dominant team in NFL history.

So what does this have to do with fraud? There’s currently an outbreak of breach-fueled credential stuffing. In the past couple of months, billions of usernames and passwords stolen in various high-profile data breaches have been compiled and made available to criminals in data sets described as “Collections 1 through 5.” Criminals acquire credentials in large numbers and attack websites by attempting to log in with each set — effectively “stuffing” the server with login requests. Based on consumers’ propensity to reuse login credentials, the criminals succeed and gain access to a customer account somewhere between 1 in 1,000 and 1 in 50 attempts. Using readily available tools, basic information like IP address and browser version is easy enough to alter or conceal, making the attack harder to detect.

Credential stuffing is like the Philly Special:
Credential stuffing doesn’t require a group of elite all-stars. Like the Eagles’ players with relatively little experience executing their roles in the Philly Special, criminals with some computer skills, some initiative and the guts to try credential stuffing can score.
The best-prepared defense isn’t always enough. The Patriots surely did their homework. They set up their defense to stop what they expected the Eagles to do based on extensive research. They knew the threats posed by every Eagle on the field. They knew what the Eagles’ coaches had done in similar circumstances throughout their careers. The defense wasn’t guessing. They were as prepared as they could have been.
It’s the second point that worries me when I think of credential stuffing. Consumers reuse online credentials with alarming frequency, so a stolen set of credentials is likely to work across multiple organizations, possibly even yours. On top of that, traditional device recognition like cookies can’t identify and stop today’s sophisticated fraudsters. The best-prepared organizations feel great about their ability to stop the threats they’re aware of. Once they’ve seen a scheme, they make investments, improve their defenses, and position their players to recognize a risk and stop it. Sometimes past expertise won’t stop the play you can’t see coming.  
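One simple way to surface the pattern described above: because stuffed credentials fail the vast majority of the time, an attacking source shows a volume and failure rate that no population of forgetful legitimate users would. The sketch below is a hypothetical heuristic only; the function name, thresholds and input format are invented for illustration, and real defenses layer in device intelligence and behavioral signals precisely because IP and browser data are so easy to fake.

```python
from collections import Counter


def flag_stuffing_sources(attempts, min_attempts=20, max_failure_rate=0.5):
    """Flag login sources whose volume and failure rate suggest
    credential stuffing rather than ordinary mistyped passwords.

    attempts: iterable of (source_id, succeeded) pairs, where
    source_id might be an IP address or device fingerprint.
    """
    total = Counter()
    failures = Counter()
    for source, ok in attempts:
        total[source] += 1
        if not ok:
            failures[source] += 1
    # A source is suspicious only if it is both high-volume and
    # overwhelmingly unsuccessful.
    return {
        source for source, count in total.items()
        if count >= min_attempts and failures[source] / count > max_failure_rate
    }
```

The thresholds here would need tuning per site: a 1-in-1,000 stuffing success rate means failure rates near 100 percent, while legitimate users who forget a password typically succeed within a handful of tries.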

Published: March 28, 2019 by Chris Ryan

With scarce resources and limited experience available in the data science field, a majority of organizations are partnering with outside firms to fill gaps within their teams. A report compiled by Hexa Research found that the data analytics outsourcing market is set to expand at a compound annual growth rate of 30 percent between 2016 and 2024, reaching annual revenues of more than $6 billion. With data science becoming a necessity for success, outsourcing these specific skills will be the way of the future. When working with outside firms, you may be given the option between offshore and onshore resources. But how do you decide? Let’s discuss a few things you can consider.

Offshore
A well-known benefit of using offshore resources is lower cost. Offshore resources provide a larger pool of talent, which includes those who have specific analytical skills that are becoming rare in North America. By partnering with outside firms, you also expose your organization to global best practices by learning from external resources who have worked in different industries and locations. If a partner is investing research and development dollars into specific data science technology or new analytics innovations, you can use this knowledge and apply it to your business. With every benefit, however, there are challenges. Time zone differences and language barriers are things to consider if you’re working on a project that requires a large amount of collaboration with your existing team. Security issues need to be addressed differently when using offshore resources. Lastly, reputational risk also can be a concern for your organization. In certain cases, there may be a negative perception — both internally and externally — of moving jobs offshore, so it’s important to consider this before deciding.

Onshore
While offshore resources can save your organization money, there are many benefits to hiring onshore analytical resources. Many large projects require cross-functional collaboration. If collaboration is key to the projects you’re managing, onshore resources can more easily blend with your existing resources because of time zone similarities, reduced communication barriers and a stronger cultural fit with your organization. In the financial services industry, there also are regulatory guidelines to consider. Offshore resources often may have the skills you’re looking for but don’t have a complete understanding of our regulatory landscape, which can lead to larger problems in the future. Hiring resources with this type of knowledge will help you conduct the analysis in a compliant manner and reduce your overall risk.

All of the above
Many of our clients — and we ourselves — find that an all-of-the-above approach is both effective and efficient. In certain situations, some timeline reductions can be made by having both onshore and offshore resources working on a project. Teams can include up to three different groups:
Local resources who are closest to the client and the problem
Resources in a nearby foreign country whose time zone overlaps with that of the local resources
More analytical team members around the world whose tasks are accomplished somewhat more independently
Carefully focusing on how the partnership works and how the external resources are managed is even more important than where they are located. Read 5 Secrets to Outsourcing Data Science Successfully to help you manage your relationship with your external partner. If your next project calls for experienced data scientists, Experian® can help. Our Analytics on DemandTM service provides senior-level analysts, either offshore or onshore, who can help with analytical data science and modeling work for your organization.

Published: January 14, 2019 by Guest Contributor

Subprime originations hit their lowest overall share of the market in 11 years, but does that mean people are being locked out of car ownership? Not necessarily, according to the Q3 State of the Automotive Finance Market report. To gain accurate insights from the vast amount of data available, it’s important to look at the entire picture the data creates. The decrease in subprime originations is due to many factors, one of which is that credit scores are increasing across the board (the average is now 717 for new and 661 for used), which naturally shifts more consumers into the higher credit tiers. Loan origination market share is just one of the trends covered in this quarter’s report. Ultimately, examining the data can inform lenders and help them make the right lending decisions.

Exploring options for affordability
While consumers analyze different possibilities to ensure their monthly payments are affordable, leasing is one of the more reasonable options in terms of monthly payments. In fact, the difference between the average new lease payment and new car payment usually averages more than $100—and sometimes well over—which is a significant amount for the average American budget. Leases of new vehicles are hovering around 30 percent, which is one of the factors aiding new car sales. In turn, this helps the used-vehicle market, as the high number of leases creates a larger supply of quality used vehicles when they come off-lease and make their way back into the market.

On-time payments continue to improve
As consumer preferences continue to trend toward more expensive vehicles, such as crossovers, SUVs and pickups, affordability will continue to be a topic of discussion. But consumers appear to be managing the higher prices: in addition to the tactics mentioned above, 30- and 60-day delinquency rates have declined since Q3 2017, from 2.39 percent to 2.23 percent and from 0.76 percent to 0.72 percent, respectively.

The automotive finance market is one where the old saying “no news is good news” continues to ring true. While there aren’t significant changes in the numbers quarter over quarter, this signals that the market is at a good place in its cycle. To learn more about the State of the Automotive Finance Market report, or to watch the webinar, click here.

Published: December 27, 2018 by Melinda Zabritski

Your model is only as good as your data, right? Actually, there are many considerations in developing a sound model, one of which is data. Yet if your data is bad or dirty or doesn’t represent the full population, can it be used? This is where sampling can help. When done right, sampling can lower your cost to obtain data needed for model development. When done well, sampling can turn a tainted and underrepresented data set into a sound and viable model development sample.

First, define the population to which the model will be applied once it’s finalized and implemented. Determine what data is available and what population segments must be represented within the sampled data. The more variability in internal factors — such as changes in marketing campaigns, risk strategies and product launches — and external factors — such as economic conditions or competitor presence in the marketplace — the larger the sample size needed. A model developer often will need to sample over time to incorporate seasonal fluctuations in the development sample. The most robust samples are pulled from data that best represents the full population to which the model will be applied. It’s important to ensure your data sample includes customers or prospects declined by the prior model and strategy, as well as approved but nonactivated accounts. This ensures full representation of the population to which your model will be applied. Also, consider the number of predictors or independent variables that will be evaluated during model development, and increase your sample size accordingly.

When it comes to spotting dirty or unacceptable data, the golden rule is know your data and know your target population. Spend time evaluating your intended population and group profiles across several important business metrics. Don’t underestimate the time needed to complete a thorough evaluation.

Next, select the data from the population to aptly represent the population within the sampled data. Determine the best sampling methodology that will support the model development and business objectives. Sampling generates a smaller data set for use in model development, allowing the developer to build models more quickly. Reducing the data set’s size decreases the time needed for model computation and saves storage space without losing predictive performance. Once the data is selected, weights are applied so that each record appropriately represents the full population to which the model will be applied.

Several traditional techniques can be used to sample data:
Simple random sampling — Each record is chosen by chance, and each record in the population has an equal chance of being selected.
Random sampling with replacement — Each record chosen by chance is included in the subsequent selection.
Random sampling without replacement — Each record chosen by chance is removed from subsequent selections.
Cluster sampling — Records from the population are sampled in groups, such as region, over different time periods.
Stratified random sampling — This technique allows you to sample different segments of the population at different proportions. In some situations, stratified random sampling is helpful in selecting segments of the population that aren’t as prevalent as other segments but are equally vital within the model development sample.

Learn more about how Experian Decision Analytics can help you with your custom model development needs.
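As a concrete illustration, several of the techniques above can be sketched in a few lines of Python using only the standard library. The function names and the record format are invented for this example, and a production workflow would also apply the population weights discussed above.

```python
import random


def simple_random_sample(records, n, seed=0):
    """Simple random sampling without replacement: each record has an
    equal chance of selection and is removed from subsequent draws."""
    rng = random.Random(seed)  # seeded for reproducible samples
    return rng.sample(records, n)


def sample_with_replacement(records, n, seed=0):
    """Each draw is independent, so a record can be selected repeatedly."""
    rng = random.Random(seed)
    return [rng.choice(records) for _ in range(n)]


def stratified_sample(records, key, fractions, seed=0):
    """Stratified random sampling: draw a different proportion from each
    segment, e.g. to over-sample a rare but vital population segment."""
    rng = random.Random(seed)
    out = []
    for stratum, frac in fractions.items():
        members = [r for r in records if key(r) == stratum]
        out.extend(rng.sample(members, round(len(members) * frac)))
    return out
```

For example, with 90 "good" and 10 "bad" accounts, stratified fractions of 10 percent and 50 percent yield a development sample in which the rare segment is deliberately over-represented, which is exactly the situation the last bullet above describes.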

Published: November 7, 2018 by Guest Contributor

As our society becomes ever more dependent on everything mobile, criminals are continually searching for and exploiting weaknesses in the digital ecosystem, causing significant harm to consumers, businesses and the economy. In fact, according to our 2018 Global Fraud & Identity Report, 72 percent of business executives are more concerned than ever about the impact of fraud. Yet despite the awareness and concern, 54 percent of businesses are only “somewhat confident” in their ability to detect fraud. That needs to change, and it needs to change right away.

Our industry has thrived by providing products and services that root out bad transactions and detect fraud with minimal consumer friction. We continue to innovate new ways to authenticate consumers, applying new cloud technologies, machine learning, self-service portals and biometrics. Yet the fraud issue still exists. It hasn’t gone away. How do we provide effective means to prevent fraud without inconveniencing everyone in the process? That’s the conundrum.

Unfortunately, a silver bullet doesn’t exist. As much as we would like to build a system that can detect all fraud and eliminate all consumer friction, we can’t. We’re not there yet. As long as money has changed hands, as long as there are opportunities to steal, criminals will find the weak points — the soft spots. That said, we are making significant progress. Advances in technology and innovation help us bring new solutions to market more quickly, with more predictive power than ever and the ability to help clients turn these services on in days and weeks.

So, what is Experian doing? We’ve been in the business of fraud detection and identity verification for more than 30 years. We’ve seen fraud patterns evolve over time, and our product portfolio evolves in lockstep to counter the newest fraud vectors.
Synthetic identity fraud, loan stacking, counterfeiting, identity theft: the specific fraud attacks may change, but our solution stack counters each of those threats. We are on a continuous innovation path, and we need to be. Our consumer and small business databases are unmatched in the industry for quality and coverage, and that is an invaluable asset in the fight against fraud.

It used to be that knowing something about a person was the same as authenticating that person. That’s just not the case today. But just because I may not be the only person who knows where I live doesn’t mean that identity information is obsolete. It is incredibly valuable, just in different ways today. And that’s where our scientists come into their own, providing complex predictive solutions that use a wealth of data and insight to deliver the ultimate in predictive performance. We go beyond traditional fraud detection methods, such as knowledge-based authentication, to offer a custom mix of passive and active authentication solutions that improve security and the customer experience.

You want the latest deep learning techniques? We have them. You want custom models scored in milliseconds alongside your existing data requests? We can do that. You want a mix of cloud deployment, dedicated hosted services and on-premise? We can do that too.

We have more than 20 partners across the globe, creating the most comprehensive identity management network anywhere. We also have teams of experts across the world with the know-how to combine Experian and partner expertise to craft a bespoke solution that is unrivaled in detection performance. The results speak for themselves: Experian analyzes more than a billion credit applications per year for fraud and identity, and we’ve helped our clients save more than $2 billion in annual fraud losses globally.
CrossCore™, our fraud prevention and identity management platform, leverages the full breadth of Experian data as well as the data assets of our partners. We execute machine learning models on every decision to help improve the accuracy and speed with which decisions are made. We’ve seen CrossCore machine learning deliver a more than 40 percent improvement in fraud detection compared with rules-based systems. Our certified partner community for CrossCore includes only the most reputable leaders in the fraud industry.

We also understand the need to expand our data to cover those who may not be credit active. We have the largest and most unique sets of alternative credit data among the credit bureaus, including our Clarity Services and RentBureau divisions. This rich data helps our clients verify an individual’s identity, even if that individual has a thin credit file. The data also helps us determine a credit applicant’s ability to pay, so that consumers are empowered to pursue the opportunities that are right for them. And in the background, our models are constantly checking for signs of fraud, so that consumers and clients feel protected.

Fraud prevention and identity management are built upon a foundation of trust, innovation and keeping the consumer at the heart of every decision. This is where I’m proud to say that Experian stands apart. We realize that criminals will continue to look for new ways to commit fraud, and we are continually striving to stay one step ahead of them. Through our unparalleled scale of data, partnerships and commitment to innovation, we will help businesses become more confident in their ability to recognize good people and transactions, provide great experiences and protect against fraud.

Published: November 6, 2018 by Steve Platt

In 2011, data scientists and credit risk managers finally found an appropriate analogy to explain what we do for a living: “You know Moneyball? What Paul DePodesta and Billy Beane did for the Oakland A’s, I do for XYZ Bank.” You probably remember the story: Oakland had to squeeze the most value out of its limited budget for hiring free agents, so it used analytics — the new baseball “sabermetrics” created by Bill James — to make data-driven decisions that were counterintuitive to the experienced scouts. Michael Lewis told the story in a book that was an incredible bestseller and led to a hit movie. The year after the movie was made, Harvard Business Review declared that data science was “the sexiest job of the 21st century.” Coincidence?

The importance of data

Moneyball emphasized the recognition, through sabermetrics, that certain players’ abilities had been undervalued. In his bestseller Big Data Baseball: Math, Miracles, and the End of a 20-Year Losing Streak, Travis Sawchik notes that the analysis would not have been possible without the data. Early visionaries, including John Dewan, began collecting baseball data at games all over the country in a volunteer program called Project Scoresheet. Eventually they were collecting a million data points per season. In a similar fashion, credit data pioneers, such as TRW’s Simon Ramo, began systematically compiling basic credit information into credit files in the 1960s.

Recognizing that data quality is the key to insights and decision-making, and responding to the demand for objective data, Dewan formed two companies — Sports Team Analysis and Tracking Systems (STATS) and Baseball Info Solutions (BIS). It seems quaint now, but those companies collected and cleaned data using a small army of video scouts with stopwatches. Now data is collected in real time using systems such as PITCHf/x and the radar tracking system Statcast to provide insights that were never possible before.
It’s hard to find a news article about Game 1 of this year’s World Series that doesn’t discuss the launch angle or exit velocity of Eduardo Núñez’s home run, but just a couple of years ago, neither statistic was even measured. Teams use proprietary biometric data to keep players healthy for games. Even neurological monitoring promises to provide new insights and may lead to changes in the game.

Similarly, lenders are finding that so-called “nontraditional data” can open up credit to consumers who might have been unable to borrow money in the past. This includes nontraditional Fair Credit Reporting Act (FCRA)–compliant data on recurring payments such as rent and utilities, checking and savings transactions, and payments to alternative lenders for products like payday and short-term loans. Newer fintech lenders are innovating constantly — using permissioned, behavioral and social data to make it easier for their customers to open accounts and borrow money. Likewise, some modern banks use techniques that go far beyond passwords and even multifactor authentication to verify their customers’ identities online. For example, identifying consumers through their mobile devices can greatly improve the user experience. Some lenders are even using behavioral biometrics to improve their online and mobile customer service practices.

Continuously improving analytics

Bill James and his colleagues developed a statistic called wins above replacement (WAR) that summarized the value of a player as a single number. WAR was never intended to be a perfect summary of a player’s value, but it’s very convenient to have a single number to rank players. Using the same mindset, early credit risk managers developed credit scores that summarized applicants’ risk based on their credit history at a single point in time. Just as WAR is only one measure of a player’s abilities, good credit managers understand that a traditional credit score is an imperfect summary of a borrower’s credit history.
Newer scores, such as VantageScore® credit scores, are based on a broader view of applicants’ credit history, including credit attributes that reflect how their financial situation has changed over time. More sophisticated financial institutions, though, don’t rely on a single score. They use a variety of data attributes and scores in their lending strategies.

Just a few years ago, simply using data to choose players was a novel idea. Now new measures such as defense-independent pitching statistics drive changes on the field. Sabermetrics, once defined as the application of statistical analysis to evaluate and compare the performance of individual players, has evolved to be much more comprehensive. It now encompasses the statistical study of nearly all in-game baseball activities.

A wide variety of data-driven decisions

Sabermetrics began being used for recruiting players in the 1980s. Today it’s used on the field as well as in the back office. Big Data Baseball gives the example of the “Ted Williams shift,” a defensive technique that was seldom used between 1950 and 2010. In the world after Moneyball, it has become ubiquitous. Likewise, pitchers alter their arm positions and velocity based on data — not only to throw more strikes, but also to prevent injuries.

Similarly, when credit scores were first introduced, they were used only in originations. Lenders established a credit score cutoff that was appropriate for their risk appetite and used it for approving and declining applications.
Now lenders are using Experian’s advanced analytics in a variety of ways that the credit scoring pioneers might never have imagined:

- Improving the account opening experience — for example, by reducing friction online
- Detecting identity theft and synthetic identities
- Anticipating bust-out activity and other first-party fraud
- Issuing the right offer to each prescreened customer
- Optimizing interest rates
- Reviewing and adjusting credit lines
- Optimizing collections

Analytics is no substitute for wisdom

Data scientists like those at Experian remind me that in banking, as in baseball, predictive analytics is never perfect. What keeps finance so interesting is the inherent unpredictability of the economy and human behavior. Likewise, the play on the field determines who wins each ball game: anything can happen. Rob Neyer’s book Power Ball: Anatomy of a Modern Baseball Game quotes the Houston Astros director of decision sciences: “Sometimes it’s just about reminding yourself that you’re not so smart.”

Published: October 26, 2018 by Jim Bander

This is an exciting time to work in big data analytics. Here at Experian, we have more than 2 petabytes of data in the United States alone. In the past few years, because of high data volume, more computing power and the availability of open-source code and algorithms, my colleagues and I have watched excitedly as more and more companies are getting into machine learning. We’ve observed the growth of competition sites like Kaggle, open-source code-sharing sites like GitHub and various machine learning (ML) data repositories.

We’ve noticed that on Kaggle, two algorithms win over and over at supervised learning competitions:

- If the data is well-structured, teams that use Gradient Boosting Machines (GBM) seem to win.
- For unstructured data, teams that use neural networks win pretty often.

Modeling is both an art and a science. Those winning teams tend to be good at what the machine learning people call feature generation and what we credit scoring people call attribute generation. We have nearly 1,000 expert data scientists in more than 12 countries, many of whom are experts in traditional consumer risk models — techniques such as linear regression, logistic regression, survival analysis, CART (classification and regression trees) and CHAID analysis. So naturally I’ve thought about how GBM could apply in our world.

Credit scoring is not quite like a machine learning contest. We have to be sure our decisions are fair and explainable and that any scoring algorithm will generalize to new customer populations and stay stable over time. Increasingly, clients are sending us their data to see what we can do with newer machine learning techniques. We combine their data with our bureau data and even third-party data, apply our world-class attributes, develop custom attributes and see what comes out. It’s fun — like getting paid to enter a Kaggle competition!
For one financial institution, GBM armed with our patented attributes found a nearly 5 percent lift in KS (the Kolmogorov–Smirnov statistic) when compared with traditional statistical techniques. At Experian, we use the Extreme Gradient Boosting (XGBoost) implementation of GBM, which, out of the box, has regularization features we use to prevent overfitting. But it’s missing some features that we and our clients count on in risk scoring. Our Experian DataLabs team worked with our Decision Analytics team to figure out how to make it work in the real world. We found answers for a couple of important issues:

- Monotonicity — Risk managers count on the ability to impose what we call monotonicity: in application scoring, applications with better attribute values should score as lower risk than applications with worse values. For example, if consumer Adrienne has fewer delinquent accounts on her credit report than consumer Bill, all other things being equal, Adrienne’s machine learning score should indicate lower risk than Bill’s.
- Explainability — We were able to adapt a fairly standard “Adverse Action” methodology from logistic regression to work with GBM.

There has been enough enthusiasm around our results that we’ve just turned this into a standard benchmarking service. We help clients appreciate the potential of these new machine learning algorithms by evaluating them on their own data. Over time, the acceptance and use of machine learning techniques will become commonplace among model developers as well as internal validation groups and regulators. Whether you’re a data scientist looking for a cool place to work or a risk manager who wants help evaluating the latest techniques, check out our weekly data science video chats and podcasts.

Published: October 24, 2018 by Guest Contributor

Electric vehicles are here to stay – and will likely gain market share as costs decline, driving ranges increase and charging infrastructure grows.

Published: October 24, 2018 by Brad Smith
