All posts by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud


We explore four fraud trends likely to be influenced the most by GEN AI technology in 2024, and what businesses can do to prevent them.

2023: The rise of Generative AI

2023 was marked by the rise of Generative Artificial Intelligence (GEN AI), with the technology's impact (and potential impact) reverberating across businesses around the world. 2023 also witnessed the democratisation of GEN AI, with its usage made publicly available through multiple apps and tools such as OpenAI's ChatGPT and DALL·E, Google's Bard, Midjourney, and many others. ChatGPT even held the world record for the fastest-growing application in history (until it was surpassed by Threads) after reaching 100 million users in January 2023, less than two months after its launch.

The profound impact of GEN AI on everyday life is also reflected in the 2023 Word of the Year (WOTY) lists published by some of the biggest dictionaries in the world. Merriam-Webster's WOTY for 2023 was 'authentic', a term that people are thinking about, writing about, aspiring to, and judging more than ever. It's also no surprise that another word highlighted by the dictionary was 'deepfake', referencing the importance of GEN AI-inspired technology over the past 12 months. Among other dictionaries that publish WOTY lists, both Cambridge Dictionary and Dictionary.com chose 'hallucinate', with new definitions of the verb describing false information produced by AI tools being presented as truth or fact. A finalist in the Oxford list was the word 'prompt', referencing the instructions given to AI algorithms to influence the content they generate. Finally, Collins English Dictionary announced 'AI' as its WOTY to illustrate the significance of the technology throughout 2023.
GEN AI has many potential positive applications, from streamlining business processes and providing creative support for industries such as architecture, design, and entertainment, to significantly impacting healthcare and education. However, as signalled by some of the WOTY lists, it also poses many risks. One of the biggest threats is its adoption by criminals to generate synthetic content that has the potential to deceive businesses and individuals. Unfortunately, easy-to-use and widely available GEN AI tools have also created a low barrier to entry for those willing to commit illegal activities. Threat actors leverage GEN AI to produce convincing deepfakes, including audio, images, and videos, that are increasingly sophisticated and practically impossible to differentiate from genuine content without the help of technology. They are also exploiting the power of Large Language Models (LLMs) by creating eloquent chatbots and elaborate phishing emails to help them steal important information or establish initial communication with their targets.

GEN AI fraud trends to watch out for in 2024

As the lines between authentic and synthetic blur more than ever before, here are four fraud trends likely to be influenced most by GEN AI technology in 2024.

A staggering rise in bogus accounts (impacted by: deepfakes, synthetic PII): Account opening channels will continue to be impacted heavily by the adoption of GEN AI. As criminals try to establish a presence on social media and across business channels (e.g., LinkedIn) in an effort to build trust and credibility to carry out further fraudulent attempts, this threat will expand well beyond the financial services industry. GEN AI technology continues to evolve, and the imminent emergence of highly convincing real-time audio and video deepfakes will give fraudsters even better tools to attempt to bypass document verification systems, biometric checks, and liveness checks.
Additionally, they could scale their registration attempts by generating synthetic PII data such as names, addresses, emails, or national identification numbers.

Persistent account takeover attempts carried out through a variety of channels (impacted by: deepfakes, GEN AI generated phishing emails): The advancements in deepfakes present a big challenge to institutions with inferior authentication defences. Just as with the account opening channel, fraudsters will take advantage of new developments in deepfake technology to try to spoof authentication systems with voice, image, or video deepfakes, depending on the required input form, to gain access to an account. Furthermore, criminals could also try to fool customer support teams into helping them regain access they claim to have lost. Finally, the biggest threat is likely to be impersonation attempts (e.g., criminals pretending to be representatives of financial institutions or law enforcement) carried out against individuals to try to steal access details directly from them. This could also involve the use of sophisticated GEN AI generated emails that look like they come from authentic sources.

An influx of increasingly sophisticated Authorised Push Payment fraud attempts (impacted by: deepfakes, GEN AI chatbots, GEN AI generated phishing emails): Committing social engineering scams has never been easier. Recent advancements in GEN AI have given threat actors a handful of new ways to deceive their victims. They can now leverage deepfake voices, images, and videos in crimes such as romance scams, impersonation scams, investment scams, CEO fraud, or pig butchering scams. Unfortunately, deepfake technology can be applied to any situation where a form of genuine human interaction might be needed to support the authenticity of the criminals' claims. Fraudsters can also bolster their cons with GEN AI enabled chatbots to engage potential victims and gain their trust.
If that isn't enough, phishing messages have been elevated to new heights with the help of LLM tools that assist with translations, grammar, and punctuation, making these emails look more elaborate and trustworthy than ever before.

A whole new world of GEN AI synthetic identity (impacted by: deepfakes, synthetic PII): This is perhaps the biggest fraud threat that could impact financial institutions for years to come. GEN AI has made the creation of synthetic identities easier and more convincing than ever before. GEN AI tools give fraudsters the ability to generate fake PII data at scale with just a few prompts. Furthermore, criminals can leverage fabricated deepfake images of people who never existed to create synthetic identities from entirely bogus content. Unfortunately, since synthetic identities take time to be discovered and are often wrongly classified as credit defaults, the effect of GEN AI on this type of fraud will be felt for a long time.

How to prevent GEN AI related fraud

As GEN AI technology continues to evolve in 2024, so will its adoption by fraud perpetrators to carry out illegal activities. Institutions should be aware of the dangers these tools pose and equip themselves with the right tools and processes to tackle the risks. Here are a few suggestions on how this can be achieved:

Fight GEN AI with GEN AI: One of the biggest advantages of GEN AI is that while it can be trained to create synthetic data, it can also be trained to spot it. One such approach is supported by Generative Adversarial Networks (GANs), which employ two neural networks competing against each other: a generator and a discriminator. The generator creates synthetic data, while the discriminator evaluates the generated data and tries to distinguish between real and fake samples. Over time, both networks fine-tune themselves, and the discriminator becomes increasingly successful at recognising synthetic content.
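The adversarial generator-versus-discriminator loop described above can be sketched in a few lines of NumPy. This is a deliberately tiny, illustrative one-dimensional example: the toy Gaussian "real" data, the linear generator, the logistic discriminator, and all learning-rate and step-count choices are assumptions made for the sketch, not a production deepfake generator or detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1D GAN: "real" data ~ N(4, 1); generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters (starts far from the real distribution)
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

initial_gap = abs(b - 4.0)  # how far the generator's output mean starts from the real mean

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    # gradient of binary cross-entropy w.r.t. the logit is (prediction - label)
    dw = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    dc = np.mean(p_real - 1.0) + np.mean(p_fake)
    w -= lr * dw
    c -= lr * dc

    # --- Generator update: push D(fake) -> 1 (non-saturating generator loss) ---
    p_fake = sigmoid(w * fake + c)
    da = np.mean((p_fake - 1.0) * w * z)
    db = np.mean((p_fake - 1.0) * w)
    a -= lr * da
    b -= lr * db

final_gap = abs(np.mean(a * rng.normal(0.0, 1.0, 1000) + b) - 4.0)
print(f"generator mean moved from 0.0 towards 4.0; remaining gap: {final_gap:.2f}")
```

After training, the generator's samples sit much closer to the real distribution, which is exactly why the fully trained discriminator from such a loop is valuable as a synthetic-content detector.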
Other algorithms used to create deepfakes, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and autoencoders, can also be trained to spot anomalies in audio, images, and video, such as inconsistencies in facial movements or features, inconsistencies in lighting or background, unnatural movements or flickering, and audio discrepancies. Finally, a hybrid approach that combines multiple algorithms often produces more robust results.

Advanced analytics to monitor the whole customer journey and beyond: Institutions should deploy a fraud solution that leverages data from a variety of tools to spot irregular activity across the whole customer journey. That could include risky activity such as a spike in suspicious registrations or authentication attempts, unusual consumer behaviour, irregular login locations, suspicious device or browser data, or abnormal transaction activity. A best-in-class solution would give institutions the ability to monitor and analyse trends that go beyond a single transaction or account. Ideally, that means monitoring for fraud signals happening both within a financial institution's environment and across the industry. This should allow businesses to discover signals pointing to fraudulent activity not previously seen within their systems, or data points that would otherwise be considered safe, and in turn to develop new fraud prevention models and more comprehensive strategies.

Fraud data sharing: Sharing fraud data across multiple organisations can help identify new fraud trends occurring within an institution's premises and stop risky transactions early.

Educate consumers: While institutions can deploy multiple tools to monitor GEN AI related fraud, regular consumers don't have the same advantage and are particularly susceptible to impersonation attempts, among other deepfake or GEN AI related cons.
While consumers can't be equipped with the same tools to recognise synthetic content, educating them on how to react in situations that involve giving out valuable personal or financial information is an important step in helping them stay safe from cons.

Learn more with our latest fraud reports from across the globe: UK Fraud Report 2023, US Fraud Report 2023, EMEA + APAC Fraud Report 2023
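The transaction-level anomaly monitoring described above can be illustrated with a simple z-score check against an account's own history. The threshold and sample amounts are assumptions for illustration only; a real deployment would combine many more signals (device, location, behaviour) rather than amount alone.

```python
import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag transactions that deviate sharply from an account's past behaviour.

    history: list of past transaction amounts for this account
    new_amounts: incoming transactions to screen
    Returns the subset of new_amounts whose z-score exceeds the threshold.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return [amt for amt in new_amounts if abs(amt - mean) / stdev > z_threshold]

# An account that normally makes small payments suddenly sends a large transfer
past = [12.5, 40.0, 22.0, 18.75, 35.0, 27.0, 15.0, 30.0]
incoming = [25.0, 980.0, 19.0]
suspicious = flag_anomalies(past, incoming)
print(suspicious)  # only the 980.0 transfer stands out
```

The same per-account baseline idea extends naturally to registration spikes and login-location checks: compute what "normal" looks like for the entity, then flag sharp deviations for review rather than blocking outright.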

Published: January 17, 2024 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

Authorised Push Payment fraud is growing, and as regulators around the world begin to take action to tackle it, we look at what financial institutions need to focus on now.

APP fraud and social engineering scams

In recent years, there has been a significant surge in reported instances of Authorised Push Payment (APP) fraud. These crimes, also known as financial scams, wire fraud scams, or social engineering scams in different parts of the world, refer to a type of fraud where criminals trick victims into authorising a payment to an account controlled by the fraud perpetrator for what the victim believes to be genuine goods or services in return for their money. Because the transactions made by the victim are usually done using a real-time payment scheme, they are often irrevocable. Once the fraudster receives the funds, they are quickly transferred through a series of mule accounts and withdrawn, often abroad.

Because APP fraud often involves social engineering, it employs some of the oldest tricks in the criminal's book. These scams include tactics such as applying pressure on victims to make quick decisions, or enticing them with too-good-to-be-true schemes and tempting opportunities to make a fortune. Unfortunately, these tricks are also some of the most successful ones, and criminals have used them to their advantage more than ever in recent times. On top of that, with the widespread adoption of real-time payments, victims can transfer funds quickly and easily, making it much easier for criminals to take advantage of the process.

APP fraud and social engineering scams, cases and losses across the globe: [interactive map]

Impact of AI on APP fraud

Recent advancements in generative artificial intelligence (Gen AI) have accelerated the process used by fraudsters in APP fraud.
Criminals use apps like ChatGPT and Bard to create more persuasive messages, or the bot functionality offered by Large Language Models (LLMs) to draw their victims into romance scams and the more sophisticated pig butchering scams. Other examples include the use of face-swapping apps or audio and video deepfakes that help fraudsters impersonate someone known to their victims, or create a fictitious personality that victims believe to be a real person. Additionally, deepfake videos of celebrities have been commonly used to trick victims into making an authorised transaction and losing substantial amounts of money. Unfortunately, while some of these hoaxes were very difficult to pull off a few years ago, the widespread availability of easy-to-use Gen AI tools has resulted in an increased number of attacks.

A lot of these scams can be traced back to social media, where the initial communication between victim and criminal takes place. According to UK Finance, 78% of APP fraud started online during the second half of 2022, and the figure was similar for the first half of 2023 at 77%. Fraudsters also use social media to research their victims, which makes these attacks highly personalised due to the availability of data about potential targets. Accessible information often includes facts about family members, things of personal significance like hobbies or spending habits, favourite holiday destinations, political views, or random details like favourite foods and drinks. On top of that, criminals use social media to gather photos and videos of potential targets or their family members that can later be leveraged to generate convincing deepfake content, including audio, video, or images. Combined, these factors contribute to a new, highly personalised approach to scams that has never been seen before.

What regulators are saying around the globe

APP fraud mitigation is a complex task that requires collaboration between multiple entities.
The UK is by far the most advanced jurisdiction in terms of measures taken to tackle these types of fraud and protect consumers. Some of the most important legislative changes that the UK's Payment Systems Regulator (PSR) has proposed or introduced so far include:

Mandatory reimbursement of APP scam victims: A world-first mandatory reimbursement model will be introduced in 2024, replacing the voluntary reimbursement code that has been operational since 2019.

50/50 liability split: All payment firms will be incentivised to take action, with both sending and receiving firms splitting the costs of reimbursement 50:50.

Publication of APP scams performance data: The inaugural report was released in October 2023, showing for the first time how well banks and other payment firms performed in tackling APP scams and how they treated those who fell victim.

Enhanced information sharing: Improved intelligence-sharing between PSPs, so they can improve scam prevention in real time, is expected to be implemented in early 2024.

Because many scams start on social media or in fake advertisements, banks in the UK have called for large tech firms (for example, Google and Facebook) and telcos to be included in the scam reimbursement process. As a first step towards offering more protection for customers, in December 2022 the UK Parliament introduced a new Online Safety Bill that intends to make social media companies more responsible for their users' safety by removing illegal content from their platforms. In November 2023, a world-first agreement to tackle online fraud was reached between the UK government and some of the leading tech companies: Amazon, eBay, Facebook, Google, Instagram, LinkedIn, Match Group, Microsoft, Snapchat, TikTok, X (Twitter), and YouTube.
The intended outcome is for people across the UK to be protected from online scams, fake adverts, and romance fraud thanks to increased security measures that include better verification procedures and the removal of fraudulent content from these platforms.

Outside of the UK, approaches to protect customers from APP fraud and social engineering scams exist in a few other jurisdictions. In the Netherlands, banks reimburse victims of bank impersonation scams when these are reported to the police and the victim has not been 'grossly negligent'. In the US, some banks provide voluntary reimbursement in cases of bank impersonation scams. As of June 2023, payment app Zelle, owned by seven US banks, has started refunding victims of impersonation scams, addressing earlier calls for action related to reported scams on the platform. In the EU, under the newly proposed Payment Services Directive (PSD3), issuers will also be liable when a fraudster impersonates a bank's employee to make the user authenticate the payment (subject to filing a police report and the payer not acting with gross negligence). In October 2023, the Monetary Authority of Singapore (MAS) proposed a new Shared Responsibility Framework that assigns financial institutions and telcos relevant duties to mitigate phishing scams and calls for payouts to affected scam victims where these duties are breached. While this proposal only covers unauthorised payments, it is unique in being the first such official proposal to include telcos in the reimbursement process. Earlier this year, the National Anti-Scam Centre in Australia announced the start of an investment scam fusion cell to combat investment scams. The fusion cell includes representatives from banks, telcos, and digital platforms in a coordinated effort to identify methods for disrupting investment scams and minimising losses.
To add to that, in November 2023, Australian banks announced the introduction of a confirmation-of-payee system that is expected to help reduce scams by ensuring customers can confirm they are transferring money to the person they intend to, similar to what was done in the UK a few years ago. Finally, over the past few months, more jurisdictions, such as Australia, Brazil, the EU, and Hong Kong, have announced either proposals for or the roll-out of fraud data sharing schemes between banks and financial institutions. While not all of these schemes are directly tied to social engineering scams, they can be seen as a first step towards tackling scams together with other types of fraud.

While many jurisdictions beyond the UK are still in the early stages of the legislative process to protect consumers from scams, there is an expectation that regulatory changes that prove successful in the UK could be adopted elsewhere. This should help introduce better tracking of the problem, stimulate collaboration between financial institutions, and add visibility of financial institutions' efforts to prevent these types of fraud. As more countries introduce new regulations and more financial institutions start monitoring their systems for scam occurrences, the industry should be able to achieve greater success in protecting consumers and mitigating APP fraud and social engineering scams.

How financial institutions can prevent APP fraud

Changing regulations have initiated the first liability shifts towards financial institutions when it comes to APP fraud, making fraud prevention measures a greater area of concern for many leaders in the industry. Now that responsibility is spread across both the sending and receiving payment provider, institutions also need to improve monitoring of incoming payments.
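A confirmation-of-payee check of the kind mentioned above can be sketched as a name-similarity comparison between the name the sender typed and the name registered on the destination account. The similarity routine and the 0.8 cut-off here are illustrative assumptions; real schemes apply far richer matching rules (account type, business vs personal names, known aliases).

```python
from difflib import SequenceMatcher

def check_payee(entered_name: str, registered_name: str, threshold: float = 0.8) -> str:
    """Return 'match', 'close match', or 'no match' for a confirmation-of-payee check."""
    # normalise case and whitespace before comparing
    a = " ".join(entered_name.lower().split())
    b = " ".join(registered_name.lower().split())
    score = SequenceMatcher(None, a, b).ratio()
    if score == 1.0:
        return "match"
    if score >= threshold:
        return "close match"   # e.g. a minor typo: warn the sender before paying
    return "no match"          # likely a wrong account or a scam: warn strongly

print(check_payee("John A Smith", "john a smith"))   # match
print(check_payee("Jon A Smith", "John A Smith"))    # close match
print(check_payee("Acme Holdings", "J. Doe"))        # no match
```

The value of the "close match" tier is that it lets a bank interrupt the payment journey with a targeted warning, which is precisely the friction point where many APP scams can still be stopped.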
What's more, as these types of fraud are a global phenomenon, financial institutions across jurisdictions might consider taking greater fraud prevention steps early on (before regulators impose any mandatory rules) to keep their customers safe and their reputation high. Here are five ways businesses can keep customers safe while retaining brand reputation:

Advanced analytics: Advanced data analytics capabilities create a 360° view of individuals and their behaviour across all connected current accounts. This supports more sophisticated and effective fraud risk analysis that goes beyond a single transaction. Combining it with a view of fraudulent behaviours beyond the payment institution's premises, by adding the ability to ingest data from multiple sources and develop models at scale, allows businesses to monitor new fraud patterns and evolving threats.

Behavioural biometrics: Provides insights on indicators such as active mobile phone calls, session length, segmented typing, hesitation, and displacement, to detect whether the sender is receiving instructions over the phone or showing unusual behaviour at the time of the transaction.

Transaction monitoring and anomaly detection: Required to monitor sudden spikes in transaction activity that are unusual for the sender of the funds, as well as mule account activity on the receiving bank's end.

Fraud data sharing capabilities: Sharing fraud data across multiple organisations can help identify and stop risky transactions early, in addition to mitigating mule activity and fraudulent new account openings.

Monitoring of newly opened accounts: Used to detect fake accounts or newly opened mule accounts.

By leveraging a combination of these capabilities, financial institutions will be better prepared to cope with new regulations and protect their customers from APP fraud.

Learn more: Identity & Fraud Report 2023 US, Identity & Fraud Report 2023 UK, Defeating Fraud Report 2023 EMEA & APAC

Published: December 5, 2023 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

Our latest Global Identity and Fraud Report reveals that fraud has been of high concern for consumers over the past year. In fact, more than half of consumers report that they are worried about online transactions, and 40% say that their concern has increased over this period. Data breaches, well-publicised scams, and direct first-hand experience with fraud have all contributed to these higher levels of concern. Our study shows that 77% of consumers had increased concern after experiencing online fraud, with more than half of consumers surveyed having had a close encounter with fraud:

58% of consumers say they have been a victim of online fraud, know someone who has been a victim, or both
57% of consumers say they have been a victim of identity theft, know someone who has been a victim, or both
53% of consumers say they have been a victim of account takeover, know someone who has been a victim, or both

As a consequence, it makes sense that consumers rank security and privacy above convenience and personalisation when evaluating their online experience, and expect businesses to take the necessary security steps to protect them online. We look at the main factors that play a role in the high levels of fraud concern among consumers and what businesses should do to address challenges in their fraud strategies.

Three contributing factors to increased fraud concern among consumers

Identity fraud has increased

Our research also unveils that identity theft has overtaken credit card theft as consumers' biggest security worry across all age groups. Furthermore, a recent report from the UK showed that recorded cases of identity fraud have grown by 22% over the past year. Fraud prevention and security professionals have long been trying to educate consumers on this topic.
Stealing identity data and using it in multiple fraud schemes can be significantly more harmful than criminals having access to someone's credit card numbers, where transactions can be traced quickly and revoked or charged back. While many factors contributed to an increase in concern about identity theft, the most impactful over the past two years were the numerous cases of unemployment and benefits fraud. Multiple countries reported cases where criminals applied for loans in the name of genuine consumers or through synthetic identities, created by combining real stolen information with fake data. The cost of these scams is yet to be discovered, and it could take years to see their full effect, with fraud losses well into the billions (if not trillions) of dollars worldwide.

Criminals can access stolen data and fraud tutorials beyond the dark web

To commit many types of fraud, criminals need Personally Identifiable Information (PII), stolen through techniques such as hacking attacks, credential harvesting, credential stuffing, phishing, or other types of social engineering. For years, the knowledge of how to do this, along with the stolen data available after a successful attack, was found mainly on cybercriminal forums accessed through the dark web. However, over the past year, it has become easier than ever to obtain not only PII data but also valuable information on how to bypass some of the security and fraud features in place at a given institution. Criminals no longer need to go to the dark web: it's all available on platforms like Telegram, just a few clicks away, where other fraudsters sell tutorials (often called 'sauce') on how to commit fraud, as well as the PII data (called 'fullz') to achieve it. As a result, the barrier to entry for those who want to commit fraud has been set lower than ever before, both in terms of skillset and accessibility.
Phishing and scams are at an all-time high

Another contributing factor to the increase in consumer concern is the number of scams resulting in authorised push payment fraud, which totalled £583.2 million in the UK alone during 2021. Criminals continue to seek out consumer vulnerabilities, using a variety of tactics to apply pressure on their victims and convince them to transfer money out of their bank accounts. This can take many forms, from various types of impersonation scams, romance scams, and investment 'opportunities', to scams related to utility bills and easy loan offers, among others. This wouldn't be possible without numerous phishing, smishing, and vishing attempts and the amount of data available through data breaches. One other factor that helps criminals is the direct access to potential victims given by social media and the sheer volume of personal information available in the public domain. These types of scams sometimes get high publicity (and rightly so), which can contribute to the increased level of concern among the public, while applying additional pressure on financial institutions to improve their fraud screening and transaction monitoring capabilities to protect consumers.

How businesses can improve fraud screening capabilities and increase consumer trust

To restore consumer trust, businesses need to look for ways to improve their capabilities at both account opening and login to prevent criminals from gaining easy access to their services. There are multiple ways to do that, from introducing online identity document verification or phone-centric identity verification capabilities at the account opening stage, to adding behavioural biometrics, device intelligence, or fraud data sharing capabilities during different stages of the customer journey. By introducing some of these capabilities, businesses can also improve the digital customer journey for genuine consumers and increase trust.
Online identity document verification and phone-centric identity verification solutions both offer pre-fill capabilities. These tools can streamline registration processes and thus contribute greatly to a positive consumer outlook on the company that offers them. Meanwhile, behavioural biometrics, device intelligence, and fraud data sharing tools are invisible to both fraudsters and genuine consumers, creating a more frictionless experience. Businesses should look carefully at the fraud they are experiencing, along with fraud trends shared by similar businesses. This should help inform whether to introduce new capabilities as part of the existing strategy. Companies often need a mix of capabilities to mitigate fraud issues, with additional support from machine learning models to blend them into one cohesive output while limiting the number of false positives and building consumer trust.

Stay in the know with our latest research and insights:

Published: August 9, 2022 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

Online fraud has increased at unprecedented levels over the past two and a half years, with numerous reports from all corners of the world confirming it. From benefits and unemployment fraud to authorised push payment fraud, and more advanced schemes such as synthetic identity fraud and deepfake fraud, cybercrime has been on the rise. Understandably, the increase in criminal activity has had a significant impact on financial services businesses, and it is little wonder that this is reflected in our recent study:

• 48% of businesses reported that fraud is a high concern, and 90% reported fraud as a mid-to-high concern
• 70% of businesses said their concern about fraud has increased since last year
• 80% of businesses said that fraud is often or always discussed within their organisations

High levels of fraud have also raised consumer concern, and their expectations of the protection businesses should offer them. Nearly three-quarters of consumers said that they expect businesses to take the necessary security steps to protect them online. However, only 23% of respondents were very confident that companies were taking steps to secure them online. Businesses need to take additional steps to meet consumer demand, while also protecting their reputation and revenue streams.

Businesses are investing in fraud prevention, so why isn't it working?

As a result of the rise in fraud during the pandemic, there has been an increase in spending on fraud prevention tools and technology, with 89% of businesses surveyed in our latest research indicating that investment in fraud detection software is important to them. However, there is a risk that institutions take a siloed approach, with funds spent on point solutions that solve one or two problems without adding the flexibility needed to fight multiple attack patterns. This gives fraudsters the opportunity to exploit the gaps.
Orchestration and automation drive fraudsters away

Criminals constantly evolve. They are not new to technology and have multiple attack patterns they can rely on. They also share information between themselves at a higher rate and pace than financial institutions, banks, and merchants do. Fraudsters can learn how to bypass one or two features in an organisation's fraud prevention strategy if they recognise a weak spot or vulnerability they can take advantage of. However, when multiple fraud prevention tools and capabilities work harmoniously against them, the chances are higher that they will eventually be blocked or forced to move on to a weaker target where they can exploit another system.

Synchronising multiple solutions is the key to excellent fraud orchestration

Fraud orchestration platforms give businesses the chance to layer multiple solutions together. However, taking a layered approach is not just about stacking multiple point solutions; it is about synchronising them to achieve the best possible output. Every solution looks at different signals and has its own way of scoring events, which is why they need to be governed within a workflow to achieve the desired results. This means that institutions can control and optimise the order in which various solutions or capabilities are called, as the output of one solution could result in a different check for a subsequent one, or even trigger another solution altogether. It also gives companies the ability to preserve their user journeys while answering the different risks presented to them. Some businesses seek to build trust with customers but want to stay invisible to remove friction from the digital customer experience. This is where capabilities such as device intelligence, behavioural biometrics, or fraud data sharing can be added as an additional layer in the fraud prevention strategy.
These additional solutions may only be called 30 per cent of the time, when there is a real need for an extra check. Excellent orchestration means that organisations can rely on multiple solutions while only calling the services they need, exactly when they need them, building trust through a secure but convenient customer experience.

Machine learning should be the final layer to rule them all

The results from our research revealed the top initiatives businesses are leveraging to improve the digital customer journey, with the top two being:

• Improving customer decisioning with AI
• New AI models to improve decisioning

Our April 2022 Global Insights Report also showed that consumers are becoming more comfortable with AI, with 59% saying they trust organisations that use it. Fraud orchestration platforms allow companies to deploy unified decisioning by leveraging machine learning (ML) on top of multiple fraud prevention tools. This means they can rely on one cohesive output instead of looking at separate, sometimes contradictory results across various platforms and making subjective decisions. ML can also offer explainability by pointing out the attributes that contributed most to a particular suggestion or decision; these attributes may come from several different tools rather than one. It also means that operational teams, such as fraud investigators, have a single view of activity, resulting in operational efficiency: there is no need to log in to different tools and look at multiple screens, views, and scores, and decisions are made faster.
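A simple way to picture unified decisioning with explainability: combine the scores of several tools into one output, and report which attributes contributed most to it. The sketch below uses fixed weights purely for illustration; a real deployment would use a trained ML model, and the tool names and threshold are assumptions:

```python
# Hypothetical sketch: one cohesive decision on top of multiple tools,
# with a basic explainability readout (top contributing attributes).
# Weights, names, and threshold are illustrative, not a trained model.

WEIGHTS = {
    "device_risk": 0.40,      # e.g. from a device-intelligence tool
    "behaviour_risk": 0.35,   # e.g. from behavioural biometrics
    "email_risk": 0.25,       # e.g. from an email-risk service
}

def unified_decision(scores, threshold=0.5):
    """Blend per-tool scores into one output and rank contributions."""
    contributions = {k: WEIGHTS[k] * scores[k] for k in WEIGHTS}
    total = sum(contributions.values())
    # Explainability: which attributes drove the final score most?
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    decision = "refer" if total >= threshold else "accept"
    return decision, total, ranked
```

An investigator then sees one decision and one ranked list of drivers, instead of three separate dashboards with potentially contradictory scores.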

Published: July 15, 2022 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

Since the pandemic began, consumers have been scammed more than ever before. From phishing emails, fake websites, and other scams intended to steal personal and financial information, to fake pharmaceutical goods or goods that never arrived, to account takeovers, multiple ways to defraud people have emerged or re-emerged at an alarming rate. It is an understatement to say that customers need protection now more than ever, and it is the right time for businesses to improve their capabilities and offer clients the secure experience they expect. The results from our recent global research study of changing behaviors and priorities throughout the pandemic show just how important online security has become for consumers:

• Half of the consumers surveyed say they are very or somewhat concerned about conducting activities online, with the concern being most significant in India (69%)
• 4-in-10 consumers express increased levels of concern about online activities since COVID-19. The level of concern has increased significantly in India (61%), Brazil (57%), Singapore (53%), and the US (44%), with one-fifth of consumers in the US and Brazil saying their level of concern has increased significantly
• 42% feel they are more of a target for online fraud now than before COVID-19, while only 25% feel safer about sharing personal information now than they did before COVID-19
• The largest sources of concern among consumers are credit card information being stolen (36%), online privacy (34%), identity theft (33%), and phishing emails (32%). Consumers in India, Singapore, the US, and Brazil show generally more concern.
Consumers have become increasingly positive towards more security measures

One positive tendency observed amid the increased security concerns is that consumers have become more comfortable with additional online security measures designed to protect them:

• 55% of consumers expect more security steps when they are online, and 49% want more visible security measures in place on websites
• 47% of consumers expect businesses to put strong security measures in place that they cannot see, and another 40% expect features that recognize them during online purchasing without requiring them to share their personal data
• US consumers in particular have rising expectations of strong invisible security (up from 50% to 59% between June 2020 and January 2021) as well as identity authentication without sharing personal data (up from 33% to 40% over the same period)

Consumers are accepting of biometrics, and businesses should consider using them

It is no surprise that fraud prevention methodologies such as physical biometrics (which is visible) and behavioral biometrics (which is invisible) have become more popular with the public. Both can be added as an extra layer to improve the authentication process, increasing its trustworthiness and efficacy. What is also vital is that neither compromises the user experience much when compared with more traditional authentication methodologies such as passwords or knowledge-based authentication:

• 74% of consumers feel very secure while using physical biometrics, with another 16% feeling somewhat secure
• 66% of consumers feel very secure when protected by behavioral biometrics, with another 24% feeling somewhat secure

So, as over half of businesses (55%) expect to increase their fraud management budgets in the next six months, it is recommended that they take notice of these trends and act now.
Even more important, companies that invest in advanced customer authentication methods benefit from improved customer opinion, which is a win-win for both parties involved:

• When physical biometrics are used, 57% of global consumers indicate that this enhances (somewhat or very much) their opinion of the organization
• When behavioral biometrics are used, 53% of global consumers say they have a better opinion (somewhat or very much) of the organization implementing them

Related stories:
• Global Insights Report Wave 3 (February 2021)
• Global Insights Report Wave 1 (June/July 2020)
• What your customers say about opening new accounts online during Covid-19

Published: February 23, 2021 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

It's not a surprise that we have seen an increase in digital activity during the pandemic. Lockdowns, store closures, various restrictions, and social distancing measures have led more people to the digital channel, and it is likely we will see even more consumers adopt digital channels as the world goes through a second wave of the COVID-19 pandemic. Many people have realized how convenient, safe, and fast it is to conduct their activities online. Our recent global research study shows how consumer behavior has changed over the past several months and what to expect in the months to come:

• Consumers are being driven to online activities, with 61% stating that they are ordering food or shopping for groceries online
• 36% of consumers conduct their personal banking activities online
• 34% purchase clothing, electronics, or beauty and wellness products online
• More than 2-in-5 consumers anticipate increased spending on items purchased online, both in the next 3-6 months (46%) and longer term (45%)

An increase in digital activity creates more opportunities for fraudsters

The digital channel is here to stay, and more and more customers will be opening new accounts online as a result. This, however, creates challenges for the institutions trying to onboard new customers, as it also gives criminals many opportunities to fraudulently enroll with an organization. The increased online activity raises the likelihood that fraudsters will be able to hide inside genuine traffic. At the same time, it presents them with a great chance to take advantage of synthetic identities that have been carefully put together over a long period of time and are harder to spot now that more activities are handled online.
Furthermore, fraudsters are engaging in human farming attacks with the intent of better impersonating their victims and navigating past security measures set up to detect bots and automated attacks. This means businesses need to make sure they know who they are engaging with online: their customer or a fraudster.

Pay close attention to how you handle consumer onboarding and customer authentication

Identity verification is one key area that institutions and merchants should consider carefully. For a long time it has been perceived as a process that adds unnecessary friction and might drive customers away. However, new tools and capabilities introduced recently make it a much smoother process: it not only reduces friction but also speeds up the onboarding of new customers by extracting data from identity documents and pre-filling registration forms. What might be even more exciting is that it can be combined with passive methodologies such as behavioral biometrics and device intelligence. Behavioral biometrics can help distinguish between normal and fraudulent activity at the sign-up stage, while device intelligence can screen new customers for multiple fraud indicators. On top of that, businesses also need to continuously authenticate customers to make sure the person behind the screen is the same one who registered with them in the first place. Consumers will expect any such interaction to be smooth and fast, without unnecessary friction such as re-entering the same personal information again and again. Nearly 30% of global consumers are only willing to wait up to 30 seconds before abandoning an online transaction, and only 35% are willing to wait more than 1 minute, especially when accessing their bank accounts.
• More than half of consumers will abandon their basket if made to wait in excess of 1 minute for online groceries
• 62% of consumers say biometrics enhance their experience and improve their opinion of a business

Orchestration platforms could solve multiple problems

So, on one side are criminals, who are always looking for system or process vulnerabilities and will not hesitate to exploit them, while on the other side consumers are looking for a smooth online experience without interruptions. This all means that account opening and continuous authentication (or re-authentication) should be taken very seriously. 57% of businesses expect to increase their fraud management budgets in the next 6 months (highest in India at 76%, followed by the US at 69%), and supporting or upgrading their onboarding and screening capabilities is necessary. It is unlikely, though, that organizations will be able to solve these problems with a single solution, which is why orchestration platforms are becoming so popular and valuable. These platforms offer multiple verification capabilities at the account opening stage as well as continuous authentication using device intelligence and behavioral biometrics, further enriched by a layer of advanced analytics such as machine learning. So, while more than 30% of businesses are focused exclusively on revenue-generating activities (over fraud detection), we encourage prioritizing both during the pandemic: acquiring new, authentic customers online will lead to greater trust and lifetime value.
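The pre-fill idea described above, extracting data from an identity document so the customer is not asked to re-type it, can be sketched in a few lines. The field names and document schema here are hypothetical; real document-capture SDKs define their own:

```python
# Hypothetical sketch: pre-filling a registration form from fields
# extracted off an identity document, leaving only the fields the
# document cannot supply for the customer to complete.
# Field names are illustrative, not a real SDK's schema.

def prefill_registration(extracted, form_fields):
    """Map extracted document data onto a registration form,
    leaving unknown fields blank for the customer to fill in."""
    return {field: extracted.get(field, "") for field in form_fields}

form = prefill_registration(
    {"given_name": "Ana", "family_name": "Silva", "date_of_birth": "1990-04-12"},
    ["given_name", "family_name", "date_of_birth", "email"],
)
# Only "email" is left blank, reducing both friction and typing errors.
```

Combined with passive signals such as device intelligence collected during the same session, this is how onboarding can become faster for genuine customers while still feeding the fraud checks.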

Published: November 20, 2020 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

