
Meeting the global challenge of APP fraud

Published: December 5, 2023 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

Authorised Push Payment fraud is growing, and as regulators begin to take action around the world to try to tackle it, we look at what financial institutions need to focus on now.


APP fraud and social engineering scams

In recent years, there has been a significant surge in reported instances of Authorised Push Payment (APP) fraud. These crimes, also known in different parts of the world as financial scams, wire fraud scams, or social engineering scams, refer to a type of fraud where criminals trick victims into authorising a payment to an account controlled by the fraudster, in the belief that they are paying for genuine goods or services. Because the transactions made by the victim are usually processed through a real-time payment scheme, they are often irrevocable. Once the fraudster receives the funds, they are quickly transferred through a series of mule accounts and withdrawn, often abroad.

Because APP fraud often involves social engineering, it employs some of the oldest tricks in the criminal’s book. These scams include tactics such as pressuring victims into making quick decisions, or enticing them with too-good-to-be-true schemes and tempting opportunities to make a fortune. Unfortunately, these tricks are also some of the most successful, and criminals have used them to their advantage more than ever in recent times. On top of that, with the widespread adoption of real-time payments, victims can transfer funds quickly and easily, making it much easier for criminals to exploit the process.

APP fraud and social engineering scams – cases and losses across the globe

Impact of AI on APP fraud

Recent advancements in generative artificial intelligence (Gen AI) have accelerated the methods used by fraudsters in APP fraud. Criminals use tools like ChatGPT and Bard to create more persuasive messages, or the bot functionality offered by Large Language Models (LLMs) to draw their victims into romance scams and the more sophisticated pig butchering scams. Other examples include the use of face-swapping apps or audio and video deepfakes that help fraudsters impersonate someone known to their victims, or create a fictitious personality that the victim believes to be a real person. Additionally, deepfake videos of celebrities have commonly been used to trick victims into making an authorised transaction and losing substantial amounts of money. Unfortunately, while some of these hoaxes were very difficult to pull off a few years ago, the widespread availability of easy-to-use Gen AI tools has resulted in an increased number of attacks.

A lot of these scams can be traced back to social media, where the initial communication between victim and criminal takes place. According to UK Finance, 78% of APP fraud started online during the second half of 2022, and the figure was similar for the first half of 2023 at 77%. Fraudsters also use social media to research their victims, which makes these attacks highly personalised given the amount of data available about potential targets. Accessible information often includes details about family members, things of personal significance such as hobbies or spending habits, favourite holiday destinations, political views, or incidental facts like favourite foods and drinks. On top of that, criminals use social media to gather photos and videos of potential targets or their family members that can later be leveraged to generate convincing deepfake audio, video, or images. Combined, these factors contribute to a new, highly personalised approach to scams that has never been seen before.

What regulators are saying around the globe

APP fraud mitigation is a complex task that requires collaboration between multiple entities. The UK is by far the most advanced jurisdiction in terms of measures taken to tackle these types of fraud and protect consumers. Some of the most important legislative changes that the UK’s Payment Systems Regulator (PSR) has proposed or introduced so far include:

  • Mandatory reimbursement of APP scam victims: A world-first mandatory reimbursement model will be introduced in 2024, replacing the voluntary reimbursement code that has been operational since 2019.
  • 50/50 liability split: All payment firms will be incentivised to take action, with both sending and receiving firms splitting the costs of reimbursement 50:50.
  • Publication of APP scams performance data: The inaugural report was released in October, showing for the first time how well banks and other payment firms performed in tackling APP scams and how they treated those who fell victim.
  • Enhanced information sharing: Improved intelligence-sharing between PSPs so they can improve scam prevention in real time is expected to be implemented in early 2024.

Because many of the scams start on social media or in fake advertisements, banks in the UK have called for large tech firms (for example, Google and Facebook) and telcos to be included in the scam reimbursement process. As a first step towards offering more protection for customers, in December 2022 the UK Parliament introduced a new Online Safety Bill that intends to make social media companies more responsible for their users’ safety by removing illegal content from their platforms. In November 2023, a world-first agreement to tackle online fraud was reached between the UK government and some of the leading tech companies – Amazon, eBay, Facebook, Google, Instagram, LinkedIn, Match Group, Microsoft, Snapchat, TikTok, X (Twitter) and YouTube. The intended outcome is for people across the UK to be protected from online scams, fake adverts and romance fraud thanks to increased security measures that include better verification procedures and the removal of fraudulent content from these platforms.

Outside of the UK, a few other jurisdictions have introduced measures to protect customers from APP fraud and social engineering scams. In the Netherlands, banks reimburse victims of bank impersonation scams when these are reported to the police and the victim has not been ‘grossly negligent.’ In the US, some banks provide voluntary reimbursement in cases of bank impersonation scams. As of June 2023, payment app Zelle, owned by seven US banks, has started refunding victims of impersonation scams, addressing earlier calls for action over reported scams on the platform. In the EU, under the newly proposed Payment Services Directive (PSD3), issuers will also be liable when a fraudster impersonates a bank’s employee to make the user authenticate a payment (subject to the filing of a police report and the payer not acting with gross negligence).

In October 2023, the Monetary Authority of Singapore (MAS) proposed a new Shared Responsibility Framework that assigns financial institutions and telcos relevant duties to mitigate phishing scams and calls for payouts to affected scam victims where those duties are breached. While the proposal covers only unauthorised payments, it is notable as the first official proposal of its kind to include telcos in the reimbursement process. Earlier this year, the National Anti-Scam Centre in Australia announced the launch of a fusion cell to combat investment scams. The fusion cell brings together representatives from banks, telcos, and digital platforms in a coordinated effort to identify ways of disrupting investment scams and minimising scam losses. In addition, in November 2023, Australian banks announced the introduction of a confirmation-of-payee system that is expected to help reduce scams by ensuring customers can confirm they are transferring money to the person they intend to, similar to what was introduced in the UK a few years ago.

Finally, over the past few months, more jurisdictions, including Australia, Brazil, the EU and Hong Kong, have announced either proposals for or the rollout of fraud data sharing schemes between banks and financial institutions. While not all of these schemes are directly tied to social engineering scams, they can be seen as a first step towards tackling scams alongside other types of fraud.

While many jurisdictions beyond the UK are still in the early stages of the legislative process to protect consumers from scams, regulatory changes that prove successful in the UK are expected to be adopted elsewhere. This should help introduce better tracking of the problem, stimulate collaboration between financial institutions, and add visibility of their efforts to prevent these types of fraud. As more countries introduce new regulations and more financial institutions start monitoring their systems for scam occurrences, the industry should be able to achieve greater success in protecting consumers and mitigating APP fraud and social engineering scams.

How financial institutions can prevent APP fraud

Changing regulations have initiated the first liability shifts towards financial institutions when it comes to APP fraud, making fraud prevention measures a greater area of concern for many leaders in the industry. Now that responsibility is spread across both the sending and the receiving payment provider, firms also need to improve monitoring of incoming payments. What’s more, as these types of fraud are a global phenomenon, financial institutions in multiple jurisdictions might consider taking greater fraud prevention steps early on (before regulators impose any mandatory rules) to keep their customers safe and their reputation high. Here are five ways businesses can keep customers safe while retaining brand reputation:

  1. Advanced analytics – advanced data analytics capabilities to create a 360° view of individuals and their behaviour across all connected current accounts. This supports more sophisticated and effective fraud risk analysis that goes beyond a single transaction. Combining it with a view of fraudulent behaviour beyond the payment institution’s own premises, by ingesting data from multiple sources and developing models at scale, allows businesses to monitor new fraud patterns and evolving threats.
  2. Behavioural biometrics – used to provide insights on indicators such as active mobile phone calls, session length, segmented typing, hesitation, and displacement to detect if the sender is receiving instructions over the phone or if they show unusual behaviour during the time of the transaction.
  3. Transaction monitoring and anomaly detection – required to monitor sudden spikes in transaction activity that are unusual for the sender of the funds, as well as mule account activity on the receiving bank’s end (see the illustrative sketch after this list).
  4. Fraud data sharing capabilities – sharing fraud data across multiple organisations can help identify and stop risky transactions early, in addition to mitigating mule activity and fraudulent new account openings.
  5. Monitoring of newly opened accounts – used to detect fake accounts or newly opened mule accounts.
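To make the transaction monitoring point above more concrete, below is a minimal, purely illustrative sketch in Python of per-customer anomaly scoring. The data fields, thresholds and flagging rules are hypothetical simplifications rather than a description of any particular product; a real transaction-monitoring system would combine far richer behavioural, device and network signals.

    # Minimal illustrative sketch: score an outgoing payment against the
    # sender's own history and flag large spikes to first-time payees.
    # All field names and thresholds are hypothetical simplifications.
    from dataclasses import dataclass
    from statistics import mean, pstdev
    from typing import List

    @dataclass
    class Transaction:
        customer_id: str
        amount: float           # outgoing payment amount
        payee_account: str      # destination account identifier
        payee_first_seen: bool  # True if the customer has never paid this account before

    def anomaly_score(history: List[float], tx: Transaction) -> float:
        """How unusual is this amount versus the customer's own history (simple z-score)?"""
        if len(history) < 5:
            return 1.0  # too little history to build a reliable baseline
        mu = mean(history)
        sigma = pstdev(history) or 1.0  # avoid division by zero
        return abs(tx.amount - mu) / sigma

    def should_hold_for_review(history: List[float], tx: Transaction) -> bool:
        """Combine an amount-spike signal with a first-time-payee signal."""
        score = anomaly_score(history, tx)
        if tx.payee_first_seen and score > 2.0:
            return True       # large payment to a never-seen account
        return score > 4.0    # extreme spike even for a known payee

    # Example: a customer who normally sends small amounts suddenly pays a
    # large sum to a brand-new payee, a common APP fraud pattern.
    past_amounts = [45.0, 120.0, 60.0, 80.0, 55.0, 110.0]
    suspicious = Transaction("cust-001", 4800.0, "acct-999", payee_first_seen=True)
    print(should_hold_for_review(past_amounts, suspicious))  # True -> hold the payment for review

In practice, a flag like this would not block the payment outright but would trigger additional friction, such as a scam warning or a confirmation-of-payee check, before the funds are released.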

By leveraging a combination of these capabilities, financial institutions will be better prepared to cope with new regulations and to protect their customers from APP fraud.
