Janine brings over 35 years of experience from leadership roles spanning every aspect of fraud/CIP, risk and compliance, and AML/BSA. She offers clients depth and breadth of expertise built on diverse experience and client partnerships in the financial services sector. An industry expert in credit and banking, Janine has deep discipline in fraud operations, investigations, credit, DDA, internal fraud, cyber fraud, and compliance, along with wide-ranging experience working with regulatory agencies, law enforcement, industry partners, and networking groups. She has assisted in the implementation and rollout of many fraud acquisition and transactional strategies, as well as AML software and processing platforms.

-- Janine Movish


Have you heard about the mischievous ghosts haunting our educational institutions? No, I am not talking about Casper's misfit pals. These are the infamous ghost students! They are not here for a spooky study session, oh no! They are cunning fraudsters lurking in the shadows, pretending to be students who never attend classes. It is taking ghosting to a whole new level.

Understanding ghost student fraud

Ghost student fraud is a serious and alarming issue in the educational sector. The rise of online classes during the pandemic has made it easier for fraudsters to exploit application systems and steal government aid meant for genuine students. Community colleges have become primary targets because they have been slower to adopt cybersecurity defenses. A considerable number of applications in some states, such as California (where Social Security numbers are not required at enrollment), are fictitious, with potential losses in financial aid meant for students in need. The use of stolen or synthetic identities to create bot-powered applications further exacerbates the problem.

The consequences of enrollment fraud can have a profound impact on institutions and students. The recent indictment of individuals involved in enrollment fraud, in which identities were stolen to receive federal student loans, highlights the severity of the issue. Unfortunately, the lack of awareness and inadequate identity document verification processes at many institutions make it difficult to fully grasp the extent of the problem.

What is a ghost student?

Scammers use different methods to commit ghost student loan fraud, including creating fake schools or enrolling in real colleges. Some fraudsters use deceitful tactics to obtain the real identities of students, then use them to fabricate loan applications. Types of ghost student loan fraud include:

- Fake loan offers: Fraudsters contact students via various channels, claiming to offer exclusive student loan opportunities with attractive terms and low interest rates. They often request personal and financial information, including Social Security numbers and bank account details, and use it to create ghost loans.
- Identity theft: Threat actors steal personal information through data breaches or phishing, then forge loan applications using the victim's identity.
- Targeting vulnerable individuals: Ghost student loan fraud tends to prey on those already burdened by debt. Scammers may target borrowers with poor credit histories, promising loan forgiveness or debt consolidation plans in exchange for a fee. Once the victim pays, the fraudsters disappear.

Ultimately, addressing ghost student fraud requires a multi-faceted approach involving collaboration between educational institutions, government agencies, and law enforcement to safeguard the accessibility and integrity of education for all deserving students.

Safeguarding the financial integrity of educational institutions

One powerful weapon in the battle against ghost student fraudsters is the implementation of robust identity verification solutions. Financial institutions, online marketplaces, and government entities have long employed such tools to verify the authenticity of individuals, and their application in the educational domain can be highly effective. By leveraging these tools, institutions can swiftly and securely carry out synthetic fraud detection and confirm the identity of applicants by cross-referencing multiple credible sources of information.
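To make the idea concrete, here is a minimal Python sketch of how several such cross-referenced signals might be combined into a single enrollment decision. Every signal name, weight, and threshold here is hypothetical and purely illustrative; it is not any vendor's actual scoring logic.

```python
from dataclasses import dataclass

# A minimal sketch of combining identity-verification signals into an
# enrollment decision. All field names, weights, and thresholds are
# hypothetical; a real deployment would call external verification providers.

@dataclass
class VerificationSignals:
    document_check_passed: bool   # government-issued ID validated
    selfie_match_score: float     # 0.0-1.0 similarity between selfie and ID photo
    email_risk_score: float       # 0.0-1.0, higher = riskier (e.g., disposable domain)
    pii_found_on_dark_web: bool   # PII seen in breach or dark-web data

def enrollment_decision(s: VerificationSignals) -> str:
    """Return 'approve', 'review', or 'deny' based on cross-referenced signals."""
    if not s.document_check_passed:
        return "deny"                                # failed document check is disqualifying
    risk = 0.0
    risk += 0.4 * (1.0 - s.selfie_match_score)       # weak selfie match raises risk
    risk += 0.3 * s.email_risk_score
    risk += 0.3 * (1.0 if s.pii_found_on_dark_web else 0.0)
    if risk < 0.25:
        return "approve"
    if risk < 0.5:
        return "review"                              # route to an enrollment officer
    return "deny"

# Example applicant: valid ID and strong selfie match, but compromised PII.
applicant = VerificationSignals(True, 0.93, 0.20, True)
print(enrollment_decision(applicant))  # -> "review"
```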
In practice, government-issued IDs can be verified against real-time selfies, email addresses can be screened against reliable databases, and personally identifiable information (PII) can be compared to third-party dark web data to detect compromised identities. Corroborating evidence from multiple sources makes it nearly impossible for fraudsters to slip past the watchful eyes of enrollment officers.

Moreover, identity verification measures can be deployed through low-code implementation, ensuring seamless integration into existing enrollment workflows without requiring extensive technical expertise or incurring exorbitant development costs.

To further fortify security measures, educational institutions may consider incorporating biometric enrollment and authentication solutions. By requiring face or voice biometrics for accessing school resources, institutions can create an additional layer of protection against fraudsters and their ethereal counterparts. The reluctance of fraudsters to enroll their own biometric data serves as a powerful deterrent against their intrusive activities.

Taking action

By adopting these robust measures, higher education institutions can fortify their defenses against ghost student fraud and maintain the integrity of their finances. The use of online identity verification methods and biometric authentication systems not only strengthens the enrollment process but also serves as a stringent reminder that there is no resting place for fraudsters within the hallowed halls of education. To learn more about how Experian can help you leverage fraud prevention solutions, visit us online or request a call.

*The Social Security Administration's electronic Consent Based SSN Verification (eCBSV) service can also be used to verify SSNs.

*This article leverages/includes content created by an AI language model and is intended to provide general information.

Published: October 18, 2023 by Janine Movish

"Grandma, it’s me, Mike.” Imagine hearing the voice of a loved one (or what sounds like it) informing you they were arrested and in need of bail money. Panicked, a desperate family member may follow instructions to withdraw a large sum of money to provide to a courier. Suspicious, they even make a video call to which they see a blurry image on the other end, but the same voice. When the fight or flight feeling settles, reality hits. Sadly, this is not the scenario of an upcoming Netflix movie. This is fraud – an example of a new grandparent scam/family emergency scam happening at scale across the U.S. While generative AI is driving efficiencies, personalization and improvements in multiple areas, it’s also a technology being adopted by fraudsters. Generative AI can be used to create highly personalized and convincing messages that are tailored to a specific victim. By analyzing publicly available social media profiles and other personal information, scammers can use generative AI to create fake accounts, emails, or phone calls that mimic the voice and mannerisms of a grandchild or family member in distress. The use of this technology can make it particularly difficult to distinguish between real and fake communication, leading to increased vulnerability and susceptibility to fraud. Furthermore, generative AI can also be used to create deepfake videos or audio recordings that show the supposed family member in distress or reinforce the scammer's story. These deepfakes can be incredibly realistic, making it even harder for victims to identify fraudulent activity. What is Generative AI? Generative artificial intelligence (GenAI) describes algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. Generative AI has the potential to revolutionize many industries by creating new and innovative content, but it also presents a significant risk for financial institutions. Cyber attackers can use generative AI to produce sophisticated malware, phishing schemes, and other fraudulent activities that can cause data breaches, financial losses, and reputational damage. This poses a challenge for financial organizations, as human error remains one of the weakest links in cybersecurity. Fraudsters capitalizing on emotions such as fear, stress, desperation, or inattention can make it difficult to protect against malicious content generated by generative AI, which could be used as a tactic to defraud financial institutions. Four types of Generative AI used for Fraud: Fraud automation at scale Fraudulent activities often involve multiple steps which can be complex and time-consuming. However, GenAI may enable fraudsters to automate each of these steps, thereby establishing a comprehensive framework for fraudulent attacks. The modus operandi of GenAI involves the generation of scripts or code that facilitates the creation of programs capable of autonomously pilfering personal data and breaching accounts. Previously, the development of such codes and programs necessitated the expertise of seasoned programmers, with each stage of the process requiring separate and fragmented development. Nevertheless, with the advent of GenAI, any fraudster can now access an all-encompassing program without the need for specialized knowledge, amplifying the inherent danger it poses. It can be used to accelerate fraudsters techniques such as credential stuffing, card testing and brute force attacks. 
Text content generation

In the past, one could often rely on spotting typos or errors to detect fraudulent schemes. The emergence of GenAI has introduced a new challenge: it generates impeccably written scripts with an uncanny authenticity, making deceptive activity considerably harder to identify. GenAI can produce realistic text that sounds as if it came from a familiar person, organization, or business simply by being fed prompts or content to replicate. Furthermore, large language model (LLM) tools enable scammers to carry on text-based conversations with multiple victims, skillfully manipulating them into actions that ultimately serve the perpetrators' interests.

Image and video manipulation

In a matter of seconds, fraudsters, regardless of their level of expertise, can now produce highly authentic videos or images powered by GenAI. This technology leverages deep learning techniques, using vast collected datasets to train artificial intelligence models. Once trained, these models can generate visuals that closely resemble the desired target. By seamlessly blending or superimposing these generated images onto specific frames, the original content can be replaced with manipulated visuals. Furthermore, AI text-to-image generators, powered by artificial neural networks, allow fraudsters to input prompts in the form of words; the system processes these prompts and generates corresponding images, further enhancing the deceptive capabilities at fraudsters' disposal.

Human voice generation

The emergence of AI-generated voices that mimic real people has created new vulnerabilities in voice verification systems. Firms that rely heavily on these systems, such as investment firms, must take extra precautions to ensure the security of their clients' assets. Criminals can also use AI chatbots to build relationships with victims and exploit their emotions to convince them to invest money or share personal information. Pig butchering scams and romance scams are examples of frauds where AI chatbots can be highly effective, as they are friendly, convincing, and can easily follow a script.

In particular, synthetic identity fraud has become an increasingly common tactic among cybercriminals. By creating fake personas with plausible social profiles, hackers can avoid detection while conducting financial crimes. It is essential for organizations to remain vigilant and verify the identities of any new contacts or suppliers before engaging with them. Failure to do so could result in significant monetary loss and reputational damage.

Leverage AI to fight bad actors

In today's digital landscape, businesses face increased fraud risks from advanced chatbots and generative technology. To combat this, businesses must use the same weapons as criminals and train AI-based tools to detect and prevent fraudulent activities.

Fraud prediction: Generative AI can analyze historical data to predict future fraudulent activities. By analyzing patterns in data and identifying potential risk factors, generative AI can help fraud examiners anticipate and prevent fraudulent behavior. Machine learning algorithms can analyze patterns in data to identify suspicious behavior and flag it for further investigation, as the sketch below illustrates.
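As a rough illustration of that kind of pattern-based flagging, the following Python sketch trains an unsupervised outlier detector (scikit-learn's IsolationForest) on simulated transaction features and flags unusual behavior for review. The features, data, and contamination rate are invented for the example; production systems use far richer signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A minimal sketch: learn "normal" transaction behavior from history,
# then flag outliers for further investigation.

rng = np.random.default_rng(42)

# Simulated history: columns are [amount_usd, transactions_per_day]
normal = rng.normal(loc=[60.0, 3.0], scale=[20.0, 1.0], size=(500, 2))
suspicious = np.array([[900.0, 40.0], [5.0, 60.0]])  # bursts of unusual activity
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # 1 = looks normal, -1 = flag for review

for row, flag in zip(X, flags):
    if flag == -1:
        print(f"flag for review: amount=${row[0]:.2f}, txns/day={row[1]:.0f}")
```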
Fraud investigation: In addition to preventing fraud, generative AI can assist fraud examiners in investigating suspicious activities by generating scenarios and identifying potential suspects. By analyzing email communications and social media activity, generative AI can uncover hidden connections between suspects and identify potential fraudsters.

To confirm the authenticity of users, financial institutions should adopt sophisticated identity verification methods that include liveness detection algorithms, document-centric identity proofing, and predictive analytics models. These measures can help prevent bots from infiltrating their systems and spreading disinformation, while also protecting against scams and cyberattacks.

In conclusion, financial institutions must stay vigilant and deploy new tools and technologies to protect against the evolving threat landscape. By adopting advanced identity verification solutions, organizations can safeguard themselves and their customers from potential risks. To learn more about how Experian can help you leverage fraud prevention solutions, visit us online or request a call.

Published: August 24, 2023 by Alex Lvoff, Janine Movish

Money mule fraud is a type of financial scam in which criminals exploit individuals, known as money mules, to transfer stolen money or the proceeds of illegal activities. Money mule accounts are becoming increasingly difficult to distinguish from legitimate customers, especially as criminals find new ways to develop hard-to-detect synthetic identities.

How money mule fraud typically works

- Recruitment: Fraudsters seek out potential money mules through various means, such as online job ads, social media, or email and messaging apps. They often pose as legitimate employers offering job opportunities with promised compensation, or claim to represent charitable organizations.
- Deception: Once a potential money mule is identified, the fraudsters use persuasive tactics to gain their trust. They may provide seemingly legitimate explanations, such as claiming the money is for investment purposes, charity donations, or facilitating business transactions.
- Money transfer: The mule is instructed to receive funds into their bank or other financial account. The funds are typically transferred from other compromised bank accounts obtained through phishing or hacking. The mule is then instructed to transfer the money to another account, sometimes located overseas.
- Layering: To mask the origin of the funds and make them difficult to trace, fraudsters employ layering techniques. They may ask the mule to split funds into smaller amounts, make multiple transfers to different accounts, or use various financial platforms such as money services or crypto.
- Compensation: The money mule is often promised a percentage of the transferred funds as payment. However, the promised payment is lower than the amounts transferred, and sometimes the mule receives no payment at all.
- Legal consequences: Whether or not mules know they are supporting a criminal enterprise, they can face criminal charges. In addition, their personal information could be compromised, leading to identity theft and financial loss.

How banks can get ahead of the money mule curve

- Know your beneficiaries.
- Monitor inbound payments.
- Engage identity verification solutions.
- Create a "Mule Persona" behavior profile.
- Beware that fraudsters will coach the mule; confirmation of payee is therefore no longer a reliable detection solution.
- Educate your customers to be wary of job offers that seem too good to be true and to remain vigilant about requests to receive and transfer money, particularly from unknown individuals and organizations.

How financial institutions can mitigate money mule fraud risk

When new accounts are opened, a financial institution usually doesn't have enough information to establish patterns of behavior for newly registered users and devices the way it can for existing users. However, an anti-fraud system should catch a known behavior profile that has been previously identified as malicious. In this situation, the best practice is to compare the new account holder's behavior against a representative pool of customers (a simplified sketch follows below), analyzing things like:

- Spending behavior compared to the average
- Payee profile
- Sequence of actions
- Navigation data related to machine-like or bot behavior
- Abnormal or risky locations
- The account owner's relations to other users

The risk engine needs to be able to collect and score data across all digital channels to allow the financial institution to detect all possible relationships to users, IP addresses, and devices with proven fraud behavior.
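Here is a minimal Python sketch of the peer-pool comparison and risk-based routing described above: a new account's behavior is scored by how far it deviates from a representative pool of customers, and the score drives the next action. All features, pool values, and thresholds are invented for illustration.

```python
import statistics

# Hypothetical representative customer pool: feature -> observed values.
PEER_POOL = {
    "avg_transfer_usd": [120, 80, 95, 150, 110, 90, 130, 100],
    "payees_per_week": [2, 1, 3, 2, 2, 1, 4, 2],
}

def zscore(value: float, pool: list) -> float:
    """How many standard deviations `value` sits from the pool average."""
    return (value - statistics.mean(pool)) / statistics.stdev(pool)

def risk_score(account: dict) -> float:
    # Average the absolute deviations across features; behavior that is
    # extreme relative to the peer pool drives the score up.
    scores = [abs(zscore(account[f], pool)) for f, pool in PEER_POOL.items()]
    return sum(scores) / len(scores)

def route(score: float) -> str:
    if score > 3.0:
        return "step-up authentication + manual review"
    if score > 1.5:
        return "monitor closely"
    return "allow"

# New account receiving large transfers and fanning out to many payees,
# a pattern consistent with a mule account.
new_account = {"avg_transfer_usd": 2500.0, "payees_per_week": 15}
s = risk_score(new_account)
print(f"risk score {s:.1f} -> {route(s)}")  # -> step-up + manual review
```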
The data collected includes information about the user, account, location, device, session, and payee, among others. If the system notices any unusual changes in the account holder's personal information, the decision engine will flag it for review. It can then be actively monitored and investigated, if necessary.

The benefits of machine learning

Machine learning is a type of artificial intelligence (AI) that can analyze vast amounts of disparate data across digital channels in real time. Anti-fraud systems based on AI analytics and predictive analytics models can aggregate and analyze data on multiple levels. This allows a financial institution to instantly detect all possible relationships across users, devices, transactions, and channels to more accurately identify fraudulent activity. When suspicious behavior is flagged via a high risk score, the risk engine can drive a dynamic workflow change to step up security or trigger a manual review process. The activity can then be actively monitored by the fraud prevention team and escalated for investigation.

How Experian can help

Experian's fraud prevention solutions incorporate technology, identity-authentication tools, and the combination of machine learning analytics with Experian's proprietary and partner data to return optimal decisions that protect your customers and your business. To learn more about how Experian can help you leverage fraud prevention solutions, visit us online or request a call.

Published: August 14, 2023 by Alex Lvoff, Janine Movish
