March 17, 2026

Understanding AI Scams

Artificial intelligence (AI) is increasingly integrated into our daily lives. Many familiar technologies now incorporate AI capabilities, such as customer service chatbots, intuitive voice assistants and advanced photo filters. While these tools offer convenience and drive innovation, they are also being exploited by malicious actors to perpetrate scams with a new level of sophistication and realism.

Scammers are leveraging AI to generate highly convincing emails, phone calls, voice recordings and even videos that closely mimic trusted individuals and legitimate organizations. These messages are designed to exploit human psychology by creating a sense of urgency, evoking strong emotions and ultimately lowering our guard. As the technology evolves, awareness and healthy skepticism are more important than ever. In 2024, total US cybercrime losses reported to the FBI Internet Crime Complaint Center (IC3) reached $16.6 billion, a 33% year-over-year increase.

AI's ability to process vast amounts of data, learn intricate patterns of human behavior and generate human-like content allows scammers to create more convincing and personalized attacks. This evolution poses a significant challenge, as AI-driven scams can bypass many of the traditional detection methods that rely on identifying anomalies in communication patterns or content quality.

This newsletter aims to provide a comprehensive overview of common AI-related scams, explain the underlying reasons for their effectiveness and outline practical, actionable steps you can implement to safeguard yourself and your loved ones.


Common AI-Related Scams

  • Voice Cloning and Family Emergency Scams: Criminals can use short audio clips, often readily available from social media or voicemail recordings, to imitate someone’s voice. Posing as a family member or coworker, the scammer fabricates a narrative of urgent need or an emergency requiring immediate financial assistance. The uncanny accuracy of AI voice cloning can bypass initial skepticism, making victims more susceptible to the fake crisis.
  • Deepfake Videos or Images: AI-generated visuals, known as ‘deepfakes’, can convincingly depict individuals saying or doing things they never did. While sometimes used in misinformation campaigns, these media are increasingly being weaponized in fraud schemes, such as fake endorsements for fraudulent investment opportunities. Deepfakes can also be used to fabricate false evidence or compromising images (e.g., nude images) of a victim, who is then extorted to pay for the content to be deleted from the internet or to prevent the criminal from sending it to the victim’s friends, family and colleagues.
  • AI-Enhanced Phishing: Emails and text messages written with AI tools can seem polished, professional and highly personalized. These tools can analyze publicly available data or information from previous data breaches to tailor messages that resonate with the recipient's interests, recent activities or professional context. This level of personalization significantly increases the likelihood that a victim will engage with the malicious content, click on a fraudulent link or download an infected attachment. It also effectively eliminates traditional indicators of phishing, such as poor grammar or generic greetings.
  • Social Media Impersonation: Scammers create fake social media profiles or AI-driven chatbots to pose as friends, colleagues or company representatives in order to gather information or request money. These chatbots are designed to engage in extended conversations with victims. The scam typically starts with a harmless interaction in which the chatbot, posing as an actual human friend, inquires about the victim’s interests, routines and personal hobbies. Once enough information has been gathered, the scammer can manufacture any number of plausible scenarios in which they need financial help from the victim.
  • Investment and Cryptocurrency Scams: Scammers may use realistic deepfake videos, fake endorsements or detailed messages promising guaranteed returns or exclusive opportunities related to financial investments. The sophistication of the generated content lends an air of legitimacy to these otherwise baseless investment proposals, preying on the desire for financial gain.


Why These Scams Work

AI empowers scammers to produce content that feels real and tailored to the individual recipient. The effectiveness of these new scams often relies on:

  • Urgent requests that demand immediate decisions, overwhelming the victim's ability to critically evaluate the situation.
  • Ongoing and seemingly natural conversations that build trust over time.
  • The ability of AI tools to generate convincing voices, images or writing.


How to Protect Yourself

  • Implement a unique "safe word" or phrase within your family or workplace. This word should be used in urgent or unusual communication to verify the authenticity of a request. If the person on the other end doesn't know or use the safe word, it's a strong indicator of a scam.
  • Always verify unexpected requests or messages by contacting the person or organization through a known phone number or official channel, or by visiting their official website by typing the URL directly into your browser, rather than replying to the suspicious message or using contact details provided within it.
  • Be cautious of messages that demand immediate action or payment, especially through unconventional methods like gift cards, cryptocurrency or wire transfers.
  • Manage your digital footprint. Limit the amount of personal information shared publicly online.
  • Pay attention to small inconsistencies, such as unusual wording, payment methods or requests that seem out of character.
  • Use strong, unique passwords for all accounts and enable multifactor authentication wherever possible.


What to Do if You Are Targeted

  • Stop responding to the sender immediately.
  • Contact your bank or financial institution if any financial information is shared or if you suspect unauthorized transactions.
  • Report the incident according to your organization’s reporting procedures or to appropriate authorities. For personal devices or accounts, file a complaint online with the FBI Internet Crime Complaint Center (IC3).
  • Update passwords and enable multifactor authentication where possible.

The proliferation of AI scams is a significant concern, driven by the accessibility and anonymity of the tools used to create them, and AI-enabled fraud continues to grow rapidly. Staying informed about the evolving tactics behind these scams is one of the best preventative measures you can take to stay safe.


Cyber Habit of the Month: Verify Before You Act

Scammers often rely on urgency to push their victims into quick decisions. If you receive a call, text or email requesting money, sensitive information or immediate action, pause and verify the message or request. Contact the purported person, company or organization using a trusted phone number or official website rather than replying directly to the message. Taking an extra minute to confirm the legitimacy of a request can prevent costly mistakes.


Additional Resources