How To Avoid Scams in an AI-Driven World
Have you ever come across a post on social media that was composed a little too well? Maybe you’ve received a text or email with a compelling argument that tried to get you to click on a link at the end.
These days, you can never be sure whether the content you’re engaging with was made by a person or by a machine. That’s the power of generative AI.
While this technology has opened the door to many productive possibilities, it is also being used to scam people and steal from them. The use of AI lends a new twist to a classic fraud tactic.
Let’s talk about AI-powered fraud and what you can do to protect yourself.
A Brief Introduction to Generative AI
What is generative AI? IBM defines it as a technology that utilizes “deep-learning models” to create high-quality text, images, and other content based on the data they were trained on.
Today, generative AI most commonly shows up in chatbots and apps that create essays, pictures, and illustrations, all as if a human had made them.
You may be interfacing with AI apps right now. For instance, several online platforms have their own models that allow users to create content virtually instantly. Examples of these AI technologies include:
- Google’s search results, which summarize top hits to give instant answers.
- ChatGPT, a chatbot capable of writing and carrying on conversations.
- Meta AI, a content generation suite used by the Meta ecosystem (Facebook, Instagram, etc.)
- Grok, an AI assistant on the social media platform X.
Generative AI can do more than converse and compose poetry. It’s also being used to make professional-grade art, lifelike photography, and even realistic videos. And it’s becoming increasingly difficult for humans to tell whether something was made by such technology.
The Dangers of Generative AI
You may have seen on your social media feeds how friends and family members have used generative AI to make impressive images or help them compose a good post. It’s crucial to remember that while AI may be entertaining or make life easier, it also has the potential to cause significant harm.
Generative AI can create responses, articles, stories, and voice clips that appear to be written or spoken by humans.
Consequently, fraudsters may utilize generative AI technology to deceive people into falling for scams. The National Council on Aging provides a comprehensive write-up on AI scams, including tips on how to recognize common red flags. By using AI, scammers can:
- Craft convincing messages or emails for phishing.
- Use voice cloning to mimic a celebrity, authority figure, or acquaintance.
- Create posts with images or videos to entice users to engage with their content.
As you can see, fraudsters now have a powerful tool they can use across multiple channels, making their scams even more convincing.
While this is worrying, you should also remember that you still have the power to stop scammers in their tracks. Let’s cover what you can do to thwart AI-assisted fraud.
How To Stop AI-Powered Scams
Do you remember what makes an imposter scam work? A scammer impersonates an authority figure, family member, or friend to gain a victim’s trust, thereby persuading the victim to do what they ask.
Seen this way, AI-assisted fraud is simply a new iteration of the imposter scam. As long as you know how to spot the tactics of an imposter, you can protect yourself, even if they’re using fancy new tools.
Refresh your knowledge with these tips:
- Exercise healthy skepticism. Requests or threats that feel out of the ordinary are one of the first signs of a scam. Do not rush to respond to these messages.
- Verify the sources contacting you. If you think something is amiss, reach out through an established channel before proceeding. If the sender claims to be a family member or a representative of an organization, call that family member or the organization’s customer service center directly (never reply to the message or use the contact information it provides!).
- Be familiar with how official channels operate. Established businesses, organizations, and government agencies have specific protocols for contacting others. Generally speaking, they won’t call, message, or email you and ask for personal information or account credentials.
- Avoid the impulse to engage with posts immediately. Many social media posts employ tactics to evoke emotional responses. If a post contains links, verify that the user is legitimate and carefully vet the links they have shared. The post could be a setup for a phishing site.
- Avoid oversharing. AI models rely on the data they’re trained on for their generative capabilities. Minimize how much and how often you share on social media so your writing and appearance can’t be scraped by these technologies as easily.
And as always, take a moment to reinforce your online account security:
- Update your passwords and use a password manager.
- Enable multi-factor authentication.
- Update security software.
- Back up and secure your data.
When you know and practice the fundamentals of online security, you have the power to thwart fraud. Brush up on the basics, maintain a sharp eye, and you’ll have a better chance at shutting down AI-powered scams.
First Florida is your partner in staying SAFE. Visit our Scam and Fraud Education page to learn more about protecting your account information.