Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people’s voices, cautioning that millions of everyday online users could be vulnerable to this increasingly sophisticated fraud.
These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.
According to a survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed didn’t know such scams existed, leaving them open to deception. The survey also found that 8% of people would send money even if a phone call seemed suspicious, simply because the voice sounded familiar.
People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.
See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices
Preventive Measures By Starling Bank
Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of a call, providing an extra layer of security. The bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately so it cannot be intercepted by fraudsters.
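As a loose analogy, the safe-phrase check works much like a shared password: both parties agree on a random phrase in advance, and the person receiving the call confirms it before acting on any request. A minimal sketch in Python, where the phrase, the function names, and the whitespace-tolerant matching are all invented for illustration rather than anything Starling Bank prescribes:

```python
import secrets

# Invented example phrase -- in practice, pick your own random phrase
# and never share it in writing.
AGREED_PHRASE = "purple otter sings at dawn"

def normalize(phrase: str) -> str:
    """Lower-case and collapse whitespace so minor differences still match."""
    return " ".join(phrase.lower().split())

def verify_caller(spoken_phrase: str) -> bool:
    """Return True only if the caller's phrase matches the agreed one.

    secrets.compare_digest does a constant-time comparison, a habit
    borrowed from password checking.
    """
    return secrets.compare_digest(
        normalize(spoken_phrase).encode(),
        normalize(AGREED_PHRASE).encode(),
    )

print(verify_caller("Purple  otter sings at dawn"))  # True
print(verify_caller("purple otter sings at dusk"))   # False
```

The point of the sketch is simply that verification relies on a secret the fraudster cannot clone: an AI can copy a voice from a few seconds of audio, but it cannot know a phrase that was never posted online.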
The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, the potential for abuse grows, from financial fraud to spreading misinformation.
Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.
Starling Bank’s advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it’s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.