Cybercriminals continue to develop new methods and technologies to commit fraud by gaining access to confidential information and hacking accounts. Beyond using traditional cybercrime methods – like phishing emails and malware – cybercriminals are exploiting trust, human error and companies’ vulnerabilities. They are increasingly leveraging voice technologies as a new way to commit fraud and infiltrate organizations – and the rise of artificial intelligence (AI) is only helping them become more effective in their efforts.
AI is blurring the lines between what’s real and what isn’t. In addition to the threats it poses to a society navigating the digital world, it has the potential to bring significant reputational, financial and security risks to companies. Particularly concerning is the development of deepfake audio, which is allowing cybercriminals to execute elaborate business email compromise (BEC) scams via phone or video. Here we highlight two AI-related threats – deepfake audio and a voice-based variant of phishing called vishing – along with best practices to mitigate the risk of an incident.
Deepfake audio (also known as voice swapping) uses a machine-learning algorithm to mimic the voice of a real person on the phone or in a video. For example, a cybercriminal can fake the voice of a senior executive to convince employees that they’re speaking with someone in a position of authority and must carry out that person’s instructions, such as facilitating a money transfer or sharing sensitive information.
The primary use of deepfake audio is to enhance a business email compromise (BEC) scam by falsely authorizing payments. In a BEC scam, criminals send an email message that appears to come from a known source making a legitimate request.
Deepfake audio is one of the most advanced new applications of AI underpinning cyberattacks. The attacker creates a voice model by feeding voice samples of the mimicked individual into a computer algorithm; these samples are often collected from public sources such as speeches, presentations, corporate videos and interviews. Once a sufficiently robust voice profile is built, it can be paired with specialized text-to-speech software that reads scripted messages aloud in the cloned voice. These profiles can take considerable time and resources to create, and the most advanced hackers can create one by incorporating up to 20 minutes of audio.
Vishing is the criminal practice of using social engineering over the telephone to gain access to – or trick people into providing – private, personal or financial information. The cybercriminal makes a phone call or leaves a voice message purporting to be from a reputable company, often with the promise of a financial reward, in order to induce individuals to reveal details such as bank account and credit card numbers. Vishing uses the same techniques as phishing emails but is carried out over the phone instead.
"Never assume that what appears to be an internal message or caller is legitimate, especially if the caller is asking for sensitive information."
As technologies continue to advance and allow cybercriminals to use impersonation and AI to commit fraud, companies must prioritize best practices to reduce the risk of falling victim to these schemes. Organizations will need to educate their workforce to be on the lookout for signs of deepfake audio and vishing, among other cyber threats.
At U.S. Bank, your privacy and security are our priority. We’re constantly enhancing our systems to keep your data secure and provide seamless technology experiences. Learn more about protecting your organization with our fraud prevention checklist or contact U.S. Bank for help with your fraud prevention plan.