How AI Is Making Phishing and BEC Attacks Worse
AI, like many disruptive technologies, has sparked extensive discussion about its risks and rewards, and these discussions are particularly prevalent in cybersecurity. The most frequently discussed risks concern the future capabilities of AI; there is far less focus on the immediate risks, specifically how Large Language Models (LLMs) increase the likelihood of successful phishing and social engineering attacks. Two of the costliest and most damaging cyberattacks businesses face today are ransomware and business email compromise (BEC).
Ransomware payments globally reached $1.1 billion in 2023, according to Chainalysis, and that figure excludes the costs of data leakage and remediation. Even so, direct payments to ransomware actors are minor compared to the losses incurred from BEC attacks. Between October 2013 and December 2022, direct reports to the FBI’s Internet Crime Complaint Center (IC3) indicated nearly $51 billion lost to BEC globally, an average of about $5.67 billion per year. These figures are likely underestimates, since many international businesses do not report BEC losses to IC3.
IBM’s 2024 X-Force Threat Intelligence Index highlights that the primary attack vectors for cybercriminals, regardless of attack type, are phishing and the compromise and abuse of valid accounts. Phishing attacks are typically delivered via email, while valid accounts are often compromised through social engineering, which is also conducted largely over email.
LLMs, such as those behind ChatGPT, make phishing and social engineering easier for attackers. These models, a subset of Generative AI, are designed to generate text in response to user prompts. They are already highly proficient, making it challenging in some contexts to distinguish LLM-generated text from human writing.
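To illustrate how low the barrier has become, the sketch below produces a fluent, business-style message in a dozen lines of Python. It is a minimal sketch, assuming the OpenAI Python SDK (version 1 or later) and an API key in the environment; the model name and prompt are purely illustrative.

```python
# Minimal sketch: generating fluent business prose with an LLM.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set
# in the environment; the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Draft a short, polite email asking a colleague "
                   "to review the attached invoice before Friday.",
    }],
)
print(response.choices[0].message.content)
```

An attacker needs no writing skill at all: the prompt, not the prose, is the only input, and the same few lines can be looped over thousands of targets with personalised prompts.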
Traditional advice for detecting email phishing and social engineering attacks has emphasised examining the language used in emails (e.g., checking for correct spelling and grammar) and the source of the email. LLMs are making the first part of that advice obsolete, almost regardless of the language. In BEC attacks, the email comes from a trusted, yet compromised, address. Even the most vigilant individuals may respond to an email that arrives from a colleague’s genuine address and appears to be written in that colleague’s style.
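Checking the source therefore carries more weight than checking the prose. The sketch below is a minimal example using only Python’s standard email library: it reads SPF/DKIM/DMARC verdicts from an Authentication-Results header (RFC 8601), with hypothetical addresses and header values. Note what it cannot do: such checks catch spoofed senders, but a genuinely compromised account passes them all, which is exactly what makes BEC so dangerous.

```python
# Minimal sketch: extracting SPF/DKIM/DMARC verdicts from an email's
# Authentication-Results header (RFC 8601). Standard library only;
# the message and header values below are hypothetical. In practice
# the receiving mail server populates these headers.
from email import message_from_string

def auth_results(raw_email: str) -> dict:
    """Return {'spf': ..., 'dkim': ..., 'dmarc': ...} verdicts found in headers."""
    msg = message_from_string(raw_email)
    verdicts = {}
    for header in msg.get_all("Authentication-Results", []):
        # Look for fragments like "dmarc=pass" or "spf=fail"
        for token in header.replace(";", " ").split():
            for mechanism in ("spf", "dkim", "dmarc"):
                if token.startswith(mechanism + "="):
                    verdicts[mechanism] = token.split("=", 1)[1]
    return verdicts

raw = """From: ceo@example.com
To: finance@example.com
Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=fail
Subject: Urgent wire transfer

Please process the attached invoice today.
"""
print(auth_results(raw))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'fail'}
```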
Human judgement alone can no longer reliably separate machine-written lures from legitimate mail, so addressing these challenges requires technology-driven solutions, such as email authentication, anomaly detection and out-of-band verification of payment changes. Vigilant and innovative cybersecurity practices are crucial to counter these evolving threats.
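As a small example of such a control, the sketch below flags one common BEC pattern: a Reply-To domain that differs from the From domain, which attackers use to divert a hijacked thread to infrastructure they control. It is a hypothetical heuristic built on Python’s standard library, with illustrative addresses, not a substitute for a real email security gateway.

```python
# Minimal sketch: flag messages whose Reply-To domain differs from the
# From domain, a common BEC reply-chain hijack pattern. Standard library
# only; the heuristic and the addresses below are hypothetical.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:  # no Reply-To header, nothing to compare
        return False
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

raw = """From: ceo@example.com
Reply-To: ceo@examp1e-mail.com
Subject: Updated banking details

Please use the new account for this month's payment.
"""
print(reply_to_mismatch(raw))  # True -> worth flagging for human review
```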