Capable of investigating targets and producing convincing "deepfakes", artificial intelligence (AI) is a boon for launching social engineering attacks. It enables perpetrators to manipulate their targets more effectively, which in turn requires organisations to pay closer attention to the human factor in their cybersecurity.

"The End of Truth. Politics, Love, Music: How We Can Be Deceived by Artificial Intelligence," read the front-page headline of the German magazine Der Spiegel in its issue of 8 July 2023. And rightly so. With the emergence of "deepfake" images, such as those depicting the Pope in a nightclub or Barack Obama insulting Donald Trump, the line between fiction and reality has never seemed so blurred.

In a social engineering cyberattack, where the attacker exploits social interaction to take advantage of the target's vulnerabilities, this AI-enabled confusion between fiction and reality can be used to deceive people more effectively. The victim is then more likely to behave as the perpetrator intends, allowing the latter to compromise an information system or carry out criminal activity such as extortion.

AI can be misused in this way at every stage of a social engineering cyberattack: from researching the target, to the approach phase, to the execution of the attack itself. Given AI's growing sophistication, this kind of malicious use also obliges organisations to strengthen the human element in their cybersecurity.

AI amplifies the leverage effect to deceive more effectively

The famous hacker Kevin Mitnick, who passed away in July 2023, theorised about the leverage effect in his book "The Art of Deception". This principle refers to increasing the capacity to exploit human flaws by collecting information about the target. According to a Europol report entitled "Malicious Uses and Abuses of Artificial Intelligence", certain AI programmes amplify this very leverage effect by easily gathering a wealth of information about the target in order to deceive more effectively.

One such tool, for example, is called "Eagle Eyes". From a photograph of a target, it can find the social media accounts associated with that person using a facial recognition algorithm. Another programme, "Raven", is promoted on specialist forums and is used to collect a great deal of information about an organisation's employees.

Given an organisation's name, the programme finds all possible matches for its employees on LinkedIn and extracts their data in order to deduce their electronic contact details. The Lusha extension, which can retrieve e-mail addresses and telephone numbers through LinkedIn, Twitter (now "X"), Gmail or Salesforce accounts, can be used alongside Raven.

Malicious use of information harvested in this way by AI programmes is a fearsome prospect. Imagine a cyberattacker who wants to compromise an organisation's information system. First, they use "Eagle Eyes" to identify the Facebook account of one of the company directors, a discreet account registered under a pseudonym. Using an account created for this precise purpose, the attacker manages to become "Facebook friends" with the director. By analysing the director's profile, the attacker discovers that the director has a daughter, a young student. The attacker sends her a friend request, which she accepts. In one of her posts, she writes in a comment to one of her friends: "The estate agency just accepted my application, and my dad has said he'll be my guarantor. So happy!"

Thanks to the Raven tool, the cyberattacker has the director's work e-mail address. The attacker then creates a new e-mail address purporting to be that of an estate agency, and sends the organisation director an e-mail explaining that he needs to sign the attached document "to finalise his daughter's apartment rental contract". The director opens the e-mail and clicks on the attachment… which contains malware.

There is no doubt that, as these kinds of AI programmes continue to develop, social engineering cyberattacks will become increasingly personalised and targeted. Cyberattackers will thus be able to fine-tune their assaults, increasing the chances of convincing victims that their intentions are genuine when they are in fact designed to deceive.

Ever more sophisticated trickery

In early 2020, as revealed by Forbes magazine, a branch manager of a Japanese company received a telephone call from a man whose voice sounded exactly like that of the company director. The voice asked the branch manager to transfer funds totalling $35 million, as the company was about to make an acquisition. The branch manager, believing the request to be genuine because it came from a voice he knew very well, began making the transfers. Unfortunately for him, he had fallen victim to voice cloning.

Today, many machine learning programmes enable cyberattackers to clone a voice and use it against their targets. For example, certain specialist forums promote a tool called SV2TTS, which can generate spoken content from text using just a few seconds of a voice recording. Here again, AI tools that easily gather information about the target are invaluable, since they make it simple, for example, to clone the voice of someone close to the victim.

In addition, an audio "deepfake" can be combined with a video "deepfake" using machine learning programmes such as DeepFaceLab. In northern China, for instance, a scammer recently convinced a victim to send him 4.3 million yuan (around €600,000) by impersonating one of the victim's friends in an AI-generated video call, according to a report by Tom's Guide.

Deepfakes can also be effective for extracting sensitive information in phishing attacks. Caution is warranted: as noted by VMware in its 2022 annual report, 66% of the 125 cybersecurity professionals interviewed say they have witnessed cyberattacks that make use of "deepfake" technology, a figure 13% higher than in 2021. These attacks employ video more often than audio (58% vs. 42%), and the majority are carried out via e-mail (78%). This upward trend is highly likely to continue as the AI tools for creating "deepfakes" become more widely accessible.

Lastly, text-generating AI tools are also widely used by cyberattackers to deceive their victims more effectively. A recent study by Darktrace, entitled "Generative AI: Impact on E-mail Cyber-Attacks," observed "a 135% increase in novel social engineering attacks across thousands of active Darktrace/Email customers from January to February 2023, corresponding to the widespread adoption of ChatGPT." And it is no wonder: according to the study, a scam e-mail created by ChatGPT contains no spelling or grammar mistakes, so victims often perceive it as having no malicious intent.

Reinforcing the human factor in cybersecurity

Even though AI brings huge technological advances to organisations' cybersecurity, with the potential to replace many human tasks, it simultaneously creates a need to take greater account of the human factor. Indeed, as social engineering becomes more sophisticated, notably through the construction of immersive architectures of fictional data made possible by "deepfakes", these advances call for a real "cognitive turning point in cybersecurity," explains Bruno Teboul, Director and Founder of Neurocyber, a company that specialises in combating cybercrime by means of cognitive neuroscience.

To achieve this cognitive turning point, it is essential to factor in the vulnerabilities of the human mind, such as cognitive biases, so that these can be "diagnosed and put to the test using psychological testing," Bruno Teboul continues. According to Gartner, this shift looks to be one of the major emerging trends: "By 2027, 50% of large enterprise CISOs will have adopted human-centric security design practices," partly due to "AI-based fraud," according to the company's "Gartner Predicts 2023" report.
