Security Snippets: Deepfake video being used for social engineering

Increasingly available deepfake technology capable of impersonating employees is raising the level of social engineering risk.

Deepfake technology is increasingly being used against corporations to carry out social engineering attacks. Video calls in which one or more participants are actually threat actor-generated deepfakes are being used to convince employees to send fraudulent wire transfers or to further threat actors' other objectives.

Deepfakes often feature digital recreations of known people and places. They are traditionally thought of as standalone videos, but their use live on video calls is on the rise.

Although deepfake technology has existed for some time, it was not directed at corporations for social engineering purposes on a significant scale until recently. A number of attacker-focused tools have been released and have become increasingly easy to use. Dark web services can now produce realistic fake personas quickly and at low cost to a purchaser. And as the technology has matured, deepfakes have become increasingly difficult to spot.

These types of attacks have already cost companies significant sums. For example, a threat actor recently convinced an employee to wire $25.6 million to a fraudulent account by setting up a video call with that employee in which several participants were deepfakes of the chief financial officer and other executives.

This development is particularly problematic because video calls were previously regarded as one of the most reliable ways to verify the identity of the person you are communicating with and thereby counter social engineering threats. That technique is no longer as dependable.

It is important that people who work in finance, and particularly accounts payable, exercise extreme caution whenever they receive instructions to change a payee's bank account or to send money to an account they do not regularly pay. Methods such as calling the requestor at a phone number pulled from a reliable source (not from their email signature, which could be controlled by the threat actor), speaking with the person face-to-face, contacting them over instant messenger, or initiating a real-time video chat remain options for verification. But at this point only face-to-face communication is fully reliable, so companies may want to consider requiring that personnel verify the identity of the requestor through more than one communication channel when face-to-face confirmation is not feasible.


Authored by Nathan Salminen and Rachel Dalton.

Contacts
Nathan Salminen
Partner
Washington, D.C.
Rachel Dalton
Associate
Washington, D.C.


This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.

Attorney advertising. Prior results do not guarantee a similar outcome.