Blog post -
Seeing is disbelieving: How deepfake technology is advancing
Matt Lewis, Group Commercial Director, NCC Group
In early 2020, NCC Group partnered with University College London (UCL) on a research project investigating the capabilities of various free and open source deepfake toolkits. We sought to answer two questions: how easy is it to create convincing deepfakes with this technology, and what methods might we employ to detect them? At the time, the visual quality of our attempts was pretty good – but still fairly noticeable to the human eye. On the detection side, we identified digital watermarking as a possible contender, albeit with its own limitations in real-world application.
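To illustrate the watermarking idea (this is a hypothetical toy scheme for illustration only, not the approach from our research): a signature derived from a secret key can be embedded into the least significant bits of an image's pixels, so that later regeneration or manipulation of the image destroys the mark and verification fails.

```python
import hashlib

def embed_watermark(pixels, key):
    """Embed a key-derived bit pattern into pixel LSBs (toy scheme)."""
    digest = hashlib.sha256(key).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    marked = list(pixels)
    for i, bit in enumerate(bits):
        if i >= len(marked):
            break
        # Clear the least significant bit, then set it to the signature bit.
        marked[i] = (marked[i] & ~1) | bit
    return marked

def verify_watermark(pixels, key):
    """Check that the pixel LSBs still carry the key-derived pattern."""
    digest = hashlib.sha256(key).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    n = min(len(bits), len(pixels))
    return all((pixels[i] & 1) == bits[i] for i in range(n))
```

The limitation noted above shows up immediately: the mark only proves an image was untouched since marking, and even benign re-encoding can erase it, which is one reason watermarking alone is hard to rely on in the real world.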
It’s been over two years since that research, and deepfake technology has since become more sophisticated and more readily available. It’s also a mainstay on TV: the BBC recently released a second series of The Capture, a UK thriller exploring the use of deepfakes for political deception and criminal framing. So, let’s revisit today’s deepfake landscape, from technological developments to societal implications.
Commoditization of deepfakes has advanced at pace. There are now many apps – often novelties – that let users create and play around with deepfakes. Leading open source deepfake toolkits such as DeepFaceLab have also continued to improve, rendering the technology more performant and realistic.
That’s not to say deepfakes are only being used frivolously. Alarmingly, we’ve seen recent uses in disinformation contexts – notably a deepfake video of Ukrainian President Volodymyr Zelensky which was circulated on social media and appeared to have him tell his soldiers to lay down their weapons and surrender in their fight against Russia. So, The Capture’s plotline of a politician battling against a manipulated video that makes it appear they are endorsing plans they in fact oppose isn’t so far from real life after all.
Another sinister and sadly fatal case came earlier this year, with a young Egyptian girl taking her own life after she was allegedly blackmailed with deepfake pornography created using her imagery.
As well as the technology being more readily available, we’re also seeing more real-time generation of deepfakes. Methods include hooking into a computer’s webcam to allow real-time impersonation of others, for example on video conferencing calls. Again, open source solutions already exist in this space, such as the Deepfake Offensive Toolkit and DeepFaceLive.
In a business context, Forbes recently reported on warnings from the FBI about deepfake fraud being used in a new type of attack: Business Identity Compromise (BIC). This sees deepfake tools used to create "synthetic corporate personas" or imitate existing employees, with significant financial and reputational impacts on victim organizations. In 2021, we saw this used to trick European MPs, who were targeted by deepfake video calls imitating Russian opposition figures.
The need for deepfake detection and blocking
So deepfakes used in harmful or offensive contexts are very much here and likely to stay, actively used by all manner of threat actors, from disgruntled lovers to organized crime to hostile nation states. In 2021, the European Parliamentary Research Service (EPRS) released a study on tackling deepfakes in European policy. The study summarized the categories of risk associated with deepfakes, spanning psychological, financial and societal harm: from extortion, bullying and defamation, to stock-price manipulation, election interference and damage to national security.
There is an urgent need for deepfake detection and blocking. Various detection approaches are emerging from academia, such as using active illumination of a person’s face during a video call, and detecting self-blended images. It can, however, take time for new research to find its way into commercial and mainstream use; in the meantime, deepfake technology may evolve to bypass detection mechanisms, in the classic cat-and-mouse game between detection and evasion that is all too familiar in the cyber security world.
Propagation of deepfakes relies heavily on social media sharing. Given this, social media platforms ought to provide a level of deepfake detection and blocking. In this regard, Facebook (Meta AI) has been researching methods to detect deepfakes and identify where they originated.
When it comes to Business Identity Compromise (BIC), in addition to detection, businesses may also need to consider changes to current approval and workflow processes. For example, where a process allows execution of a critical operation or high-value financial transaction, requiring a two-person rule (if performed over video conferencing) could make a deepfake attack much harder to pull off. Similarly, requiring in-person physical presence – particularly now that most global pandemic restrictions are lifted – may be necessary to mitigate the risks in this domain.
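A two-person rule of this kind can be sketched as a simple policy check. The sketch below is purely illustrative – the threshold, field names and approval flow are assumptions, not a real workflow system – but it shows the core idea: a high-value request initiated over video conferencing cannot execute until two distinct people have signed off.

```python
from dataclasses import dataclass, field

# Hypothetical limit above which video-initiated requests need two approvers.
APPROVAL_THRESHOLD = 50_000

@dataclass
class Transaction:
    amount: int
    initiated_over_video: bool
    approvers: set = field(default_factory=set)

def approve(tx, employee_id):
    """Record a distinct human approver for this transaction."""
    tx.approvers.add(employee_id)

def may_execute(tx):
    """Apply the two-person rule to high-value video-initiated requests."""
    if tx.initiated_over_video and tx.amount >= APPROVAL_THRESHOLD:
        return len(tx.approvers) >= 2
    return len(tx.approvers) >= 1
```

The design point is that the second approval happens out-of-band from the (potentially deepfaked) video call, so an attacker who convincingly impersonates one employee still cannot complete the transaction alone.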
Legislation and regulation
While legislation and regulation around deepfake abuse won’t stop motivated attackers – or lead to prosecution where nation states are involved – they are still necessary to ensure that those who abuse the technology, and are discovered and identified doing so, can and will be prosecuted.
Despite the demand for legislation in this domain, concerns exist on a lack of progress, particularly across the EU and UK. The National Law Review writes: “In the UK, the answer is that English law is wholly inadequate at present to deal with deepfakes. The UK currently has no laws specifically targeting deepfakes and there is no ‘deepfake intellectual property right’ that could be invoked in a dispute. Similarly, the UK does not have a specific law protecting a person’s ‘image’ or ‘personality’.”
This creates circumstances where, in the UK, people must rely on a combination of different rights and laws that may not go far enough to protect those dealing with the malicious use of their image through deepfakes. As ever with rapidly advancing technology, we must ensure these advances do not outpace legislative and regulatory protections.
Deepfakes are here to stay – how do we ensure safe use?
Deepfakes are no longer mere plot mechanisms for TV dramas. The technology is here to stay and will continue to advance in ease of use, accessibility and realism. As such, it will continue to be abused by all manner of threat actors pursuing their respective motives.
We therefore need urgent and continued research on deepfake detection and blocking mechanisms, while legislation and regulation must step up the pace to curb abuse and, wherever possible, prosecute those who seek to misuse this technology.