AI Spotlight: How Deepfaking Threatens Society
What are deepfakes and what can we do about them?
In politics, ‘truth’ is a slippery concept – which is another way of saying that lying is a tried, tested, and widely used tactic. From innocent slip-ups to flat-out lies, from history’s dictators to today’s elected officials, we’ve seen deception come in the form of misleading news articles, words spoken from podiums, tweets, and doctored photographs. But soon we could be facing a new, even more powerful enemy of truth: video footage that’s completely fabricated but looks completely real.
We’re talking about deepfakes, which aren’t to be confused with your regular, run-of-the-mill doctored footage. Deepfaking goes beyond clever editing that skews narratives – which you might remember from a recent high-profile White House case. Instead, we’re talking about footage and speech that can be completely original, totally made up, and very believable. Put simply: imagine if someone could make their political rivals say whatever they wanted. It’s one of today’s biggest tech discussions, so we decided to take a deep dive.

Deepfakes have already arrived
So, what are deepfakes? Deepfaking is a bit like automated Photoshop for video – the technology analyses thousands of frames of a person’s face to either transpose that face onto a suitable impersonator, or to create an entirely new image. It’s enabled by a relatively new form of artificial intelligence known as a generative adversarial network (GAN).
Deepfaking is a two-part artificial intelligence process: “generation” and “discrimination”. The generator creates the fake video frames, and the discriminator judges how believable they are. The feedback loop between the two means the system is consistently improving on itself: the generator learns to fool the discriminator, and the discriminator learns to catch the generator.
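To make that feedback loop concrete, here’s a minimal sketch of the generator/discriminator training cycle in PyTorch. To keep it self-contained it learns to mimic simple 2-D data rather than video frames, and every detail here (network sizes, learning rates, step counts) is purely illustrative – not what any real deepfake system uses:

```python
# Minimal sketch of the generator/discriminator ("GAN") loop described above.
# Illustrative only: real deepfake systems train on thousands of face frames;
# here the "real data" is just points from a 2-D Gaussian so the example runs anywhere.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake "sample".
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how believable a sample is (real vs. fake).
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])  # "real" data
    fake = generator(torch.randn(64, 8))                        # generated data

    # 1. The discriminator learns to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. The generator learns to fool the (just-improved) discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Real deepfake systems run exactly this tug-of-war, just with much larger convolutional networks over face images instead of tiny two-layer models.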
Deepfake technology itself is surprisingly accessible and easy to find on the internet. If you want to create your own deepfake video, though, you’ll need a lot of processing power and patience as you ‘train’ the AI. And video alone gets you no fake audio – for that you’d need an accurate impersonator or a separate speech-synthesis model.
It all started getting major attention in 2018, when a video of Obama calling Trump a dipshit went viral – and not just because it’s believable. “Obama” (impersonated by Jordan Peele) breaks the fourth wall and tells you it’s a deepfake from the very first sentence. If he hadn’t, though, it would be scarily convincing. The video has a strong message: deepfake videos look, sound, and feel real – and that’s bad news.
So what are the actual dangers of deepfaking? Well, there’s the obvious fear that an entire population could be fooled by a sufficiently convincing deepfake video. A political operative could fabricate footage so defamatory, and so convincing, that the target would have no real chance of clearing their name.
But the fear isn’t just about deepfakes that never get exposed. Say, hypothetically, that a video has actually been proven to be a deepfake. The damage may already have been done: fake news travels six times faster than real news, and as a Conservative party newsletter in the UK put it, “a lie can go round the whole world before the truth can get its boots on.” Even if footage can be proven authentic or fake a few weeks after being posted, that window of time could create a public divide too big to fix. Eileen Donaghue, Director for the Transatlantic Commission on Election Integrity, summarised the problem well: “Once a political narrative is shifted, it’s almost impossible to bring it back to its original trajectory.” Deepfakes can cause significant damage, fast.
How has deepfaking affected world politics?
If you search for deepfaking today, you’ll find a lot of articles from 2019 and early 2020 theorising about its potential effects on the 2020 U.S. presidential election. Fears about deepfaking’s potentially destructive effects were felt in the highest cybersecurity positions, but those fears thankfully never materialised. That said, we got a premonition of a deepfake-prevalent future: in the run-up to the election, the White House posted a fairly believable doctored video of House Speaker Nancy Pelosi. The edit was low-quality – crude manipulation rather than a true deepfake, far from the tech’s full potential – and it probably wasn’t that impactful in the grand scheme of things, but a lot of people were fooled. The repercussions could have been far worse had it been a genuine, high-quality deepfake.
There was also a strange scandal that rocked the African nation of Gabon. In early 2019, the country’s President, Ali Bongo, had been in poor health and questions were being raised – why hadn’t he been heard from in so long? It was promised that he’d deliver the traditional New Year’s speech – but when he did, there was widespread speculation that the footage was a deepfake. Proponents of the theory argued that the establishment had a motive: a deepfake could give the illusion of a healthy, fit president. To this day there’s no consensus among experts or the public about whether or not the video was a deepfake. The damage to public trust is hard to measure, but shortly afterwards the military launched an attempted coup, citing the President’s appearance in the video as part of its justification. Even so, the episode highlights a core issue with deepfakes: they can hijack public focus – the mere suspicion of a fake is enough.
It seems that the world has been lucky for the most part, though, with a major confirmed deepfake scandal yet to happen.

How can we tackle deepfakes?
So. What’s the solution? How can countries worldwide protect their politicians and other persons of interest against fake footage? Can’t governments just legislate against deepfaking? Well, yes and no – some countries, such as the US, are legislating against deepfaking, but for the most part the legislation is ineffective. It’ll be a fair while before robust anti-deepfake legislation is seen worldwide. So instead of looking at legislative solutions, let’s look at what can be done right now.
Does that mean governments are totally powerless? Not at all – governments can educate people. The UK’s Centre for Data Ethics and Innovation (CDEI) suggests governments worldwide should educate their populations on the existence of deepfakes, so that when a deepfake scandal does arise, the public is already familiar with the concept.
For the most part, though, funding deepfake detection technology is any government’s most immediate and practical action. Deepfakes get more sophisticated – and harder to detect – each year, so detection technology needs to stay one step ahead.
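What does ‘detection technology’ actually look like? One common approach scores a video frame by frame with a classifier trained to tell real faces from generated ones, then aggregates the scores. Here’s a minimal sketch of that idea in Python – the classifier below is an untrained, hypothetical stand-in (as are the file name and parameter choices); a real tool would load a model trained on large datasets of authentic and fabricated footage:

```python
# Sketch of how frame-based deepfake detectors typically work: sample frames
# from a video, score each with a trained real-vs-fake classifier, and average.
# The model here is an untrained placeholder -- purely for illustration.
import cv2                     # pip install opencv-python
import torch
import torch.nn as nn

model = nn.Sequential(         # hypothetical stand-in classifier
    nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid(),
)
model.eval()

def fake_probability(video_path: str, every_n: int = 30) -> float:
    """Average per-frame 'fake' score over sampled frames of a video file."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:                                     # end of video
            break
        if index % every_n == 0:                       # sample 1 frame in N
            frame = cv2.resize(frame, (224, 224))
            tensor = torch.from_numpy(frame).permute(2, 0, 1).float() / 255
            with torch.no_grad():
                scores.append(model(tensor.unsqueeze(0)).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

# e.g. fake_probability("clip.mp4") -> a score near 1.0 would suggest a fake
```

The hard part, of course, isn’t this pipeline – it’s training a classifier that keeps working as generators improve, which is exactly why sustained funding matters.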
Moving away from governments for a moment, tech companies themselves, from social media platforms to other big players, also carry a major social responsibility for tackling deepfakes. Social media companies have a duty to get deepfakes off their own platforms, while companies like Microsoft are developing their own detection technology:
- Continuing a trend kickstarted by companies like Deeptrace and Truepic, Microsoft have developed a leading deepfake detection technology. They’ve admitted that deepfakes will become ever harder to detect with time, and they intend to keep on the technology’s heels.
- Facebook, long fertile ground for misinformation, pledged in early 2020 “to work with academia, government, and businesses to expose the people behind deepfakes.”
- Twitter followed closely behind with their pledge to remove deepfakes that deliberately distort reality if they’re likely “to cause significant harm”. Their definitions are slippery at best, but it’s a good start.
- YouTube has faced accusations of being a hub for fake news for years, but it has at least committed to removing deepfakes.
And what can you do if you suspect a video might be a deepfake but there hasn’t been any clarification from the platform you found it on? Well, there is some technology out there for regular people too. If you’re interested, check out Amber, which offers a deepfake-detection app for Mac and iPhone users. There’s also deepware, which is currently in beta. It may soon be common practice for people to run deepfake detection tech from their own homes! And whenever a video is claimed to be a deepfake, make sure the claim comes from a reliable source – it’s always a good idea to verify with multiple sources.
If social media companies stay on the ball and if deepfake detection technology stays up-to-date, we shouldn’t ever doubt whether a politician really said what they appeared to say. That means we’ll only have to verify the claims our politicians come up with, not whether or not they actually said them!