Deepfake: The End of Video Authenticity?

Posted on 27 October 2019

Seeing is believing… until it isn’t. The latest AI tech is being used to create convincing duplicates of public figures that can do and say anything on video. Here’s what you need to know…


Deepfakes take the phenomenon of fake news to a whole new level. Because they are most often seen in comedy and entertainment, many consider them a harmless addition to their online experience. However, by turning AI towards fabricating incriminating footage and ruining the reputations of those we trust, deepfakes have the potential to damage politics, public trust, and democracy in a big way.


What Are Deepfakes?

Coined in 2017, the term “deepfake” refers to a video created using a neural network, a kind of machine-learning model that can map the likeness of one person onto the movements of another. Think Mission: Impossible, where a questionable character finishes a shady transaction and hurries to the bathroom, only to rip off a mask and reveal it was Tom Cruise all along.

Deepfakes are the digital version of this trick: sophisticated artificial intelligence superimposes the face of a politician, celebrity, or trusted public figure onto the head of an actor, creating custom footage that is almost impossible to distinguish from the real thing.


How Are They Created?

Rather than requiring countless hours of manual editing, deepfakes rely on deep learning: neural networks trained on massive datasets of footage. The system analyses video of the source face to learn what it looks like from many different angles, then transposes that face onto an actor, as if it were a digital mask.
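
To make that concrete, here is a minimal sketch (in PyTorch) of the shared-encoder, two-decoder autoencoder design popularised by the original face-swap tools: one encoder learns a common facial representation, and each identity gets its own decoder. All layer sizes and names are illustrative assumptions, not a production pipeline.

```python
# Sketch of the classic face-swap autoencoder: a shared encoder plus
# one decoder per identity. Sizes are illustrative, not a real model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's face
decoder_b = Decoder()  # trained to reconstruct person B's face

# After training, the swap is: encode a frame of person A, then decode
# it with person B's decoder, rendering B's face on A's movements.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```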

Convincing deepfake videos are created using two competing yet complementary AI systems: a generator and a discriminator. The generator creates fake video clips and tries to fool the discriminator; every time the discriminator correctly spots a fake, it tells the generator what gave the forgery away. Together, the pair is called a Generative Adversarial Network (GAN).

As the generator gets better at its job, the discriminator improves in tandem, creating a feedback loop that produces better and better forgeries. If the generator can fool a highly sophisticated discriminator, it can fool you too.
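
To make the contest concrete, here is a minimal, hedged sketch of a GAN training loop in PyTorch. It works on flat toy vectors rather than video frames, and every size and name is illustrative rather than part of any real deepfake tool.

```python
# Minimal GAN training loop on toy data: the generator learns to make
# forgeries, the discriminator learns to catch them, and each pushes
# the other to improve.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                              nn.Linear(128, 1))  # real-vs-fake logit

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0         # stand-in "real" data
    fake = generator(torch.randn(32, latent_dim))  # the generator's forgeries

    # 1) Train the discriminator to label real as 1 and fake as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator: its forgeries
    #    should be scored as "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```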


How Could They Influence Us?

Fake news has been the phrase on everybody’s lips in the run-up to the 2020 US presidential election, with social media heavyweights like Facebook putting further identity checks in place for political campaigners who want to run ads, to ensure accuracy and honest practice. While checking the source has become standard practice, and “don’t believe what you read on the internet” has become a common phrase, should we now stop believing our eyes too?

It wouldn’t be an overreaction to say that if this tech is convincingly used to defame politicians and trusted representatives, democracy itself could be threatened. Misinformation can be created and spread at breakneck speed via social media and even official news outlets could be fooled.

In a post-truth world where being first to report the most sensational (and clickable) headlines matters more than verifying facts, reputation-destroying falsehoods could spread like wildfire. A doctored video could be viewed millions of times before it is debunked, ending careers, shifting the political narrative, and, in extreme cases, even swinging elections.

While one danger deepfakes pose is that people will take these doctored videos at face value, another is that people will lose faith in the validity of any video content altogether and tune out entirely. Will “that wasn’t me, it was a deepfake” become the get-out-of-jail-free card for every disgraced politician?


What Can Be Done?

Deepfakes are a growing threat that politicians are taking seriously, with lawmakers in the US, UK, and China all looking to regulate the generation of deepfakes and criminalise malicious use. In addition to this, several tech companies are trying to tackle the worrying phenomenon.

Faculty is a UK-based start-up that generates thousands of deepfakes using all the main deepfake algorithms in circulation. Its aim is to compile a library that can be used to train a machine-learning detection system to accurately distinguish real videos from fakes, and to adapt as quickly as new deepfake techniques emerge.
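
To give a flavour of how such a detector might be trained, here is a hedged sketch: fine-tuning a pretrained image classifier on frames labelled real or fake. This is a generic illustration of the approach, not Faculty’s actual system, and the stand-in data below is random.

```python
# Generic deepfake-detector sketch: repurpose a pretrained CNN as a
# two-class (real vs fake) classifier. Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; in practice these would be cropped face frames drawn
# from a large library of real and generated videos.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
loss = loss_fn(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```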

Meanwhile, Amber, a New York-based company, is tackling the problem from the opposite direction. Its work involves embedding software in smartphone cameras to act as a kind of watermark, or truth layer, which can be used to verify a video’s authenticity in perpetuity.
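
The underlying idea can be sketched in a few lines: hash the footage at capture time and sign the digest, so that any later edit breaks the signature. The code below is a toy illustration of that general concept, not Amber’s actual product; the key and function names are hypothetical.

```python
# Toy "truth layer": sign a video's hash at capture time, verify later.
# The key and names here are hypothetical, for illustration only.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-held-by-the-capture-device"

def sign_video(video_bytes: bytes) -> str:
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"...raw video bytes..."
tag = sign_video(original)

print(verify_video(original, tag))            # True: untouched footage
print(verify_video(original + b"edit", tag))  # False: tampering breaks it
```

In a real system the key would live in tamper-resistant hardware and the signature would be published somewhere independent, but the verification logic is the same in spirit.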


How To Spot A Deepfake

Although discerning fact from fiction is becoming ever more difficult thanks to sophisticated deepfake footage, there are still a few tell-tale signs to look out for:

  • An irregular blink-rate (see the sketch after this list)
  • Background distortion
  • Facial glitching during sudden movements
  • Distorted face shape, particularly near the forehead or chin
  • Slightly altered voice and irregular pauses
  • Words that don’t match up with hand gestures and facial expressions
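
The first of those signs can even be checked programmatically. A common proxy for blink detection is the eye aspect ratio (EAR): the ratio of an eye’s height to its width, which collapses when the eye closes. The sketch below assumes six eye landmarks per eye from some facial landmark detector (for example, dlib’s 68-point model); the sample coordinates and threshold are made up for illustration.

```python
# Eye-aspect-ratio (EAR) sketch for blink detection. The landmark
# coordinates below are invented; real ones would come from a facial
# landmark detector.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks, ordered outer corner, top-left,
    top-right, inner corner, bottom-right, bottom-left."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # illustrative; tuned per detector in practice

open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.4), (4, 2.4), (6, 2), (4, 1.6), (2, 1.6)]

for name, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(name, round(ear, 2), "blink" if ear < EAR_THRESHOLD else "eye open")
```

In genuine footage the EAR should dip below the threshold every few seconds as the subject blinks; a long clip in which it never dips is a warning sign.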

Uncharacteristically strange behaviour is probably the most obvious indicator: if the figure seems to be doing or saying things that are totally out of character, seek verification from the source themselves or from a trusted news outlet before believing what you see. As lifelike renderings of public figures become more and more believable, this may be the only way to tell fact from fiction.

Meanwhile, we’re left to adapt. After all, this isn’t the first time technological advances have been put to dubious use, and it certainly won’t be the last. But as long as we all keep a discerning eye out, we might just be able to overcome deepfakes and what they mean for authenticity.


We can’t tell you if it really is Donald Trump you’re watching, but we can help protect your organisation in other ways. Get in touch with our helpful team to find out more.
