
How to Spot a Deepfake Video

by Natasha Stokes on October 21, 2019

You may have seen an old episode of the Late Show with David Letterman in which Bill Hader did an impression of Tom Cruise, mimicking Cruise’s facial expressions and mannerisms. Or maybe you caught the YouTube clip in which Hader’s face actually morphs into Cruise’s. Viewed over six million times since it was posted in early August, the viral video is the result of eerily realistic AI manipulation of Hader’s original impression.

That’s the light side of the deepfake, so named because it is created by deep learning algorithms and is, well, fake. As with so much technology, however, its more sinister applications are fueling debate.

Deepfakes use algorithms that process images, video, or audio of real people – celebrities and politicians, for example – to synthesize footage of them doing or saying things they never did. While many of the deepfakes easily found online are obviously faked videos intended for entertainment – like Jim Carrey taking on Jack Nicholson’s role in The Shining – the vast majority are pornographic videos with female celebrities’ faces superimposed onto the original actors’. This has damaging implications for women’s privacy and sense of safety, not only for celebrities but for anyone who may be the target of revenge porn, which is now possible without an explicit video ever having existed in the first place.

At the same time, as the 2020 presidential election approaches, there is growing concern that increasingly sophisticated and accessible deepfake algorithms will be used to edit videos for political disinformation.

“Deepfake pornography is a testing ground for the capabilities of face swap, fake audio, and video augmentation,” says Mutale Nkonde, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and an expert in AI policy. “In the next year, I think the most damaging consequence of this technology will be political.”

Deepfakes are supercharging fake news

As early as April last year, a deepfake created by comedian and filmmaker Jordan Peele showed Barack Obama apparently insulting President Trump, then describing the dangerous possibilities of the technology – a “public service announcement” illustrating how deepfakes can be used to portray anyone saying anything.

More recently, a doctored video that went viral in May showed House Speaker Nancy Pelosi apparently drunk; because the editing mostly involved slowing down the playback, the video has been dubbed a shallowfake. It has nonetheless fueled concerns that a new era of souped-up fake news is beginning, and California recently passed a state law that criminalizes creating or distributing doctored footage of politicians within 60 days of an election.

“Humans are visual animals, and we tend to believe whatever we see with our eyes,” says Siwei Lyu, director of the Computer Vision and Machine Learning Lab (CVML) at the University at Albany, SUNY. “Fake news in the form of articles is effective but can be much more so if accompanied by images, audio, or video. This makes misinformation campaigns much more dangerous.”

The evolution of deepfakes  

The first deepfakes surfaced in 2017 when a Reddit user posted several pornographic videos with simulated celebrity faces. Since then, deepfakes have come a long way, with fewer and fewer training images or videos required for algorithms to generate believable versions of real people’s faces. One expert predicts entirely realistic deepfakes could be possible in as little as six months.

The code for algorithms that generate deepfakes is freely available on open-source platforms such as GitHub. Sites offering deepfake creation services are also popping up, charging as little as $2 an hour to train deep learning algorithms on source images and then generate a fake video. The process can take around five to eight hours, most of it the time the algorithms need to learn from the source data.

There are even smartphone apps that make use of deepfake tech. Zao is a Chinese iPhone app that swaps people’s faces into famous movies, while DeepNude, an app that has since been taken down, altered photos of women so that they appeared naked.

6 signs that a video is a deepfake

Because of the processing power required for the most sophisticated algorithms, not all deepfakes are created equal. Depending on the resources the creator has access to, a deepfake may be of low quality or obviously faked.

Yet this may not prevent it from going viral – as with the Pelosi video – for the same reasons that fake news articles get shared: a sensational reveal coupled with people’s tendency to retweet or repost, often without reading or watching to the end. In fact, according to MIT research, fake news is 70 percent more likely to be retweeted than true news.

“Some deepfakes are really good, but most are not perfect,” says Lyu. “However, they can take advantage of the short attention span we have and our desire to retweet. That’s really the problem.”

Lyu says there are often visible signs that a video has been manipulated. These visual artifacts of the algorithms used to generate a deepfake are essentially digital noise or blurriness, and they can often be spotted without dedicated software.

1. Look for individual hairs, frizz, and flyaways

One area that is often a giveaway is the hair of the video’s subject – faked people don’t get frizz or flyaways because individual hairs won’t be visible. “Fake videos usually have trouble generating realistic hair,” says Lyu.

2. Watch the eyes

Because regular blinking and other minute eye movements are typical of a real person, one of the biggest challenges for deepfake creators is generating realistic eyes. “When a person is in conversation, their eyes follow the person they’re talking to. But in a deepfake, you may spot a lazy eye or an odd gaze,” says Lyu.

In a deepfake of Mark Zuckerberg supposedly talking about his control over the world’s data, for example, Zuckerberg’s gaze appears creepily, steadily fixed on the viewer.

A lack of blinking can also signal a faked person – Lyu led research to create an algorithm that detects deepfakes based on blink rates – but deepfakes have since adapted, and the latest algorithms incorporate blink patterns.

As in cybersecurity research where the evolution of malware is a response to increasingly sophisticated antivirus solutions and vice versa, “there is a perpetual competition between whoever is making deepfakes and whoever is detecting them,” says Lyu. “Everything that is used for detection can also be used to train the algorithms that create the deepfakes.”
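To get a feel for the blink-rate signal Lyu’s research exploited, consider the eye aspect ratio (EAR), a heuristic widely used in blink-detection work. The sketch below is a simplified illustration rather than Lyu’s actual detector (his published method trains a neural network); it assumes you have already extracted six landmarks per eye per frame with a face-landmark tool such as dlib or MediaPipe, and the threshold values are illustrative, not tuned.

```python
# A minimal sketch of the eye-aspect-ratio (EAR) blink heuristic.
# Assumes per-eye landmarks come from an upstream face-landmark tool;
# the 0.2 threshold and 2-frame minimum are illustrative assumptions.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks for one eye, ordered as
    left corner, upper-left, upper-right, right corner,
    lower-right, lower-left."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical openings over horizontal width: roughly 0.3 for an
    # open eye, dropping sharply for a frame or two during a blink.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks: runs of at least min_frames frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# People typically blink every two to ten seconds, so a minute of a
# talking head with zero detected blinks is a red flag worth a closer look.
```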

3. Check the teeth

Like hair, teeth tend to be tough to generate individually. “Synthesized teeth normally look like a monolithic white slab because algorithms don’t have the ability to learn such a detail,” says Lyu.

4. Observe the facial profile

Does the person saying that shocking thing look a little odd when they turn away from the camera? If a video has been manipulated with a face swap, the subject may appear to be facing the wrong direction, or their facial proportions may become skewed when they look away.

5. Watch the video on the big screen

On the smaller screen of a smartphone, the inconsistencies of a fabricated video are less likely to be visible. Watching a video full-screen on a laptop or desktop monitor makes it easier to spot the visual artifacts of editing, and it can reveal contextual inconsistencies between the subject of the video and their surroundings – for example, a clip of someone purportedly in, say, the UK, but against a backdrop with a car driving on the wrong side of the road.

If you have a video editing program such as Final Cut or iMovie, you can slow down the playback rate or zoom in to examine faces.
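If you’d rather script that inspection, a few lines of Python can do the same job. This is a minimal sketch, assuming the OpenCV package (opencv-python) is installed; the file name is a placeholder.

```python
# Play a clip at roughly half speed, doubled in size, so frame-level
# artifacts around hair, teeth, and face edges are easier to spot.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing
delay_ms = int(2 * 1000 / fps)         # doubling the delay halves the speed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Nearest-neighbor scaling avoids smoothing over the very
    # blurriness and edge flicker you are trying to see.
    zoomed = cv2.resize(frame, None, fx=2.0, fy=2.0,
                        interpolation=cv2.INTER_NEAREST)
    cv2.imshow("inspect", zoomed)
    if cv2.waitKey(delay_ms) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```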

6. Check the emotionality of the video

As with extraordinary headlines that just so happen to appear around major events – like elections or disasters – a well-placed video that tugs at the heartstrings or fuels righteous outrage may be a deepfake designed to do just that.

“We’re neurologically wired to pay more attention to what is sensational, surprising, and exciting,” says Lyu. “This overrides the need to check the authenticity of the message and is one of the many psychological factors that aid the effectiveness of fake news.”

If a video is eliciting a strong emotional reaction – from you or your social network – it could be a sign that it should be fact-checked before resharing.

Deepfakes promote a culture of mistrust

Will deepfakes become a part of nefarious political campaigning in the months to come? Lyu isn’t so sure.

“The technical capacity is there, and that’s why Congress is very concerned,” he says. “But while deepfakes are much easier to make these days, they’re relatively expensive and resource-intensive compared to a fake Facebook account sending out fake news.”

A survey by DeepTrace Labs, which builds tools to detect synthetic media, pegged the number of deepfakes online at around 15,000 – a fractional percentage of the billions of videos online, but a number that has nearly doubled in the last nine months.

And the problem is, deepfakes don’t have to flood social media platforms in order to sow disinformation. “If someone really wanted to wreck an election – like next year’s – they would probably not release a lot of deepfakes because that would make them a target,” says Lyu. “They’d wait for their moment until 24 hours before voting, say, then publish this perfect[ly edited] video that circulates online, strikes at the right moment and sways public opinion at the right time.”

Nor do deepfakes have to be flawless – or sophisticated – to capitalize on a growing culture of mistrust in an increasingly polarized society. Last year, the White House shared a video in which CNN reporter Jim Acosta appeared to push a government staffer away. After the footage went viral, it was revealed to have had a few frames cut and to have originated from the far-right site Infowars, but many viewers weren’t convinced – illustrating how even detectable, minor manipulation can cast doubt on what really happened.

Deepfakes don’t even need to be created at all – the mere idea of media that can easily be falsified may be enough to throw the veracity of real news and real video into confusion, further eroding trust in journalists and calling into question what is true. The consequences can be even greater in countries with tenuous political situations. In Gabon last year, a controversial video of the president that was suspected to be false – experts say the analysis is inconclusive – helped catalyze a military coup attempt against a government deemed to be hiding something.

Developing ways to combat deepfakes  

Meanwhile, deepfake detection is a fast-growing research field. Lyu has been working on an algorithm that detects digital flaws that deepfake creators can’t easily fix, as well as a technique for introducing digital noise into online videos and images that would be undetectable to the human eye but would prevent algorithms from detecting – and learning from – faces. His work is sponsored by the Defense Advanced Research Projects Agency (DARPA), the government agency responsible for emerging military technology, which has put $68 million into deepfake research over the past two years.

“Even before the social network era, digital tampering of images, video, and audio was a problem. Having social platforms makes the problem a hundred times worse,” says Lyu. “The goal of my research is that it leads to a tool people can use to detect deepfakes.”

Other techniques being researched include programs such as NeuralHash, which adds a digital watermark to videos to reveal whether they have been tampered with, and browser extensions such as Sherlock AI, currently in development, which would verify whether content has been manipulated.
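The details of those watermarking schemes aren’t spelled out here, but the general idea – fingerprint footage at publication so later tampering is detectable – can be shown with a toy sketch. What follows is a deliberately simplified assumption, not NeuralHash’s actual method: a publisher records a perceptual hash of every frame, and anyone can recompute the hashes later and flag frames that have drifted too far.

```python
# Toy frame-fingerprinting sketch (not any real product's scheme):
# an 8x8 average hash per frame is fairly stable under mild
# recompression but changes sharply when a face is swapped in.
import cv2

def average_hash(frame, size=8):
    """64-bit average hash of one BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    mean = small.mean()
    bits = 0
    for px in small.flatten():
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hash_video(path):
    """Compute a perceptual hash for every frame of a video file."""
    cap = cv2.VideoCapture(path)
    hashes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hashes.append(average_hash(frame))
    cap.release()
    return hashes

def flag_tampered(published, current, max_bit_diff=10):
    """Indices of frames whose hashes differ by too many bits."""
    return [i for i, (a, b) in enumerate(zip(published, current))
            if bin(a ^ b).count("1") > max_bit_diff]
```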

However, not everyone is convinced that the detection tools are enough to stem the potential consequences of deepfakes.

“The problem with detection software is that the technology is moving so fast – you’re always chasing your tail. The industry is also trying to sell solutions that may not work,” says Nkonde. “We need a multipronged approach where people and politicians are educated about AI and tech policy, and the tech sector thinks about deepfakes as a problem and not a business opportunity.”

Can you successfully legislate against deepfakes?  

Nkonde is part of a team that worked on the DEEPFAKES Accountability Act, recently introduced in Congress, which aims to protect consumers from deepfake videos by penalizing the platforms that publish them as well as the people who create or distribute them. While the issue with policing content has historically been the potential conflict with the First Amendment right to free speech, Nkonde says this bill uses consumer rights law to argue that, as consumers, internet users have the right to know that the information they’re consuming is valid.

The act, however, is unlikely to pass anytime soon, she admits, in part due to the outlook of the current administration. “We didn’t introduce the bill for it to pass but to start a conversation around the technology, and to start educating the American public about deepfakes and the harm that can be done to you as an individual,” she says.

Although that conversation is now gearing up in government – on the state level, Virginia and Texas have also passed bans on doctored footage – there’s still the question of how these laws would be enforced, especially against individuals who are likely to post anonymously. And perhaps the greatest hurdles are the societal factors driving the creation of maliciously motivated content – and people’s receptiveness to such content.

“Fake or revenge pornography will be one of the biggest use cases because there’s a profit to be made from people offering creation services, especially as the technology becomes cheaper and more accessible,” says Lyu. “The problem is, digital media will be manipulated no matter what, and most likely for political or financial reasons. Deepfake technology just gives people more ways to do this, more efficiently, and more skillfully.”




