“The idea that someone could put another person’s face on an individual’s body, that would be like a homerun for anyone who wants to interfere in a political process. This is now going to be the new reality, surely by 2020, but potentially even as early as this year.” — Senator Mark Warner (D-VA)
New technology with the ability to create hyper-realistic fake videos has the potential to wreak havoc on the political landscape, lawmakers and technology experts say.
The tech allows people’s faces to be superimposed onto different bodies in other videos. Related technology can also alter facial expressions, and Adobe even has a program that creates new audio from text.
Currently, fake video technology requires manipulation of existing video footage of a person, and cannot create fake video from scratch with just a picture.
Right now, the technology is not that widespread and can still be detected by experts. But it is improving rapidly.
Such videos have already caused controversy. Face-swapping technology has been used to insert celebrities into pornographic films without their consent. In February 2018, the popular content-sharing website Reddit banned the r/deepfakes subreddit, which had been used to share such material, and updated its rules against non-consensual pornographic content to cover faked images. The same month, the pornographic website Pornhub banned deepfake videos from its platform.
The combination of these emerging technologies means it is highly likely we will soon see videos of public figures saying and doing things that never happened, all but indistinguishable from the real thing.
The national security implications of terrorist groups using this technology are deeply worrying.
They will be able to create images of politicians announcing strikes that never happened, announcing anti-Muslim policies that don’t exist, making racist and bigoted remarks they never said or even footage of war crimes that never took place.
As the Lawfare blog writes: “The spread of deep fakes will threaten to erode the trust necessary for democracy to function effectively, for two reasons. First, and most obviously, the marketplace of ideas will be injected with a particularly-dangerous form of falsehood. Second, and more subtly, the public may become more willing to disbelieve true but uncomfortable facts.”
The deepfakes trend takes existing problems with fake news to the next level. As fake news spreads, the public will be less and less inclined to believe what they see, hear and read, and more inclined to rely on tribal in-groups and partisan sources they trust to support their specific narratives and interests.
Terrorists can and probably will manipulate these passions in three main ways:
- Faking anti-Muslim hate crimes and anti-Muslim bigotry in order to sow distrust between communities: making Muslims more afraid of non-Muslims, and non-Muslims less likely to believe genuine reports of anti-Muslim bigotry.
- Faking terrorist attacks and making sophisticated threats to spread fear and reduce our ability to respond appropriately to genuine danger.
- Faking news reports or other information about genuine terrorist attacks, in order to increase confusion and put more lives at risk (e.g. falsely reporting that a suspect has been subdued when in fact more gunmen are at large).
Deep fake videos are “like a weapon of mass destruction in the world of fake news and extremist propaganda, especially for hostile intelligence agencies engaging in political influence operations,” Clarion Project National Security Analyst Ryan Mauro said.
Yet despite the risks, the technology is ploughing ahead with no sign of slowing. The website deepfakes.club offers anyone with an internet connection tutorials on how to create fake videos.
Nor are government attempts to develop reliable ways of authenticating content likely to be effective.
“We all will need some form of authenticating our identity through biometrics. This way people will know whether the voice or image is real or from an impersonator,” Congressman Ro Khanna (D-CA) told The Hill. He called on the military’s research and development wing, the Defense Advanced Research Projects Agency (DARPA), to create secure authentication techniques.
Yet the lightning-quick development of digital technologies suggests any such process would become liable to manipulation. It even opens up the possibility of a fake video being stamped with a real mark of authentication.
“Any technology that will allow you to fingerprint, the adversary is going to figure out how to take it out, manipulate the content, and then put that fingerprint back in,” Dr. Hany Farid, a computer science professor at Dartmouth College who specialises in digital forensics, told ABC News. “That is almost guaranteed.”
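Dr. Farid’s warning applies to fingerprints embedded inside the content itself, which an adversary can strip out and re-insert. One mitigation researchers discuss is keeping the authentication tag outside the footage and computing it with a secret key, so that an attacker who alters the video cannot recompute a valid tag. The following is only a minimal illustrative sketch using Python’s standard-library hmac module; the key, file bytes, and function names are hypothetical and do not represent any scheme proposed by DARPA or the experts quoted above:

```python
import hashlib
import hmac

# Hypothetical setup: a broadcaster holds a secret key and publishes an
# authentication tag alongside (not inside) each video it releases.
SECRET_KEY = b"broadcaster-secret-key"  # known only to the trusted source

def sign(content: bytes) -> str:
    """Compute a keyed authentication tag over the raw video bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the content against its tag using a constant-time compare."""
    return hmac.compare_digest(sign(content), tag)

original = b"...raw video bytes..."        # placeholder for real footage
tag = sign(original)

print(verify(original, tag))                    # untouched footage verifies
print(verify(b"...manipulated bytes...", tag))  # any edit breaks the tag
```

In practice a real deployment would use public-key signatures rather than a shared secret, so that anyone can verify a video without being able to forge tags; the point of the sketch is simply that a tag stored separately from the content cannot be “put back in” the way an embedded fingerprint can.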
Cybersecurity experts are working on possible solutions to the coming threat, but so far have not agreed on a viable path forward.
If you’d like to comment on this item, please email us at [email protected] and enter Fake Video in the subject line. Please let us know if you’d rather we publish your comments anonymously.