Tom Cruise looks into his phone camera. He shows a coin: “I’m gonna show you some magic… it’s all real.” A twist of his hand and the coin disappears. Cruise flashes his blinding smile and shrieks his familiar laugh.
This is one of three videos of the star that went viral last week, yet Cruise had nothing to do with the productions. They were deepfakes – videos in which the face of the performer is swapped with that of another person, using increasingly sophisticated ‘machine learning’ AI. In this case, the impersonator Miles Fisher was filmed by director Chris Ume, before the filmmaker swapped in Cruise’s face. The piece delighted the internet, clocking up 11 million views. Ume’s videos are astonishingly believable when viewed on mobile devices, provoking a debate about how deepfakes might soon break into mainstream film and television.
Channel 4 has been an early adopter, surprising us with last year’s Alternative Christmas Message: instead of the likes of Doreen and Neville Lawrence, the Reverend Jesse Jackson or Quentin Crisp – all of whom have delivered messages since the channel initiated the strand in 1993 – in 2020 the Queen herself graced the public with a very alternative message. But her description of 2020 as “a year when most of you, thanks to toilet-roll shortages, have finally understood how it feels to have a predicament on the throne” was deliberately startling, inviting viewers to look closely at the images. This was, in fact, a ‘deepfake’ of Elizabeth II, in which the face of the monarch had been digitally swapped on to that of actress Debra Stephenson.
The intention of the film’s director, William Bartlett, and of Framestore, the VFX company where he produced the short sequence, was to draw attention to a recent leap forward in the ability of digital technologies to manipulate the moving image.
Filmgoers are already accustomed to some extraordinary examples of digital face replacement (DFR), such as the resurrection of Peter Cushing to reprise his role as Grand Moff Tarkin in Gareth Edwards’s Rogue One: A Star Wars Story (2016), in which the late actor’s head was placed on the body of actor Guy Henry.
DFR has been achieved at huge expense with similar processes to those used earlier to turn human performance into CGI characters, including the Andy Serkis-based Gollum and King Kong. Deepfakes change the landscape for DFR, offering a cheaper and more accessible process, so that a phenomenon that has until now been a cinematic wonder, restricted to the spectacle of major studio productions, can now enter the mainstream.
Deepfakes use ‘deep learning’, an artificial intelligence process that mimics the neural networks of the human brain in order to learn from video images fed into it. The deepfaker provides a source video (in the Christmas Message case, Debra Stephenson) and target video (the Queen). In simple terms, the machine learning system first watches and learns, then uses its knowledge to duplicate the facial movements of the actor on the target face. A similar process can be used with audio: the machine learning can listen to voice recordings of a person and then reproduce those speech characteristics in lines scripted by the filmmaker.
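The ‘watch, learn, duplicate’ process described above is commonly built on a shared-encoder, two-decoder autoencoder: one encoder learns features common to both faces, while each identity gets its own decoder, and the swap happens by decoding one person’s encoding with the other’s decoder. The following is a deliberately minimal sketch of that architecture, not any real tool’s implementation: the ‘faces’ are random arrays standing in for flattened frame crops, the networks are plain linear layers, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: flattened 8x8 grayscale crops for two identities.
# In a real pipeline these would be thousands of aligned frame crops.
faces_a = rng.random((200, 64))   # stand-in for frames of the source actor
faces_b = rng.random((200, 64))   # stand-in for frames of the target face

dim_latent = 16

# One shared encoder, one decoder per identity (linear, no bias).
W_enc = rng.normal(0, 0.1, (64, dim_latent))
W_dec_a = rng.normal(0, 0.1, (dim_latent, 64))
W_dec_b = rng.normal(0, 0.1, (dim_latent, 64))

def train(faces, W_dec, lr=0.01, epochs=200):
    """Gradient descent on a mean-squared reconstruction loss."""
    global W_enc
    for _ in range(epochs):
        z = faces @ W_enc            # encode
        recon = z @ W_dec            # decode
        err = recon - faces          # reconstruction error
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

# Alternate training on both identities, so the shared encoder
# learns features common to the two faces.
for _ in range(5):
    train(faces_a, W_dec_a)
    train(faces_b, W_dec_b)

# The swap: encode a frame of identity A, decode with B's decoder,
# producing A's pose and expression rendered with B's face.
frame_a = faces_a[0]
swapped = (frame_a @ W_enc) @ W_dec_b
print(swapped.shape)
```

Real systems such as DeepFaceLab use deep convolutional networks and extensive face alignment and blending on top of this basic idea, but the swap step itself is the same: shared encoding, cross-identity decoding.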
Deepfakes first emerged in 2017, when a Reddit user employed the method to replace faces in pornographic films with those of his favourite stars, including Gal Gadot and Maisie Williams. The resulting controversy stimulated important debates about abuse, gender and image rights in the online film industry.
Those deepfake videos had very poor image quality, but because they came from the world of user-generated, open-source IT development, the machine-learning techniques were shared widely. Rapid progress was made to the point where, today, users can easily make simple deepfakes using applications such as Face Swap and DeepFaceLab.
Separate from the malign world of ‘morph porn’, the origin of deepfakes as a screen subculture has led to two clear strands of video makers: artist provocateurs and playful satirists.
Bill Posters is the pseudonym of Barnaby Francis, a British artist, activist and writer whose intention is to create viral deepfake artworks “to subvert the power of the Digital Influence industry”. Together with the American artist Daniel Howe he created Spectre, an installation film artwork which featured at the 2019 Sheffield Doc/Fest.
The work became internationally renowned for its deepfake Mark Zuckerberg, showing how controversies over digital face replacement can be used to highlight cultural and political issues. In their 2019 deepfake, Zuckerberg admits to the deep cynicism that lies behind his business: “Imagine this for a second: one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures… whoever controls the data controls the future.”
The deepfakers uploaded their film to Instagram – which is owned by Facebook – issuing a freedom-of-speech challenge to Zuckerberg. This was well-timed, coming soon after Facebook refused to remove a manipulated video of Nancy Pelosi that made her seem ill or drunk.
Posters has worked on further deepfakes that critique celebrity culture, in which Kim Kardashian, Morgan Freeman and others declare their devotion to a fictitious data control organisation, Spectre. (In 2020 the BFI National Archive formally acquired Bill Posters’ work as a leading example both of how filmmaking technologies are changing, and of the power of the moving image as a tool of disinformation.)
Francesca Panetta sees her project In Event of Moon Disaster (2020) as not just a provocation, but as a civic project to raise awareness of the developments in moving-image manipulation. In 1969, Richard Nixon’s speechwriter William Safire wrote a speech for the president in case the Apollo 11 astronauts were to die on the Moon. Fifty years on, Panetta uses deepfake technology to adapt TV footage to bring the speech to life, in an utterly believable film reporting the deaths of Neil Armstrong and Buzz Aldrin.
An immersive artist working in documentary and journalism, Panetta is experimental and at the forefront of research – she produced the film while working at MIT’s Center for Advanced Virtuality. Her interest is in how new technologies such as deepfakes enable the manipulation of the film and television archive, but she sees the technique as part of bigger developments in moving-image culture.
Messing with the online archive of film classics is a favourite pursuit of deepfakers, in particular the swapping of the faces of stars in their favourite films. Users of FakeApp have a predilection for recasting Nicolas Cage, who appears on YouTube in multiple roles, including Lois Lane and Indiana Jones; in similar vein, Mr Bean takes over from Anthony Hopkins as Hannibal Lecter.
These face replacements extend the fan culture surrounding mainstream movies, demonstrating the makers’ love for Hollywood cinema even as they lampoon it. Their deepfakes are far from convincing, the makers seeming to exult in the rough-and-ready amateur quality.
But other producers have begun to professionalise this playful subculture. In Los Angeles, the company Corridor Crew uses high-end computer processors to show off the potential of deepfakes, with online videos in the form of highly engaging ‘making of’ productions that describe and celebrate the team’s geeky DF virtuosity.
In We Made the Best Deepfake on the Internet, they show how they created a mock video featuring a surprise visit by Tom Cruise to their studio. And their highly successful mock actuality footage of Keanu Reeves (Keanu Reeves Stops a Robbery) cleverly adopts a camera style that hides the technical imperfections of deepfakery. This film’s 15 million YouTube views demonstrate the Corridor Crew’s ability to monetise their DF work through comedies that engage their audience’s love of cinema.
Impact on the film industry
Deepfakes are seen by those close to the studio film industry as a means of doing more VFX, cheaper and faster. In Los Angeles, Corridor Crew’s Niko Pueringer declares, “It represents a fundamental shift in how visual effects are going to be done in the industry. We’ve reached a point where AI and machine learning can potentially let anyone create a shot that before then has been impossible.” But this technological exuberance has opened the door for movie industry chancers who underestimate our complex reactions to deepfakes.
In November 2019, Anton Ernst and Tati Golykh stirred a huge controversy by casting a deepfake James Dean for their movie project Finding Jack: “We searched high and low for the perfect character to portray the role… after months of research, we decided on James Dean,” they told the Hollywood Reporter.
The reaction of critics and fans was disgust. The project, originally slated for release in 2020, sank and today gets no mention on the website of Ernst and Golykh’s company Magic City Films. But they had in effect run a deepfake up the flagpole to gauge public and critical reaction to the possibility of film star resurrection.
Others see a more wide-ranging cultural impact that opens new possibilities for the role of the moving image. Panetta describes her work as exploring “the ambiguity between reality and non-reality, and how truth and fiction blur”. Discussing the alternative, tragic version of the Apollo 11 mission presented in her deepfake video, she suggests, “It’s documentary-like, almost magical realism. It’s speculative histories/futures.”
For Panetta, the new technologies of image manipulation could allow us to represent alternatives to traditional historical narratives – the moving-image equivalent of replacing the statue of the Bristol slave trader Edward Colston with forgotten figures from that period of history.
Ethics and the law
The creation of deepfakes exposes major ethical dilemmas for filmmakers, as well as challenges to the law. Experimental projects such as Virtual Maggie, a sequence I am developing that seeks to digitally resurrect Margaret Thatcher for a new period movie set in 1989, deliberately focus attention on the need to protect reputations.
While living film stars and celebrities can use defamation law to protect against the abuses of deepfakers, the UK has no legal protection for an individual’s image after their death, so deepfakes of the deceased are an ethical and legal no man’s land.
Copyright protection for the owners of film footage manipulated by deepfakers is a further problem. The machine learning process does not involve the copying of faces from existing film – instead, the AI watches and learns from thousands of frames – so current copyright law cannot protect rights holders.
The film industry has only recently begun addressing these ethical and policy issues, with a new network of film and TV industry figures assembling this year to debate the problems. The context may be a dangerous one for filmmakers. A deepfake ‘moral panic’ has convulsed the political world, following DF videos of politicians delivering speeches that they never made. Severe legal restrictions on deepfakes are already being mooted. There are clear ethical dangers: as the deepfake Tom Cruise tells us ominously in Chris Ume’s video, “If you like what you’re seeing, just wait till what’s coming next.” The issue is whether new laws can avoid stifling the positive creative potential of deepfakes while protecting individuals from abuse.