The 10th annual VidCon took place in California this July. It’s the world’s biggest conference for online video and digital creators, where the people making the ‘most exciting’ online content get together to talk about what’s next for the industry. The first topic on the agenda? A presentation on the risks of deepfakes.


“A horrific AI-assisted face swapping app which takes someone’s face and places it on someone else’s body. Particularly great if you’re a creep imagining what your favourite celeb-crush looks like naked.” Urban Dictionary

The term “deepfake” was coined in 2017 to describe videos altered by AI. Essentially, deepfake videos are fake pieces of content created with technology that manipulates footage of real people, showing them doing and saying things they never did. How does it work? Machine learning.

“It’s first trained with a target face. Distorted images of the target are run through the algorithm and it learns how to correct them to resemble the unaltered target face. When the algorithm is then fed images of a different person, it assumes they’re distorted images of the target, and attempts to correct them” - How-To Geek.

It’s getting easier and easier to create deepfakes - Samsung researchers recently developed an algorithm that needs only one source image to create fake videos (they were even able to animate paintings and famous portraits, with eerie results). From porn to politics, deepfakes have become a popular & defined genre of online content: hyper-realistic videos that do a good job of making us believe that what we’re watching is real - not computer-generated.


Yes, video manipulation has existed for years - but these videos take it to a whole new level. The main glaring problem with deepfakes is that they challenge the concept of truth. However, according to Sam Gregory (a program director at the human-rights organisation Witness, who led the VidCon presentation "Deepfakes and Synthetic Video: Don't Panic, Prepare”), the most damaging aspect of deepfakes right now isn’t disinformation, but how they allow people to become more entrenched in the notion that anything they don’t want to believe is fake. If you want to believe something (or don’t want to), fake news can reinforce it. The other current worry is that deepfake-generation software is evolving faster than the tools being built to detect it.

One of the most infamous examples of a 'deepfake' video is a clip of House Speaker Nancy Pelosi that was altered to make her look and sound drunk. It was widely distributed and thought to be real; Trump reposted it, and it has been viewed more than 3.5 million times on Twitter. Deepfakes have even been cited as a potential nuclear threat. However, most deepfakes that young people see today on platforms like YouTube, Reddit or Facebook are just for fun - and generally harmless, like this brilliant version of “The Shining” with Jim Carrey, or clips that put Elon Musk’s face onto a baby.

Obviously there are serious implications of deepfakes for us all, but young people are particularly vulnerable to their effects because of the online worlds they inhabit daily. And although youth are becoming better educated at spotting manipulated content, even experts find contemporary fakes hard to spot:

“In the past it would've been fairly easy to spot that a face had been swapped in a video: simple things like inconsistent colour, lighting inconsistencies and mistakes when objects passed in front of the subject. These videos would always have been suspiciously 'front on', because editors would overlay the face from one video onto another. Now, however, deepfakes created using AI can render a face at any angle, recognise when objects pass in front of a face, and match colour, lighting and other telltale signs of fakery expertly - even to the point where professionals have difficulty spotting fakes. There are ways, of course. Solar curves and increasing contrast in video and image editing applications can still reveal what was there originally and what has been added - but who has time to verify every bit of content they see online?” Kevin Goss-Ross, Creative Director, Film & Photography, Thinkhouse.
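The contrast trick Kevin describes can be demonstrated with a toy greyscale image - all the pixel values below are invented for illustration, and a real workflow would use curves in an editing app rather than NumPy. A pasted region just 2 brightness levels off its surroundings is invisible to the eye, but a steep contrast curve amplifies that difference roughly 40-fold and makes the seam obvious:

```python
import numpy as np

# Toy greyscale "photo": a smooth horizontal gradient, values ~118-122.
img = np.tile(np.linspace(118, 122, 200), (200, 1))

# Simulate a subtle edit: a pasted patch just 2 levels brighter --
# far too faint to see against the gradient.
edited = img.copy()
edited[80:120, 80:120] += 2

# Steep contrast curve: centre on the mean brightness and multiply
# differences by 40, clipping back into the 0-255 range.
boosted = np.clip((edited - edited.mean()) * 40 + 128, 0, 255)

# Before boosting, the seam at the patch edge is ~2 levels;
# afterwards it is ~80 levels, so the pasted rectangle stands out.
print(abs(edited[100, 80] - edited[100, 78]))   # subtle
print(abs(boosted[100, 80] - boosted[100, 78])) # obvious
```

The same principle is why error-level and contrast analysis can expose composites: edits rarely match the original’s brightness statistics exactly, and extreme curves turn tiny mismatches into visible blocks.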


Farewell Bottle Cap challenge, hello FaceApp.

“FaceApp” has officially gone viral. We’ve seen newsfeeds full of young people posting altered photos of themselves using the app, which gives people the power to change their facial expressions, looks, gender and now, their age. It has reportedly been the top-ranked app on iOS in 121 countries. Why the virality? Its new ‘Golden Age’ filter was of particular interest to young people this week: at the click of a button, they could see themselves aged into ‘senior citizens’ through AI. The filter held universal appeal for younger generations, offering them a (scarily accurate) glimpse into their future.

“It was frighteningly real. There’s also an element of zeitgeist involved. It’s the same with face swap filters, or Boomerangs or any new tech features that come out on social media that let people have a laugh and connect. Lots of my friends didn’t really care about the data warnings. There was a sense that someone has their information somewhere already.” Finn, 26

A host of celebrities also joined in, which spurred interest further. One of the more noteworthy posts was by 22-year-old musician Lewis Capaldi, whose ‘Golden Age’ photo looked remarkably like Beatle Paul McCartney. Some young people have even used the hashtag #FaceApp to drive conversation about the future they face in the context of climate action. However, as with any massive online craze involving data sharing, many are now raising serious privacy concerns over the app’s data policy. More than 150 million people have used the app.


Despite the fact that ‘fake news’ and truth have been the subject of debate for some time now, AI developments like these mean that the theme is still a critical one, top of mind for many. ‘Reality’ has become a subjective & highly valued concept.

Because youth spend so much time online, they are likely to be consuming various forms of fake content on any given day. Being visually literate is of critical importance. Investment in quality visual assets is essential.

A lot of ‘fakery’ is in good fun. Young people really value the creative possibilities of new technology. Despite youth’s concerns about data and privacy, it seems that (for many) entertainment often comes first. Want to spark some new ideas? Have some fun imagining your product or service in alternative realities.
