[Embedded video: deepfaked Bill Hader]

Deep in the Fakes: Telling Deepfakes From Real Takes

This video is one of the most iconic deepfakes out there. You've likely seen it somewhere on the internet and were probably freaked out by Bill Hader turning into Tom Cruise and Seth Rogen. When it first came out on August 6, 2019, there weren't many deepfake videos around, and it was incredibly difficult to get one made. That has definitely changed since then: there are now tons of apps and programs that help people easily make their own deepfakes.

First Things First

To understand what a deepfake is, you first need to understand a few things. Machine learning is how computers learn from experience, artificial neural networks are computing systems set up to work like the neurons in your brain, and representation learning is when a network figures out on its own which features of its input matter, so it can detect patterns in data it hasn't seen before. Deep learning algorithms are a combination of all three of these: a narrower subset of machine learning that's based on neural networks and stacks multiple layers of representation learning on top of each other. That's where the "deep" comes from.
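
To make that "layers of representation" idea concrete, here's a tiny sketch in Python using PyTorch (my choice of library, not something this post mentions): each layer takes the previous layer's output and learns a new representation of it, and stacking several layers is what makes the network "deep". Treat it as an illustration, not how any actual deepfake tool is built.

    import torch
    import torch.nn as nn

    # A tiny "deep" network: each layer re-represents the previous
    # layer's output. Real deepfake models are vastly larger and
    # work on images, but the stacking idea is the same.
    model = nn.Sequential(
        nn.Linear(64, 128),   # layer 1: raw input -> first representation
        nn.ReLU(),
        nn.Linear(128, 128),  # layer 2: refines that representation
        nn.ReLU(),
        nn.Linear(128, 10),   # layer 3: final representation -> prediction
    )

    x = torch.randn(1, 64)  # one made-up input example
    print(model(x).shape)   # torch.Size([1, 10])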

What Is a Deepfake?

A deepfake is a video that takes person A's face and puts it on a video of person B's body, sometimes with the same audio as the original video (as seen in Bill Hader Channels Tom Cruise), or with new audio that the video has been made to match up with (as seen in You Won't Believe What Obama Says In This Video 😉). The name originates from how these videos are made. A deep learning algorithm takes footage of the intended target A and is trained for hours on end to learn how the target looks from several angles and under several lighting conditions. The result is then combined with footage of target B, using graphical editing techniques to superimpose target A's face onto target B's body.
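
The post doesn't spell out the network architecture, but a common setup in open-source face-swap tools is a pair of autoencoders that share one encoder: the shared encoder learns what faces look like in general, and each person gets their own decoder that learns to rebuild that specific face. Swapping faces then just means running person B's frames through person A's decoder. Here's a rough, heavily simplified sketch of that idea in PyTorch; the layer sizes and 64x64 frames are made up for illustration.

    import torch
    import torch.nn as nn

    # Shared encoder: learns a compact representation of "a face in general".
    encoder = nn.Sequential(
        nn.Flatten(),
        nn.Linear(64 * 64 * 3, 512),
        nn.ReLU(),
        nn.Linear(512, 128),
    )

    def make_decoder():
        # One decoder per person: learns to rebuild that specific face.
        return nn.Sequential(
            nn.Linear(128, 512),
            nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3),
            nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    decoder_a = make_decoder()  # trained only on person A's frames
    decoder_b = make_decoder()  # trained only on person B's frames

    # Training (omitted) reconstructs A's frames through decoder_a and
    # B's frames through decoder_b, both by way of the shared encoder.

    # The swap: encode a frame of person B, then decode it as person A.
    frame_of_b = torch.rand(1, 3, 64, 64)
    fake_a = decoder_a(encoder(frame_of_b))
    print(fake_a.shape)  # torch.Size([1, 3, 64, 64])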

[Image: the CGI-recreated young Jeff Bridges as Clu in Tron: Legacy. Source: https://whatculture.com/film/10-great-movie-characters-ruined-by-awful-cgi?page=6]
This is a rather infamous instance of a predecessor to the deepfake; a lot of people were put off by how "unreal" Jeff/Clu looked.

Why Use Them?

Deepfakes and their predecessors are primarily used by movie studios to resurrect actors (like Paul Walker for Furious 7) or to make them appear younger (like Jeff Bridges as Clu in Tron: Legacy). I could go into a whole spiel about why I personally think the CGI in Tron: Legacy worked, but that's for another time. It takes a studio full of people an entire year to complete these projects and bring them to life, and as you can see, they aren't always perfect. This kind of technology used to be limited to those in the industry, but as the years have gone on, it's gotten more accessible and easier to use. Currently, deepfakes are mainly used for entertainment or porn, but there's always a threat that they could be used to influence an election or another important event.

How Can I Spot a Deepfake?

This is really hard. A lot of creators pair the deepfake-generating AI with a detector AI that judges whether the result looks fake, and train the two against each other, so the generator keeps improving until it can fool the detector. This is where humans come in: at that point, it's up to the human brain to notice when something doesn't look right.
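
If you're curious what that "faker versus detector" back-and-forth looks like in code, here's a bare-bones sketch of the training loop. The two models are throwaway placeholders, not any real tool's networks, so read it as an illustration of the idea rather than a working deepfake system.

    import torch
    import torch.nn as nn

    # Placeholder models: a face-swap "generator" and a real/fake "detector".
    generator = nn.Sequential(nn.Linear(128, 64 * 64 * 3))
    detector = nn.Sequential(nn.Linear(64 * 64 * 3, 1))

    gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    det_opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(100):
        real_frames = torch.rand(8, 64 * 64 * 3)      # stand-in for real video frames
        fake_frames = generator(torch.randn(8, 128))  # the generator's attempts

        # 1) Teach the detector to tell real frames from fake ones.
        det_opt.zero_grad()
        det_loss = (loss_fn(detector(real_frames), torch.ones(8, 1)) +
                    loss_fn(detector(fake_frames.detach()), torch.zeros(8, 1)))
        det_loss.backward()
        det_opt.step()

        # 2) Teach the generator to fool the freshly updated detector.
        gen_opt.zero_grad()
        gen_loss = loss_fn(detector(fake_frames), torch.ones(8, 1))
        gen_loss.backward()
        gen_opt.step()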

Things to keep an eye out for:

  • stray hairs
  • eye movements and blink rate (see the sketch after this list)
  • if the teeth don’t quite match or look real
  • if the mouth doesn’t match the words
  • and most importantly, the emotion of the video
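
The eye cue is one of the few on that list you can partly automate: early deepfakes were notorious for barely blinking. Below is a rough sketch of how you might count blinks in a suspect clip using the common "eye aspect ratio" trick. It assumes Python with OpenCV, dlib, SciPy, and dlib's standard 68-point landmark file; the file names and the 0.2 threshold are my own illustrative choices, not something from an actual detection product.

    import cv2
    import dlib
    from scipy.spatial import distance

    detector = dlib.get_frontal_face_detector()
    # Assumes the standard 68-point landmark model has been downloaded.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def eye_aspect_ratio(pts):
        # Ratio of eyelid opening to eye width; it drops sharply during a blink.
        a = distance.euclidean(pts[1], pts[5])
        b = distance.euclidean(pts[2], pts[4])
        c = distance.euclidean(pts[0], pts[3])
        return (a + b) / (2.0 * c)

    video = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
    blinks, closed = 0, False
    while True:
        ok, frame = video.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            # Landmarks 36-41 outline one eye in the 68-point model.
            eye = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
            ear = eye_aspect_ratio(eye)
            if ear < 0.2 and not closed:    # eye just closed
                closed = True
            elif ear >= 0.2 and closed:     # eye reopened: count one blink
                blinks += 1
                closed = False
    print("blinks counted:", blinks)  # a suspiciously low count is one red flag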

These AIs don’t have the ability to match the emotion in a face to what’s being said or how it’s being said, which often leads to videos that sound incredibly emotional but don’t look it. Like I mentioned in my post about ‘photoshopped’ images, you should also be wary of sensational topics; a lot of deepfaked videos take advantage of those topics and prey on polarizing events.

What Can We Do?

There have already been a couple of efforts to block deepfakes from affecting elections. California’s AB 730 bars the creation of videos intended to impact a candidate’s campaign, either negatively or positively, within 60 days of an election, which isn’t really a long enough window. The law sunsets on Jan 1, 2023, but it’s a good start. Texas, Virginia, and California have all criminalized deepfake “revenge porn”, but those laws only help if the perpetrator lives in one of those states. It’ll be very hard to moderate the upload of these videos, considering the source code for the AI is on GitHub, but there’s already another AI being trained to spot deepfake videos.
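
The post doesn't go into how such a detector actually works, but a common starting point is a plain binary classifier fine-tuned on video frames that have been labeled real or fake. Here's a minimal sketch along those lines; it assumes PyTorch and a recent torchvision, and the frames/ folder of labeled images is hypothetical, so treat it as an outline rather than an actual detection project.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Frames pulled from real and deepfaked videos, sorted into "real/"
    # and "fake/" subfolders of a hypothetical frames/ directory.
    data = datasets.ImageFolder("frames/", transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]))
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    # Start from a network pretrained on ordinary photos, then fine-tune
    # it to answer one question: real frame or fake frame?
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()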

What’s Next?

Currently, the United States is leading the “attack” on deepfakes. However, there’s little to show that these laws are being carried out correctly or are even enforceable. As Sally Adee notes in an IEEE Spectrum article, “it’s hard to make deepfake detectors that are not immediately gamed in order to create more convincing deepfakes”. In my opinion, we should be building more of these detection AIs and helping them learn faster, so they can keep an eye out for anything that doesn’t look quite human and stay ahead of those trying to game the system. Adee also mentions a couple of programs, Reality Defender and Deeptrace, that hope to act as a deepfake filter and keep these videos off of your feed. YouTube also stated back in February that it will not allow deepfake videos related to the 2020 Census, election, or voting on its platform, which is another big step forward. All we can hope for now is that more companies take charge of this and give us more ways to avoid being tricked by these videos.

Information Sources:

https://www.techlicious.com/tip/how-to-spot-a-deepfake-video/
https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created
https://www.defenseone.com/technology/2020/08/deepfakes-are-getting-better-easier-make-and-cheaper/167536/
https://www.dwt.com/insights/2019/10/california-deepfakes-law

8 Replies to “Deep in the Fakes: Telling Deepfakes From Real Takes”

  1. Oh, wow! I had no idea. With technology like this, people will have a lot of trouble knowing what is real. Also, as fragmented as the internet is, a deepfake could circulate before anyone with the skill to recognize it became aware.

    In our current geopolitical climate, there are so many people trying to influence elections. A deepfake could be made by a foreign power. Then, it could circulate for a while without anyone knowing.

    This is what conspiracy theories are made of. Scary!

  2. Great topic, interesting read. This sort of technology could certainly be dangerous in the wrong hands. While I acknowledge regulation would be very difficult, it seems imperative that some sort of widespread policy and anti-deepfake systems be put in place.

  3. Wowzers! You’ve done an amazing job of creating a deep fake PSA. I didn’t know there were so many ways to identify fake images and videos besides just doing research.

    Great, informative post!

    1. This is such an important article! I had never seen the video of the Hader/Cruise/Rogen mishmash, that was cool. I have to mention that I saw a picture the other day that looked so real to me, but of course, when I did a little research, I discovered it was completely fake! The sad thing is that it was of a politician that many people don’t like, so she is a constant target of things like this. I don’t understand how people live with themselves after potentially ruining someone’s reputation, career, relationships, and/or mental health. I was going to leave a link to the picture, but I decided that I would potentially be “part of the problem” by spreading such fake images, even though I clearly stipulated that it was fake. Sometimes, people look at an image and don’t read the associated text, or sometimes they just really WANT to believe the crap, so I will skip linking to the picture. But, I certainly fell for it and assumed it was real at the time. The scary thing is that this tech is only going to get more convincing over time. Thanks for educating us more on the subject!

  4. Really great topic. I think deepfakes need to be watched as closely as possible. The damage that they can do if they go unchecked is a scary thing to think about. Regulation is going to be an extremely difficult thing. I sure as heck don’t have a great idea for that, but I do feel nervous about deepfakes. As technology becomes more sophisticated, the destruction will become more rampant.

  5. This was such a cool post to read! I’d never seen that video before, but I watched it before reading the rest of the blog. I thoroughly enjoyed the video, so I was set up to enjoy the reading! It’s very neat that technology has come this far – so far that deepfakes are a real thing that can be cool and fun, or destructive and misleading. I don’t know if I’ve ever been fooled by a deepfake, but I can easily see how many others could be.

  6. Great post! As technology gets more advanced, it’s getting harder to know what’s real and what’s not. People can use this type of technology in wrong and dangerous ways. I don’t even know how many times I have been fooled by deepfakes. I really like how you point out things to check when identifying deepfakes. It will help a lot. Very informative post!

  7. This was a fun read. I have heard of this before but never heard the term “Deep Fake”. It is crazy how much better they are starting to look. The Jeff Bridges one in Tron had what I refer to as dead eyes… The eyes look so lifeless that you can tell. The movement also looks off, which is what stood out with Peter Cushing in Rogue One.

    Your list on how to spot them was fun to read. I never really paid attention to the stray hair issue because the eyes, emotion and movement always stood out the most to me.

    With Hollywood wanting to do more remakes and avoid recasting, I feel like we will see more of these face swaps. I enjoyed Doctor Sleep because they went out of their way to avoid using Deep Fakes.
