People easily fall victim to deepfakes. Fakes have long since stopped being limited to social networks, made-up news, and rumours. Now you can’t be sure of literally anything you see on the screen.
So what is a deepfake? The term combines ‘deep learning’ and ‘fake’. The second part is self-explanatory. Deep learning is a type of machine learning that predicts a result from given input data. Put simply, a neural network simulates the workings of real intelligence, drawing conclusions from learned patterns rather than following ready-made algorithms. As you may have guessed, this technology is used to create fake videos: given enough material to learn from, a neural network can synthesize an image of the right person, impose their facial expressions on real footage, and achieve a convincing likeness.
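The core idea of “predicting the result from the given input data” can be illustrated with a deliberately tiny sketch: instead of hand-coding a rule, we let gradient descent discover it from examples. (This is a toy linear model, not deep learning; all names and numbers here are made up for illustration.)

```python
# Example pairs generated by a hidden rule the learner never sees: y = 2x + 1.
pairs = [(x, 2 * x + 1) for x in range(10)]

# The model starts knowing nothing (w = 0, b = 0) and only sees input/output pairs.
w, b = 0.0, 0.0
lr = 0.01  # learning rate

for _ in range(5000):
    for x, y in pairs:
        pred = w * x + b        # current guess for this input
        err = pred - y          # how wrong the guess is
        w -= lr * err * x       # nudge the weights to reduce the error
        b -= lr * err

# After training, w is close to 2 and b is close to 1:
# the rule was recovered purely from data.
```

Deep learning does the same thing with millions of weights arranged in layers, which is what lets it map a face in one frame onto a face in another.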
Similar technologies are used in applications like Face2Face or MSQRD that overlay effects on faces. As early as 2016 it was possible to replace a person’s facial expressions in a video in real time. Unlike its predecessors, the technology rendered not only lips and cheeks but also eyes and teeth, which made the result much more convincing.
In July 2017, American scientists demonstrated a spectacular use of the technology: their algorithm imposes a given audio track on a video with accurate lip synchronization. They chose Barack Obama as an example; the neural network learned from 17 hours of HD recordings of his speeches. For that demonstration, the scientists used only words the ex-president had actually uttered, for which the corresponding facial expressions had been recorded. The technology has since taken a step further, but even the 2017 result is rather impressive: after all, any words can be taken out of context and arranged in a very interesting way.
However, human nature never changes. Given the ability to create unbelievably credible content and influence public opinion, a lot of effort went into … creating intimate videos in which the actors’ faces and expressions were replaced with those of celebrities. By December 2017, such videos had flooded Reddit. Sometimes the result looked indistinguishable from reality, sometimes it came with eerie bugs (hello, Scarlett Johansson’s eyebrow). No, we won’t publish them; just keep on living and know they exist. Moreover, there are special services where you can order a video generated according to your preferences, including very specific ones.
According to the author of those very Reddit videos, he trained his neural network on millions of images from the Internet in which the “target’s” face was distorted (by emotion, camera angle, or poor quality). When the trained AI was then shown a new face, the system treated it as just another distorted image and reshaped it to look like the face it had trained on. So the more shots of a person you can find, the higher the quality of the imitation. And the transition from working with separate frames to altering a whole video (even in real time) is only a matter of computing power.
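Face-swap networks of this kind are commonly built as a shared encoder with one decoder per identity: both identities are squeezed through the same compressed representation, and the swap happens by decoding one person’s encoding with the other person’s decoder. The sketch below is a hypothetical, radically simplified version of that layout, where “faces” are single numbers and each network is one multiplicative weight; it shows the architecture, not real image synthesis.

```python
import random

def train_decoder(encoder_w, samples, lr=0.01, steps=2000):
    """Fit a scalar decoder so that decode(encode(x)) ~= x on this
    identity's samples (toy stand-in for autoencoder training)."""
    w = random.uniform(0.1, 1.0)
    for _ in range(steps):
        for x in samples:
            latent = encoder_w * x           # shared encoder
            recon = w * latent               # this identity's decoder
            grad = 2 * (recon - x) * latent  # d(squared error)/dw
            w -= lr * grad
    return w

random.seed(0)
ENCODER_W = 0.5             # shared encoder, kept fixed in this toy
faces_a = [1.0, 2.0, 3.0]   # stand-in for person A's face crops
faces_b = [4.0, 5.0, 6.0]   # stand-in for person B's face crops

decoder_a = train_decoder(ENCODER_W, faces_a)
decoder_b = train_decoder(ENCODER_W, faces_b)

# The swap: encode a face of A, then decode it with B's decoder.
swapped = decoder_b * (ENCODER_W * faces_a[0])
```

In this linear toy both decoders converge to the same inverse of the encoder, so the “swap” just reconstructs the input; in a real deepfake the decoders are deep convolutional networks that learn identity-specific texture and shape, which is what makes the swapped output resemble person B.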
And what can be better than replacing every actor in your favorite movie with Nicolas Cage?
Of course, all these inventions can be used for the greater good: to improve the quality of a video by replacing bad shots with a simulation; to shoot feature films without human actors, with visual effects of any complexity and no outtakes; or to build VR and computer games with incredible detail.