Deepfake: typologies and reflections, deep learning and GANs

This article presents some examples of Deepfakes, their main typologies, and some reflections on Deep Learning and GANs.

Try to imagine a speech in which an extremely vulgar political candidate delivers hate speech, and that speech is then spread online and made viral by communities on social media.

It is a fictional but realistic scenario, made possible by AI and synthetic media technologies, also known as Deepfakes.

 

What are Deepfakes?

The term Deepfake embraces a wide variety of content and refers to a technology that derives from Artificial Intelligence.

It can be defined as the set of images and videos that have been altered so that one person takes on the appearance of someone else.

However, this definition deserves a closer look.

Deepfakes can be defined as video or audio content in which the face, the voice, or both are replaced with those of other people. More generally, Deepfakes are described as synthetic media content, hence the term “fake”, generated by Deep Learning techniques, hence the term “deep”.

For the average user, the difficulty in creating Deepfakes can vary: some Snapchat filters allow you to replace your own face with that of another, for example.

However, this type of content is as easy to create as it is unconvincing.

The most realistic Deepfakes require far more powerful technologies, namely Machine Learning and Neural Networks: in 2014, a group of researchers led by Ian Goodfellow introduced “generative adversarial networks”, also known as GANs, the technique that first made it possible to generate strikingly realistic synthetic faces.
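To give a concrete idea of how adversarial training works, here is a minimal sketch (a toy illustration in PyTorch, not the original researchers’ code): a generator learns to turn random noise into fake samples, while a discriminator learns to tell real samples from fake ones, and the two networks improve by competing against each other.

```python
# Toy GAN training step (illustration only, not the original 2014 implementation).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. 28x28 images flattened (an assumption)

# Generator: random noise -> fake sample. Discriminator: sample -> "real?" logit.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator: push real samples toward "real", generated ones toward "fake".
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), ones) + bce(discriminator(fake_batch), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator: produce samples that the discriminator classifies as "real".
    fake_batch = generator(torch.randn(n, latent_dim))
    g_loss = bce(discriminator(fake_batch), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Placeholder "real" data in [-1, 1]; in practice this would be a batch of face images.
train_step(torch.rand(32, data_dim) * 2 - 1)
```

Repeating this step over a large dataset of real faces is, in essence, what allows the generator to produce increasingly convincing synthetic ones.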

You may also be interested in: Artificial Neural Networks to Understand the Functioning of the Mind.

 

DEEPFAKE: TYPES

FACE-SWAPPING

As mentioned above, face replacement techniques have long been popular on social media such as Snapchat or Instagram.

However, the output of these tools is of low quality and is used mainly for recreational purposes.

 

PUPPETEERING

Also known as “Full Body Deepfakes”, the puppeteering technique consists of rendering a character’s entire body in 3D and making it perform actions orchestrated by AI, just like a puppet moved by a puppeteer.

For example, the Japanese AI firm DataGrid has created an AI engine that can automatically generate virtual models for the advertising and fashion industries.

 

LIP-SYNC

The lip sync technique consists of rendering mouth movements and facial expressions to make the virtual character say certain sentences with the right tone.

A well-known example is the public service video produced by Jordan Peele, in which this technique was used to make Barack Obama appear to deliver a speech he never gave.

 

VOICE CLONING

Voice cloning refers to Deep Learning algorithms that, starting from a few voice recordings of a person, are able to create a synthetic voice matching the original.

In this way, entire speeches can be produced with the newly created artificial voice.

Among the most popular services are Microsoft Custom Voice and iSpeech.

 

IMAGE SYNTHESIS

Image generators are now widespread online.

They employ Computer Vision, Deep Learning and GAN (Generative Adversarial Network) techniques to synthesize new images of any kind: from portraits, as with ThisPersonDoesnotExist.com, which creates photorealistic faces of people who do not exist, to NVIDIA GauGAN, which generates entirely new landscapes from simple brush strokes drawn by the user with a mouse.
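Once a generator of this kind has been trained, creating a new image boils down to sampling a random latent vector and running it through the network. The snippet below is a self-contained toy illustration of that idea (the generator here is tiny and untrained; services such as ThisPersonDoesnotExist.com rely on far larger models, but the principle is the same).

```python
# Illustration only: image synthesis as "random latent vector in, new image out".
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(  # stand-in for a trained GAN generator
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

with torch.no_grad():
    z = torch.randn(16, latent_dim)            # 16 random latent vectors
    images = generator(z).view(16, 3, 64, 64)  # 16 synthetic 64x64 RGB "images"
print(images.shape)  # torch.Size([16, 3, 64, 64])
```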

 

TEXT GENERATION

Text generators are AI and Deep Learning engines that can automatically generate texts, stories and poems.

A notable example is OpenAI’s GPT-3, capable of generating not only text documents but also guitar tabs and programming code.
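GPT-3 itself is accessed through OpenAI’s paid API, so as a purely illustrative stand-in the sketch below generates text with the smaller, open GPT-2 model through the Hugging Face transformers library (this library and model are assumptions of the example, not tools mentioned in the article).

```python
# Minimal text-generation sketch using the open GPT-2 model via Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Deepfakes are synthetic media generated by deep learning that",
    max_length=60,            # total length (prompt + continuation) in tokens
    num_return_sequences=1,   # number of alternative continuations to produce
)
print(result[0]["generated_text"])
```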

 

SOMETHING IMMORAL OR CREATIVE AND INNOVATIVE?

In deciding whether something is good or bad, the first thing to consider is the intention behind the production of content such as Deepfakes.

On the one hand, this type of technology certainly has a “positive” side, for example the possibility of having fun with the filters that social platforms make available.

The @deeptomcruise case is somewhat different: the account shared on TikTok a series of viral videos in which the actor’s face is placed on the creator’s body with a very high level of realism.

TikTok is taking action to implement policies against the production of entirely synthetic or manipulated content that can alter the user’s perception of real facts.

In this case, however, the account that shared those videos did so with the simple intention of demonstrating what technology is capable of doing for entertainment purposes only.

Unfortunately, however, there are also incorrect or immoral uses of Deepfake.

Content in which the faces of celebrities are inserted into adult films is widespread; and when the goal is disinformation, videos can be generated in which political candidates appear to make statements that have in fact been extensively manipulated by technology, while still looking extremely realistic.

 

CONCLUSIONS

We have seen how Deepfakes and the Deep Learning and GAN technologies behind them can be used in the most diverse fields: technology can create possibilities and opportunities for anyone, regardless of their language.

The benefits this type of technology can bring range from education and accessibility to film production and individual artistic expression.

As with any new technology, however, there are always those who try to exploit it at the expense of others. Deepfakes can be used to create disinformation and serve as a communication weapon, damaging people’s reputations and affecting the security and political stability of a country.

The most refined Deepfake content is able to bypass AI-based detection and control systems, and this ability keeps improving as new detection methods are introduced.

AI-based detection methods aim to address the problem in the short term, while authentication and content-provenance techniques are expected to provide a longer-term solution to the Deepfake problem.
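To make the idea of an AI-based detection method slightly more concrete, here is a minimal, purely illustrative sketch (an assumption made for this article, not a description of any real product): a small convolutional classifier that scores a single video frame as real or fake. Production detectors use far larger models trained on dedicated deepfake datasets.

```python
# Illustrative frame-level deepfake detector: a tiny CNN that outputs the
# probability that an input frame is synthetic. Untrained here, so its output
# is meaningless; real systems are trained on large datasets of real/fake frames.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: "how likely is this frame fake?"
)

frame = torch.rand(1, 3, 224, 224)        # placeholder for one video frame
p_fake = torch.sigmoid(detector(frame))   # probability after a sigmoid
print(f"Estimated probability the frame is fake: {p_fake.item():.2f}")
```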

 
