This news made waves: Kiev’s mayor Vitali Klitschko calls Berlin’s mayor Franziska Giffey. At least, that is what it looked like at first glance. In reality, the video call was fake. At first it was not entirely clear whether it was a so-called deep fake or a very professionally made montage of different recordings. In the meantime, however, it has become known that a Russian comedian duo is responsible for the incident.
Even though this incident probably did not involve a deep fake, the technology itself poses a real risk. In the following article, we explain what deep fakes actually are, how they are created, when they become dangerous and how to recognize them. Enjoy reading.
At the end of June, the mayor of Berlin received a video call from Kiev’s mayor Vitali Klitschko. That the call was a fake was not apparent at first glance: according to Franziska Giffey, facial expressions and gestures matched, and the mouth movements were correct as well. She only became suspicious because of the course of the conversation. When the topic of holding a Christopher Street Day in Kiev came up, Franziska Giffey sensed that something was not right and broke off the conversation. The same trick was repeated in Budapest, Madrid, Vienna and perhaps other European cities.
It is now known that a Russian comedian duo is responsible for the fake video calls. They used social engineering methods to gain access to the town hall and make the video call. Social engineering means collecting information about people and ultimately using it so cleverly that a relationship of trust is established and sensitive information is divulged. According to the two perpetrators, this procedure is “very simple” and “works every time”.
They have not revealed how the video was actually made. The only detail they shared about their attack is that it was not a deep fake.
The term “deep fake” is a portmanteau of deep learning and fake. Deep fakes are forgeries created with the help of artificial intelligence from images, audio and film recordings. There are several forms of deep fakes, some of which can even be produced in real time.
It has never been as easy to create deep fakes as it is today: Countless apps and programs such as Reface or DeepFaceLab make it possible to create deep fakes in just a few steps. FakeApp is even available to its users for free, and the Avatarify program allows its users to transform into a completely different person (avatar) and thus appear in real-time in video chats with Skype, Zoom, and Teams.
As mentioned earlier, artificial intelligence is used to create deep fakes. Neural networks and algorithms analyze existing audio, image and video material of the person to be imitated and generate new content from it. The more original material of a person is available, the more realistic the result appears. If the AI-powered applications have material of a person from different situations, perspectives and media formats at their disposal, they quickly learn the person’s essential characteristics. The AI can then transfer these characteristics to other footage without altering the surrounding scene.
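The idea behind many face-swap tools can be sketched as two tiny linear “autoencoders” that share one encoder: one decoder is trained on person A, one on person B, and the swap consists of encoding A and decoding with B. Everything below (dimensions, data, names) is a purely illustrative toy model, not the code of any real application:

```python
import numpy as np

# Toy sketch of the shared-encoder / per-identity-decoder architecture.
# "Faces" are random feature vectors standing in for image data.
rng = np.random.default_rng(0)
dim, latent = 32, 8

faces_a = rng.normal(size=(200, dim))        # training material of person A
faces_b = rng.normal(size=(200, dim)) * 0.5  # training material of person B

enc   = rng.normal(scale=0.1, size=(dim, latent))   # SHARED encoder weights
dec_a = rng.normal(scale=0.1, size=(latent, dim))   # decoder for person A
dec_b = rng.normal(scale=0.1, size=(latent, dim))   # decoder for person B

def train_step(x, dec, lr=1e-2):
    """One gradient step on the reconstruction error ||(x @ enc) @ dec - x||^2."""
    global enc
    z = x @ enc                                  # encode into shared latent space
    err = z @ dec - x                            # reconstruction error
    g_dec = (z.T @ err) / len(x)                 # gradient for this decoder
    g_enc = (x.T @ (err @ dec.T)) / len(x)       # gradient for the shared encoder
    dec -= lr * g_dec
    enc -= lr * g_enc
    return float((err ** 2).mean())

init_a = float(((faces_a @ enc @ dec_a - faces_a) ** 2).mean())
init_b = float(((faces_b @ enc @ dec_b - faces_b) ** 2).mean())

# Train BOTH decoders against the SAME encoder, alternating between identities.
for _ in range(3000):
    loss_a = train_step(faces_a, dec_a)
    loss_b = train_step(faces_b, dec_b)

# The "swap": encode a face of A, then decode it with B's decoder,
# i.e. render A's pose/expression in B's appearance.
fake = faces_a[:1] @ enc @ dec_b
```

Real tools replace the random vectors with face crops and the linear maps with deep convolutional networks, but the swap step (encode with the shared encoder, decode with the other identity’s decoder) is the same, which is why lots of material from different situations and perspectives makes the result more convincing.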

The fake mayoral calls described above illustrate possible areas of application very well. Even if, according to the perpetrators, it is not a deep fake, the motivation is clearly visible. The comedians call it “entertainment”. The victims, on the other hand, speak of “modern warfare”. That describes the situation very well. Deep fakes are often used for entertainment purposes, as it can be amusing to put well-known personalities in obscure and embarrassing situations.
However, this technique can also quickly backfire, discrediting people or spreading disinformation and propaganda.
The German government also warns that deep fakes can pose a great danger to society and politics, especially when they are used to influence political processes and public opinion.
Thanks to technical developments, creating deep fakes is no longer a challenge, even for laymen and beginners. In addition, computing power and the performance of artificial intelligence are constantly increasing. As a result, the quality of deep fakes is improving rapidly, and it is becoming ever more difficult to distinguish truth from manipulation.
You will certainly not encounter deep fakes in everyday life; their use is very specific and situation-dependent. But especially when political, socio-critical or explosive topics are addressed and video material is used for reporting, it can be worthwhile to take a closer look. As in many other cases, you should first trust your gut feeling. If you watch a video of a person or listen to an audio recording of them and their behavior or statements seem strange to you, or completely contradict the person’s “normal” appearance and behavior, it may be manipulation.
As mentioned, technical progress can make it quite difficult to detect deep fakes, and forgeries and manipulations in general. However, the following aspects should be noted:
You can read more information about deep fakes here:
Have you already come into contact with deep fakes and would like to share your experiences with us? Please feel free to contact us. We look forward to the exchange.