The woolly elephant looks real, but was generated by an artificial intelligence. © Hugging Face

Deep fakes: How artificially generated images give themselves away

Humans often have no chance whatsoever of distinguishing artificially generated images, audio and video from real ones. This is why researchers are currently working on automated detection solutions.

All it takes is a simple text command: in no time at all, artificial intelligence can generate an image that looks like a real photo and is indistinguishable from one to human eyes. Fascinating as the technology is, it essentially casts doubt on the authenticity of every image. For his PhD thesis at the Faculty of Computer Science at Ruhr University Bochum, Jonas Ricker has specialised in the technical detection of fake images. He’s looking for ways to distinguish artificially generated pictures and videos from real ones. Rubin, the science magazine of Ruhr University Bochum, Germany, features an article on his project.

To Gaussian noise and back again

The so-called diffusion model for image generation is currently very popular due to the Stable Diffusion application: “The underlying principle may at first sound surprising,” says Ricker. “A real image is diffused step by step through the successive addition of Gaussian noise – hence the name. A few hundred steps later, the image information has been completely removed and the image is nothing but noise. The purpose of the model is to reverse this process and reconstruct the original image – a tricky challenge.” The key is not to predict the image directly, but to proceed step by step, in the same way the noise was added. Given a sufficient amount of training data, the model can learn to make a noisy image a tiny bit less noisy. Through repeated application, completely new images can then be created from random noise.
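To make the forward process concrete, here is a minimal numpy sketch of the “noising” half of a diffusion model. The linear beta schedule, the step count and all helper names are illustrative assumptions; production systems such as Stable Diffusion use trained neural networks, latent spaces and more elaborate schedules.

```python
# Minimal sketch of the forward ("noising") process of a diffusion model.
# The schedule and names are illustrative assumptions, not any specific
# system's implementation.
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances (assumed schedule)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative signal retention per step

def q_sample(x0, t):
    """Jump directly to step t of the forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# A toy 8x8 array stands in for real pixel data.
x0 = rng.uniform(-1.0, 1.0, size=(8, 8))

x_quarter = q_sample(x0, T // 4)   # partially noised, still correlated with x0
x_final = q_sample(x0, T - 1)      # almost pure Gaussian noise

print(np.corrcoef(x0.ravel(), x_quarter.ravel())[0, 1])  # clearly nonzero
print(np.corrcoef(x0.ravel(), x_final.ravel())[0, 1])    # near zero
```

Image generation then runs this process in reverse: starting from pure noise, a trained network repeatedly predicts the noise component and removes a small amount of it, until a completely new image emerges.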

Exposing fake profiles on social media

“The diffusion model already delivers very good results in generating deceptively real images, and it will continue to improve,” believes Jonas Ricker. This will make it even more difficult to distinguish real images from artificially generated ones. Ricker is currently testing various approaches for distinguishing images generated by the model from real photos. Telling real and fake images apart matters not only for exposing fake news, such as news delivered in the form of videos, but also for exposing fake profiles on social media. Such profiles are used on a massive scale to manipulate public opinion on political issues, for example. “This is exactly what the CASA Cluster of Excellence is about: exposing large-scale attackers such as governments or intelligence services that have the means to use deep fakes to disseminate propaganda,” concludes Jonas Ricker.
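One family of detection approaches discussed in the research literature inspects an image’s frequency spectrum, since generators often leave characteristic high-frequency artifacts behind. The sketch below is a generic illustration of that idea, not Ricker’s specific method; the cutoff value and the random stand-in image are placeholder assumptions.

```python
# Illustrative sketch of a spectrum-based deep-fake feature: the fraction
# of an image's Fourier energy in high frequencies. Generic example only;
# the cutoff and test image are placeholders.
import numpy as np

def log_power_spectrum(image: np.ndarray) -> np.ndarray:
    """2D log power spectrum of a grayscale image, DC component centered."""
    f = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(f) ** 2)

def radial_profile(spectrum: np.ndarray) -> np.ndarray:
    """Average the spectrum over rings of equal distance from the center."""
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def high_freq_energy(image: np.ndarray, cutoff: float = 0.75) -> float:
    """Fraction of radially averaged spectral energy above a relative cutoff."""
    profile = radial_profile(log_power_spectrum(image))
    k = int(len(profile) * cutoff)
    return profile[k:].sum() / profile.sum()

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(128, 128))  # stand-in for a real photo
print(high_freq_energy(img))
```

In practice, such spectral features would be extracted from many labeled real and generated images and fed into a trained classifier, rather than compared against a hand-set threshold.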

Detailed article in science magazine Rubin

You can find a detailed article on this topic in the science magazine Rubin, special edition IT Security. For editorial purposes, the texts on the website may be used free of charge provided the source “Rubin – Ruhr-Universität Bochum” is named, and images from the download page may be used free of charge provided the copyright is mentioned and the terms of use are complied with.

Press contact

Jonas Ricker
Faculty of Computer Science
Ruhr University Bochum
Germany
Phone: +49 234 32 23486
Email: jonas.ricker@ruhr-uni-bochum.de

Published

Friday
23 June 2023
7:36 am

By

Meike Drießen (md)

Translated by

Donata Zuber
