They have reconciled the seemingly incompatible inductive approach of machine learning with deductive logic: Stephanie Schörner, Axel Mosig and David Schuhmacher (from left). © RUB, Marquard

Bioinformatics: How artificial intelligence can explain its decisions

Today, when an algorithm identifies a tumour in a tissue sample, it doesn’t reveal how it arrived at that result, and a decision that can’t be explained is hard to trust. Researchers in Bochum are therefore pursuing a new approach.

Artificial intelligence (AI) can be trained to recognise whether a tissue image contains a tumour. However, exactly how it makes its decision has remained a mystery until now. A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is developing a new approach that will render an AI’s decision transparent and thus trustworthy. The researchers led by Professor Axel Mosig describe the approach in the journal “Medical Image Analysis”, published online on 24 August 2022.

For the study, bioinformatics scientist Axel Mosig cooperated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick from the Ruhr-Universität’s St. Josef Hospital, and biophysicist and PRODI founding director Professor Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether a tissue sample contains a tumour or not. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.
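
To make the training step concrete, here is a minimal sketch of such a binary tissue classifier in PyTorch. The study’s actual architecture, data pipeline and hyperparameters are not described in this press release, so every name below is an illustrative placeholder:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Hypothetical dataset yielding (image_tensor, label) pairs, where
# label 1 means tumour-containing and label 0 means tumour-free tissue.
# train_set = torchvision.datasets.ImageFolder("tissue_images/", transform=...)

def train(model: nn.Module, train_set, epochs: int = 10) -> nn.Module:
    """Inductive learning: fit a general model to concrete observations."""
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # two classes: tumour / tumour-free
    for _ in range(epochs):
        for images, labels in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimiser.step()
    return model
```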

“Neural networks are initially a black box: it’s unclear which identifying features a network learns from the training data,” explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. “However, for medical applications in particular, it’s important that the AI is capable of explanation and thus trustworthy,” adds bioinformatics scientist David Schuhmacher, who collaborated on the study.

AI is based on falsifiable hypotheses

The Bochum team’s explainable AI is therefore based on the only kind of meaningful statements known to science: falsifiable hypotheses. If a hypothesis is false, this must be demonstrable through an experiment. Artificial intelligence usually follows the principle of inductive reasoning: from concrete observations, i.e. the training data, the AI creates a general model on the basis of which it evaluates all further observations.

The underlying problem was described by the philosopher David Hume 250 years ago and can be easily illustrated: no matter how many white swans we observe, we can never conclude from these observations that all swans are white or that no black swans exist. Science therefore makes use of so-called deductive logic. In this approach, a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified as soon as a black swan is spotted.

Activation map shows where the tumour is detected

“At first glance, inductive AI and the deductive scientific method seem almost incompatible,” says Stephanie Schörner, a physicist who likewise contributed to the study. But the researchers found a way. Their novel neural network not only provides a classification of whether a tissue sample contains a tumour or is tumour-free, it also generates an activation map of the microscopic tissue image.

The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumour regions in the sample. Site-specific molecular methods can be used to test this hypothesis.
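
As an illustration of how such a test could be scored (the paper’s actual validation protocol is not detailed here), the agreement between the activation map and a reference tumour mask, e.g. one obtained with a site-specific molecular method, can be quantified with an overlap measure such as the Dice coefficient. Both inputs in this sketch are hypothetical:

```python
import numpy as np

def dice_overlap(activation_map: np.ndarray, tumour_mask: np.ndarray,
                 threshold: float = 0.5) -> float:
    """Dice coefficient between the thresholded activation map and a
    reference tumour mask. A value near 1 is consistent with the
    hypothesis that activation matches the tumour regions; a low
    value falsifies it."""
    predicted = activation_map >= threshold
    reference = tumour_mask.astype(bool)
    intersection = np.logical_and(predicted, reference).sum()
    denom = predicted.sum() + reference.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```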

A neural network is initially trained with many data sets in order to be able to distinguish tumour-containing from tumour-free tissue images (input from the top in the diagram). It is then presented with a new tissue image from an experiment (input from the left). Via inductive reasoning, the neural network generates the classification “tumour-containing” or “tumour-free” for the respective image. At the same time, it creates an activation map of the tissue image. The activation map has emerged from the inductive learning process and is initially unrelated to reality. The correlation is established by the falsifiable hypothesis that areas with high activation correspond exactly to the tumour regions in the sample. This hypothesis can be tested with further experiments. This means that the approach follows deductive logic. © PRODI
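
The press release does not say how the activation map is produced. One common, generic construction, not necessarily the one used in the paper, is class activation mapping (CAM), in which the map is a class-weighted sum of the final convolutional feature maps:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMClassifier(nn.Module):
    """Generic CAM-style network: convolutional features, global average
    pooling, and a linear classifier whose weights also yield the map."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        fmaps = self.features(x)            # (B, 64, H, W)
        pooled = fmaps.mean(dim=(2, 3))     # global average pooling
        logits = self.classifier(pooled)    # tumour vs tumour-free
        # Activation map: weight each feature map by the "tumour" class weights
        w = self.classifier.weight[1]       # class 1 = tumour
        cam = F.relu(torch.einsum("c,bchw->bhw", w, fmaps))
        return logits, cam
```

Upsampling the map to the input resolution yields an overlay like the one sketched in the diagram; because the classification and the map come from the same forward pass, the map shows which regions drove the decision.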

“Thanks to the interdisciplinary structures at PRODI, we have the best prerequisites for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumour subtypes,” concludes Axel Mosig.

Funding

PRODI is funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia (grant no. 111.08.03.05-133974). The Federal Ministry of Education and Research funded part of the study within the framework of the “Slide2Mol” project under grant no. 031L0264.

Original publication

David Schuhmacher et al.: A framework for falsifiable explanations of machine learning models with an application in computational pathology, in: Medical Image Analysis, 2022, DOI: 10.1016/j.media.2022.102594

Press contact

Prof. Dr. Axel Mosig
Bioinformatics Competence Area
Center for Protein Diagnostics
Ruhr-Universität Bochum
Germany
Phone: +49 234 32 18133
Email: axel.mosig@rub.de


Published

Friday
02 September 2022
9:34 am

By

Julia Weiler (jwe)

Translated by

Donata Zuber
