Work science
How to use AI in a human-centred way
A technical tool that takes over routine tasks or helps make difficult decisions – this is the dream! But not for everyone.
For radiologists, counting tiny bubbles on the monitor with meticulous care is part of the daily routine. The bubbles reveal what an image is actually showing – whether a tissue is a tumour, for example. This is crucial information for the further treatment of patients. The task as such is routine and not much of a challenge; still, it eats up a lot of time. “Tasks such as these are a typical field of application for artificial intelligence (AI),” says Dr. Valentin Langholf. “AI is good at two things: for one, it processes routine tasks automatically and, for another, it detects and points out anomalies.” AI systems are already in use or under development in many radiology departments. But to reduce their impact to time savings is to miss the point. They change more than that: they affect the tasks performed by humans, their daily work routine, possibly also their professional self-perception and thus their satisfaction in the workplace. All these effects are being explored by the team of researchers at the HUMAINE competence centre – short for Human-Centered AI Network – of which Langholf is a member.
Economy, efficiency, acceptance and satisfaction
The project addresses much more than the interface between humans and computers. “We are interested in how the use of AI affects the process of work – for example, the job profile of radiologists, what cooperation looks like within a team, between specialist departments and at the intersection between AI development and AI use,” explains Professor Uta Wilkens, Chair for Work, Human Resources and Leadership at the RUB Institute of Work Science and head of the competence centre. “Obviously, profitability and efficiency also play a role. This is precisely why we address questions of technology acceptance, job satisfaction and role development. We’re looking for ways to use human intelligence and skills to make the technology – which is neither error-free nor perfect – better and more reliable for deployment in operational work processes.”
The less personal experience our interview partners had with the use of AI, the greater their uncertainties.
Uta Wilkens
In order to tackle these questions, the researchers interviewed professionals who are directly concerned: doctors and radiographers. Approximately 130 people took part in a survey on digitalisation conducted by the University Hospital Charité – Universitätsmedizin Berlin. The Bochum-based researchers contributed some questions concerning AI. “It emerged that there are two groups: those who have a rather optimistic attitude and those who are rather pessimistic,” says Valentin Langholf. In response, the researchers took a closer look and conducted individual interviews with five representatives of each of these two groups. “The less personal experience our interview partners had with the use of AI, the greater their uncertainties,” explains Uta Wilkens. “Radiographers and doctors who were fully aware of the capabilities and limitations of AI felt that the technology supported them in their own professional practice. Those who were influenced by media-generated images rather than having hands-on experience with AI showed greater scepticism.”
The media image is unsettling
The media image of artificial intelligence – a human-like robot that threatens to replace the human worker – has nothing to do with how the technology is actually applied in the workplace. “We are vehemently opposed to this misleading portrayal, because it creates uncertainty where none is necessary. Speculative visions of technological development can’t simply be superimposed on the realities of the professional world,” points out Uta Wilkens. “AI is software, an algorithm that only ever performs one function in a highly specialised form. This allows it to cover highly standardised, usually monotonous tasks where human sensory organs reach their limits. In this case, AI serves as a tool. However, AI is not capable of carrying out the kind of interconnected, multifaceted actions performed by human beings who are capable of reasoning.”
The interviews also revealed how differently AI is perceived by different professional groups. Doctors, for example, welcome the time saved when AI supports the assessment of images, because it allows them to carry out higher-value tasks in the time gained. For radiographers, the balance doesn’t work out the same way: if an AI supports them in positioning patients correctly in an imaging device, a task that they consider part of their professionalism falls away. Close contact with the patient can help to create a professional identity. “If no alternative role models are developed for the work process and the range of activities, then some may wonder: is what I do now merely residual work, and am I becoming a gap filler for the technology?” elaborates Uta Wilkens. This is not compatible with the principles of human-centred job design, which define the normative basis of work science and protect personality-enhancing job characteristics.
Many have a stake – but don’t talk to each other
At the same time, the researchers rate the potential of AI as high: it can not only save time but also improve quality – something with a great deal of leverage in radiology, where the course is often set for the further treatment of patients. “However, to ensure that the AI solution is a good one, its development and implementation must take place in feedback loops with all parties involved in the processes,” stresses Uta Wilkens. “Without knowledge from the user perspective, errors in the AI’s data structures won’t be detected in time, or the AI won’t be sufficiently geared towards the work process.”
During their interviews, the researchers noticed that many people have a stake in the use of AI, yet in many areas they have not yet entered into dialogue with one another. Take, for example, the clinic’s management and purchasing department: these are the parties that ultimately decide to purchase AI-supported X-ray technology, but they have little insight into the day-to-day work of the people who are supposed to use it. Then there are the data scientists who develop the AI. They, in turn, often know little about the application domains to which the data relates, for instance medical diagnostics. “When purchasing decisions about AI are made, the work processes affected by it are often not reconsidered and developed in a joint effort. This leads to many interface problems and frustration at the workplace. The people involved may wonder why the whole expensive acquisition was even necessary,” explains Valentin Langholf.
A kind of seal of quality
In order for AI to be a real asset in daily workplace routines, the researchers in the HUMAINE project are developing a process guideline for its implementation. It specifies who should be involved and which questions should be answered in advance wherever possible: What is the quality of the data on which the artificial intelligence is trained? Is its output actually comprehensible? How does its use affect efficiency and profitability, as well as the quality of the process and the results? How does it affect the workplace? How do employees perceive their role after the implementation of AI? “What we have in mind is a kind of seal of quality that clinics or industrial companies can acquire, confirming that they are using AI in a human-centred way,” points out Uta Wilkens. “For this to happen, the process mission statement has to be clear and focused, and it has to be immediately plausible to the company representatives.”
At the end of the day, she is convinced that this comprehensive view of the work environment – and of how AI might change it – will foster acceptance of the technology. “This is how we can build trust,” she concludes.