Manfred Wannöffel has been working closely with various companies and their works councils for many years.

© RUB, Kramer

In conversation 

Here’s to a Successful Partnership with Co-worker AI

Will AI take our jobs? Not necessarily. It depends on how we shape its use in terms of labor policy, says Manfred Wannöffel. 

Is AI about to make me redundant? This is not an uncommon concern, as the uses of AI are expanding rapidly and anyone not using it could lose out to the competition. How can AI be employed in companies and administrative institutions such that all of its benefits can be enjoyed without compromising human employees? How can AI systems be designed in such a way as to foster acceptance of and trust in the new technology?

Researchers at humAIne, the transfer hub for human-centered work with AI at Ruhr University Bochum, are working to answer these questions. As part of a sub-project, Professor Manfred Wannöffel, former executive director of the Joint RUB/IG Metall Working Group and senior professor at Ruhr University Bochum, and his team have dedicated themselves to the introduction of AI in companies, working with the management and works councils of numerous regional enterprises.

Professor Wannöffel, why should companies’ use of AI be regulated? 
Artificial intelligence has become a key topic in the current debate around the future of work. The critical difference between AI and previous digitalization processes lies in the manner in which the system’s behavior is produced. In past processes, this was determined by explicit programming: Developers wrote rules that the system worked through step by step. The result was relatively comprehensible.

Autonomy and the ability to make independent decisions are big challenges in the context of work.

The behavior of AI systems, on the other hand, is not dictated by programmed rules; instead, the system itself determines its behavior on the basis of huge quantities of data. The system identifies patterns and can independently derive its own options. Generative AI systems like ChatGPT can take the same input and reach different results that can only be partially reproduced. However, we should not forget that these processes are always based on specific objectives and interests, with a corresponding selection of data, which are in turn always determined by humans.

For employees and their representatives, these characteristics of AI such as autonomy and the ability to make independent decisions are big challenges in the context of work. If companies use AI systems in a largely unregulated manner, this can lead to job losses, greater insecurity, and stress for employees.

How can such regulation be implemented? 
You have to take the employees’ perspective into consideration. This is done through their representatives. Works councils and employee committees are established institutions that include employee perspectives in companies’ decision-making processes and have legal co-determination rights when it comes to introducing new technology: In every company with more than five employees, it is possible to establish a works council. Public administrative bodies have their own laws for employee representation. These committees have to be involved whenever new technology has the potential to monitor employee performance.

The EU AI Act is the first comprehensive AI law in the world.

These Germany-specific regulations have been supplemented since August 2024 by the European Union’s AI Act. This stipulates mandatory workforce qualification before AI is utilized. Above all, it regulates that when AI is being used, the final decision in a work process must always be made by a human employee. This is entirely unique from a global perspective, especially when looking at China or the USA. The EU AI Act is the first comprehensive AI law in the world.

Is there no fear that such directives will leave companies at a global competitive disadvantage?
This is indeed the major challenge in a world undergoing profound change, also known as a polycrisis. Ultimately, the democratic achievements of co-determination in the workplace always face the bottleneck of economic viability and global competitiveness. For 30 years now, we have observed that many companies view collective bargaining agreements and works councils as restrictions on their options and attempt to free themselves from them.

This democratic perspective is often wrongly considered left-wing.

But research shows that qualified works councils are an asset to a company. It’s not a fundamental conflict between management and the works council, but rather a so-called “conflict partnership” (Walther Müller-Jentsch) that balances existing differences in labor policy interests. This democratic perspective is often wrongly considered left-wing. And democracies worldwide are under pressure. This makes it all the more important to practice democracy in the workplace every day, and also to strengthen it through successes. In doing so, we are also implementing a core idea of the recently deceased Jürgen Habermas with regard to our labor democracy: Within the framework of the rule of law, concrete solutions in all areas of society must be continually negotiated among the stakeholders. Therefore, in addition to the technical and economic aspects, the introduction of AI in the workplace is always also a matter of labor policy negotiation.

Our empirical studies clearly show that works councils do not want to prevent new technologies at all, but rather to shape them together with management in a way that respects the people working at the company.

What could this look like? 
In Bochum, we collaborated with Doncasters, a foundry that is currently very successful in producing turbine blades for gas-fired power plants and aircraft, as part of the humAIne project. The project focused on the use of AI in quality control. The works council itself recognized the potential benefits of this approach and even pushed for its implementation with management. A framework agreement stipulated the conditions under which this AI would be used. For example, affected employees are required to undergo training before the AI is implemented. The agreement ensures that no employee loses their job if their position is replaced by AI. Should tasks become obsolete, employees are offered alternative positions within the company.

How long does it take for such an agreement to be put in place? 
It can take a while, for example, six months to a year from the start of discussions on the topic. It is a democratic, labor-policy negotiation process. We also tested the transferability of the Doncasters company agreement, which is scheduled to be implemented at the end of 2026, with a long-established Bochum-based company, the Eickhoff machine factory and iron foundry. The impetus was the introduction of AI solutions in the office area. Eickhoff’s works council, with its framework company agreement for all departments, has even surpassed the Doncasters model. This also shows us that we are in a continuous development process.

What happens once the agreement is in place? Is it regularly updated?
Agreements have to prove their effectiveness in everyday conditions. In a best-practice example from Deutsche Telekom, a committee with equal representation from management and the works council reviews new AI applications every four weeks to ensure they comply with the regulations of both the Works Constitution Act and the EU AI Act. 

Ultimately, we are all working to make the democratic and human-centered use of AI a marketable quality criterion in Germany and the EU. 

However, this continuous work requires a technically and methodically trained works council as well as respect from management for the institution of participation. Democracy in the workplace should become commonplace because it allows the introduction of AI with little to no conflict, to the benefit of everyone involved, from management to employees.

Can companies or works councils consult humAIne if they want to follow this example?
Yes, in addition to scientific articles we have also drafted a commented humAIne company agreement template that explains the individual labor-policy process steps. Together with TÜV Rheinland, a process quality seal is also being developed as part of humAIne that confirms that a company uses human-centered AI. This is based on the checklists of the ISO standards, although these primarily focus on external impact.

However, to quote Max Weber, politics, and democracy with it, is the slow boring of hard boards. It is slow, while AI is exceedingly fast. This is the greatest challenge in the competition with China and the USA. In order to consolidate these approaches, the humAIne e.V. association was founded to continue the humAIne project and serve as a contact for companies and interest groups.

humAIne

The competence center humAIne is funded by the Federal Ministry of Research, Technology, and Space from 2021 to autumn of 2026. The humAIne e.V. association was founded in 2024 together with research institutes at Ruhr University Bochum as well as companies. This association will continue the project’s work after the funding period has ended.
 

Ruhr Innovation Lab

The research conducted for humAIne follows the concept of the Ruhr Innovation Lab, which aims to contribute to a resilient and sustainable society. Ruhr University Bochum and TU Dortmund are currently joint applicants in the Excellence Strategy under the title Ruhr Innovation Lab.

Published

Wednesday
01 April 2026
11:59 am

By

Meike Drießen

Translated by

Allround Fremdsprachen GmbH von der Lühe
