Augmented Reality
Digital maintenance assistant
Poring over blueprints and searching for manuals might soon become a thing of the past. Augmented Reality could make everyday work easier for maintenance technicians.
Putting on a headset instead of dragging around a folder full of blueprints and manuals: this is what the future might look like for technicians in charge of building maintenance. Wearing smart glasses, they could navigate to the right location and view the necessary tasks step by step – with the aid of Augmented Reality.
“The glasses would superimpose information over the real-world scene, making the user’s task easier,” outlines Professor Markus König, Head of the Chair of Computing Engineering in Bochum. “He could, for example, see when a component was last checked and what kind of maintenance work is currently needed. The information would be available right where he needs it, rather than being hidden somewhere in a folder.”
Together with his team, Markus König develops algorithms that enable such Augmented Reality applications. Numerous groups worldwide are researching similar questions; the team in Bochum focuses specifically on applications inside buildings. “At my department, we primarily study positioning within a room,” he explains. “In order for the glasses to superimpose information in the correct location, they have to know where they are positioned and what they are looking at.”
Currently, this requires calibration using at least two markers that are applied at different points in the room and recorded in a digital model: when the user enters the room, he has to tell the system where these markers are so that it can calculate his three-dimensional position in the room.
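The geometric principle behind this kind of marker-based calibration can be sketched in a few lines: if the coordinates of a few markers in the building model and their pixel positions in the camera image are known, a standard perspective-n-point solver yields the camera pose. The sketch below uses OpenCV purely for illustration; the marker coordinates, camera intrinsics and the number of markers are made-up assumptions, not the system described in the article.

```python
# Illustrative marker-based calibration sketch (hypothetical values throughout).
import numpy as np
import cv2

# Hypothetical marker positions in the digital building model (metres).
model_points = np.array([
    [0.0, 0.0, 0.0],
    [1.2, 0.0, 0.0],
    [1.2, 0.8, 0.0],
    [0.0, 0.8, 0.0],
], dtype=np.float32)

# Where those markers appear in the camera image (pixels), e.g. from a marker detector.
image_points = np.array([
    [410.0, 300.0],
    [590.0, 305.0],
    [585.0, 420.0],
    [405.0, 415.0],
], dtype=np.float32)

# Assumed pinhole intrinsics of the glasses' camera.
camera_matrix = np.array([
    [800.0, 0.0, 320.0],
    [0.0, 800.0, 240.0],
    [0.0, 0.0, 1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted image

# Solve the perspective-n-point problem for the camera pose.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)

# Express the camera position in the building model's coordinate system.
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()
print("Estimated camera position:", camera_position)
```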
Automatic positioning in real time
The researchers from Bochum are developing their algorithms specifically so that smart glasses can recognise their location in a room automatically and in real time, based on the images recorded by the camera – without any active calibration on the user’s part. The glasses would thus be able to identify their position not only in a single room, but in the entire building. With manual marker calibration, this would only be possible if the user recalibrated the glasses frequently or used a large number of markers.
For automated calibration, the researchers feed a digital model of the building into the system. An algorithm developed in-house compares the image recorded by the camera with the model. To this end, the person wearing the glasses only has to turn around once in the room, so as to provide the glasses with as much visual information about the surroundings as possible. “Subsequently, the algorithm rotates and moves the digital model until it overlaps with the surroundings,” describes Markus König. If necessary, it does so pixel by pixel; depth information, as recorded by modern cameras, is very helpful in the process.
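To give a rough idea of this alignment step, the following sketch registers a point cloud captured by a depth camera against a point cloud sampled from the building model using the open-source Open3D library. This is an illustration of the general technique (iterative closest point registration) under stated assumptions, not the Bochum group’s own algorithm; the file names and parameters are invented.

```python
# Minimal sketch of aligning a depth-camera scan with a building model (illustrative only).
import numpy as np
import open3d as o3d

# Point cloud captured by the depth camera while the user turns around once (hypothetical file).
scan = o3d.io.read_point_cloud("room_scan.ply")

# Point cloud sampled from the digital building model, e.g. exported from BIM (hypothetical file).
model = o3d.io.read_point_cloud("building_model.ply")

# Rough initial guess of the pose; in practice this could come from a coarse global registration.
init = np.eye(4)

# Iteratively rotate and translate the model until it overlaps with the scanned surroundings.
result = o3d.pipelines.registration.registration_icp(
    model, scan,
    max_correspondence_distance=0.2,  # metres
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("Model-to-scan transformation:\n", result.transformation)
```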
It’s enough if a few distinctive points are visible.
Markus König
This method of aligning the system in the room works even if the surroundings have been altered by furniture and accessories and no longer look anything like the digital model. “Furniture is not a problem. It’s enough if a few distinctive points are visible, such as the edges of the room or the windows,” says the researcher.
Tested in new building
In summer 2019, Markus König’s team tested how precisely the algorithm can determine position in a new building at Bochum University of Applied Sciences, for which a complete digital 3D model is available. They tested the model and their algorithm with a depth camera that generates images similar to those produced by smart glasses. They checked how precisely the algorithm identified the camera’s position in the room and compared the results with traditional positioning based on three markers on the wall. Finally, they surveyed the room with high-precision sensors in order to obtain reliable reference data.
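How such a comparison can be evaluated is easy to sketch: the positions estimated by the algorithm are compared with the surveyed reference positions, and the distances between them are averaged. The numbers below are invented placeholders, not measurements from the Bochum experiment.

```python
# Toy evaluation sketch with made-up positions (metres).
import numpy as np

estimated = np.array([[2.31, 4.10, 1.52], [5.02, 1.88, 1.49]])  # from the algorithm
reference = np.array([[2.18, 4.05, 1.50], [4.85, 1.95, 1.50]])  # from high-precision sensors

# Euclidean distance between each estimate and its reference position.
errors = np.linalg.norm(estimated - reference, axis=1)
print(f"mean error: {errors.mean():.2f} m, max error: {errors.max():.2f} m")
```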
Automated calibration currently achieves an accuracy of 20 centimetres, which is enough to navigate around a room. For other applications, the Bochum researchers intend to optimise the algorithm further. The maximum achievable accuracy is two centimetres. “That’s the engineering tolerance,” explains König. “That means that if a wall is indicated to be in position X in the digital model, it may in reality deviate from it by two centimetres.” For many applications, this level of accuracy would be quite sufficient, according to the researcher.
Optimisation for lower computational power
One of the group’s main concerns is to ensure that the algorithm works automatically and in real time – as it already does when running on a smartphone. Smart glasses have less computing power than phones, however, which means the application has to become more efficient in order to run smoothly.
To this end, Markus König’s team will be monitoring the construction of RUB’s research building Zess in order to test the algorithms in action on a new building. Smaller tests are already conducted routinely in the IC building, where the Chair of Computing Engineering has its offices. The researchers have, for example, programmed an application that automatically guides Hololens wearers to the fire extinguishers in the IC building.
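The navigation part of such an application can be pictured as a shortest-path search through a graph of the building. The sketch below uses the networkx library purely for illustration: the room names, distances and extinguisher locations are made up and do not describe the actual IC building application.

```python
# Toy sketch: route the wearer to the nearest fire extinguisher in a building graph.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("office_123", "corridor_1", 4.0),     # hypothetical nodes, distances in metres
    ("corridor_1", "corridor_2", 12.0),
    ("corridor_1", "stairwell", 7.0),
    ("corridor_2", "extinguisher_A", 2.0),
    ("stairwell", "extinguisher_B", 3.0),
])

start = "office_123"
extinguishers = ["extinguisher_A", "extinguisher_B"]

# Pick the extinguisher with the shortest walking distance and print the route.
nearest = min(extinguishers, key=lambda e: nx.shortest_path_length(g, start, e, weight="weight"))
route = nx.shortest_path(g, start, nearest, weight="weight")
print("Route:", " -> ".join(route))
```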
The researchers’ other main concern is to be prepared for anything unexpected in buildings. “What might happen is that something is installed in a room that shouldn’t be there according to the blueprint,” says Markus König. The system should be able to detect such things, i.e. identify the object in question based on its position, size and shape. “We don’t have to design the necessary image processing algorithms from scratch,” points out König. “Google already has functional algorithms that are available as open-source solutions and that we can optimise for our applications.”
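As an illustration of what such off-the-shelf detection could look like, the following sketch runs a publicly available pretrained object detector over a single camera frame. The choice of TensorFlow Hub and of this particular model is an assumption for demonstration purposes; the article only states that open-source algorithms from Google can serve as a starting point.

```python
# Hedged sketch: detect objects in a camera frame with a publicly available pretrained model.
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Camera frame from the smart glasses, here loaded from disk for illustration (hypothetical file).
image = tf.io.decode_image(tf.io.read_file("camera_frame.jpg"), channels=3)
result = detector(tf.expand_dims(image, axis=0))

boxes = result["detection_boxes"][0].numpy()      # normalised [ymin, xmin, ymax, xmax]
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy().astype(int)

# Keep confident detections; a later step could check whether each detected object
# exists at that position, size and shape in the digital building model.
for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(f"class {cls} with score {score:.2f} at {box}")
```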
Integrating step-by-step manual
Once positioning in the room works and the smart glasses are capable of identifying unexpected objects, only one thing is still missing: the system has to supply the information that technicians require to perform maintenance work – a digital step-by-step manual, so to speak. “The glasses might, for example, indicate which screws have to be loosened first, then specify that a cover has to be removed and which of the components underneath has to be replaced,” elaborates Markus König. In order to do that, the glasses must also recognise which steps have already been completed – another case for intelligent image processing.
Essentially, the system has to have access to photos of the respective component in every relevant state from different perspectives and compare those photos with the component’s actual state. “Manufacturers could in future supply such images directly with the component,” says Markus König. Alternatively, there are already companies that specialise in procuring such pictures. “There is a start-up that collaborates with a network of 150,000 private individuals who can be dispatched to take photos of specific objects,” explains the researcher. “If, for example, someone needs photos of smoke alarms, people are sent out to photograph smoke alarms – they are paid per submitted photo.” The start-up then sells the pictures on.
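One conceivable way to compare the current camera view with such reference photos is classical feature matching: the state whose reference image shares the most matched features with the live view is taken to be the current step. The sketch below uses OpenCV’s ORB features; the state names and file names are hypothetical, and the real system may well rely on a learned approach instead.

```python
# Illustrative sketch: infer the current maintenance step by matching the camera
# view against reference photos of each state (hypothetical files and states).
import cv2

def match_score(img_a, img_b):
    """Number of good ORB feature matches between two greyscale images."""
    orb = cv2.ORB_create()
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    return sum(1 for m in matches if m.distance < 50)

current = cv2.imread("current_view.jpg", cv2.IMREAD_GRAYSCALE)

# Hypothetical reference photos supplied with the component, one per state.
states = {
    "cover_in_place": cv2.imread("state_cover_on.jpg", cv2.IMREAD_GRAYSCALE),
    "cover_removed": cv2.imread("state_cover_off.jpg", cv2.IMREAD_GRAYSCALE),
    "part_replaced": cv2.imread("state_replaced.jpg", cv2.IMREAD_GRAYSCALE),
}

best_state = max(states, key=lambda name: match_score(current, states[name]))
print("Most likely current step:", best_state)
```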
Smart glasses are too expensive as yet.
Markus König
Accordingly, the foundations have been laid for digital assistance with Augmented Reality in building maintenance. “Smart glasses are too expensive as yet to be supplied to every handyman or construction worker,” admits König. But once the applications are ready for everyday use and the technology catches on, prices may fall. The algorithm from Bochum would then be ready to be implemented in real-world applications.