Microsoft Mixed Reality & AI Lab: spatial computing

March 9, 2022

New technologies are advancing rapidly: in just a few years, the field of digital innovation has seen explosive growth.

Virtual reality, augmented reality, and artificial intelligence are among the fastest-evolving of these technologies, but not everyone knows that spatial computing belongs on that list as well.

What is spatial computing?

The term spatial computing refers to the ability of computers, robots, and other electronic devices to be "aware" of their surroundings and to build digital representations of them.

Cutting-edge technologies such as Mixed Reality (MR) can significantly extend these capabilities, enabling the creation of sophisticated sensing and mapping systems.

Human-robot interaction

Recently, researchers at Microsoft Mixed Reality & AI Lab and ETH Zurich developed and tested a new framework that combines MR and robotics to improve spatial computing applications.

As the researchers report:

"The combination of spatial computing and egocentric sensing on mixed reality devices allows them to capture and understand human actions and translate them into actions with spatial significance, which offers exciting new possibilities for collaboration between humans and robots."

"This paper presents several human-robot systems that use these capabilities to enable new use cases such as mission planning for inspection, gesture-based control, and immersive teleoperation."

Where the framework will work

The MR-and-robotics framework developed by the Microsoft Mixed Reality & AI Lab team has been implemented in three systems with different functions, all of which require a HoloLens MR headset.

Microsoft's three spatial computing systems

System number 1

The first system is designed to plan robotic missions that involve inspecting a given environment.

In essence, a human user wearing a HoloLens headset moves around the environment they want to inspect, placing holographic waypoints that define the trajectory a robot will follow.

The user can also highlight specific areas where images or data should be collected.

This extracted information is then processed and translated so that it can guide the robot's movements and actions as it inspects the environment.
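
To make this concrete, here is a minimal sketch of how hologram waypoints might be collected into a robot trajectory. The names (Waypoint, InspectionMission) and the output format are illustrative assumptions, not the lab's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    """A hologram placed by the user, expressed in the shared world frame."""
    x: float
    y: float
    z: float
    inspect_here: bool = False  # True if the user flagged this spot for data capture

@dataclass
class InspectionMission:
    waypoints: list = field(default_factory=list)

    def add(self, wp: Waypoint) -> None:
        self.waypoints.append(wp)

    def to_robot_trajectory(self) -> list:
        """Translate the ordered holograms into (position, action) steps for the robot."""
        return [((wp.x, wp.y, wp.z), "capture" if wp.inspect_here else "move")
                for wp in self.waypoints]

# Example: the user walks the site, dropping waypoints as they go.
mission = InspectionMission()
mission.add(Waypoint(0.0, 0.0, 0.0))
mission.add(Waypoint(2.5, 0.0, 0.0, inspect_here=True))  # flagged for image collection
mission.add(Waypoint(2.5, 3.0, 0.0))
print(mission.to_robot_trajectory())
```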

System number 2

The second system proposed by the researchers is an interface that allows human users to interact with the robot more effectively, for example by controlling its movements through simple hand gestures.
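
As a rough illustration of gesture-based control, the sketch below maps hypothetical gesture labels to simple velocity commands. The gesture names and command format are assumptions made for this example; the paper's actual gesture set and robot interface may differ.

```python
# Hypothetical mapping from recognized hand gestures to robot velocity commands.
GESTURE_COMMANDS = {
    "palm_forward": {"linear": 0.0, "angular": 0.0},   # open palm: stop
    "point_ahead":  {"linear": 0.3, "angular": 0.0},   # pointing: move forward slowly
    "swipe_left":   {"linear": 0.0, "angular": 0.5},   # rotate left
    "swipe_right":  {"linear": 0.0, "angular": -0.5},  # rotate right
}

def gesture_to_command(gesture: str) -> dict:
    """Return a velocity command for a recognized gesture, defaulting to stop."""
    return GESTURE_COMMANDS.get(gesture, GESTURE_COMMANDS["palm_forward"])

print(gesture_to_command("point_ahead"))  # {'linear': 0.3, 'angular': 0.0}
```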

This system relies on the colocalization of multiple devices, including mixed reality headsets and smartphones: that is, on each device localizing itself within the same shared space.

"The colocalization of devices requires that each of them be able to locate itself in a common reference coordinate system," the researchers wrote.

"Through their individual poses against this common coordinate frame, it is possible to calculate the relative transformation between localized devices and subsequently use it to enable new behaviors and collaboration between devices."

To colocalize the devices, the team introduced a framework that ensures every device in their systems shares its location relative to the others and to a common reference map.
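
Assuming each device's pose is expressed as a 4x4 homogeneous transformation matrix in the shared map frame, the relative transformation the researchers describe reduces to a single matrix product. This is a generic sketch of that math, not the team's code.

```python
import numpy as np

def relative_transform(T_world_a: np.ndarray, T_world_b: np.ndarray) -> np.ndarray:
    """Pose of device B expressed in device A's frame: T_a_b = inv(T_world_a) @ T_world_b."""
    return np.linalg.inv(T_world_a) @ T_world_b

# Example: headset at the map origin, robot 2 m ahead of it along x.
T_world_headset = np.eye(4)
T_world_robot = np.eye(4)
T_world_robot[0, 3] = 2.0

T_headset_robot = relative_transform(T_world_headset, T_world_robot)
print(T_headset_robot[:3, 3])  # -> [2. 0. 0.]
```

With this relative transform in hand, any device can render holograms or issue commands in another device's frame, which is what enables the shared behaviors the researchers describe.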

System number 3

Finally, the third system allows for immersive teleoperation, which means that a user can remotely control a robot while observing its surroundings.

This system could be especially valuable in cases where a robot must navigate an environment inaccessible to humans.

"We explore the projection of a user's actions onto a remote robot and the robot's sense of space onto the user," the researchers explained.

"We consider different levels of immersion, based on touching and manipulating the functions of the robot until it is controlled at a higher immersion level that allows it to become the robot and map the user's movement directly on the robot."

In initial tests, all three systems achieved very promising results.

In the future, they could be deployed in many different contexts, bringing humans and robots together to efficiently solve a wider range of complex real-world problems.
