
DTU-R3: Remote Reality Robot

Introduction

We focus on affordable mobile robots that can navigate semi-autonomously indoors, sometimes outdoors, and that are able to carry at least a webcam or a 360° camera, among other sensors.

A significant part of the work is the simulation of these robots, as well as 3D interaction with them.

To ensure reproducibility and to widen the possible use-cases, we implement the same approaches on several robots, ranging from off-the-shelf toys to developer-oriented robotic platforms and retrofitted wheelchairs.

Image of 4 robots

Publications

Vision

Facilitate multi-disciplinary research & development around mobile robots, and provide a shared platform for people not necessarily skilled in robotics to experiment with innovative concepts in the direction of “robots as a service”.

Intention

The project originates from DTU, the Technical University of Denmark (Department of Management Engineering). Its purpose is to promote collaboration between the different groups interested in the various facets of working with such mobile robots, including groups from other institutes, universities, and countries.

Indeed, when starting this initiative in September 2017, we saw room for improvement in working together and building upon each other's achievements, and thus in pushing forward the quality of student and research projects instead of too often restarting from scratch.

We do this by lowering the barrier to getting started with mobile robots and by providing more features out of the box, whether for student projects, research projects, or prototypes on the way to products. We are also very price-conscious, as cost is a key to adoption.

Thanks to the wealth of existing open-source work, we mainly pick, test, adapt, and document available components. In particular, we build upon ROS, the Robot Operating System, and its large ecosystem. Please see the credits in the various components that we reuse.

For similar reasons, we have picked the Unity 3D engine for our simulation and interaction needs: it is the most widely used 3D engine worldwide and is therefore well known by most people working with 3D, including students not familiar with robotics.

Finally, since robotics projects are complex, we put a strong focus on standardisation, modularisation, documentation, and other good practices.

Use-cases

Software architecture

We package the different software components as Docker containers to make them easy and robust to deploy, test, and interoperate. In particular, this ensures good reproducibility and separation of concerns, which is especially valuable for projects that do not have an active core team maintaining everything all the time, as is the case for series of student projects. It also lets us run the same software packages in simulation as on the physical robots.
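
As an illustration of this containerised deployment, here is a minimal sketch that starts a ROS master in a container via the Docker SDK for Python; the image tag, container name, and command are illustrative, not our actual deployment.

```python
# Minimal sketch: start a ROS master in a container using the
# Docker SDK for Python (pip install docker). The image tag and
# container name are illustrative, not our actual deployment.
import docker

client = docker.from_env()

# Run roscore in the background; host networking lets other
# containers and local nodes reach the master on port 11311.
master = client.containers.run(
    "ros:noetic-ros-base",
    command="roscore",
    detach=True,
    network_mode="host",
    name="r3-roscore",
)

print(master.name, master.status)
```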

Robot software components

Robot models

Positioning

Positioning is an essential part of robot scenarios. We work with a few different technologies for robot positioning, such as wheel encoders, 2D fiducial markers, ultrasound (with Games on Track), and stereoscopic vision.

Image of Games on Track and Fiducial
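
To illustrate the wheel-encoder approach, here is a minimal dead-reckoning sketch for a differential-drive robot; the encoder resolution and wheel geometry are made-up values, not those of a specific robot of ours.

```python
# Sketch of dead-reckoning from wheel encoders on a differential-drive
# robot. Encoder resolution and wheel geometry are made-up values.
import math

TICKS_PER_REV = 2048   # encoder ticks per wheel revolution (assumed)
WHEEL_RADIUS = 0.07    # wheel radius in metres (assumed)
TRACK_WIDTH = 0.35     # distance between the wheels in metres (assumed)

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one encoder update into the pose (x, y, theta)."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * per_tick    # metres travelled by left wheel
    d_right = d_ticks_right * per_tick  # metres travelled by right wheel
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH
    # Assume motion along the arc's chord at the mid-heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Example: both wheels advance 100 ticks -> straight-line motion.
print(update_pose(0.0, 0.0, 0.0, 100, 100))
```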

Navigation builds on a positioning technology and makes the robot move to given coordinates or points of interest. We mainly use GPS coordinates to define the way-points to navigate to.

Image of ROS RVIZ Fiducials
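
A minimal sketch of the underlying geometry, assuming WGS84 coordinates in degrees: distance and initial bearing from the robot's current GPS fix to the next way-point, computed with the haversine formula.

```python
# Sketch: distance and initial bearing from the robot's GPS fix to a
# way-point (haversine formula). Coordinates below are illustrative.
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius in metres

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (metres, degrees clockwise from north) from point 1 to 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * EARTH_RADIUS * math.asin(math.sqrt(a))
    bearing = math.degrees(math.atan2(
        math.sin(dlon) * math.cos(p2),
        math.cos(p1) * math.sin(p2)
        - math.sin(p1) * math.cos(p2) * math.cos(dlon)))
    return dist, bearing % 360.0

# Two nearby points (approximately on the DTU Lyngby campus):
print(distance_and_bearing(55.7856, 12.5214, 55.7870, 12.5230))
```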

Interaction

Our robots can also be controlled manually, e.g. via joystick or keyboard, either attached to the robot or from a remote computer.
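
A minimal keyboard-teleoperation sketch, assuming ROS 1 (rospy, Python 3) and a robot listening on /cmd_vel; the topic name and the speed values are assumptions that vary per robot.

```python
# Minimal keyboard teleop sketch for ROS 1 (rospy, Python 3).
# Topic name and speeds are assumptions that vary per robot.
import rospy
from geometry_msgs.msg import Twist

KEY_TO_MOTION = {              # key -> (linear m/s, angular rad/s)
    "w": (0.2, 0.0), "s": (-0.2, 0.0),
    "a": (0.0, 0.5), "d": (0.0, -0.5),
}

def send(pub, key):
    msg = Twist()  # unknown keys map to a full stop
    msg.linear.x, msg.angular.z = KEY_TO_MOTION.get(key, (0.0, 0.0))
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("keyboard_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    while not rospy.is_shutdown():
        key = input("wasd (anything else stops)> ").strip()[:1]
        send(pub, key)
```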

Scenarios

For simple programming of robot scenarios, as well as interaction with other IoT systems, we offer compatibility with the visual programming tool Node-RED, among others.

Image of ROS communication from Node-RED
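
Node-RED typically talks to ROS through a rosbridge websocket. The same JSON interface can be exercised from Python with roslibpy, as in this sketch; the host, port, and topic are assumptions.

```python
# Sketch: publish a velocity command over rosbridge's JSON/websocket
# interface with roslibpy (pip install roslibpy) -- the same interface
# a Node-RED flow would use. Host, port, and topic are assumptions.
import roslibpy

ros = roslibpy.Ros(host="localhost", port=9090)  # rosbridge default port
ros.run()

cmd_vel = roslibpy.Topic(ros, "/cmd_vel", "geometry_msgs/Twist")
cmd_vel.publish(roslibpy.Message({
    "linear": {"x": 0.1, "y": 0.0, "z": 0.0},
    "angular": {"x": 0.0, "y": 0.0, "z": 0.2},
}))

ros.terminate()
```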

360° views

Our mobile robotic platforms can be equipped with different models of 360° cameras, which send either photos or a real-time video stream (e.g. back to a human operator wearing a virtual-reality headset).

Here is an example on top of a modified Padbot platform:

360° camera on top of Padbot

Example of remote control with 360° VR on top of the Arlobot platform:

Arlobot 360° VR

3D features

Our 3D interfaces and simulations made in Unity are meant to run on a (desktop) computer, not on the robots, but they can receive information from, and send control to, our physical as well as virtual (simulated) robots.

3D campus

Our Unity software can import building data from MazeMap, as well as way-point routes for way-finding (e.g. moving from one office to another).

Image of DTU Lyngby campus in Unity

Way-point navigation

The Unity interface can be used to visually build way-point routes.

Image of URDF robot representation and path

Unity also stores the positions of the fiducials and shares them with the robots:

Image of Fiducials in Unity

3D vision and sensors

A common type of LIDAR can be used, e.g. for collision avoidance:

Image of LIDAR in ROS
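
As a sketch of the idea, here is a naive stop-on-obstacle behaviour built from 2D laser scans, assuming ROS 1 with the standard /scan and /cmd_vel topics; the safety margin and forward speed are arbitrary.

```python
# Sketch: naive stop-on-obstacle behaviour from a 2D LIDAR, assuming
# ROS 1 and the conventional /scan and /cmd_vel topic names.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 0.4   # metres; arbitrary safety margin
FORWARD_SPEED = 0.15  # m/s; arbitrary cruising speed

def on_scan(scan, pub):
    # Keep only readings within the sensor's valid range.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    msg = Twist()
    # Creep forward unless something is closer than the margin.
    if not valid or min(valid) > STOP_DISTANCE:
        msg.linear.x = FORWARD_SPEED
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("naive_collision_avoidance")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, on_scan, callback_args=pub)
    rospy.spin()
```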

Stereo vision is also one of the possible ways to position the robot (SLAM), for instance with the ZED Mini sensor.

Image of ZED in Unity

Example with Intel RealSense:

Image of Intel RealSense

Simulated sensors

Our Unity software can also be used for virtual (simulated) robots, in particular by simulating their sensors (such as the proximity sensors).

Image of ultrasound ping sensors in Unity

When running simulations, Unity only implements the physical layer of the robots, while most of the higher-level logic is handled by running the same (ROS) software packages as on the physical robots.
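
This is what makes the higher-level nodes portable between simulation and hardware, as the sketch below illustrates: the node only sees a sensor_msgs/Range topic and cannot tell whether the publisher is a physical ultrasound sensor or its Unity counterpart (the topic name is an assumption).

```python
# Sketch: a node that is agnostic to simulation vs. hardware. It only
# consumes a sensor_msgs/Range topic, whoever publishes it -- a real
# ultrasound sensor or Unity. The topic name is an assumption.
import rospy
from sensor_msgs.msg import Range

def on_range(msg):
    rospy.loginfo("obstacle at %.2f m (publisher could be sim or real)",
                  msg.range)

if __name__ == "__main__":
    rospy.init_node("range_monitor")
    rospy.Subscriber("/ultrasound", Range, on_range)
    rospy.spin()
```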

Team