Mission Statement

The Husky Rover at the Canadian Space Agency Mars Yard.

We envision a future in which robotic systems are pervasive, persistent, and perceptive:

  • pervasive: widely deployed in assistive roles across the spectrum of human activity (robots everywhere!)
  • persistent: able to operate reliably and independently for long durations, on the order of days, weeks, or more, and
  • perceptive: aware of their environment, and capable of acting intelligently when confronted with new situations and experiences.

Towards this end, the STARS Laboratory carries out research at the nexus of sensing, planning, and control, with a focus on the study of fundamental problems related to perception, representation, and understanding of the world. Our goal is to enable robots to carry out their tasks safely in challenging, dynamic environments, for example, in homes and offices, on road networks, underground, underwater, in space, and on remote planetary surfaces. We work to develop power-on-and-go machines that are able to function from day one without a human in the loop.


To make long-term autonomy possible, we design probabilistic algorithms that are able to deal with uncertainty, about both the environment and the robot’s own internal state over time. We use tools from estimation theory, learning, and optimization to enable perception for efficient planning and control. Our research relies on the integration of multiple sensors and sensing modalities – we believe that rich sensing is a necessary component for the construction of truly robust and reliable autonomous systems. An important aspect of our research is the extensive experimental validation of our theoretical results, to ensure that our work is useful in the real world. We are committed to robotics as a science, and emphasize open source contributions and reproducible experimentation.

Research Directions

The Laboratory is actively involved in a variety of research projects and collaborative undertakings with our industrial partners. New projects are always in development. Details on several active projects are provided below. If a project interests you, please consider joining us!

  1. Machine Learning for Predictive Noise Modelling

    PROBE maps visual landmarks into a prediction space.

    Many robotic algorithms for tasks such as perception and mapping treat all measurements as being equally informative. This is typically done for reasons of convenience or ignorance – it can be very difficult to model noise sources and sensor degradation, both of which may depend on the environment in which the robot is deployed. In contrast, we are developing a suite of techniques (under the moniker PROBE, for Predictive RObust Estimation) that intelligently weight observations based on a predicted measure of informativeness. We learn a model of information content from a training dataset collected within a prototypical environment and under relevant operating conditions; the learning algorithm can make use of ground truth or rely on expectation maximization. The result is a principled method for taking advantage of all relevant sensor data. A minimal sketch of this predictive weighting idea appears after the project list.
  2. Dense Geometric and Photometric Mapping

    Dense visual estimation pipeline.

    Why employ notoriously fickle feature detectors when you can just use all of the pixels in an image? Traditionally, the answer has been computational cost. However, with more powerful hardware and parallelization, dense visual simultaneous localization and mapping (SLAM) has become feasible in real time. Our group is investigating the use of rich, dense map representations that extend beyond geometry to incorporate lighting and material information. Our goal is to enable map reuse and successful localization under a much wider range of conditions than is possible with traditional visual SLAM. A toy dense-alignment example is included after the project list.
  3. Low-cost Navigation Systems for Near-Term Assistive Applications

    Autonomous mobility device navigating indoors.

    Simultaneous localization and mapping (SLAM) has been intensively studied by the robotics community for more than 20 years, and yet there are few commercially deployed SLAM systems operating in real-world environments. We are interested in deploying low-cost, robust SLAM solutions for near-term consumer and industrial applications; there are numerous challenges involved in building reliable systems under severe cost constraints. Our initial focus is on assistive devices for wheelchair navigation, with the goal of dramatically improving the mobility of users with, e.g., spinal cord injuries. There are significant opportunities to positively affect the lives of thousands of individuals, while at the same time creating advanced robotic technology.
  4. Robot Self-Calibration for Power-On-and-Go Operation

    Self-calibration between sensors.

    Multisensor systems offer a variety of compelling benefits, including improved task accuracy and robustness. However, correct data fusion requires precise calibration of the sensors involved, which is typically a time-consuming and difficult process. We have designed, and are continuing to research, intrinsic and extrinsic spatial and temporal self-calibration algorithms for various combinations of sensors (e.g., LIDAR-IMU, LIDAR-camera, camera-IMU, camera-manipulator). The aim of this work is to achieve fully automatic calibration of multisensor systems in arbitrary environments, removing the burden of manual calibration and enabling long-term operation. A small self-calibration sketch follows the project list.
  5. Collaborative Mobile Manipulation in Dynamic Environments

    The Clearpath Ridgeback base with the UR10 arm.

    Cobots, or collaborative robots, are a class of robots intended to physically interact with humans in a shared workspace. We have recently begun exploring research problems related to various tasks for cobots, including collaborative manipulation, transport, and assembly. We are working to develop high-performance, tightly-coupled perception-action loops for these tasks, making use of rich multimodal sensing. Experiments are carried out on an advanced mobile manipulation platform based on a Clearpath Ridgeback omnidirectional mobile base and a Universal Robots UR10 arm.

    Our manipulator is a state-of-the-art platform, which is unique in Canada at present. Much of our testing takes place in an on-site Vicon motion capture facility, which allows for ground truth evaluation of our algorithms. We have also invested substantial effort in developing methods to self-calibrate these lower-cost platforms automatically, with an eye towards long-term deployment in a wide range of environments.
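To make the predictive weighting idea from the PROBE project concrete, the sketch below fits a simple log-linear model of measurement variance to training residuals and uses it to predict an inverse-variance weight for each new observation. The feature names and the linear variance model are illustrative assumptions for exposition, not the lab's actual implementation.

```python
import numpy as np

# Illustrative PROBE-style predictive weighting. The feature set and the
# log-linear variance model are assumptions for exposition only.

def landmark_features(obs):
    """Map one observation into a 'prediction space' descriptor
    (hypothetical cues: patch contrast, blur metric, range)."""
    return np.array([obs["contrast"], obs["blur"], obs["range"]])

def train_variance_model(features, residuals):
    """Fit log measurement variance as a linear function of the
    features (N x 3 array), using residuals from a training set with
    ground truth (or produced by expectation maximization)."""
    X = np.c_[features, np.ones(len(features))]   # affine model
    y = np.log(residuals ** 2 + 1e-9)             # log-variance targets
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predicted_weight(coeffs, obs):
    """Predict an inverse-variance weight for a new observation,
    before it is folded into the estimator."""
    x = np.r_[landmark_features(obs), 1.0]
    return float(np.exp(-(x @ coeffs)))
```

Each predicted weight would then scale the corresponding residual in a weighted least-squares objective, so observations predicted to be noisy contribute less to the solution.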
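In the same spirit, here is a deliberately tiny example of direct (photometric) alignment: recovering a 1-D image translation by Gauss-Newton minimization of a per-pixel photometric error. A real dense SLAM system estimates a full 6-DOF camera pose against a dense depth map; this toy version illustrates only the core idea of using every pixel rather than sparse features.

```python
import numpy as np

def photometric_align_1d(I_ref, I_cur, t0=0.0, iters=20):
    """Estimate a 1-D translation t aligning I_cur to I_ref by
    Gauss-Newton on the dense photometric error
        r_i(t) = I_ref[i] - I_cur[i + t],
    summed over every pixel rather than a sparse feature set."""
    t = t0
    xs = np.arange(len(I_ref), dtype=float)
    dI = np.gradient(I_cur)
    for _ in range(iters):
        warped = np.interp(xs + t, xs, I_cur)   # I_cur sampled at x + t
        grad = np.interp(xs + t, xs, dI)        # image gradient at x + t
        r = I_ref - warped                      # dense residuals
        J = -grad                               # dr/dt, per pixel
        dt = -(J @ r) / (J @ J)                 # Gauss-Newton update
        t += dt
        if abs(dt) < 1e-8:
            break
    return t

# A shifted Gaussian 'image' is recovered to within interpolation error:
xs = np.arange(200.0)
I_ref = np.exp(-0.01 * (xs - 90.0) ** 2)
I_cur = np.exp(-0.01 * (xs - 93.0) ** 2)
print(photometric_align_1d(I_ref, I_cur))      # ~= 3.0
```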

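Finally, one small piece of the self-calibration problem has a compact closed-form illustration: recovering the fixed extrinsic rotation between two rigidly mounted sensors from paired angular-velocity measurements (for example, gyroscope rates and rates differenced from camera poses). The sketch below uses the standard orthogonal Procrustes (Kabsch) solution; the full calibration problem also involves the translation, time offset, and sensor intrinsics, which are omitted here.

```python
import numpy as np

def extrinsic_rotation(w_a, w_b):
    """Solve for the fixed rotation R with w_a[i] ~= R @ w_b[i], given
    N paired angular-velocity samples as rows of w_a and w_b (N x 3).
    Closed-form orthogonal Procrustes / Kabsch solution via the SVD."""
    H = w_b.T @ w_a                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                        # proper rotation

# Synthetic check: recover a known rotation from noiseless samples.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q * np.sign(np.linalg.det(Q))           # random proper rotation
w_b = rng.normal(size=(100, 3))                  # 'body' rates
w_a = w_b @ R_true.T                             # rotated measurements
print(np.allclose(extrinsic_rotation(w_a, w_b), R_true))  # True
```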
Sponsors
We gratefully acknowledge support from the following organizations: