
About

General information on the FP7-RADHAR project.

Project Summary

The first autopilots in airplanes can be traced back to the beginning of the twentieth century. These devices greatly reduced the pilot's workload by taking over parts of the navigation. The success of autopilots in reducing navigational complexity and improving safety explains the strong interest in introducing navigational assistance in other means of transport as well.

However, implementing robotic navigation correction on a large scale also represents a potential safety risk for millions of users. For example, some plane crashes have been attributed to pilots incorrectly estimating the state of the plane's autopilot, an effect known as mode confusion.

For this reason, if navigation devices are to correct the driver's steering signals, a thorough understanding of driver behaviour is imperative, in addition to pervasive environment perception. Though driver models have been proposed for vehicles in which the driver is in full control, such models for intelligent vehicles are scarce. Furthermore, the link between environment perception, driver perception and modelling, and robot decision making has often been weak and ad hoc.

RADHAR proposes a framework that seamlessly fuses the inherently uncertain information from environment perception and from the driver's steering signals by estimating the trajectory the robot should execute, and that uses this fused information for safe navigation with a level of autonomy adjusted to the user's capabilities and desires.
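One common way to realise such shared control is to blend an uncertain driver command with an uncertain autonomous command according to an adjustable autonomy level. The following minimal sketch illustrates that idea with a precision-weighted average; the function name, the Gaussian noise model, and the parameter values are illustrative assumptions, not the RADHAR implementation.

```python
# Illustrative sketch: precision-weighted fusion of a noisy driver command with a
# noisy autonomous command. All names and values are assumptions for illustration.
import numpy as np

def fuse_commands(u_driver, var_driver, u_auto, var_auto, autonomy=0.5):
    """Fuse two noisy velocity commands (e.g. [linear, angular] in m/s, rad/s).

    autonomy in [0, 1] scales how strongly the autonomous command is trusted:
    0 leaves the driver in full control, 1 defers entirely to the robot.
    """
    u_driver, u_auto = np.asarray(u_driver, float), np.asarray(u_auto, float)
    w_driver = (1.0 - autonomy) / np.asarray(var_driver, float)  # precision weights
    w_auto = autonomy / np.asarray(var_auto, float)
    return (w_driver * u_driver + w_auto * u_auto) / (w_driver + w_auto)

# Example: a hesitant joystick push fused with a planner command that steers
# away from an obstacle, with the robot trusted more than the driver.
print(fuse_commands([0.4, 0.0], [0.05, 0.05], [0.3, 0.6], [0.01, 0.01], autonomy=0.7))
```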

More specifically, for environment perception, sensor models will be developed to build 3D models of the environment, including estimation and prediction of the motion of dynamic obstacles. To guarantee the safety of the platform and driver, the traversability of the terrain will be estimated. For driver perception, user models will be developed to autonomously estimate the navigation task from the driver's uncertain steering signals, in the form of trajectories that the robot should execute. Essential for the robot is its awareness and estimation of human characteristics, such as the human's control bandwidth or mental models. This requires lifelong, unsupervised but safe learning by the robot. As a consequence, a continuous interaction between two learning systems will emerge, hence RADHAR: Robotic ADaptation to Humans Adapting to Robots. To verify driver model assumptions such as focus of attention, the driver's posture will be continuously estimated with a camera. A haptic interface establishing bilateral communication will further reduce mode confusion.
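Estimating the navigation task from uncertain steering signals can, for instance, be framed as recursive Bayesian inference over a set of candidate trajectories. The sketch below shows this idea in a deliberately simplified form; the candidate trajectories, the Gaussian observation model, and all parameter values are assumptions made for illustration only.

```python
# Illustrative sketch: recursive Bayesian estimation of the driver's intended
# trajectory from noisy joystick angles. Not RADHAR code; the candidates and
# the observation model are assumed for the example.
import numpy as np

# Candidate short-term trajectories, summarised by the steering direction
# (in radians) the driver would ideally issue to follow each of them.
candidates = {"doorway_left": np.pi / 4, "corridor_straight": 0.0, "desk_right": -np.pi / 3}
belief = {name: 1.0 / len(candidates) for name in candidates}  # uniform prior

def update_belief(belief, observed_angle, sigma=0.4):
    """One Bayesian update: P(trajectory | observation) ∝ P(observation | trajectory) * P(trajectory)."""
    posterior = {}
    for name, ideal_angle in candidates.items():
        likelihood = np.exp(-0.5 * ((observed_angle - ideal_angle) / sigma) ** 2)
        posterior[name] = likelihood * belief[name]
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# A sequence of noisy joystick angles drifting towards the left doorway.
for obs in [0.2, 0.5, 0.7, 0.8]:
    belief = update_belief(belief, obs)
print(max(belief, key=belief.get), belief)
```

In such a formulation, the posterior over candidate trajectories is what the robot would hand to its planner, while the spread of the belief indicates how confident the system is about the driver's intent.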

The framework will be demonstrated on a robotic wheelchair platform that navigates in an everyday environment with everyday objects. Tests on various levels of autonomy can be performed easily and safely on wheelchairs. The platform will be evaluated by a diverse and challenging population of wheelchair users who currently drive unsafely.

Duration

1 August 2010 - 31 July 2013

Funding

European Commission (FP7-ICT-248873)