SBIR-STTR Award

Incremental Learning for Robot Sensing and Control
Award last edited on: 2/15/2012

Sponsored Program
STTR
Awarding Agency
DOD : Army
Total Award Amount
$507,770
Award Phase
2
Solicitation Topic Code
A09A-T030
Principal Investigator
Urs Muller

Company Information

Net-Scale Technologies Inc

281 State Highway 79
Morganville, NJ 07751
   (732) 970-1441
   info@net-scale.com
   www.net-scale.com

Research Institution

----------

Phase I

Contract Number: ----------
Start Date: ----    Completed: ----
Phase I year
2010
Phase I Amount
$100,000
This proposal addresses key open challenges, identified during the LAGR program, to the practical use of adaptive, vision-based robot navigation in commercial settings. First, the adaptive vision system learns quickly but forgets just as quickly. This will be addressed with an ensemble of "expert" classifiers, each of which specializes in a particular environment and can be activated quickly when the environment matches its domain of validity. Second, a new type of cost map will be used that accumulates high-level feature vectors rather than traversability values; a global cost map will also be integrated. Third, we will pre-train the convolutional net feature extractor using the latest unsupervised algorithms for learning hierarchies of invariant features. Fourth, the performance limits of general-purpose CPUs will be overcome by running the computationally intensive parts of the system on a highly compact, dedicated FPGA-based hardware platform; implementations on commercially available GPUs will also be explored. Finally, to achieve portability and modularity, we will make our implementation independent of any particular robot platform and support a wide range of sensor types, including stereo cameras and LIDAR. The result will be a highly compact, low-power, self-contained, low-cost, vision-based navigation system for autonomous mobile robots.
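The multi-expert idea described above can be sketched in a few lines: each expert stores a prototype of the environment it was trained in, and the expert whose prototype best matches the current scene statistics is activated. Everything here (the prototype representation, distance metric, and toy threshold classifiers) is an illustrative assumption, not the award's actual implementation.

```python
import numpy as np

class ExpertEnsemble:
    """Bank of environment-specialized classifiers. Each expert keeps a
    prototype feature vector for its environment; the expert nearest to
    the current scene statistics is selected for classification."""

    def __init__(self):
        self.experts = []  # list of (prototype_vector, classify_fn)

    def add_expert(self, prototype, classify_fn):
        self.experts.append((np.asarray(prototype, dtype=float), classify_fn))

    def select(self, scene_features):
        # Activate the expert whose environment prototype is closest
        # (Euclidean distance) to the observed scene statistics.
        scene = np.asarray(scene_features, dtype=float)
        dists = [np.linalg.norm(scene - p) for p, _ in self.experts]
        return self.experts[int(np.argmin(dists))][1]

    def classify(self, scene_features, sample):
        return self.select(scene_features)(sample)


# Toy usage: two "environments" with different traversability thresholds.
ensemble = ExpertEnsemble()
ensemble.add_expert([0.0, 0.0], lambda x: "traversable" if x < 0.5 else "obstacle")
ensemble.add_expert([1.0, 1.0], lambda x: "traversable" if x < 0.8 else "obstacle")

# Scene statistics near the second prototype activate the second expert.
label = ensemble.classify([0.9, 1.1], 0.7)
```

Because experts are only selected, never overwritten, a previously learned environment is retained and can be reactivated instantly — the mechanism by which this scheme counters the "forgets as quickly" problem.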

Keywords:
Autonomous Robot Navigation, Robot Control, Machine Learning, Real-Time Learning, Convolutional Neural Networks, FPGA, Multi-Expert Approach

Phase II

Contract Number: ----------
Start Date: ----    Completed: ----
Phase II year
2013
Phase II Amount
$407,770
The purpose of this proposal is to build a working prototype of a highly adaptive, vehicle-independent, compact, low-power, low-cost, autonomous ground robot navigation system that incorporates the results of our Phase I effort and of our earlier DARPA LAGR (Learning Applied to Ground Robots) work. The system will quickly and automatically adapt to changing environments in real time. Near-to-far learning techniques provide sensing far beyond stereo and LIDAR range, and deep learning techniques enable terrain classification, people detection, and the ability to learn automatically from the robot's own experience and from observations of human drivers (in semi-autonomous mode). We will show the prototype's readiness for commercial use by demonstrating its capabilities on at least two different vehicle platforms in realistic outdoor settings, using military-relevant use cases. The system will be independent of any particular robot platform and will be capable of operating either self-sufficiently, relying only on its built-in sensors, or as an integrated unit with existing on-board sensors. It is designed to operate fully with passive vision-based sensors alone, but its performance can be enhanced with additional sensor input, such as LIDAR, if available.
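The near-to-far technique mentioned above can be sketched as a self-supervised loop: patches within stereo range receive traversability labels "for free" from range geometry, a classifier is trained on those labeled patches, and the trained classifier then scores far-field patches beyond stereo range. The synthetic data, feature dimensions, and simple logistic-regression learner below are all illustrative assumptions, not Net-Scale's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Near-range "training" patches: feature vectors with geometric labels
# (1.0 = traversable ground, 0.0 = obstacle) supplied by stereo range data.
X_near = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)                  # hidden ground-truth separator
y_near = (X_near @ w_true > 0).astype(float)

# Train a logistic-regression classifier on the stereo-labeled patches
# with plain batch gradient descent.
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_near @ w)))          # predicted probabilities
    w -= 0.1 * X_near.T @ (p - y_near) / len(y_near)  # logistic-loss gradient

# Apply the learned model to "far" patches that stereo cannot reach.
X_far = rng.normal(size=(50, 8))
far_traversable = X_far @ w > 0

train_accuracy = float(((X_near @ w > 0) == (y_near > 0.5)).mean())
```

The key property is that no human labeling is involved: the short-range sensor supervises the long-range classifier, which is also how learning from a human driver's trajectories (in semi-autonomous mode) would slot in — driven-over terrain simply becomes another source of "traversable" labels.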