This project will develop interface methods, vision algorithms, and navigation behaviors for supervisory control of unmanned ground vehicles (UGVs). Supervisory control reduces operator workload while providing ease of use, predictability, and responsiveness in unstructured, dynamic, and unpredictable real-world situations. In our concept, the operator interacts with the UGV via a touch-sensitive display showing video from an on-board navigation camera or from a third-party camera in overwatch. The operator "clicks" on a single point to tell the UGV to "Go There!", sketches a path or waypoints, designates a lead vehicle to follow, or assigns the UGV to follow video collected earlier by a lead vehicle. Sketched control measures direct the UGV to avoid selected types of terrain. We will develop core system functions for visual navigation, recognition of denied terrain, and visual homing. We will produce a demonstrator system, test data to assess performance and computational-burden tradeoffs among alternative formulations, and a preliminary system design for Phase II implementation. Previous R&D demonstrated monocular-video destination/waypoint tracking and visual terrain classification. We will investigate enhancements, including hybrid methods that combine the best features of alternative algorithms, and we will extend the methods to estimate distance, to incorporate data from additional sensors such as stereo cameras, and to produce driving and navigation commands.
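To make the "Go There!" interaction concrete, the sketch below converts a clicked pixel column into a bearing and a proportional steering rate under a pinhole-camera assumption. The intrinsics and gain are hypothetical placeholders, not values from the project hardware, and a fielded system would close the loop on the tracked destination each frame rather than on the original click.

```python
import math

# Hypothetical camera intrinsics (pinhole model), in pixels.
# Real values would come from camera calibration.
FX = 600.0   # horizontal focal length
CX = 320.0   # principal point x (image width 640)

def pixel_to_bearing(u: float) -> float:
    """Bearing (radians) from the optical axis to pixel column u,
    positive to the right of image center."""
    return math.atan2(u - CX, FX)

def steering_command(u: float, k_p: float = 0.8) -> float:
    """Proportional steering rate toward the clicked point.
    k_p is an illustrative gain, not a tuned value."""
    return k_p * pixel_to_bearing(u)

# Operator clicks at pixel column 480 in a 640-wide image:
print(f"bearing = {math.degrees(pixel_to_bearing(480)):.1f} deg, "
      f"steer rate = {steering_command(480):.3f} rad/s")
```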
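Visual terrain classification is commonly posed as per-patch labeling; one standard formulation (not necessarily the one demonstrated in the prior R&D) extracts simple intensity and texture features from grid cells and feeds them to a learned classifier. A minimal feature-extraction sketch, with a synthetic image and a hand-picked threshold standing in for operator-labeled training data:

```python
import numpy as np

def patch_features(gray: np.ndarray, cell: int = 32) -> np.ndarray:
    """Divide a grayscale image into cell x cell patches and return a
    (n_patches, 2) array of [mean intensity, intensity std] per patch.
    Std is a crude texture proxy; a real system would add color
    histograms, edge energy, and similar features."""
    h, w = gray.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            p = gray[y:y + cell, x:x + cell].astype(np.float32)
            feats.append([p.mean(), p.std()])
    return np.array(feats)

# Synthetic stand-in: smooth "road" on the left half, rough "rubble"
# on the right. In the envisioned system, labels come from the
# operator's sketched control measures, not a fixed threshold.
rng = np.random.default_rng(0)
img = np.hstack([np.full((128, 128), 120, np.uint8),
                 rng.integers(0, 255, (128, 128), dtype=np.uint8)])
feats = patch_features(img)
denied = feats[:, 1] > 40.0   # high texture -> flag as denied terrain
print(f"{denied.sum()} of {len(feats)} patches flagged as denied")
```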
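Visual homing can be approached (among other ways) by matching features between the live image and a stored goal or lead-vehicle frame, then steering to null the horizontal offset of the matches. The sketch below uses OpenCV ORB features on a synthetic pair; the feature choice, match strategy, and image sizes are illustrative assumptions, not the project's method.

```python
import cv2
import numpy as np

def homing_offset(current: np.ndarray, goal: np.ndarray) -> float:
    """Median horizontal pixel shift from goal-image features to the
    matching features in the current image. Steering to drive this
    toward zero re-aligns the vehicle with the stored view; a real
    system would also handle scale change for range."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(goal, None)
    kp2, des2 = orb.detectAndCompute(current, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return 0.0
    dx = [kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches]
    return float(np.median(dx))

# Synthetic test: a blocky texture, with the "current" view shifted
# 10 px to simulate a heading error relative to the stored goal view.
rng = np.random.default_rng(1)
blocks = rng.integers(0, 255, (30, 40), dtype=np.uint8)
goal = cv2.resize(blocks, (320, 240), interpolation=cv2.INTER_NEAREST)
current = np.roll(goal, 10, axis=1)
print(f"offset = {homing_offset(current, goal):.1f} px")
```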
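For the stereo extension, distance follows from disparity by standard triangulation, Z = f * B / d. The sketch below runs OpenCV's block matcher on a synthetic rectified pair with a known disparity; the focal length, baseline, and matcher parameters are illustrative, not a calibrated rig.

```python
import cv2
import numpy as np

FX = 600.0        # focal length, pixels (hypothetical calibration)
BASELINE = 0.12   # stereo baseline, meters (hypothetical rig)

# Synthetic rectified pair: blocky texture, right image shifted left
# by a known disparity, as a flat scene at constant depth would be.
rng = np.random.default_rng(2)
blocks = rng.integers(0, 255, (30, 40), dtype=np.uint8)
left = cv2.resize(blocks, (320, 240), interpolation=cv2.INTER_NEAREST)
true_disp = 16
right = np.roll(left, -true_disp, axis=1)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = FX * BASELINE / disp[valid]   # Z = f * B / d

print(f"median disparity = {np.median(disp[valid]):.1f} px "
      f"(expected {true_disp}), "
      f"median depth = {np.median(depth[valid]):.2f} m")
```

The same triangulation applies per pixel of the disparity map, giving the dense range estimates needed to turn the existing monocular tracking output into metric driving and navigation commands.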