Current video feeds from airborne sensors such as Predator, ARGUS-IS, ARGUS-IR, Gorgon Stare and others excel at providing high-resolution imagery from a bird's-eye vantage point. While those pixels offer the analyst an eye in the sky, today's systems rely on the analyst to interpret the scene without the benefit of additional context that can be obtained from readily available data types, such as terrain and elevation information, roadways, points of interest, controlled airspace symbology, restricted airspace boundaries, FAA centers, radar, and LIDAR data. Our goal under this proposed effort is to augment the user experience by adding geo-registered layers from these information sources. In Phase I of this project, ObjectVideo demonstrated prototypes of all the necessary components and core technologies of a viable real-time overlay system. For Phase II, ObjectVideo will focus on refining and extending the core technologies and integrating these components into an end-to-end overlay system. ObjectVideo will also implement novel features, including use of external reference data for improved overlay accuracy, optional rendering of occluded overlay elements, and an expanded context view providing greater situational awareness of regions surrounding the UAV's field of view. The system will provide users with critical single-click, single-glance information on any client capable of displaying video.
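To illustrate the geo-registration step at the heart of such an overlay system, the sketch below projects a ground coordinate into pixel coordinates for a nadir-looking airborne sensor. It is a minimal illustration only, assuming a flat-earth local approximation and an ideal pinhole camera; the function names and the simplified camera model are ours, not part of the proposed system, which must also handle oblique views and terrain relief.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, metres

def geo_to_local(lat, lon, ref_lat, ref_lon):
    """Flat-earth east/north offsets (metres) of (lat, lon) from a reference point."""
    east = math.radians(lon - ref_lon) * EARTH_R * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_R
    return east, north

def project_nadir(lat, lon, cam_lat, cam_lon, cam_alt, focal_px, img_w, img_h):
    """Project a ground point into pixel coordinates for a nadir-looking camera.

    Pinhole model: one metre on the ground maps to focal_px / cam_alt pixels.
    Returns (u, v), or None if the point falls outside the frame.
    """
    east, north = geo_to_local(lat, lon, cam_lat, cam_lon)
    scale = focal_px / cam_alt           # pixels per ground metre
    u = img_w / 2 + east * scale         # +east maps to the right
    v = img_h / 2 - north * scale        # image v grows downward, +north maps up
    if 0 <= u < img_w and 0 <= v < img_h:
        return u, v
    return None
```

A point directly beneath the sensor projects to the image centre; points east of the sensor land right of centre. A real pipeline would replace this model with the full sensor metadata (platform pose, gimbal angles, lens parameters) and a terrain elevation lookup.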
Benefit: Unmanned Aerial Vehicles (UAVs) are a critical component of Intelligence, Surveillance and Reconnaissance (ISR) capabilities for all branches of the armed forces, but the limited viewing angle and resolution of typical UAV video can limit the ability of users on the ground to act. The proposed system would increase users' situational awareness by overlaying important contextual and targeted information onto the video clearly, accurately, and in real time. The enabling technologies offer the following benefits, both within this project and in future efforts:

- Performance and scalability: the system takes full advantage of COTS GPU hardware to render multiple overlay streams in real time, and the system architecture allows scaling in several dimensions.
- Accuracy: advanced computer-vision techniques correct for common errors in sensor metadata and yield far more accurate overlay results than a naive approach.
- Low cost: built on COTS PC and GPU hardware.
- Flexibility: the system can be configured as a scalable client-server architecture or deployed on a single laptop for stand-alone use. The standards-based client-server architecture allows interoperability with a wide variety of client platforms, including mobile devices.
- Extensibility via SDK: Overlay and VDMS SDKs allow developers to add data types, create custom overlays, or incorporate the system into new applications.
- Compatibility: a standards-based approach allows interoperability with a wide variety of data and applications.
- Intuitive user interface: easily used by novice users.

The completed system will provide an inexpensive, high-performance overlay capability that delivers valuable situational awareness to consumers of UAV video feeds. Potential commercial and military applications include augmented reality, airborne video exploitation, and soldier helmet-mounted, smartphone, and vehicle-mounted cameras.
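The metadata-correction benefit can be illustrated in miniature: if a few landmarks have both a pixel position predicted from raw sensor metadata and an observed position in the frame, a least-squares translation absorbs a constant pointing bias before overlays are drawn. This is a deliberately simplified stand-in, assuming a pure-translation error model; the actual Phase I/II correction techniques are not described in the abstract, and the function names here are ours.

```python
def estimate_offset(predicted, observed):
    """Least-squares 2D translation (dx, dy) aligning pixel positions
    predicted from raw sensor metadata with those observed in the frame.
    For a pure-translation error model this is simply the mean residual."""
    n = len(predicted)
    dx = sum(o[0] - p[0] for p, o in zip(predicted, observed)) / n
    dy = sum(o[1] - p[1] for p, o in zip(predicted, observed)) / n
    return dx, dy

def correct(points, offset):
    """Apply the estimated correction to metadata-derived overlay points."""
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in points]
```

A production system would typically fit a richer model (affine or full homography) from automatically matched image features rather than hand-labelled landmarks, but the principle of registering metadata-predicted geometry to observed imagery is the same.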
Keywords: Real-time video overlay, GPU, UAV, GIS overlay, metadata correction, MPEG-4, ISR