Phase II Amount
$2,269,907
Under SBIR Topic N101-100 (Multi-Source Imagery and Geopositional Exploitation [MSIGE]), three Phase I performers developed capability concepts to address different aspects of the MSIGE problem set. In Phase II, we propose to develop a prototype DCGS-N capability for Multi-INT ISR and Targeting Services (MITS) by developing three subsystems and integrating them under separate Phase II contracts. This approach will increase value to the DCGS-N Program of Record (PoR) by providing a low-risk, rapidly transitionable, end-to-end capability.

The three proposed subsystems are:
- STRIKE LINE (Ticom Geomatics)
-- Sensor Cueing, Data Publish and Subscribe, Wide Area Network (WAN) Distributor
-- MITS System Engineering and Integration Lead
- VISION (KAB Labs)
-- Presentation Layer, Local Area Network (LAN) Distributor, Video Processing Framework, Video/Multi-INT Indexing/Search
- AFOS (Mosaic ATM)
-- Geolocalization, FMV Metadata Decoder, Metadata Accuracy Enhancement, Feature Projection into Full Motion Video (FMV)

MITS will provide the following high-level capabilities for DCGS-N (the cueing and publish/subscribe pattern is sketched after this list):
- Cue imagery sensors with geopositional data to collect FMV on targets of interest
- Combine FMV with other target data and provide an integrated display
- Improve geopositional accuracy of objects in analyst-selected FMV
- Index video repositories for rapid searching, near-real-time analysis, and post-mission analysis
- Distribute enhanced multi-INT data products
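The sensor-cueing and publish/subscribe flow listed above can be pictured with a short, illustrative sketch. This is not MITS or STRIKE LINE code; the class, field, and topic names (Cue, PubSubBus, "sensor.cue.fmv") are assumptions made purely for the example, written in Python for readability.

from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Cue:
    """A hypothetical sensor cue carrying geopositional data for an FMV collection request."""
    target_id: str
    lat: float     # degrees, WGS-84 (assumed)
    lon: float     # degrees, WGS-84 (assumed)
    source: str    # originating INT, e.g. "SIGINT"

class PubSubBus:
    """Minimal topic-based publish/subscribe dispatcher (illustrative only)."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Cue], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Cue], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, cue: Cue) -> None:
        for handler in self._subscribers[topic]:
            handler(cue)

# Example: a SIGINT hit cues an imagery sensor to collect FMV on the target.
bus = PubSubBus()
bus.subscribe("sensor.cue.fmv",
              lambda c: print(f"Tasking FMV collection on {c.target_id} at ({c.lat}, {c.lon})"))
bus.publish("sensor.cue.fmv", Cue("TGT-042", 36.85, -76.29, "SIGINT"))

In an operational system the bus would be a networked WAN/LAN distribution service rather than an in-process dispatcher, but the relationship between cue publishers and collection-tasking subscribers is the same.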
Benefit: The research objective for VISION is to produce a Full Motion Video (FMV) technology that accelerates the Find, Fix, Track, Target, Engage, and Assess (F2T2EA) process beyond today's capabilities. Accomplishing faster F2T2EA requires several enabling technologies, including near-real-time cross-cueing between sensors and intelligence data, better situational awareness, and better reporting software. This effort will focus on current areas of weakness: the ability to display information within FMV and within the Common Operational Picture (COP) in relation to FMV, the ability to process and distribute video effectively, and cross-cueing between FMV and other information sources.

Today, ISR sensors have ever-expanding capabilities and information content; SAR, EO, IR, and SIGINT capabilities have increased in both quantity and quality. Fusion and correlation between these information sources, however, has been hampered by structural separation between systems. A degree of functional separation between SIGINT and IMINT naturally occurs among ISR sources. Some of it is cross-domain, but some is also due to the differing expertise and training needed to accomplish ISR exploitation: video and imagery are handled by one software suite, while SIGINT/EW/IO is handled by another system stack. Caught in the middle of this divide are analysts trying to fulfill a commander's tasking to gain awareness and protect their own vessel. Decisions and awareness are often time-late, hampered by manual cross-cueing and tedious manual assessment. A comprehensive ISR picture is not being provided, and in the end, categorization and threat assessment within the common operational picture are not as strong as they should be.

In addition, the fleet is currently experiencing an increase in non-conventional threats and commercial RF sources in today's littoral environments. Rapidly assessing all of this information is a real problem because post-collection analysis is not automated. Fusion needs to be automated, assessments better conveyed, and identification and geolocation made more accurate, all with minimal operator intervention. Detection, acquisition, identification, feature extraction, tracking, and cross-cueing must occur in a more autonomous manner. This will free analysts to focus on assessment and courses of action in a more time-relevant way.

KAB Laboratories recognizes that technical paradigms will change while this SBIR is in progress. Our Phase I research identified optimal indexing schemes for distributed search using new cloud technologies such as Hadoop and MapReduce. We also proposed applying the same technology that automated brokerage firms use to rapidly assess the market: real-time Complex Event Processing (CEP) for cloud stream processing (a minimal sketch of this correlation pattern appears below). The ability to use the same VISION technology to reach forward-deployed forces is important as well. KAB has written VISION to work within browsers using the Ozone Widget Framework (OWF) and as native applications on tablets and handhelds; we demonstrated this during Phase I and will continue it during Phase II. KAB plans to release mobile and browser-based versions of its VISION technology with each release, so that no user is disadvantaged by the platform they choose.
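The CEP-based cross-cueing idea mentioned above can be illustrated with a short sketch, assuming a simple time-and-distance correlation rule over a time-ordered event stream. This is not VISION code; the event fields, thresholds, and function names are assumptions made for the example, written in Python for readability.

from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Detection:
    source: str   # e.g. "SIGINT" or "FMV"
    lat: float    # degrees
    lon: float    # degrees
    t: float      # seconds since an arbitrary epoch

def distance_km(a: Detection, b: Detection) -> float:
    """Great-circle (haversine) distance between two detections, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def cross_cues(stream, max_gap_s: float = 60.0, max_dist_km: float = 2.0):
    """Yield pairs of detections from different sources that correlate in time and space.

    Assumes events arrive in time order.
    """
    window: list[Detection] = []
    for event in stream:
        # Keep only detections still inside the time window.
        window = [e for e in window if event.t - e.t <= max_gap_s]
        for prior in window:
            if prior.source != event.source and distance_km(prior, event) <= max_dist_km:
                yield prior, event
        window.append(event)

events = [
    Detection("SIGINT", 36.850, -76.290, 0.0),
    Detection("FMV",    36.846, -76.288, 20.0),   # within 2 km and 60 s of the SIGINT hit
]
for a, b in cross_cues(events):
    print(f"Cross-cue: {a.source} detection at t={a.t} correlates with {b.source} detection at t={b.t}")

The same windowed-correlation logic could equally be expressed as a streaming MapReduce job or a query in a CEP engine; the point is only that the cross-cue itself can be raised automatically rather than by manual comparison.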
Keywords: multi-INT, full motion video, IMINT, targeting, technology, SIGINT