SBIR-STTR Award

Scene Geometry Aided Automatic Target Recognition (ATR) for Radar
Award last edited on: 9/6/22

Sponsored Program
SBIR
Awarding Agency
DOD: NGA
Total Award Amount
$99,941
Award Phase
1
Solicitation Topic Code
OSD221-001
Principal Investigator
Ryan Richards

Company Information

The Stratagem Group Inc

3855 Lewiston Street Suite 250
Aurora, CO 80011
Phone: (484) 994-9271
Website: www.stratagemgroup.com
Location: Single
Congr. District: 06
County: Arapahoe

Phase I

Contract Number: O221-001-0050
Start Date: 7/18/22    Completed: 4/17/23
Phase I year
2022
Phase I Amount
$99,941
Reducing the false alarm rate (FAR) of Automated Target Recognition (ATR) algorithms is crucial for intelligence, surveillance, and reconnaissance (ISR) and precision target engagement missions. Many factors contribute to higher FAR for deep learning (DL) ATR networks operating on Synthetic Aperture Radar (SAR) imagery, including image distortions, unrepresentative target signatures, and a lack of spatial awareness. Common SAR distortions can both obfuscate target signatures and misrepresent clutter, which has been shown to increase FAR, especially for convolution-based networks. Further, state-of-the-art (SOTA) ATR networks are not spatially aware, i.e., they cannot exploit information from global or local scene geometries to make more informative predictions.

To address these issues and thereby reduce the FAR of ATR algorithms, the Stratagem team will develop CHATMAN (Context-aware Hierarchical graph network for improved ATR performance), a noise-robust Scene Geometry Aided (SGA) ATR framework that distills relations between scene geometries and detected targets to reduce false alarms. CHATMAN is composed of three major components: (1) an ATR network, (2) a self-supervised learning (SSL) based semantic analyzer, and (3) a spatial reasoning graph neural network (SRGNN). Our pre-existing SOTA ATR networks will provide class-wise predictions and bounding box coordinates for detected targets in the scene. The semantic analyzer encodes the semantics of scene geometries for which annotations do not exist (e.g., trees, buildings, road networks) into compressed feature vectors. Finally, the SRGNN corrects ambiguous, or lower-confidence, predictions made by the ATR network by learning global and local scene relationships derived from the ATR outputs and the semantic feature vectors.

For this study, we propose using a privately curated SAR dataset from Capella, a commercial SAR data partner, to detect mixed aircraft (e.g., planes, helicopters, retired air vehicles) with this existing labelled dataset. Our goal is to show that CHATMAN can raise our mean Average Precision (mAP) on this dataset from 0.84 to greater than 0.90 by reducing false alarms. This effort will deliver a design description document for the SSL and SRGNN networks along with quantitative results, including the resultant FAR, the precision-recall curve, and the mAP score.
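The abstract describes the CHATMAN architecture but does not publish code. The following is a minimal, hypothetical sketch, written with PyTorch purely for concreteness, of how a spatial-reasoning graph network could re-score ATR detections using scene-context embeddings. Every class name, dimension, and the adjacency rule (detections fully connected to context nodes) is an illustrative assumption, not a detail taken from the award.

    # Hypothetical sketch (not the awardee's code): fuse ATR detections with
    # scene-context embeddings via a small graph network that re-scores
    # lower-confidence detections. All names and dimensions are illustrative.
    import torch
    import torch.nn as nn


    class SimpleGraphLayer(nn.Module):
        """One round of mean-aggregation message passing over a dense adjacency."""

        def __init__(self, dim: int):
            super().__init__()
            self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x: (N, dim) node features; adj: (N, N) 0/1 adjacency, no self-loops
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            neighbor_mean = adj @ x / deg            # aggregate neighbor features
            return self.update(torch.cat([x, neighbor_mean], dim=-1))


    class DetectionReScorer(nn.Module):
        """Refines per-detection class scores using scene-context node features."""

        def __init__(self, num_classes: int, ctx_dim: int, hidden: int = 64):
            super().__init__()
            # Detection node = class logits + box (cx, cy, w, h); context node =
            # compressed scene-geometry embedding from an SSL encoder.
            self.det_proj = nn.Linear(num_classes + 4, hidden)
            self.ctx_proj = nn.Linear(ctx_dim, hidden)
            self.gnn = nn.ModuleList([SimpleGraphLayer(hidden) for _ in range(2)])
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, det_logits, det_boxes, ctx_feats, adj):
            det_nodes = self.det_proj(torch.cat([det_logits, det_boxes], dim=-1))
            ctx_nodes = self.ctx_proj(ctx_feats)
            x = torch.cat([det_nodes, ctx_nodes], dim=0)   # (N_det + N_ctx, hidden)
            for layer in self.gnn:
                x = layer(x, adj)
            # Re-score only the detection nodes; residual on the original logits.
            return det_logits + self.head(x[: det_logits.shape[0]])


    if __name__ == "__main__":
        n_det, n_ctx, n_cls, ctx_dim = 5, 3, 4, 32
        logits = torch.randn(n_det, n_cls)
        boxes = torch.rand(n_det, 4)
        ctx = torch.randn(n_ctx, ctx_dim)
        # Fully connect detections to context nodes (a proximity rule would be
        # used in practice); detections stay disconnected from each other here.
        adj = torch.zeros(n_det + n_ctx, n_det + n_ctx)
        adj[:n_det, n_det:] = 1.0
        adj[n_det:, :n_det] = 1.0
        refined = DetectionReScorer(n_cls, ctx_dim)(logits, boxes, ctx, adj)
        print(refined.shape)  # torch.Size([5, 4])

In practice the refined logits would be trained against the same target labels as the base ATR network, so the graph layers learn when surrounding scene geometry (e.g., a detection sitting in open terrain far from any runway) should suppress a likely false alarm.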

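As a companion illustration of the stated deliverables (FAR, precision-recall curve, mAP), the short script below shows on synthetic numbers how suppressing false alarms raises precision at every recall level and therefore average precision. The detection counts, score distributions, and suppression rate are invented for illustration and are not the award's data or results.

    # Illustrative only: how false alarms drive the precision-recall curve and
    # average precision (AP). All numbers are made up.
    import numpy as np


    def average_precision(scores, is_tp, n_gt):
        """All-points AP from detection scores, TP/FP labels, and ground-truth count."""
        order = np.argsort(-scores)                  # rank detections by confidence
        tp = np.cumsum(is_tp[order])
        fp = np.cumsum(~is_tp[order])
        recall = tp / n_gt
        precision = tp / (tp + fp)
        # Interpolate: make precision monotonically non-increasing in recall.
        precision = np.maximum.accumulate(precision[::-1])[::-1]
        recall = np.concatenate([[0.0], recall])
        return float(np.sum((recall[1:] - recall[:-1]) * precision))


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_gt = 50
        scores = np.concatenate([rng.uniform(0.5, 1.0, n_gt),   # true detections
                                 rng.uniform(0.3, 0.9, 30)])    # false alarms
        labels = np.concatenate([np.ones(n_gt, bool), np.zeros(30, bool)])
        print("AP with 30 false alarms:", average_precision(scores, labels, n_gt))
        # Drop roughly 80% of the false alarms (as scene context might) and recompute.
        keep = np.concatenate([np.ones(n_gt, bool), rng.random(30) < 0.2])
        print("AP after suppression:  ",
              average_precision(scores[keep], labels[keep], n_gt))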
Phase II

Contract Number: ----------
Start Date: 00/00/00    Completed: 00/00/00
Phase II year
----
Phase II Amount
----