SBIR-STTR Award

Few-shot Object detection via Reinforcement Control of Image Simulation (FORCIS)
Award last edited on: 8/18/2024

Sponsored Program
SBIR
Awarding Agency
DOD : OSD
Total Award Amount
$3,010,734
Award Phase
2
Solicitation Topic Code
SCO182-006
Principal Investigator
Steven Clark

Company Information

Expedition Technology Inc

13865 Sunrise Valley Drive Suite 350
Herndon, VA 20171
   (571) 212-5887
   info@exptechinc.com
   www.exptechinc.com
Location: Single
Congr. District: 10
County: Loudoun

Phase I

Contract Number: HQ003419P0026
Start Date: 12/14/2018    Completed: 6/13/2019
Phase I year
2019
Phase I Amount
$224,887
Few-shot Object detection via Reinforcement Control of Image Simulation (FORCIS) will combine deep reinforcement learning with additional training-data augmentations and strategies to develop robust few-shot detectors that leverage available simulations.
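
As a rough, hypothetical illustration of that idea (not the FORCIS implementation), the sketch below blends a handful of real target examples with simulator-generated imagery and standard augmentations when assembling training batches; the names real_few_shot, simulated_pool, and augment are assumptions made for illustration.

    # Hypothetical sketch: blending scarce real examples of the target with
    # abundant simulated imagery, plus standard augmentations, when training a
    # few-shot detector. Not drawn from the award itself.
    import random
    from dataclasses import dataclass

    @dataclass
    class Sample:
        image: object   # image array
        boxes: object   # ground-truth boxes and labels

    def augment(sample: Sample) -> Sample:
        """Stand-in for photometric/geometric augmentation (flips, jitter, noise)."""
        return sample   # a real pipeline would transform image and boxes together

    def make_training_batch(real_few_shot, simulated_pool, batch_size=32, sim_fraction=0.75):
        """Draw most of the batch from simulation, the rest from the few real shots."""
        n_sim = int(batch_size * sim_fraction)
        batch = random.choices(simulated_pool, k=n_sim)
        batch += random.choices(real_few_shot, k=batch_size - n_sim)
        return [augment(s) for s in batch]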

Phase II

Contract Number: HQ003420C0028
Start Date: 5/13/2020    Completed: 5/12/2022
Phase II year
2020
(last award dollars: 2023)
Phase II Amount
$2,785,847

Project FORCIS investigates solutions to the challenging problem of few-shot object detection, in which there are insufficient observations of a high-priority target to train an automatic target recognition (ATR) algorithm using standard deep-learning approaches. FORCIS specifically investigates the use of synthetic training data generated by simulation engines that are deliberately and dynamically parameterized to maximize utility to a downstream deep-learning algorithm.

Our approach in FORCIS is first to develop a flexible simulation environment based upon a modern game-development engine. An exposed interface to this simulator allows a reinforcement-learning (RL) agent to control the probability distributions of a wide range of simulation parameters. The RL agent (itself a neural network) is trained to explore this high-dimensional parameter space in order to maximize the performance of a mission-specific computer-vision “Main Task Model”. Phase II of FORCIS seeks to build upon the successful results from Phase I, broadening …

This Phase II Enhancement seeks to improve the quality of synthetic data so that it is representative of real data (using whatever criteria the algorithm deems important for detecting, classifying, and tracking an object of interest at rest and/or in motion) and to maximize algorithm performance, such that when the algorithm is introduced to real data with a real object of interest in its real environment, it performs as well as it did on synthetic data. Optionally, additional real-world data collection will be performed in diverse, representative conditions to assist in training on background imagery.
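
A minimal, hypothetical sketch of the control loop described above (not Expedition Technology's code): a small policy network proposes a distribution over simulator parameters, synthetic imagery is rendered from samples of that distribution, and the downstream detector's validation score is fed back as the RL reward. The parameter list and the render_batch and train_and_eval_detector placeholders are illustrative assumptions.

    # Hypothetical sketch of an RL controller over simulator parameters.
    import torch
    import torch.nn as nn

    N_PARAMS = 4  # e.g. sun angle, camera altitude, clutter density, sensor noise

    class Controller(nn.Module):
        """Policy network that outputs a Gaussian over each simulator parameter."""
        def __init__(self):
            super().__init__()
            self.mean = nn.Parameter(torch.zeros(N_PARAMS))
            self.log_std = nn.Parameter(torch.zeros(N_PARAMS))

        def sample(self):
            dist = torch.distributions.Normal(self.mean, self.log_std.exp())
            params = dist.sample()
            return params, dist.log_prob(params).sum()

    def render_batch(params):            # placeholder for the game-engine simulator
        return ["synthetic scenes rendered with", params.tolist()]

    def train_and_eval_detector(batch):  # placeholder for the "Main Task Model"
        return torch.rand(()).item()     # stands in for validation mAP

    controller = Controller()
    opt = torch.optim.Adam(controller.parameters(), lr=1e-2)
    baseline = 0.0

    for step in range(100):
        params, log_prob = controller.sample()
        reward = train_and_eval_detector(render_batch(params))
        baseline = 0.9 * baseline + 0.1 * reward   # running-mean reward baseline
        loss = -(reward - baseline) * log_prob     # REINFORCE objective
        opt.zero_grad()
        loss.backward()
        opt.step()

In this REINFORCE-style setup the gradient flows only through the log-probability of the sampled parameters, so the simulator and the downstream detector can remain non-differentiable black boxes.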