SBIR-STTR Award

Machine Learning Explainability and Uncertainty Quantification to Support Calibration of Trust in Automated Systems
Award last edited on: 10/29/2024

Sponsored Program
STTR
Awarding Agency
NASA : LaRC
Total Award Amount
$924,577
Award Phase
2
Solicitation Topic Code
T10.05
Principal Investigator
Alicia Fernandes

Company Information

Mosaic ATM Inc

540 Fort Evans Road Ne Suite 300
Leesburg, VA 20175
   (800) 405-8576
   info@mosaicatm.com
   www.mosaicatm.com

Research Institution

Universities Space Research Association

Phase I

Contract Number: 80NSSC21C0264
Start Date: 5/7/2021    Completed: 6/19/2022
Phase I year
2021
Phase I Amount
$124,864
Mosaic ATM proposes an innovative approach to conveying information about an automated system's trustworthiness in a given context, supporting humans in appropriately calibrating their trust in such systems. Appropriately calibrated trust will, in turn, inform the scope of autonomy humans grant the system for independent decision making and task execution. We communicate system trustworthiness through a combination of an innovative approach to explainable machine learning (ML) and a representation of confidence in model results based on quantifying the uncertainty in those results given the available input data. We demonstrate our approach in the context of automated support for monitoring and managing crew wellbeing and performance in deep space exploration missions, where astronauts will be subject to the physical and psychological stress of performing in an isolated, confined, and extreme (ICE) environment. In Phase I, we will demonstrate our approach through a generalized method for explainable ML and representation of uncertainty in ML model results, and situate both in a prototype system that will support evaluation of their effect on human calibration of trust. This prototype system will be based on our concept for automated support for monitoring and managing crew wellbeing and performance, which we will document in Phase I.

Potential NASA Applications: Several NASA applications will benefit from a generalizable approach to supporting appropriate calibration of trust in automated systems built upon ML models, including deep space exploration, air traffic management, and aviation safety. For example, NASA will benefit from an automated system to support monitoring and management of crew wellbeing and performance in deep space exploration.

Potential Non-NASA Applications: Organizations operating in isolated, confined, or extreme (ICE) environments, such as the National Science Foundation and the Department of Defense, can use an automated system to support crew wellbeing. Explainable computer vision can also enhance automated labeling of drone-based equipment inspection images, flagging assets with visible defects and drawing inspector attention to the most relevant elements for analysis.

Duration: 13 months
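The award text does not publish Mosaic ATM's specific xML or uncertainty-quantification algorithms. As a generic illustration of the underlying idea (using predictive uncertainty as a trust signal, so that low-confidence model outputs are deferred to a human), a minimal sketch using predictive entropy, a standard UQ measure, might look like this; the threshold value and function names are illustrative assumptions, not part of the proposed system:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a class-probability vector.

    Higher entropy means the model's probability mass is spread
    across classes, i.e., the prediction is less certain.
    """
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def trust_signal(probs, entropy_threshold=0.5):
    """Return (label_index, entropy, trusted) for one prediction.

    A prediction is flagged as trusted only when its predictive
    entropy falls below the (illustrative) threshold; otherwise a
    trust-calibrated system would defer to a human operator.
    """
    entropy = predictive_entropy(probs)
    label = max(range(len(probs)), key=lambda i: probs[i])
    return label, entropy, entropy < entropy_threshold
```

For a confident output such as `[0.97, 0.02, 0.01]` the entropy is low and the prediction would be acted on; for an ambiguous output such as `[0.4, 0.35, 0.25]` the entropy exceeds the threshold and the system would surface the case for human review.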

Phase II

Contract Number: 80NSSC23CA003
Start Date: 12/13/2022    Completed: 12/12/2024
Phase II year
2023
Phase II Amount
$799,713
The Explanations in Lunar Surface Exploration (ELSE) capability applies Mosaic ATM's Explainable Basis Vectors (EBV) method for explainable machine learning (xML), together with a likelihood-scores approach to uncertainty quantification (UQ), to lunar surface exploration. In Phase I, Mosaic ATM demonstrated the ability to generalize the EBV method from discrete numerical or binary inputs (e.g., wind speed or the presence/absence of rain) to computer vision classification problems. We demonstrated the feasibility of extracting various types of information from within a deep learning model to inform qualitative and quantitative judgments of whether the machine learning (ML) model is trustworthy. Such judgments can help human and automated users of ML model outputs decide when to trust or distrust the system's recommendations. This approach to appropriately calibrating trust in automated systems is crucial to expanding their use in high-risk environments like deep space exploration. In Phase II, we propose to apply the EBV method to classification of lunar terrain features to support trusted autonomy in lunar exploration. Specifically, we will:

- Use the EBV method to produce information from within the underlying ML model to support assessment of the veracity of lunar terrain judgment model results.
- Incorporate EBV explanations and uncertainty quantification (UQ) into a lunar rover analog to demonstrate the ability to inform an automated system of the trustworthiness of the model.
- Incorporate EBV explanations into a user interface (UI) to demonstrate the ability to support appropriate calibration of human trust in an automated system.
- Evaluate the ELSE concept and prototype in an analog environment.

We have assembled a multi-disciplinary team, partnering with the Universities Space Research Association (USRA) as our research institution and with the University of Central Florida (UCF), bringing together experts in lunar exploration, ML, and human-automation interaction.

Anticipated Benefits:
ELSE will apply our xML and UQ methods to contribute to:

- Successful implementation of autonomous systems to support deep space exploration, in line with Exploration Systems Development Mission Directorate (ESDMD) efforts such as Moon to Mars.
- Human-rover teaming in tasks involving path planning and navigation.

Advances in these areas will also contribute to progress in assured autonomy research more generally, which is of interest across NASA Directorates. Non-NASA applications include remotely operated robotic systems, where increasingly autonomous operations can reduce the need for teleoperation, such as:

- Underground mines
- Radiation-contaminated sites
- Search and rescue in dangerous areas
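The Phase II description envisions an automated system (a lunar rover analog) consuming a trustworthiness signal and only acting autonomously when the terrain classification is reliable. The EBV and likelihood-score details are not public, so as a stand-in the sketch below uses ensemble disagreement, a common generic trustworthiness proxy: if multiple model evaluations (e.g., stochastic forward passes) disagree about a terrain patch, the autonomy gate withholds action. All names and the threshold are illustrative assumptions:

```python
import statistics

def ensemble_disagreement(member_probs):
    """Mean per-class standard deviation across ensemble members.

    member_probs: list of class-probability vectors, one per
    ensemble member (e.g., stochastic forward passes of a terrain
    classifier). High disagreement suggests the input lies off the
    training distribution, so downstream autonomy should not act.
    """
    n_classes = len(member_probs[0])
    per_class_sd = [
        statistics.pstdev(m[c] for m in member_probs)
        for c in range(n_classes)
    ]
    return sum(per_class_sd) / n_classes

def autonomy_gate(member_probs, max_disagreement=0.05):
    """Return (label, act_autonomously) for one terrain patch.

    Autonomous action is permitted only when the ensemble agrees;
    otherwise the decision would be deferred to a human operator.
    """
    n = len(member_probs)
    mean_probs = [
        sum(m[c] for m in member_probs) / n
        for c in range(len(member_probs[0]))
    ]
    label = max(range(len(mean_probs)), key=lambda i: mean_probs[i])
    return label, ensemble_disagreement(member_probs) <= max_disagreement
```

An agreeing ensemble such as `[[0.9, 0.1], [0.92, 0.08], [0.91, 0.09]]` passes the gate, while a disagreeing one such as `[[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]` is held for human review, mirroring the trust-calibration behavior the abstract describes.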