Mosaic ATM proposes an innovative approach to conveying the trustworthiness of automated systems in a given context, which will support humans in appropriately calibrating their trust in such systems. Appropriately calibrated trust will, in turn, inform the scope of autonomy humans grant the system for independent decision making and task execution. We communicate system trustworthiness through a combination of an innovative approach to explainable machine learning (ML) and a representation of confidence in model results based on quantifying the uncertainty in those results given the available input data. We demonstrate our approach in the context of automated support for monitoring and managing crew wellbeing and performance on deep space exploration missions, where astronauts will be subject to the physical and psychological stress of performing in an isolated, confined, and extreme (ICE) environment. In Phase I, we will demonstrate our approach to supporting human assessment of automated system trustworthiness through a generalized method for explainable ML and representation of uncertainty in ML model results, and we will situate both in a prototype system that supports evaluation of their effect on human calibration of trust. This prototype system will be based on our concept for automated support for monitoring and managing crew wellbeing and performance, which we will document in Phase I.

Potential NASA Applications: Several NASA applications will benefit from a generalizable approach to supporting appropriate calibration of trust in automated systems built on ML models, including deep space exploration, air traffic management, and aviation safety. For example, NASA will benefit from an automated system that supports monitoring and management of crew wellbeing and performance in deep space exploration.
Potential Non-NASA Applications: Organizations operating in isolated, confined, or extreme (ICE) environments, e.g., the National Science Foundation and the Department of Defense, can use an automated system to support crew wellbeing. Explainable computer vision can also enhance automated labeling of drone-based equipment inspection images, flagging assets with visible defects and drawing inspector attention to the image elements most relevant to the analysis.

Duration: 13
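The pairing of uncertainty-based confidence with explanation described above can be sketched in miniature. The example below is an illustrative assumption, not the proposed system: a toy ensemble of linear classifiers stands in for any ML model, predictive entropy over the ensemble-mean prediction serves as the confidence signal (low entropy for informative input data, high entropy when the input carries no signal), and per-feature contributions provide a simple explanation of which inputs drove the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ML model: an ensemble of K linear classifiers whose
# members are small perturbations of a shared "true" weight vector.
# (Illustrative only; the proposed system's models are not specified here.)
K, d = 32, 4
w_true = np.ones(d)
weights = w_true + 0.3 * rng.normal(size=(K, d))

def ensemble_probs(x):
    """Per-member probability of class 1 for input x; shape (K,)."""
    return 1.0 / (1.0 + np.exp(-(weights @ x)))

def predictive_entropy(probs):
    """Entropy (bits) of the ensemble-mean prediction; higher = less confident."""
    p = float(np.clip(probs.mean(), 1e-12, 1 - 1e-12))
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def attributions(x):
    """Linear-model explanation: contribution of each input feature."""
    return weights.mean(axis=0) * x

x_clear = np.array([4.0, 1.0, 0.2, 0.0])  # strong, informative input data
x_ambiguous = np.zeros(d)                 # input data carry no signal

h_clear = predictive_entropy(ensemble_probs(x_clear))      # low entropy
h_ambig = predictive_entropy(ensemble_probs(x_ambiguous))  # maximal entropy
contrib = attributions(x_clear)           # feature 0 dominates the result
```

Presenting `h_clear` versus `h_ambig` alongside `contrib` is the kind of paired confidence-plus-explanation display the approach would surface to a human deciding how much autonomy to grant the system.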