Phase II year
2022
When human teammates have not properly calibrated their trust in the capabilities of a machine partner, they can exhibit all-or-nothing behavior: with too much trust, they neglect to review the machine teammate's work; with too little trust, they ignore the machine's suggestions and feedback. Machine teammates likewise need to gauge the human teammate's trust level to help the human delineate task responsibilities, maintain awareness of the machine teammate's capabilities, and maintain a competent sight picture of the operational space. Maintaining trust within human-machine teams is challenging when risk is high or a teammate's perceived competence changes, leading to team misalignment over time. The ability to establish, maintain, and repair trust is essential to long-term teaming efficacy. To promote strong intra-team collaboration it is necessary to (1) set and maintain the human's trust in the intent of machine partners; (2) establish and reinforce the machine's trust in the human's assessment of competence; and (3) drive interventions that repair and re-align trust after acute changes in the team's perceived competence.

In response to this problem, Aptima will deliver Trust Resilience in User-System Team Modeling (TRUSTM), a system that models a human's trust in a machine teammate, assesses the machine's actual competence, and adjusts the machine's behavior to calibrate the human's trust. Aptima, with partners at Carnegie Mellon University (led by Dr. Cleotilde Gonzalez), has selected a task, an AI teammate, and an approach to model co-training with dynamic trust adjustment, and will develop a system for maintaining and repairing trust in human-machine teams. The task will be based on intelligence analyst use cases. The teammate will be an AI cognitive assistant called ALFRED, developed by Aptima for Army analysts, which recommends information for an analyst to review based on priority information requests (PIRs). The human teammate saves items that support their conclusions to reports associated with each PIR and rejects items and keywords that are irrelevant.

The modeling approach for TRUSTM will include an instance-based learning theory (IBLT) model of the human's trust in ALFRED, based on models of trust developed by Dr. Gonzalez and colleagues. TRUSTM will track the user's behavior and feedback, determine the discrepancy between the human's apparent assessment and TRUSTM's own assessment of ALFRED's competence, and predict experiences (e.g., behaviors in ALFRED) that calibrate the human's trust in ALFRED to the correct range. TRUSTM will dynamically adjust trust by changing ALFRED's behavior to maintain optimal trust levels and repair trust when over- or under-trust occurs.
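To make the modeling approach concrete, the following is a minimal sketch of how an instance-based learning trust model could be wired to the calibration step described above. All names, parameter values, and the intervention policy are illustrative assumptions, not project code: the human's trust in ALFRED is estimated as an IBLT-style blended value over remembered interaction outcomes, compared against an independent competence estimate, and the discrepancy selects a trust-repair action.

```python
import math
import random

DECAY = 0.5        # ACT-R-style memory decay (assumed value)
NOISE_SD = 0.25    # activation noise (assumed value)
TEMPERATURE = NOISE_SD * math.sqrt(2)


class IBLTrustModel:
    """Sketch of an instance-based learning model of trust in a machine teammate.

    Each interaction with the assistant is stored as an instance whose outcome is
    1.0 (recommendation accepted / useful) or 0.0 (rejected / irrelevant).
    Predicted trust is the blended value of past outcomes, weighted by memory
    activation (recency and frequency), following the standard IBLT formulation.
    """

    def __init__(self):
        self.instances = []  # list of (timestamp, outcome)

    def record(self, t, outcome):
        self.instances.append((t, outcome))

    def _activation(self, t_now, timestamps):
        # Base-level activation from recency/frequency plus Gaussian noise.
        base = math.log(sum(max(t_now - tj, 1e-6) ** -DECAY for tj in timestamps))
        return base + random.gauss(0.0, NOISE_SD)

    def predicted_trust(self, t_now):
        if not self.instances:
            return 0.5  # uninformed prior before any experience
        # Group occurrences by outcome, then blend outcomes by retrieval probability.
        by_outcome = {}
        for tj, outcome in self.instances:
            by_outcome.setdefault(outcome, []).append(tj)
        acts = {o: self._activation(t_now, ts) for o, ts in by_outcome.items()}
        weights = {o: math.exp(a / TEMPERATURE) for o, a in acts.items()}
        total = sum(weights.values())
        return sum(o * w for o, w in weights.items()) / total


def calibration_action(predicted_trust, assessed_competence, tolerance=0.1):
    """Choose an intervention when the human's apparent trust diverges from the
    system's assessment of the assistant's actual competence (hypothetical policy)."""
    gap = predicted_trust - assessed_competence
    if gap > tolerance:
        return "surface_uncertainty"    # over-trust: expose the assistant's limitations
    if gap < -tolerance:
        return "highlight_successes"    # under-trust: show the assistant's relevant hits
    return "no_change"


# Example: two logged analyst interactions, then a calibration check.
model = IBLTrustModel()
model.record(t=1.0, outcome=1.0)  # analyst saved a recommended item to a PIR report
model.record(t=2.0, outcome=0.0)  # analyst rejected an irrelevant item
action = calibration_action(model.predicted_trust(t_now=5.0), assessed_competence=0.8)
```

The blending over outcome categories mirrors the blended-value computation in IBLT; the threshold-based calibration policy is only a placeholder for the experience-shaping interventions TRUSTM would actually generate.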