The goal of the Calypso effort is to provide the Navy with potential methods and approaches for testing AI/ML models and for developing a certification capability prior to deployment of models across USG networks. This effort will provide the Navy with a critical capability across AI/ML development and deployment: the ability to achieve operational explainability (does the model do what it says it does, and how does it do it?) and robustness (how resilient is the model to adversarial attack and performance errors?).
Benefit: Building on Calypso's existing R&D and product development efforts, our research will support the design, development, and deployment of a product that enables customers to rapidly and transparently validate that an AI/ML system performs as intended and introduces minimal vulnerabilities into customer environments. Commercialization opportunities for a solution of this nature include the establishment of AI security standards and certification requirements for USG and regulated-industry customers. Ultimately, it will provide organizations with far greater insight into what they are actually introducing into their digital networks when they procure and deploy AI systems.
Keywords: adversarial AI/ML, vulnerability assessment, certification, artificial intelligence, software, machine learning, cyber