In this proposal we detail the COEUS framework, a novel software system that advances the state of the art in human-machine teaming by enabling these teams to co-train on tasks before performance in order to establish, maintain, and repair trust during performance. To address increasing threats from adversaries, warfighters must be partnered with intelligent systems capable of automating and augmenting human capabilities. In traditional models of these human-machine teams, the autonomous partner and the human partner train independently on the skills needed to accomplish a task or mission. COEUS will instead enable these teams to co-train, developing the skills needed to accomplish a task or mission together across increasingly complex training scenarios. Co-training will foster bi-directional trust between the human and machine teammates by leveraging new algorithms and technology related to our core technical objectives. The COEUS framework will enable human-machine teams to effectively co-train and acquire skills for performance success together, while leveraging the co-training exercise as a mechanism to establish, measure, maintain, and repair trust. It will instrument human-machine teams with an explainable architecture using ontology logs, hierarchical decision graphs, and interdependence analysis tables to improve the observability, predictability, and directability of machine teammates to their human counterparts. The system will build on our prior work, which demonstrates the feasibility of Phase 2. In Phase 2 we will build an end-to-end demonstration of the impact of COEUS and co-training in exercises with human participants, using the AFSIM capabilities provided by Infinity Labs to assess the trust and performance of traditional and co-trained teams on tasks related to aircraft encounters in ATC scenarios and aircraft identification in Air Base Air Defense scenarios.
COEUS will enable human-machine teams to train for tasks in novel ways, allowing the machine counterpart to use interdependence relationships on tasks to establish, maintain, and repair trust. We argue that these interdependence relationships are the core way in which humans naturally assess the trustworthiness of technology and decide how and when to use it. We propose that these interdependence relationships are also the key way that trust is established, maintained, and repaired among human performers, and that by providing a cognitive model of trust in this context, we can enable more effective machine teammates and exercises that establish appropriate levels of trust in the human performer.
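To make the interdependence analysis tables mentioned above concrete, the sketch below shows one way such a table could be represented in code: each row records a sub-task and each teammate's capacity to perform or support it, from which required (hard) interdependencies can be derived. This is a minimal illustration only; the capacity labels, role names, and example task are assumptions for exposition, not the COEUS implementation.

```python
from dataclasses import dataclass, field

# Capacity levels loosely following the color-coded conventions of
# interdependence analysis; these labels are illustrative assumptions.
CAPACITIES = ("can_do_alone", "can_do_with_support", "can_assist", "cannot_do")

@dataclass
class IARow:
    """One sub-task row of an interdependence analysis table."""
    subtask: str
    human_capacity: str    # one of CAPACITIES
    machine_capacity: str  # one of CAPACITIES

    def required_interdependence(self) -> bool:
        # Hard interdependence: neither teammate can do the sub-task alone,
        # so teamwork is required rather than merely opportunistic.
        return (self.human_capacity != "can_do_alone"
                and self.machine_capacity != "can_do_alone")

@dataclass
class IATable:
    task: str
    rows: list = field(default_factory=list)

    def add(self, subtask: str, human: str, machine: str) -> None:
        assert human in CAPACITIES and machine in CAPACITIES
        self.rows.append(IARow(subtask, human, machine))

    def hard_interdependencies(self) -> list:
        """Sub-tasks on which the team must rely on each other."""
        return [r.subtask for r in self.rows if r.required_interdependence()]

# Hypothetical aircraft-identification task, for illustration only.
table = IATable("aircraft identification")
table.add("detect track", "can_assist", "can_do_alone")
table.add("classify intent", "can_do_with_support", "can_do_with_support")
table.add("authorize response", "can_do_alone", "cannot_do")

print(table.hard_interdependencies())  # ['classify intent']
```

Rows where neither capacity is "can_do_alone" mark the sub-tasks where trust is most consequential, which is where co-training exercises would concentrate.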