We propose replacing the current legacy planning and decision-aid systems with a reasoning engine that uses a variety of data types to identify potential Courses of Action (COAs) and predict multi-agent behaviors (e.g., those of enemy targets) within a Common Operational Picture (COP). The reasoning engine software is deployed using DevSecOps best practices (Kubernetes/Istio, compliant with USAF Platform One). As a containerized microservice, it supports both visualization and interactive human-machine teaming inside existing ESRI platforms and tools.

Legacy planning and decision-aid systems present users with a mass of digital data that must be manipulated manually to evaluate the predicted outcomes of COAs and build Situational Awareness (SA). Our approach automates and improves upon this process, building SA on two key technologies: (1) semantic graphs and (2) COA generation from multi-agent behaviors. COAs will be encoded in our reasoning engine as semantic graphs derived from manuals, training materials, and domain knowledge. As the reasoning engine receives new data, it will map out potential behaviors for multiple agents within the COP. These behaviors will then be integrated into the semantic graphs to identify and predict likely enemy COAs and the least risky response COAs available to domestic agents, thus promoting SA. COAs will be displayed in a COP visualization environment that is fully customizable to user preferences and the particular situation, including the option to refine displayed COAs based on user knowledge.

Third Insight uses "neuro-symbolic" approaches to identify and track multiple agents, reason about their behaviors and intended targets, and build COAs that incorporate knowledge from Subject Matter Experts (SMEs). Semantic Probabilistic Graphical Models (PGMs) combine multiple types of contextual evidence from the knowledge graph -- including SME-based assessments, sensor data of different modalities, and DNN outputs -- to estimate uncertain or unobserved variables derived from the battlespace, sensor networks, or the user-driven COP. Semantic graphs allow new knowledge to be incorporated on the fly, either by modifying the ontology directly or by accumulating statistics over one or more (ongoing) observations. The result is a robust, flexible tool for hypothesis testing and belief-based planning that is fully operational at the edge. Additionally, reasoning capabilities grow over time through zero-cost transfer learning, which enables customers to update knowledge bases on the fly with new agent behaviors and COAs learned from experienced scenarios.
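The evidence-fusion step described above can be sketched as a discrete Bayesian update over a latent enemy-COA variable. This is a minimal illustration only: the COA labels, probability values, and function names below are hypothetical placeholders, not values or APIs from the proposed system, and a production implementation would attach such factors to knowledge-graph nodes rather than plain dictionaries.

```python
# Hypothetical sketch: fusing heterogeneous evidence (SME prior, sensor
# likelihood, DNN output) over a discrete latent variable, in the spirit of
# the semantic PGMs described above. All numbers are illustrative.

COAS = ["flank_east", "flank_west", "hold_position"]  # placeholder enemy COAs

# SME-based prior over enemy COAs (assumed values, for illustration only)
prior = {"flank_east": 0.5, "flank_west": 0.3, "hold_position": 0.2}

# Likelihood factors P(observation | COA) for two independent evidence sources
sensor_likelihood = {  # e.g., a track consistent with eastward movement
    "flank_east": 0.7, "flank_west": 0.1, "hold_position": 0.2,
}
dnn_likelihood = {  # e.g., a calibrated output from a vehicle-classifier DNN
    "flank_east": 0.6, "flank_west": 0.3, "hold_position": 0.1,
}

def fuse(prior, *likelihoods):
    """Posterior over COAs via Bayes' rule, assuming the evidence sources are
    conditionally independent given the COA (a naive-Bayes factorization)."""
    unnorm = dict(prior)
    for lik in likelihoods:
        for coa in unnorm:
            unnorm[coa] *= lik[coa]
    z = sum(unnorm.values())
    return {coa: p / z for coa, p in unnorm.items()}

def update_prior(prior, observed_coa, weight=0.1):
    """Accumulate statistics on the fly: after a scenario is experienced,
    shift probability mass toward the observed COA (simple running update)."""
    return {coa: (1 - weight) * p + (weight if coa == observed_coa else 0.0)
            for coa, p in prior.items()}

posterior = fuse(prior, sensor_likelihood, dnn_likelihood)
most_likely = max(posterior, key=posterior.get)
```

Under these assumed numbers, both evidence sources reinforce the eastward-flank hypothesis, so the posterior concentrates on it; `update_prior` then shows how an experienced scenario can adjust the knowledge base without retraining, consistent with the on-the-fly update capability described above.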