The U.S. Army seeks to support Mission Commander (MC) situational awareness (SA) and decision-making by developing 3D presentation software that fuses full-motion video (FMV) and related metadata into a unified, high-utility display. Our Phase I effort shall combine information from multiple sensors to generate a single three-dimensional (3D) visual representation of the battlefield using innovative probabilistic registration and 3D geometry-building techniques. Our solution addresses the stated processing-performance objectives, strives to fully satisfy US Army expectations, and is well positioned for success within the Army, the broader DOD, and commercial industry. Specifically, we shall apply fusion and probabilistic modeling to automatically distill registered 3D data from multi-source, multi-format FMV streams. The operational pipeline will:

1. Ingest FMV and extract KLV tags carrying the timestamp and location of each frame.
2. Probabilistically associate each subdivision of the working region with image pixels across the entire video stream.
3. Statistically combine the volumetric information to hypothesize a 3D registration.
4. Refine the camera parameters based on the probability models and extended Kalman filter (EKF) results.
5. Refine the existing 3D geometry with the newly computed values.
6. Render the scene from a common geospatial viewpoint, using textures synthesized from the acquired video as needed.
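The fusion and refinement steps described above can be sketched in miniature as follows. This is an illustrative assumption, not the Phase I implementation: the `Frame` structure, log-odds accumulation for the probabilistic voxel association (steps 2-3), and the scalar Kalman update standing in for the full EKF camera refinement (step 4) are all simplified placeholders chosen for clarity.

```python
import math
from dataclasses import dataclass

@dataclass
class Frame:
    """One FMV frame after KLV parsing (step 1). Fields are illustrative."""
    timestamp: float                 # from the KLV timestamp tag
    cam_pos: tuple                   # reported sensor location (x, y, z)
    observations: dict               # voxel index -> observed occupancy probability

def fuse_frames(frames, n_voxels):
    """Steps 2-3: statistically combine per-frame occupancy evidence for
    each subdivision (voxel) of the working region via log-odds accumulation."""
    log_odds = [0.0] * n_voxels
    for f in frames:
        for voxel, p in f.observations.items():
            p = min(max(p, 1e-6), 1.0 - 1e-6)   # clamp away from 0/1
            log_odds[voxel] += math.log(p / (1.0 - p))
    # Convert accumulated evidence back to occupancy probabilities.
    return [1.0 / (1.0 + math.exp(-l)) for l in log_odds]

def refine_camera(prior, prior_var, measurement, meas_var):
    """Step 4, radically simplified: a 1-D Kalman update blending a predicted
    camera coordinate with a new measurement (a full EKF would update the
    whole camera state jointly)."""
    gain = prior_var / (prior_var + meas_var)
    posterior = prior + gain * (measurement - prior)
    posterior_var = (1.0 - gain) * prior_var
    return posterior, posterior_var
```

For example, a voxel observed in two frames with occupancy likelihood 0.8 accumulates more evidence than one observed once, and the fused probability rises accordingly; repeated consistent observations drive the estimate toward certainty, which is the behavior the full volumetric fusion relies on.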