The Traverse Team is leveraging advances in machine-learning-enabled computer vision to produce higher-fidelity scans than traditional photogrammetry methods. This technique, Neural Radiance Fields (NeRFs), has been rapidly transforming the 3D image capture space and could revolutionize the speed and accuracy of 3D asset and scene generation. Traverse and Brown University will extend current research to transform these assets and scenes from point clouds into interactable, modular objects suitable for Air Force and consumer applications. Current NeRF pipelines face two limitations: limited reconstruction fidelity and the difficulty of extracting usable surface geometry. We will investigate and build methods that address both. First, we will build a method to extract Omni-Directional Distance Fields (ODFs) directly from multi-view image data, yielding high-fidelity reconstructions. Next, we will build methods that use the ODF representation to extract meshes directly, as well as other common 3D representations such as point clouds and depth maps. The advantages of our approach are higher fidelity than competing methods and the ability to represent arbitrary shape topologies.
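To make the ODF idea concrete, the sketch below shows how surface geometry can be read off an ODF: for a query point and a direction, the field returns the distance to the surface along that direction, so a surface point is simply the query point advanced by that distance. This is a minimal illustration only; in place of a learned ODF network it uses a hypothetical analytic ODF for a unit sphere (`sphere_odf`), and all names are illustrative rather than part of the proposed system.

```python
import numpy as np

def sphere_odf(p, dirs, r=1.0):
    # Hypothetical analytic ODF for a sphere of radius r centered at the
    # origin: distance from query point p (inside the sphere) to the
    # surface along each unit direction in dirs.
    b = dirs @ p
    disc = b**2 - (p @ p - r**2)      # ray-sphere intersection discriminant
    return -b + np.sqrt(disc)         # positive root: distance to the surface

# Sample many directions from a single interior query point.
rng = np.random.default_rng(0)
p = np.array([0.2, -0.1, 0.3])
dirs = rng.normal(size=(1024, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# One ODF query per direction turns directly into a surface point cloud.
t = sphere_odf(p, dirs)
points = p + t[:, None] * dirs
radii = np.linalg.norm(points, axis=1)  # all ~1.0: points lie on the sphere
```

A learned ODF would replace `sphere_odf` with a network evaluation, but the extraction step is the same, which is why point clouds and depth maps fall out of the representation almost for free; meshing then operates on the recovered surface samples.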