The broader/commercial impact of this Small Business Innovation Research (SBIR) Phase II project is to improve robotic interactions with humans. Robots currently operate across large sectors of society, including logistics, manufacturing, autonomous navigation, video communication, remote supervision of complex mechanical maintenance and repair tasks, support in battlefield and disaster settings, and interactions in training, educational, and interventional scenarios such as telemedicine. This technology may enable more effective automation in the workplace through higher quality 3D sensing, greater precision in visualization, and improved worker quality of life. The technology addresses the precision and reliability of passive 3D scene measurements.

This Small Business Innovation Research (SBIR) Phase II project addresses the acquisition of reliable and precise three-dimensional representations of a scene from passively acquired image data, for use in navigation, grasping, manipulation, and other operations of autonomous systems in unrestricted three-dimensional spaces. This capability has been a long-standing challenge in the computer vision field; many efforts have provided adequate solutions under certain conditions but lack applicability across a breadth of applications. Existing approaches typically deliver inaccurate results when, for example, the scene contains repeated structures, thin features, or a large range in depth, or when structures align with aspects of the capture geometry. Because current technologies rely on matching features across images, they fail when distinct features have similar appearance. This technology removes the uncertainty of that process through low-cost over-sampling, using a specific set of additional perspectives to replace "matching" with deterministic linear filtering. Increasing the reliability and precision of 3D scene measurements will open new opportunities for robotic interactions with the world. Success in this project will advance the underlying light-field technology to broader application areas in which human-in-the-loop operations using augmented reality/virtual reality (AR/VR) or mixed reality, such as remote collaboration and distance interaction, depend on accurate and responsive visualization and scene modeling, reducing the vestibular and proprioceptive mismatch that can cause disruptive effects such as nausea.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.