Grass, shrubs, and trees are all vegetation with similar colors, but their heights can differ considerably. Removing vegetation from a digital surface model (DSM) yields an improved digital terrain model (DTM). Conventional approaches using LIDAR and radar have achieved some success, but several challenges remain for other types of imagers. First, vegetation extraction from infrared and RGB images has seen very limited success because the normalized difference vegetation index (NDVI) alone cannot reliably differentiate grass, shrubs, and trees; more advanced techniques that exploit image texture are needed. Second, LIDAR-derived DTMs are typically low density, so a high-density LIDAR DTM must be generated before fusion. Third, accurate fusion of disparate sensor outputs is still lacking.

In this proposal, we address these challenges. First, we propose a deep-learning-based approach to classify surface vegetation types: grass, shrub, and tree. In a recent paper, our team implemented a deep-learning-based classifier of grass, trees, and water for emergency landing site selection for drones, achieving more than 95% accuracy. The vegetation type information will be used in the vegetation subtraction step: a more accurate DTM can be generated by subtracting the appropriate vegetation height for each class from the DSM. Second, we propose to apply a proven matrix completion technique to generate a high-density DTM from sparse LIDAR returns; we have previously applied a sparsity-based approach to fill in images with 90% or more missing pixels. Third, we propose data fusion techniques to combine DTMs from disparate sensor outputs. We plan two types of fusion. The first fuses DTMs produced by different techniques from the same sensor type; for example, five different DTM generation techniques may be applied to RGB and infrared images, each producing its own DTM, and pixel-level fusion then merges them. Our team recently developed a pixel-level fusion scheme for demosaiced images generated by more than 10 different algorithms, and the results showed that weighted fusion and alpha-trimmed mean filtering (ATMF) significantly improved overall fusion performance. The second type fuses DTMs from disparate sensors (SAR, LIDAR, and RGB), for which we can apply Dempster-Shafer fusion.

The outcome of this research is important for accurate digital terrain modeling, and the commercial market for these algorithms is expected to be large. Combining the proposed novel, high-performance algorithms with DigitalGlobe's existing image mining tools, we expect a wide range of applications, including chemical agent detection, military surveillance and reconnaissance, and civilian uses such as fire damage assessment, coastal monitoring, treaty compliance, vegetation monitoring, monitoring of urban sprawl, and food and natural resource availability assessment. We conservatively project revenues of 5 million dollars in 2024. Given the strong expertise of our team, the likelihood of success is very high.
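
As a minimal illustration of the vegetation subtraction step described above, the Python sketch below computes NDVI and subtracts a per-class vegetation height from the DSM. The class labels and canopy heights are hypothetical placeholders; in the proposed work the class map would come from the deep-learning classifier rather than an NDVI threshold.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index; flags vegetation in general
    but cannot by itself separate grass, shrub, and tree."""
    return (nir - red) / (nir + red + eps)

def subtract_vegetation(dsm, class_map, canopy_height):
    """Estimate a DTM by subtracting a per-class vegetation height from the DSM.
    class_map holds integer labels (e.g., from a deep-learning classifier);
    canopy_height maps label -> assumed height in meters."""
    dtm = dsm.copy()
    for label, height in canopy_height.items():
        dtm[class_map == label] -= height
    return dtm

# Hypothetical example: 0 = bare ground, 1 = grass, 2 = shrub, 3 = tree.
dsm = np.array([[102.0, 103.5], [110.0, 101.2]])
class_map = np.array([[1, 2], [3, 0]])
heights = {1: 0.3, 2: 2.0, 3: 9.0}   # assumed average canopy heights (m)
print(subtract_vegetation(dsm, class_map, heights))
```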
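
The matrix completion step for densifying sparse LIDAR DTMs could be sketched as follows, here using a simple iterative singular-value soft-thresholding scheme as a stand-in for the sparsity-based method referenced above; the threshold, iteration count, and synthetic test terrain are assumptions for illustration only.

```python
import numpy as np

def soft_impute(dtm_sparse, mask, tau=5.0, n_iter=200):
    """Fill missing DTM cells by iterative singular-value soft-thresholding,
    a simple low-rank matrix completion scheme.
    dtm_sparse : 2-D array whose values are valid only where mask is True.
    mask       : boolean array, True where a LIDAR return was observed."""
    X = np.where(mask, dtm_sparse, 0.0)
    Z = X.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - tau, 0.0)       # shrink singular values
        Z = (U * s) @ Vt                   # low-rank estimate
        Z[mask] = dtm_sparse[mask]         # keep observed samples fixed
    return Z

# Hypothetical demo: a smooth terrain ramp with roughly 90% of cells missing.
rng = np.random.default_rng(0)
truth = np.add.outer(np.linspace(0, 10, 40), np.linspace(0, 5, 40))
mask = rng.random(truth.shape) < 0.1
filled = soft_impute(truth, mask)
print("RMSE on missing cells:",
      np.sqrt(np.mean((filled[~mask] - truth[~mask]) ** 2)))
```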
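
For the pixel-level fusion of multiple DTMs produced from the same sensor type, a sketch of weighted fusion and alpha-trimmed mean filtering (ATMF) might look like the following; the trimming fraction, weights, and synthetic DTM stack are illustrative assumptions.

```python
import numpy as np

def alpha_trimmed_mean_fusion(dtm_stack, alpha=0.2):
    """Pixel-level ATMF fusion: at each pixel, sort the values from the
    different algorithms, discard the lowest and highest alpha fraction,
    and average the rest.  dtm_stack has shape (n_algorithms, rows, cols)."""
    n = dtm_stack.shape[0]
    k = int(np.floor(alpha * n))              # values trimmed at each end
    ordered = np.sort(dtm_stack, axis=0)
    kept = ordered[k:n - k] if n - 2 * k > 0 else ordered
    return kept.mean(axis=0)

def weighted_fusion(dtm_stack, weights):
    """Weighted pixel-level fusion; weights could reflect each algorithm's
    accuracy against reference tiles."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, dtm_stack, axes=1)

# Hypothetical demo with 5 DTMs of the same scene, one badly biased.
rng = np.random.default_rng(1)
truth = np.full((3, 3), 100.0)
stack = truth + rng.normal(0.0, 0.5, size=(5, 3, 3))
stack[4] += 5.0
print(alpha_trimmed_mean_fusion(stack, alpha=0.2))
print(weighted_fusion(stack, [1, 1, 1, 1, 0.2]))
```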
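
For fusion across disparate sensors, a toy illustration of Dempster's rule of combination at a single DTM cell is given below, where each sensor assigns belief mass to coarse height-bin hypotheses. The bins and mass values are hypothetical, and the full Dempster-Shafer formulation in the proposed work may differ.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    Each mass function maps a frozenset of hypotheses to a mass in [0, 1]
    summing to 1.  Returns the combined, conflict-normalized masses."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb              # mass assigned to disjoint sets
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical single-cell example: hypotheses are coarse height bins (m).
LOW, MID, HIGH = "98-100", "100-102", "102-104"
lidar = {frozenset({MID}): 0.7, frozenset({MID, HIGH}): 0.3}
sar   = {frozenset({MID}): 0.5, frozenset({LOW, MID}): 0.4,
         frozenset({HIGH}): 0.1}
print(dempster_combine(lidar, sar))
```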