Deep Convolutional Neural Networks (DCNNs) have become ubiquitous in the analysis of large datasets with geometric symmetries. These datasets are common in medicine, science, intelligence, autonomous driving, and industry. While analysis based on DCNNs has proven powerful, uncertainty estimation for such analyses has required sophisticated empirical studies. This has limited the effectiveness of DCNNs, motivating the development of a bound on generalization risk.

Uncertainty estimation is a crucial component of physical science. Models can be trusted only to the degree that their limitations and potential inaccuracies are fully understood and accurately characterized. Best estimates can be wrong. Accordingly, the scientific community has invested a great deal of effort in understanding and benchmarking methods of uncertainty quantification, and we will bring a deep knowledge of those tools and traditions to bear on the prediction of uncertainty for DCNNs. There are parallels between astrophysics and computer vision in that many of the ground-truth labels in real-world data are established by human inspection. We will bring the tools and expertise of science to bear on benchmark data in computer vision as well, both for rigor and to create points of reference for a broad community of practitioners and researchers.

Generalization risk, or generalization error, is the difference between the error found in training or validation and the error that arises in application. In empirical studies, this risk is estimated using a blind test sample. However, doing so is costly when data is limited, and such studies are necessarily incomplete, since the blind test sample cannot include all the data to which the DCNN will be applied. This motivates the derivation and study of a mathematical bound on generalization risk.
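As an illustrative sketch of the empirical procedure described above (the function names and the use of misclassification rate as the error metric are assumptions for this example, not part of the proposed work), the generalization gap can be estimated as the difference between the error on a blind test sample and the error on the validation sample:

```python
import numpy as np

def error_rate(y_true, y_pred):
    """Fraction of misclassified examples."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def empirical_generalization_gap(val_true, val_pred, test_true, test_pred):
    """Estimate generalization risk empirically: blind-test error
    minus validation error."""
    return error_rate(test_true, test_pred) - error_rate(val_true, val_pred)

# Toy labels: validation error 0.1, blind-test error 0.2
val_true  = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
val_pred  = [0, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # 1 of 10 wrong
test_true = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
test_pred = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # 2 of 10 wrong
gap = empirical_generalization_gap(val_true, val_pred, test_true, test_pred)
```

The limitation noted above is visible here: `gap` is only as trustworthy as the blind test sample's coverage of the data the model will actually encounter.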
In the norm-based approach, we have found that the interplay between the frequency behavior of the target function and the depth of the neural network determines its approximation error. In this project we will develop a toolbox that returns an a priori bound on generalization risk when provided a DCNN topology and either example data or a functional description of the target. (This bound is a priori in that it is independent of the training sample.) The bound will be applied to ResNet101 and similar DCNNs on tasks in computer vision and astrophysics, and its impact on astrophysics uncertainty analyses will be evaluated. In particular, we will evaluate this uncertainty for classification and regression tasks on images of strong lenses and galaxy mergers. The toolbox will enable improved uncertainty estimation in domains where DCNNs are used, such as astrophysics, particle physics, and computer vision.
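A minimal sketch of what such a toolbox interface might look like. The capacity term used here, a product of layer spectral norms scaled by the sample size, is in the general spirit of norm-based generalization bounds, but the specific formula, function names, and constants are illustrative assumptions, not the bound this project will derive:

```python
import numpy as np

def norm_based_bound(weight_matrices, n_samples, lipschitz_loss=1.0):
    """Illustrative a priori capacity term for a feed-forward network:
    the product of layer spectral norms, scaled by sqrt(depth / n).
    It depends only on the network topology and weights, not on any
    held-out test sample (hence 'a priori')."""
    product = 1.0
    for W in weight_matrices:
        # ord=2 gives the largest singular value (spectral norm)
        product *= np.linalg.norm(np.asarray(W), ord=2)
    depth = len(weight_matrices)
    return lipschitz_loss * product * np.sqrt(depth / n_samples)

# Toy two-layer "network" with spectral norms 1 and 2, 1000 training examples
layers = [np.eye(4), 2.0 * np.eye(4)]
bound = norm_based_bound(layers, n_samples=1000)
```

The design point the sketch is meant to convey is the interface: the bound is computed from the architecture and weights alone, so it can be attached to a trained ResNet101-style model without reserving a blind test sample.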