The Federal Government and private industry have invested heavily in the development of AI analytical tools, yielding substantial returns in scientific knowledge and data-analysis throughput. The size of the data sets used in training is increasing exponentially, while the cost, security, and speed of access to public cloud storage are not improving. Off-the-shelf AI tools coupled with cloud-based data sets in S3 buckets perform poorly and end up being extremely expensive. On the other hand, the convenience of the cloud is that new VPS instances with very different configurations and storage links can be spun up easily. This stands in direct contrast to the tightly controlled infrastructure-management policies in place at most HPC/supercomputing installations. Optimal use of AI often requires revising both the AI tool and the underlying data configuration on the fly; the need to spend weeks or months documenting infrastructure changes makes on-premises AI on big iron inefficient. What is needed is a fully automated IT infrastructure orchestration tool that always provides known optimal solutions for any high-speed data configuration.

The proposer builds automated bare-metal, network, and application orchestration tools designed to maximize performance while making application, network, and HPC deployment easy for all users. The proposed project will implement bare-metal automation and configuration of mixed systems with high-speed data fabrics, block storage, and high-speed network filesystems as a service directly accessible to the cluster service manager. This system will not only configure the system optimally and automatically, it will also maintain logs of system changes that can be correlated with performance for ongoing human oversight.

The proposed Phase I effort will demonstrate the feasibility of an automated high-speed data/storage orchestration tool that runs as a service, allowing seamless scaling from single-CPU instances on workstations (for preliminary investigations) to massively parallel supercluster implementations. Phase I will demonstrate the implementation on top of NetThunder's existing infrastructure orchestration tool.

NetThunder develops proprietary algorithms, tools, and software to address the need for autonomous infrastructure and AI infrastructure automation, which according to Tractica is a $38 billion and growing market. Growth in the AI market is driven by the rapid expansion of on-demand cloud applications, a market that has grown from a few billion dollars in 2008 to nearly $160 billion today; meeting the expectation across all industries that AI implementations can be provided immediately on demand is a significant technical challenge. If carried over into a Phase II, the proposed project will result, within 12 months, in a commercial product capable of implementing high-speed network data fabrics and storage for AI on on-premises HPC. The project takes advantage of extensive development on the core controller platform to create a tool that allows users to set up and manage IT infrastructure from a single pane of glass.
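
For illustration only, the sketch below shows one way such an orchestration request could be expressed declaratively and turned into an ordered bare-metal configuration plan whose steps are logged for later performance correlation. All class and function names (ClusterRequest, plan_deployment, etc.) are hypothetical and do not reflect NetThunder's actual interface.

    # Illustrative sketch only: a hypothetical declarative request that an
    # orchestration service of the kind proposed might accept, plus a stub
    # planner that turns it into ordered bare-metal configuration steps.
    # All names here are invented for illustration, not NetThunder's API.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class FabricSpec:
        kind: str             # e.g. "infiniband" or "roce"
        link_speed_gbps: int  # per-port line rate


    @dataclass
    class StorageSpec:
        block_pool_tb: int     # raw block storage to provision
        parallel_fs: str       # e.g. "lustre" or "beegfs"
        stripe_count: int = 4  # striping for large AI training files


    @dataclass
    class ClusterRequest:
        nodes: int
        fabric: FabricSpec
        storage: StorageSpec


    def plan_deployment(req: ClusterRequest) -> List[str]:
        """Return an ordered, human-readable plan a controller might execute.

        A real controller would apply each step to bare metal and log the
        change so it can later be correlated with observed performance.
        """
        return [
            f"provision {req.nodes} bare-metal nodes",
            f"configure {req.fabric.kind} fabric at {req.fabric.link_speed_gbps} Gb/s",
            f"allocate {req.storage.block_pool_tb} TB block pool",
            f"deploy {req.storage.parallel_fs} with stripe_count={req.storage.stripe_count}",
            "record configuration snapshot for performance correlation",
        ]


    if __name__ == "__main__":
        request = ClusterRequest(
            nodes=8,
            fabric=FabricSpec(kind="infiniband", link_speed_gbps=200),
            storage=StorageSpec(block_pool_tb=100, parallel_fs="lustre"),
        )
        for step in plan_deployment(request):
            print(step)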