Seismic, hydroacoustic, and infrasonic (SHI) analyses possess many desirable properties, such as long-distance wave propagation and attributable spectral characteristics. They have, by and large, been relegated to capturing massive explosions, earthquakes, bolides, and space-bound rocket launches. Although machine learning techniques have been employed on various military and commercial projects, rarely has there been an attempt to marry the adaptive capabilities of machine learning algorithms with all three sources that surround events of interest. Our revolutionary approach proposes to develop augmented machine learning techniques that inherently leverage the existence of all three sensor data types. The inclusion of each data type is of utmost importance: event characterization and the development of propagation models have clearly demonstrated the impact of source types, atmospheric conditions, and terrain geology on acquired SHI waveforms, respectively. This is in contrast to most contemporary methods, which examine one or at most a few features or discriminants to decide on source type, based on physical modeling or observation. Our approach utilizes both, exploiting each sensor data type when available or performing without it for a strictly automated solution. The innovative technical merits we propose have real value that is directly applicable to supporting DTRA's mission.

Keywords: time series, machine learning, pattern recognition, seismic, hydroacoustic, infrasonic, signal processing
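To make the modality-fusion idea concrete, the sketch below illustrates one way a single classifier could consume seismic, hydroacoustic, and infrasonic features jointly while still operating when a sensor type is absent. It is a minimal illustration under stated assumptions, not the proposed system: the band-energy features, availability flags, random-forest model, and synthetic events are placeholders chosen only to show the pattern of fusing all three data types with graceful degradation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RNG = np.random.default_rng(0)

def band_energies(waveform, n_bands=8):
    """Crude per-channel spectral features: log energy in equal frequency bands."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([band.sum() for band in bands])

def fuse_features(seismic=None, hydro=None, infra=None, n_bands=8):
    """Concatenate per-modality features plus availability flags.
    A missing modality contributes zeros and a 0 flag, so the model can
    still classify with whatever subset of SHI channels is on hand."""
    feats, flags = [], []
    for waveform in (seismic, hydro, infra):
        if waveform is None:
            feats.append(np.zeros(n_bands))
            flags.append(0.0)
        else:
            feats.append(band_energies(waveform, n_bands))
            flags.append(1.0)
    return np.concatenate(feats + [np.array(flags)])

def synth_event(label, fs=100.0, n=1024):
    """Synthetic stand-in waveform: two source classes with different dominant tones."""
    t = np.arange(n) / fs
    freq = 5.0 if label == 0 else 20.0
    return np.sin(2 * np.pi * freq * t) + 0.5 * RNG.standard_normal(n)

# Build a toy dataset in which hydroacoustic and infrasonic records are sometimes missing.
X, y = [], []
for _ in range(200):
    label = int(RNG.integers(0, 2))
    seismic = synth_event(label)
    hydro = synth_event(label) if RNG.random() > 0.3 else None
    infra = synth_event(label) if RNG.random() > 0.3 else None
    X.append(fuse_features(seismic, hydro, infra))
    y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

In practice the placeholder band energies would be replaced by physics-informed discriminants (e.g., propagation- and source-dependent spectral measures), but the fusion-with-availability-flags pattern is one straightforward way to let a single automated model use all three SHI data types when present and degrade gracefully when they are not.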