Astronauts suffer from poor hand dexterity in clumsy spacesuit gloves during Extravehicular Activity (EVA) operations, and NASA has a widely recognized but unmet need for novel human-machine interface technologies to facilitate data entry, communications, and control of robots and intelligent systems. In this proposed Phase I research, WeVoice, Inc., plans to design, begin implementing, and evaluate a speech-based human interface system. Loud noise and strong reverberation inside spacesuits make automatic speech recognition (ASR) for such an interface a very challenging problem. WeVoice's proprietary microphone-array signal processing algorithms will be leveraged for speech acquisition. The pros and cons of beamforming versus multichannel noise reduction for ASR will be assessed, and a recommendation for the best front-end technique will be established. Using two ASR programs previously developed at WeVoice, Inc. (one based on HTK, the other written in C/C++), a number of robust methods, ranging from feature transformation and normalization to environmental adaptation, will be validated. In addition, the feasibility of using throat vibration microphones will be explored. The Phase I research also addresses the trade-off between ASR accuracy and system complexity: a comparative study will be undertaken of two system implementation structures, namely wearable and distributed systems. This effort will form a foundation for the prototype design to be conducted in Phase II.
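To illustrate the kind of front-end processing under comparison, the sketch below shows a textbook delay-and-sum beamformer, the simplest form of microphone-array spatial filtering. It is a generic example under assumed inputs (known per-microphone steering delays and a fixed sampling rate), not WeVoice's proprietary algorithm.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Time-align multichannel recordings and average them.

    signals: (num_mics, num_samples) array, one row per microphone
    delays:  per-microphone arrival delays in seconds, relative to the
             talker's direction (steering delays)
    fs:      sampling rate in Hz
    """
    num_mics, num_samples = signals.shape
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Compensate each channel's delay by an integer-sample shift,
        # then sum; coherent speech adds in phase, diffuse noise does not.
        shift = int(round(delays[m] * fs))
        out += np.roll(signals[m], -shift)
    return out / num_mics
```

Because the target speech adds coherently across channels while uncorrelated noise averages down, the output signal-to-noise ratio improves roughly with the number of microphones, which is why beamforming is a natural candidate front end for in-suit ASR.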