Phase II year
2017
Phase II Amount
$1,499,844
Development of a cockpit speech understanding agent for integration with the advanced human-machine interface in the ALIAS program is proposed. The speech recognition engine, grammar, and dictionary developed under previous research efforts will form the basis for the proposed development. The speech understanding agent will semantically decode cockpit voice communications between the pilot-in-command and the first officer, as well as between the cockpit and the ground. Additional cockpit speech corpora and noise recordings will be derived from a variety of sources to enhance the previous datasets. A database will be assembled to accumulate additional training data collected during operations, enabling continuous improvement of the speech understanding agent. The performance of the system will be evaluated under realistic cockpit noise conditions involving multiple pilots and controllers. The speech agent will then be integrated with the human-machine interface system being developed under the ALIAS Phase III program. In addition to its application in the ALIAS program, the speech understanding agent developed under the proposed research can play a significant role in NASA's Reduced Crew Operations program and in enhancing cockpit automation systems in present-day aircraft. Phase III work will commercialize the technology to various military and civil aircraft systems integrators.