We propose to design an integrated robotic architecture with natural language dialogue and high-level planning capabilities that will allow robots to interact with humans with unprecedented naturalness. The design will be based on our DIARC architecture, which has been demonstrated to handle natural spoken interactions robustly in search-and-rescue tasks. We will survey and evaluate the state of the art in integrated natural language processing and dialogue in robotic architectures, as well as in gesture-based interaction. The proposed design will build on this evaluation and provide a complete solution that can be transferred to a wide variety of autonomous vehicles. Specifically, it will feature dynamically adjustable autonomy due to its ability to explicitly represent goals and to use reasoning methods to determine necessary actions when the human commander is busy or under high cognitive load. It will allow robots to understand spoken natural language instructions, robustly handle the disfluencies and speech errors typical of spontaneous speech, and use integrated planning mechanisms to fill in details that instructions leave implicit. We will also investigate the utility of additional non-linguistic interface mechanisms, such as gestures, to supplement natural language interactions in cases where spoken commands are not possible.
Benefit: The anticipated benefit is a general, complete control architecture that is applicable to a large number of mobile platforms and can be easily adapted to different tasks and language interactions. The commercial potential transcends the military domain (e.g., the architecture could be used to control autonomous construction vehicles, farming equipment, service robots, or entertainment robots).
Keywords: design of integrated cognitive architecture, natural human-robot interactions, natural language dialogue, linguistic and non-linguistic interactions, spoken natural language instructions, gesture-based interactions