This project aims to develop an architecture that will enable us to transition multimodal (speech and sketch) technologies for a variety of C3I tasks to the DoD. For example, users will be able to create courses of action, collaborate with other users, invoke simulators, etc., by speaking and sketching on tablet computers, PDAs, and wearable, wall-sized, and paper-based systems. By virtue of a multiagent architecture and interoperation frameworks (e.g., the CoABS Grid), advanced interface technologies will be able to interoperate with DoD information systems, as sketched below. Phase I will involve analysis and extension of our multimodal architecture, particularly to support "intelligent paper," as well as the design of experiments to assess the strengths and weaknesses of multimodal technology in the field. Phase II would then involve further development of the multimodal architecture and the conduct of those experiments.

If this research and development effort is successful, warfighters will be able to interact with command and control systems using speech and sketch, in a variety of circumstances and with equipment in a variety of form factors, notably tablet computers, PDAs, and intelligent paper. The latter will offer ultra-portability and the resolution and well-understood failure modes of paper, while also offering the benefits of digital systems. Users will save substantial time in interacting with existing C3I systems, such as MCS, and will be able to move away from workflows in which both paper maps and computer systems must be maintained in parallel. Instead, by employing "intelligent paper" alone, the user will gain both sets of advantages simultaneously, thereby roughly halving the workload. We thus anticipate overcoming warfighters' resistance to adopting digital systems by providing an interface that fails only in the familiar, well-understood ways that paper does, and so engenders confidence.
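To make the interoperation pattern concrete, the following sketch illustrates how a multimodal interface agent and a back-end C3I system agent could be brokered through a grid-style registry. This is a minimal stand-alone illustration in plain Java: the actual CoABS Grid is a Jini-based framework with a different API, and every name here (AgentRegistry, CapabilityAgent, the "course-of-action" capability string) is hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    /** A capability an agent advertises, e.g. accepting a course of action. */
    interface CapabilityAgent {
        String capability();
        String handle(String request);
    }

    /** Toy stand-in for a grid-style registry that brokers requests
     *  between interface agents and back-end information-system agents. */
    class AgentRegistry {
        private final Map<String, CapabilityAgent> agents = new HashMap<>();

        void register(CapabilityAgent a) {
            agents.put(a.capability(), a);
        }

        /** Route a request to whichever agent advertised the capability. */
        String route(String capability, String request) {
            CapabilityAgent a = agents.get(capability);
            return (a == null) ? "no agent for: " + capability : a.handle(request);
        }
    }

    public class GridSketch {
        public static void main(String[] args) {
            AgentRegistry grid = new AgentRegistry();

            // A back-end agent wrapping a C3I system (the MCS behavior is notional).
            grid.register(new CapabilityAgent() {
                public String capability() { return "course-of-action"; }
                public String handle(String req) { return "MCS accepted: " + req; }
            });

            // A multimodal interface agent forwards a fused speech+sketch
            // interpretation without knowing which system will serve it.
            System.out.println(grid.route("course-of-action",
                    "unit=3BDE route=<sketched line> order=<spoken> 'move here'"));
        }
    }

The point of the pattern is the decoupling: the speech/sketch interface publishes interpretations against advertised capabilities, so the same interface can drive MCS, a simulator, or a collaboration service without modification.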