Significant advances have been made in the representational and reasoning power of AI systems. However, performance issues remain a major concern in applying these advances to real-world projects. Automatic memoization is a technique for mechanically converting existing functions into ones that cache (or "memoize") their results. It has offered the promise of dramatic speedups in many applications, but the practical issues involved in making it useful in large, high-performance systems have not previously been addressed. The prototype Common Lisp Automatic Memoization Package (CLAMP) was developed for the ARPA Signature Management System (SMS), a decision aid for submarine crews. This system must provide timely situation assessment and recommendations to the crew based on various sources of data. Response time and predictable performance were critical, and CLAMP was developed to address those requirements. The cumulative effect of applying automatic memoization was more than a 100-fold speedup of the top-level calculations and more than a 1000-fold reduction in the amount of temporary storage (garbage) that had to be reclaimed at runtime. Similar results were obtained in other applications. We propose to take the prototype system and investigate two issues relevant to transforming it from a research tool into a viable commercial package for software developers: (A) extensions and usability issues suggested by the experiences of users in five countries, and (B) the feasibility of porting the system to a lower-level but more widely used language such as C++.
Keywords: real-time applications, high-performance systems, artificial intelligence, optimization, dynamic
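
For illustration, the following is a minimal sketch of the technique the abstract describes: a function is wrapped so that results are cached in a table keyed on the argument list, and repeated calls with the same arguments return the cached value instead of recomputing it. The names MEMOIZE, DEFINE-MEMO, and FIB are hypothetical and are not the actual CLAMP interface.

    ;; Illustrative sketch of automatic memoization; not the CLAMP API.
    (defun memoize (fn &key (test #'equal))
      "Return a function that caches FN's results, keyed on its argument list."
      (let ((cache (make-hash-table :test test)))
        (lambda (&rest args)
          (multiple-value-bind (value found) (gethash args cache)
            (if found
                value
                (setf (gethash args cache) (apply fn args)))))))

    (defmacro define-memo (name lambda-list &body body)
      "Define NAME as a memoized function: calls with previously seen
    arguments are answered from the cache rather than recomputed."
      `(setf (symbol-function ',name)
             (memoize (lambda ,lambda-list ,@body))))

    ;; Example: a naive Fibonacci definition becomes linear-time once
    ;; memoized, because repeated recursive calls hit the cache.
    (define-memo fib (n)
      (if (< n 2)
          n
          (+ (fib (- n 1)) (fib (- n 2)))))

Because the recursive calls go through the function cell of FIB, they are routed through the memoizing wrapper, which is what allows an unchanged function definition to gain the speedup mechanically.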