The world of parallel processors is seeing a significant lag between hardware and software. Parallel programmers face a problem akin to that faced by early assembly-language programmers: the lack of adequate compilers that would insulate them from low-level architectural details and thereby support rapid program development and portability. The first steps toward this goal have been taken by do-loop parallelizing compilers, used predominantly for parallel FORTRAN. A more general approach, for pointer-based languages, is presented in the Curare work. In this proposal we intend to incorporate both of these techniques and to advance this goal through our compiler expertise. We shall investigate the use of declarations to aid the compiler in the automatic detection of parallelism. Further, we will pipeline and parallelize the compiler itself, thereby further increasing software productivity. We contend that an efficient, machine-independent interface that relieves parallel programmers of having to deal with architectural details will greatly ease the writing of parallel programs, regardless of application.

Anticipated benefits/potential commercial applications - achieving the goals of Phase I will provide a foundation for further compiler parallelization, performance tuning, and the integration of additional parallel-programming productivity tools (such as declarations) in Phase II. Given that an efficient parallel compiler would be the single most important factor in bridging the gap between parallel hardware and software advances, all applications written in parallel languages would benefit from this work, including parallel databases, parallel implementations of neural networks, parallel image and speech processing, distributed AI and expert systems, and other parallel applications.