Auditory displays that encode spatial audio cues have advanced significantly in recent years, yet their ability to convey the distance between a sound source and the listener remains limited. This Phase I SBIR project proposes to investigate algorithms capable of adding distance cues to natural and synthesized speech. The approach involves manipulating the speaker's vocal effort, the quantity that speakers naturally vary when they adapt their speech to an increased or decreased communication distance. Vocal effort affects several time- and frequency-domain characteristics of a speech signal, including the mean and range of the fundamental frequency, certain formant frequencies, sound pressure level, vowel duration, pause length, and spectral emphasis. Phase I work will begin by defining vocal effort level zones and their corresponding speech signal characteristics, together with their relative and absolute distance correspondences to the listener. This will be followed by the design and implementation of signal transforms that detect these characteristics in arbitrary speech inputs and manipulate them to produce outputs with the desired vocal effort level. These modules will be tested and validated through subjective evaluation, e.g., listening tests. A demonstration is planned at the end of the nine-month project.
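To illustrate the kind of signal manipulation envisioned, the following is a minimal sketch, not the project's actual algorithm: it crudely raises apparent vocal effort by boosting overall level and tilting spectral emphasis toward higher frequencies with a first-order pre-emphasis filter. The function names and parameter values are hypothetical, and the sketch assumes only NumPy; a real implementation would also manipulate fundamental frequency, formants, and timing.

```python
import numpy as np

def raise_vocal_effort(x, gain_db=6.0, preemph=0.9):
    """Hypothetical sketch: simulate increased vocal effort by
    (1) tilting the spectrum upward with a pre-emphasis filter and
    (2) raising the overall level, two of the cues listed above."""
    # Pre-emphasis: y[n] = x[n] - preemph * x[n-1] boosts high frequencies.
    y = np.append(x[0], x[1:] - preemph * x[:-1])
    # Level boost associated with louder, more effortful speech.
    return y * 10.0 ** (gain_db / 20.0)

def high_band_energy_ratio(x, sr, split_hz=1000.0):
    """Fraction of spectral energy above split_hz; a crude proxy
    for spectral emphasis."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return spec[freqs >= split_hz].sum() / spec.sum()

# Synthetic vowel-like test signal: 120 Hz fundamental plus harmonics.
sr = 16000
t = np.arange(sr) / sr
x = sum(0.5 / k * np.sin(2 * np.pi * 120 * k * t) for k in range(1, 20))
y = raise_vocal_effort(x)

# The transformed signal carries relatively more high-frequency energy,
# consistent with the spectral-emphasis shift of effortful speech.
print(high_band_energy_ratio(y, sr) > high_band_energy_ratio(x, sr))  # True
```

A listener-facing system would pair such a transform with its inverse (lowering effort) and map each effort level zone to a target distance percept, which is precisely the mapping the Phase I listening tests are designed to establish.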