Recording & Rendering


 

Theory - Mapping from recording to playback 

I will present some background information for a better understanding of the function of the previously discussed 4-microphone array. It is important to know how the "view" from the microphones into the auditory scene at the recording venue is mapped into the "view" that a listener perceives in front of two loudspeakers in the listening room. The microphone array "sees" sound waves arriving from different angles with differing sensitivity. It has no blind spots, just dimmer regions. It also has bright regions to the sides, where all sounds over a wide range of arrival angles are mapped into either the left or the right loudspeaker with little coherence between them. I take the well-known ORTF 2-microphone array as an example of how sound is mapped from the recording site to what is heard at the playback site.

The ORTF microphone array uses two cardioid microphones. Each has a sensitivity to sound pressure that varies as r = 0.5 + 0.5 cos(a) with the angle a from which the sound arrives. The two microphone capsules are separated by 17 cm and are mounted with a 110° angle between them. Thus their polar response curves have their maxima at +/-55°. A sound wave arriving from 45° will produce an output of almost 1.0 from the Right microphone and about 0.6 from the Left microphone. When these two signals are applied to the Left and Right loudspeakers, they produce a phantom image between the loudspeakers, towards the Right loudspeaker. The separation between the microphones has the effect that a sound wave arriving at an angle b reaches one microphone sooner than the other, due to the path length difference of (17 cm) x sin(b). With b = 45° the signal arrives at the Right microphone 354 µs sooner than at the Left microphone. This pulls the phantom image even further towards the Right loudspeaker due to the Precedence Effect of auditory perception. For source angles from 51° to 160° the combined effect of level and arrival time differences between the microphones maps all signals into the Right loudspeaker. Signals between 160° and -160° are again mapped between the Right and Left loudspeakers, though they arrive from the rear of the microphone array. The 102° front section in the polar diagram represents the "Stereo Recording Angle" SRA of the ORTF array. All signals within its range are mapped between the Left and Right loudspeakers. The theory behind this mapping operation is explained on the Stereo recording angles page.
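For illustration, here is a minimal Python sketch of the ORTF geometry. It assumes ideal cardioid patterns, capsule axes at +/-55° and a speed of sound of 340 m/s, which reproduces the 354 µs delay figure; the exact channel levels depend on the real capsules' polar curves, so the computed Left level comes out somewhat lower than the value read from the polar diagram.

import numpy as np

C = 340.0        # speed of sound in m/s (assumed)
SPACING = 0.17   # ORTF capsule spacing in m
AXIS = 55.0      # each cardioid aimed 55 deg off center (110 deg included angle)

def cardioid(off_axis_deg):
    # Ideal cardioid sensitivity r = 0.5 + 0.5*cos(a)
    return 0.5 + 0.5 * np.cos(np.radians(off_axis_deg))

def ortf(source_deg):
    # Capsule levels and inter-capsule delay for a plane wave arriving from
    # source_deg (0 = straight ahead, positive angles towards the Right capsule)
    level_L = cardioid(source_deg + AXIS)
    level_R = cardioid(source_deg - AXIS)
    delay_us = SPACING * np.sin(np.radians(source_deg)) / C * 1e6
    return level_L, level_R, delay_us

L, R, dt = ortf(45.0)
print(f"source at 45 deg:  L = {L:.2f}  R = {R:.2f}  R leads L by {dt:.0f} us")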

Below is a "floor plan" of recording and playback situations to further illustrate the mapping operation.

Stereo playback assumes a symmetrical loudspeaker and listener setup with a 60° listening angle between the loudspeakers, corresponding to an equilateral triangle configuration. In practice this setup is always within a room with reflective surfaces. The loudspeakers must be designed to illuminate the room uniformly, so that reflections can be perceptually ignored and full attention is given to the direct loudspeaker signals, which carry the information from the recording venue.

Presumably the microphone array is set up to capture the source, for example an orchestra, in front of it, and also some of the hall reflections and reverberation that are part of the sonic scene and would be experienced by a listener at the recording site. With the array positioned half the source width W in front of the orchestra, and with the fixed stereo recording angle of 102°, the whole orchestra is mapped along the line between the Left and Right loudspeakers. The 75% points on this line correspond to an angle of 68° on the recording side. Sources from a to b within this angle are mapped between the 75% points in the playback room.

Wide regions B and C to the left and right of the microphone array, each covering a 109° angular sweep, are mapped either into the Left or the Right loudspeaker as dual mono reverberation. No realistic phantom images can be created from these signals. It is not clear to me what the perceptual benefits of these signals might be, and I strongly suspect that they give a false rendering of the recording venue, since they only contain side reflections and reverberation and no direct signals from the orchestra.

To estimate how much the mono reverberation contributes to the total recorded reverberation, I look at the sum of the Left and Right microphone signals versus the source angle. It is shown as the red curve in the polar diagram. The radial lines mark the stereo recording angles. Integrating (L+R) between those lines gives the contribution of each sector. In the ORTF case the mono reverberation makes up 48% of the total reverberation. The 47% Front and 5% Rear pickup are mapped to somewhat realistic phantom images between the Left and Right loudspeakers.
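The integration can be approximated numerically. The sketch below assumes ideal cardioid patterns and the sector boundaries named above (SRA of +/-51°, rear beyond +/-160°); it reproduces the roughly 5% rear figure, while the front/side split comes out a few percentage points different from the numbers quoted, which presumably derive from measured polar responses.

import numpy as np

def cardioid(off_axis_deg):
    return 0.5 + 0.5 * np.cos(np.radians(off_axis_deg))

def sector_fractions(axis_deg=55.0, sra_half_deg=51.0, rear_half_deg=20.0, n=36000):
    # Share of the total (L+R) pickup - the "red curve" - that falls into the
    # front (within the SRA), side and rear sectors of the source angle
    theta = np.linspace(-180.0, 180.0, n, endpoint=False)
    red = cardioid(theta + axis_deg) + cardioid(theta - axis_deg)
    total = red.sum()
    front = red[np.abs(theta) <= sra_half_deg].sum() / total
    rear = red[np.abs(theta) >= 180.0 - rear_half_deg].sum() / total
    return front, 1.0 - front - rear, rear

front, sides, rear = sector_fractions()
print(f"front {front:.0%}   sides (mono) {sides:.0%}   rear {rear:.0%}")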

Contrast the mono reverberation from the cardioids with the pickup from two widely spaced omni-directional microphones, as shown in the floor plan above. At 15 feet (5 m) separation and with sound arriving from b = 45°, the output from the Left microphone is delayed by 10.6 ms compared to the Right microphone. Both respond to the same signal, but probably with different amplitudes. Left and Right loudspeakers reproduce the Left and Right microphone signals with the original delay and amplitude differences. Perceptually the signals can become part of a sound stream and contribute to the sense of recording venue space and reverberation. The 4-microphone array is an experiment in mapping a more realistic image of the recording scene to the loudspeakers. It assumes that the mono reverberation must be minimized, but that directional microphones are necessary to preserve clarity and imaging.
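The same path length arithmetic, with the spacing and speed of sound assumed as before, gives a delay close to that figure:

import numpy as np

d, b, c = 5.0, 45.0, 340.0   # omni spacing (m), arrival angle (deg), speed of sound (m/s, assumed)
delay_ms = d * np.sin(np.radians(b)) / c * 1e3
print(f"inter-omni delay at {b:.0f} deg: {delay_ms:.1f} ms")   # about 10 ms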

It seems to me that the arrival time differences due to the 17 cm separation of the cardioids in the ORTF array are primarily effective above 1 kHz or so, where it makes sense to talk about time differences. The phase shift between the microphones in this high frequency range can be much larger than 180° and thus repeats itself, but inconclusively. At lower frequencies the relative phase shift is unique, though small. I do not know how strongly this affects the mapping in conjunction with the output level differences from the microphones. Thus the 102° SRA may only hold for higher frequencies and may become larger, approaching the 158° of coincident microphones, at low frequencies.
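The inter-capsule phase shift is 360° x f x (17 cm) x sin(b) / c. A short sketch with the same assumed constants shows that it reaches 180° near 1 kHz for sound arriving from the side and cycles through multiple revolutions above that, which is why the time difference cue becomes ambiguous there:

import numpy as np

C, D = 340.0, 0.17   # speed of sound (m/s, assumed) and ORTF capsule spacing (m)

def phase_deg(freq_hz, arrival_deg):
    # Phase shift between the two capsules for a plane wave from arrival_deg
    return 360.0 * freq_hz * D * np.sin(np.radians(arrival_deg)) / C

for f in (250, 500, 1000, 2000, 4000):
    print(f"{f:5d} Hz: {phase_deg(f, 90.0):5.0f} deg (side incidence), "
          f"{phase_deg(f, 45.0):5.0f} deg (45 deg incidence)")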

 

Blumlein microphone setup

It is interesting, in the context of mono reverberation, to look at the Blumlein microphone array, which consists of two crossed bi-directional microphones without spacing between them. The SRA is +/-36° for this configuration. The angle repeats periodically because of the symmetry of the microphones. The mono reverberation is only 4 x 7.5% = 30%. The array exhibits some strange mapping properties, though.

 

Imagine that a sound source moves clockwise around the microphones on a circle with a 10 foot (3 m) radius. Starting at L, the signal appears to come from the Left loudspeaker. It is centered between the speakers at 0° and then continues to move towards the Right speaker up to R. Between R and R2 the signal comes from the Right loudspeaker only. Then it emerges from the Right loudspeaker and moves to the left, getting weaker as it approaches the center line, only to regain strength as it proceeds to the Left speaker and L3. There it stays between L3 and L4. Then it emerges from the Left speaker and moves back to the Right speaker at R4. Note that a signal from the right side behind the microphones is reproduced from the left side of the loudspeaker pair. The left rear between R4 and R3 is mapped into the Right loudspeaker. Between R3 and L2 the signal moves from the Right to the Left speaker, where it stays between L2 and L. The array has equal sensitivity front and back but low pickup from the sides. I doubt that the Blumlein array is optimal for mapping an auditory scene for stereo loudspeaker playback.
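The polarity reversals of the figure-of-eight patterns are what produce these left/right swaps. Below is a minimal sketch, assuming crossed bi-directional capsules aimed at +/-45° and my own angle convention (0° straight ahead, positive towards the Right capsule's side). Negative values mean the capsule picks up the wave in reverse polarity; wherever L and R have opposite signs the two loudspeakers are driven in antiphase and the image becomes weak and ill-defined, as described above.

import numpy as np

def blumlein(theta_deg):
    # Signed gains of crossed figure-of-eight capsules aimed at +/-45 deg
    t = np.radians(theta_deg)
    return np.cos(t + np.pi / 4), np.cos(t - np.pi / 4)   # (Left, Right)

for theta in range(0, 360, 30):
    L, R = blumlein(theta)
    print(f"source at {theta:3d} deg:  L = {L:+.2f}  R = {R:+.2f}")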

 


 
