The Jeffress model (Jeffress, 1948) describes a neural mechanism by which the brain
detects very small differences in the time of arrival of a sound at the two ears,
and thus determines the horizontal (azimuth) origin of the sound. It operates through a combination
of coincidence-detecting neurons and axonal delay lines.
Follow the tabbed panels below from left to right. The first two describe the basic mechanism,
the last two deal with the more advanced topic of phase ambiguity and its resolution.
[Note that the cartoons are qualitative illustrations of the phenomena, not accurate quantitative simulations.]
- ITD
- Basic localization
- Phase ambiguity
- Resolving phase ambiguity
The spatial separation of ears means that sound from an off-centre source will not arrive simultaneously at both ears,
leading to an interaural time difference (ITD).
Click a sound source to start (suggested order: clockwise from the top).
Ear separation: 100 mm
Speed of sound in air ≈ 343 m s⁻¹
Sound from directly ahead arrives
at both ears simultaneously
ITD = 0 ms
Sound arrives at
left ear
Sound arrives at
right ear
Sound has to travel 71 mm further to reach the left ear.
ITD ≈ 0.2 ms
Sound arrives at
left ear
Sound arrives at
right ear
Sound has to travel 100 mm further to reach the left ear.
ITD ≈ 0.3 ms
This is the greatest possible ITD for this head size.
Sound from directly behind arrives
at both ears simultaneously
ITD = 0 ms
ITD ambiguous with sound from directly ahead
ITD ≈ -0.3 ms (minus 0.3 ms)
By convention, sound from the left has a negative ITD.
ITD ≈ -0.2 ms
[Have you tried the top-right source?
It has more explanation.]
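The ITD arithmetic in this panel is easy to check numerically. Below is a minimal sketch (the function and constant names are my own, not from the tutorial), assuming the straight-line extra path lengths shown above and ignoring diffraction around the head:

```python
# Back-of-envelope ITD calculation for the panel above.
# Assumes straight-line path differences; real heads add diffraction
# effects, so treat this as an illustration, not a measurement.

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air

def itd_ms(extra_path_mm: float) -> float:
    """ITD in milliseconds for a given extra path (mm) to the far ear."""
    return (extra_path_mm / 1000.0) / SPEED_OF_SOUND_M_S * 1000.0

print(round(itd_ms(71.0), 2))    # source at 45 degrees: 0.21 ms (~0.2 ms)
print(round(itd_ms(100.0), 2))   # source directly to one side: 0.29 ms (~0.3 ms)
```

This reproduces the ≈0.2 ms and ≈0.3 ms values quoted for the 71 mm and 100 mm path differences.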
Delay lines and coincidence detectors convert a time code (the interaural time difference, ITD) into a line-labelled space code
Click a sound source to start
nucleus laminaris (bilaterally paired)
coincidence detection layer
sound arrives at left ear
sound arrives at right ear
sound arrives at left ear
left signal already part way
through coincidence detector array
sound arrives at right ear
sound arrives at left ear
sound arrives at right ear
right signal already part way
through coincidence detector array
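The delay-line logic animated in this panel can be written out as a toy model (the array size, the 0.05 ms delay step, and all names are illustrative assumptions, not measured values):

```python
# Toy Jeffress array: detector i delays the left-ear signal by i axonal
# steps and the right-ear signal by (N-1-i) steps. The detector whose two
# delayed inputs coincide identifies the ITD, converting a time code into
# a place (space) code. Sign convention as in the first panel: sound from
# the left has a negative ITD.

N = 7            # number of coincidence detectors (illustrative)
STEP_MS = 0.05   # axonal conduction delay per position (illustrative)

def best_detector(itd_ms: float) -> int:
    """Index of the detector where the delayed left/right inputs coincide."""
    left_ear = max(itd_ms, 0.0)     # positive ITD: left ear hears it later
    right_ear = max(-itd_ms, 0.0)   # negative ITD: right ear hears it later
    mismatch = [abs((left_ear + i * STEP_MS) -
                    (right_ear + (N - 1 - i) * STEP_MS)) for i in range(N)]
    return min(range(N), key=mismatch.__getitem__)

print(best_detector(0.0))    # midline source -> middle detector (3)
print(best_detector(0.3))    # source far right -> detector at one end (0)
print(best_detector(-0.2))   # source on the left -> detector 5
```

Note the place-code property: which neuron fires, not when it fires, reports the source location.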
Pure-tone (single-frequency) sounds result in phase ambiguity: phantom sound sources indistinguishable from the real one
Click the sound source to start
Afferents in left and right ears are phase-locked to the sound but do not respond to every cycle. It is random which cycle either side responds to.
only right responds to this wavefront
by chance, left and right respond to same wavefront
only left responds to this wavefront
sound arrives at left ear
sound arrives at right ear
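A little arithmetic shows why phase-locked afferents create phantoms. A minimal sketch (the 2 kHz example tone and the function name are my own illustrative choices):

```python
# Phase ambiguity for a pure tone: a phase-locked afferent conveys only
# the phase of the wavefront it fires on, so ITDs that differ by a whole
# number of periods of the tone are indistinguishable.

def interaural_phase(itd_ms: float, freq_khz: float) -> float:
    """Interaural phase difference in cycles, wrapped to [0, 1)."""
    period_ms = 1.0 / freq_khz
    return (itd_ms / period_ms) % 1.0

# 2 kHz tone (period 0.5 ms): a true ITD of 0.2 ms gives the same phase
# difference as a phantom ITD of -0.3 ms, exactly one cycle away.
print(round(interaural_phase(0.2, 2.0), 3))    # 0.4
print(round(interaural_phase(-0.3, 2.0), 3))   # 0.4 -> phantom source
```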
Broad-spectrum (noise) sound resolves phase ambiguity because only the correct location sums across all frequencies at the space-map integrator.
Click the sound source to start
Each ear has multiple detectors (afferents) tuned to different frequencies. Each detector feeds into a separate coincidence detecting layer.
The multiple coincidence detecting layers feed into a single space-map integrator that sums input from all the layers on a point-by-point basis.
Just two frequencies are used in this illustration.
high freq
short wavelength
note wavelength of low frequency tone
and shorter wavelength of high frequency tone
actual sound pressure waveform (= low + high)
left and right respond to same wavefront for low and high frequency
only right responds to wavefront for low and high frequency
summed input from coincidence detecting layers generates a spike in the space-map integrator neuron at the correct location for this sound source
left responds to next high frequency wavefront
NO spikes occurred in any space-map integrator neuron when left-right afferent responses are offset by one wavefront
left responds to next low frequency wavefront
wavefronts offset just for illustration clarity
low frequency layer activates space-map neuron
high frequency layer activates same space-map neuron
subthreshold EPSP with wrong localization
coincidence in just low frequency layer
coincidence in just high frequency layer
Click the sound source to re-start
Thus:
When frequency-specific afferents at left and right ears respond to the same wavefront, the coincidence detectors all respond with the correct localization. These sum at the integrator to produce the correct space-map response.
When left and right afferents respond to different wavefronts of the same frequency, then coincidence detectors show phantom responses, but these are usually at different locations at each frequency, and fail to sum at the integrator.
Phase ambiguity is resolved.
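The summation logic above can be sketched as a toy "voting" model (the two frequencies, the ITD range, and all names are illustrative assumptions): each frequency layer proposes every ITD consistent with its interaural phase, and only the true ITD is proposed by every layer.

```python
# Toy space-map integration across two frequency layers. Each layer
# "votes" for every ITD consistent with its phase (true ITD plus whole
# periods); intersecting the votes leaves only the true ITD, mirroring
# the point-by-point summation at the space-map integrator.

def candidate_itds(true_itd_ms: float, freq_khz: float,
                   max_itd_ms: float = 0.5) -> set:
    """All ITDs within range that share the tone's interaural phase."""
    period_ms = 1.0 / freq_khz
    return {round(true_itd_ms + k * period_ms, 6)
            for k in range(-5, 6)
            if -max_itd_ms <= true_itd_ms + k * period_ms <= max_itd_ms}

low = candidate_itds(0.2, 2.0)    # 2 kHz layer: period 0.5 ms
high = candidate_itds(0.2, 3.2)   # 3.2 kHz layer: period 0.3125 ms
print(sorted(low))                # true ITD plus a phantom at -0.3 ms
print(sorted(high))               # true ITD plus different phantoms
print(sorted(low & high))         # only the true ITD survives: [0.2]
```

Each layer's phantoms sit at different ITDs because the periods differ, so only the true location is reinforced at every frequency.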
The Jeffress mechanism was originally entirely hypothetical, but there is now strong
evidence that something like it operates in the nucleus laminaris of birds
(Carr and Konishi, 1988). The mechanism may also operate in the medial nucleus
of the superior olivary complex in mammals, although the evidence there is more contested (Grothe et al., 2010).
References
Carr, C.E. & Konishi, M., 1988. Axonal delay lines for time measurement in the owl’s brainstem. Proceedings of the National Academy of Sciences of the United States of America, 85, pp.8311–8315.
Grothe, B., Pecka, M. & McAlpine, D., 2010. Mechanisms of sound localization in mammals. Physiological Reviews, 90, pp.983–1012.
Jeffress, L.A., 1948. A place theory of sound localization. Journal of Comparative and Physiological Psychology, 41(1), pp.35–39.