Yes, you can use headphones to build mixes that translate to a wide range of systems. The place to begin is how to balance instruments across the frequency spectrum.
In our first post on mixing with headphones, we discussed whether it was possible to use headphones for accurate mixing. The conclusion was that yes, you can mix with headphones, as long as you have headphones with a flat frequency response. Now we’ll talk about how it’s done.
WHAT TO LISTEN FOR
In order to mix properly across the frequency spectrum, you have to know what to listen for, based on monitor type. For example, when professional mixers said they used the 5″ Auratone speaker for the brunt of their mixing to ensure mix translation, aspiring mixers ran out and bought them, only to discover that the sound was so different from what they were accustomed to hearing in full-range monitors that they didn’t know what a proper mix on Auratones should sound like.
Fortunately, headphones are not so different from full-range monitors in terms of how we hear the frequency spectrum, though a good deal of that similarity is illusion. It follows that to mix properly on headphones, you have to know a little about how they produce sound and, more importantly, what to listen for.
HOW TO BLEND KICK DRUM AND BASS IN HEADPHONES
Due to the laws of physics, small headphone transducers can’t reproduce the sounds of bass and kick drum with the impact you would hear on larger speakers in a room. However, that doesn’t mean that you can’t mix low-end instruments effectively. The thing to listen for is clarity and presence of kick and bass in relation to the other instruments.
With bass guitar, you’re mainly hearing harmonics an octave above the fundamental. As such, rather than deep bass, you want the bass to be audible. The first- and second-order harmonics should be full and present. With kick drum, you’re mainly hearing attack more so than the deep resonance of the kick drum.
HOW THE PROS DO IT
There are a number of philosophies regarding the blending of bass and kick drum. Eric Sarafin, aka Mixerman (Pharcyde, Foreigner, Ben Harper), states that one should appear on top of the other in the sound field; happily, this positional effect is easier to hear in headphones than on speakers. As mix engineer Andrew Scheps (Red Hot Chili Peppers, Adele, Jay-Z) puts it, “One has to rule the low end.” Whether it’s bass or kick drum doesn’t matter; one of them gets the preponderance of low frequencies. According to producer-mixer Michael Wagener (King’s X, Metallica, Alice Cooper), kick lives at around 60Hz and bass at around 100Hz. To separate them, apply a steep low-cut (or high-pass) filter to the bass below 100Hz. This also implies that bass will tend to sit on top of kick.
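As a rough illustration of Wagener’s split, the steep low-cut below 100Hz can be sketched in a few lines of DSP. This is a hypothetical SciPy example, not any particular plug-in; the 8th-order (roughly 48dB/octave) Butterworth slope and the test tones are assumptions chosen for the demo.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100  # sample rate in Hz

# Steep high-pass for the bass track: 8th-order Butterworth at 100Hz,
# leaving the region around 60Hz to the kick drum.
sos = butter(8, 100, btype="highpass", fs=fs, output="sos")

# Demo signal standing in for a bass track: energy at 60Hz (where the kick
# lives) plus a 200Hz harmonic (where the bass should stay audible).
t = np.arange(fs) / fs  # one second of audio
bass = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 200 * t)
filtered = sosfilt(sos, bass)

def level(signal, freq):
    """Magnitude at one FFT bin (1s of audio gives 1Hz-wide bins)."""
    return np.abs(np.fft.rfft(signal))[freq]

print(level(filtered, 60) / level(bass, 60))    # 60Hz content: heavily attenuated
print(level(filtered, 200) / level(bass, 200))  # 200Hz content: essentially untouched
```

The steep slope is the point: a gentle 6dB/octave cut at 100Hz would still leave plenty of bass energy at 60Hz fighting the kick.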
Since the deep resonance we like to feel coming from speakers doesn’t occur in headphones, you simply have to resist mixing the bass and kick too loudly to compensate. Once you have the bass and kick drum working together nicely, all that remains is to switch to full-range monitors and adjust the lower frequencies.
BALANCING THE MIDRANGE
With bass under control, it’s time to move on to the midrange. Since our ears are mostly attuned to midrange sounds and most of our instruments live there, it follows that the midrange requires special attention. Headphones make it easier to balance the mids, since you are not distracted by heavy bass and bright high-end. In fact, with the mids properly dialed in, bass and treble frequencies need only be in the ballpark for your mix to sound the same everywhere.
Midrange instruments tend to fight for attention. The goal is to make sure that all instruments can be heard, yet still work together as a unified whole. One of the more commonly advised approaches is to use equalization to “carve out space” for various instruments to occupy. In reality, scooping out frequencies of one instrument to make room for another can have the opposite effect, softening the very instrument or vocal you want to poke through at that frequency. Remember that frequency is not pitch, nor does it remain static throughout a mix. Carving out frequencies is a moving target and only works if both instruments are playing in the same range at the same time.
HOW TO EQUALIZE FOR BALANCE
Another approach is “subtractive EQ”: cutting low-midrange frequencies dramatically to make room in the mix. While it is easier to cut instruments to make space and depend on the ear to make up the difference, thinning out instruments can come back to bite you, particularly if an instrument you’ve thinned out becomes a featured instrument later in the mix.
So, how do you EQ the midrange to make instruments stand out? Here is what not to do. Do not equalize an instrument in isolation and then drop it back in the mix. Do not sweep frequencies; doing so just confuses your ears, since they can’t retain the differences between the frequencies you swept. Now, here is what to do:
1. Listen to instruments in context.
2. If you are trying to EQ a guitar and piano to fit together, listen to the instrument you are not equalizing. If you’re EQing the guitar, listen to the piano until you can hear both instruments clearly. Oddly enough, it will feel as though you are EQing the piano.
3. Don’t sweep. Use your best guess and choose which frequency range you think needs boosting or cutting. If it doesn’t help, try another. As you do this, you’ll be training your ear to hear the effects of EQ across the frequency spectrum. Sweeping cannot do this.
4. Remember that mixing is often counterintuitive. For example, mixers will look for “ugly” frequencies to cut to make an instrument sit better in a mix. However, sometimes boosting an ugly frequency is exactly what you need. For example, to bring out a timbale in a mix, boosting its honkiest frequency (1kHz) actually gives it clarity in a busy mix.
HOW TO EQUALIZE WITHOUT EQ
EQ tip number four brings up an important point about making instruments stand out in a mix. Equalization is not always the best tool. EQ adds amplitude at various frequencies, which eats up headroom. Using distortion, on the other hand, will make an instrument stand out without boosting frequencies, while leaving space in the mix. Distortion generators come in hardware form, such as Thermionic Culture’s Culture Vulture, or software, such as the Soundtoys Decapitator. However, if those are out of reach, most DAWs come with a distortion plug-in. Play with it. You’ll find a little tube overdrive or a tiny bit of ring modulation will go a long way. Distortion is a great way to bring toms out in a mix where EQ often fails.
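To see why distortion adds presence without eating headroom, consider a minimal waveshaper sketch, a generic tanh soft clipper, not a model of the Culture Vulture or Decapitator; the drive value and test tone are assumptions for the demo.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 220 * t)  # clean fundamental, e.g. a tom near 220Hz

def soft_clip(x, drive=4.0):
    """tanh waveshaper, normalized so full-scale input maps to full scale."""
    return np.tanh(drive * x) / np.tanh(drive)

driven = soft_clip(tone)
spec_clean = np.abs(np.fft.rfft(tone))
spec_driven = np.abs(np.fft.rfft(driven))

# The clipper generates odd harmonics (660Hz, 1100Hz, ...) that the clean
# tone lacked, yet the peak level stays bounded below full scale. The added
# harmonics are what make the instrument cut through, with no level boost.
print(spec_driven[660] / spec_driven[220])  # audible new 3rd harmonic
print(np.max(np.abs(driven)))               # peak stays below 1.0
```

That is the trade in a nutshell: EQ buys presence by spending headroom; distortion buys it by adding harmonics while the peak level stays put.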
Another trick is using an enhancer or exciter, such as the Aphex Aural Exciter. The Aphex splits off high frequencies, compresses them, and mixes them back into the signal chain. Exciters are great for bringing acoustic guitars forward in a mix without taking up the space the EQ would.
If you don’t have an exciter, you can make your own with a compressor plug-in and a high-pass filter. Copy the track you want to enhance and use the high-pass filter to cut everything below 6kHz. Insert the compressor on the copy, set a high compression ratio and a low threshold (so the compressor acts continually), and boost the output gain. It won’t sound like much by itself, but when you blend that track in with the original instrument track, you’ll hear the difference. Et voilà, instant enhancement.
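The recipe above can be sketched as code. This is a toy illustration using SciPy and a bare-bones envelope-follower compressor, not a model of the Aphex circuit; the filter order, release time, threshold, and blend amount are all assumptions, while the 6kHz corner and the high-ratio, low-threshold, boosted-output settings come from the recipe itself.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100

def diy_exciter(track, blend=0.3):
    """High-pass a copy at 6kHz, compress it hard, blend it back in."""
    # 1. Duplicate the track and keep only content above 6kHz.
    sos = butter(4, 6000, btype="highpass", fs=fs, output="sos")
    highs = sosfilt(sos, track)

    # 2. Crude compressor: envelope follower with a low threshold and a
    #    high (10:1) ratio, so it acts continually on the highs.
    env = np.maximum(np.abs(highs), 1e-9)
    alpha = np.exp(-1.0 / (0.005 * fs))  # ~5ms release smoothing
    for i in range(1, len(env)):
        env[i] = max(env[i], alpha * env[i - 1])
    threshold = 0.01
    gain = np.where(env > threshold, (threshold / env) ** 0.9, 1.0)  # 10:1 ratio
    squashed = highs * gain * 8.0  # boosted output gain

    # 3. Blend the squashed highs back in with the original track.
    return track + blend * squashed

# Demo: a dull source with a strong 200Hz fundamental and faint 8kHz detail.
t = np.arange(fs) / fs
dull = 0.5 * np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 8000 * t)
bright = diy_exciter(dull)
```

Because the blended copy contains almost nothing below 6kHz, the lows and mids are left untouched; only the high-frequency detail comes forward.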
LET’S GET HIGHS
High frequencies don’t require special attention the way bass does. In fact, once you have the lower frequencies under control, the highs pretty much take care of themselves. Digital recording handles highs quite well without any help; the constant boosting of highs in years past was there to counteract the high-frequency loss from tape passing repeatedly over record/playback heads. Indeed, if you were to put hit songs from different decades through a spectrum analyzer, you’d discover that despite sounding bright and clear, they have less high-frequency content than you would expect.
One thing to avoid is boosting the same high frequency (e.g., 10kHz for “air”) on several tracks, as the buildup will effectively mask high frequencies above it, making for a dull-sounding mix. High-frequency buildup is why mastering engineers will de-ess a mix between 9kHz and 13kHz, which, despite reducing high frequencies, actually makes the mix sound brighter. It sounds counterintuitive, but reducing highs on certain instruments can make them clearer in the mix, especially when they’re not all fighting each other at the same frequency.
TO SUM UP
1. Getting control of bass frequencies in headphones means achieving clarity in relation to the rest of the frequency spectrum. Remember, what you are hearing is not the fundamental but the first- and second-order harmonics.
2. When it comes to EQing the mids, don’t sweep. Always listen in context, and in particular, listen to the instrument(s) you are not equalizing.
3. Use distortion or enhancement to bring instruments forward in a mix without losing headroom.
4. Resist boosting high frequencies, especially at the same frequency across several tracks, which has the adverse effect of masking highs above that frequency, making the mix sound dull. Overdone high boosts can also make a mix harsh.
That wraps up part two of the series on how to mix with headphones. Next, we’ll talk about differences in perception of the stereo field in headphones and studio monitors and how it affects panning choices.
Barry M Rivman • exclusively for Sonarworks
Barry is owner and chief engineer of Sound Suite Studios in Medford, Oregon, and former senior pro audio staff writer for Musician’s Friend.