December 19, 2018 – You know the age-old question: “If a tree falls down in the woods, and there’s nobody there to witness it, does it make a sound?”
The study of Psychoacoustics attempts to answer that question and provide deeper insight into why we perceive sound the way we do. It falls under the larger scientific field of Psychophysics, which deals with the relationship between physical stimuli and human perception. The core idea is that sound is not purely a mechanical phenomenon but a perceptual event: until vibrations reach the ear and are decoded by the brain, they are merely vibrations moving through the air.
Understanding the core principles of how the ear perceives sound will greatly improve the way you mix and produce music. Let’s discuss some of the main principles of Psychoacoustics and how to use them to our benefit.
Localization & the Phantom Image
“Sound Localization” describes the process by which the brain decodes information from the environment and pinpoints where in space sounds originate. In nature, dolphins and bats use a similar technique to navigate, and some humans have even trained their auditory systems to take over where visibility is reduced.
Because humans have an ear on each side of the head, a sound presented at equal loudness to both ears will be placed by the brain head-on, directly in front of us. The resulting image in our minds is called the “Phantom Image” or “Phantom Centre”, as it tricks the mind into believing that a sound source is originating from a location that is not actually generating any sound.
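To make the idea concrete, here’s a minimal sketch of a constant-power pan law in Python (the function name and NumPy usage are my own illustration, not any particular plugin’s API). At a pan position of 0, both channels receive identical gain, which is exactly the condition that produces the phantom centre:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal into stereo.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Equal gain in both channels (pan = 0) places the phantom
    image dead ahead, between the two speakers.
    """
    angle = (pan + 1.0) * np.pi / 4.0       # map -1..+1 onto 0..pi/2
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)

# A centred 440 Hz tone: cos(45 deg) == sin(45 deg), so L and R match.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stereo = constant_power_pan(tone, pan=0.0)
```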
For the mixing engineer, understanding and using this principle can make or break a mix. Being able to control the placement of sounds in the stereo spectrum is very important for getting a “wide” mix, and it can really improve the audibility of each instrument in the arrangement. You generally want your lead vocals to sit right up front and centre in the mix, so making sure the vocal signal reaches the left and right ears at the same time and at the same volume will help that lead vocal sit in the centre and cut through the surrounding elements.
When mixing an acoustic drum kit, a technique that can really improve the accuracy of the resulting stereo image is to place each element of the kit as it would sit in physical space. Mix the kick drum to the centre, the snare ever-so-slightly to the right of that, and the hi-hats a bit further to the right. Mix the ride cymbal to the left, and spread the rest of the cymbals and toms across the stereo field. The idea is to be able to close your eyes and visualize the drum kit standing in front of you, with each element localizing accurately from where it is placed.
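As a rough illustration of that layout, here is a hypothetical set of pan positions fed through the pan function sketched earlier; the exact values are a matter of taste, not a rule:

```python
# Illustrative drummer's-perspective pan positions
# (-1 = hard left, 0 = centre, +1 = hard right).
drum_pans = {
    "kick":       0.0,
    "snare":      0.1,    # ever-so-slightly right of centre
    "hihat":      0.35,
    "ride":      -0.4,
    "tom_high":   0.25,
    "tom_floor": -0.3,
}

# Assuming a hypothetical load_stem() that returns equal-length mono
# NumPy arrays, the panned kit is just the sum of the panned stems:
# mix = sum(constant_power_pan(load_stem(n), p) for n, p in drum_pans.items())
```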
Equal-Loudness Contours
The study of equal-loudness contours helps us understand the nuances of the human ear, and more importantly, how the ear reacts to different sound pressure levels. Understanding how the ear works is one of the most important aspects of producing a balanced mixdown. Through a series of experiments, researchers were able to deduce that the human ear has different sensitivities at different frequencies, and furthermore, that this sensitivity curve changes with amplitude.
On average, the human ear is most sensitive to frequencies around 3.5 kHz, with heightened sensitivity across roughly 1–5 kHz, a range that happens to correspond with the most audible and intelligible part of the human voice. This range is also occupied by the electric guitar, so it’s no wonder our ears pick up vocals and guitar sounds so well.
If you look at a set of equal-loudness contours, you’ll notice that the curve around 90 dB is the flattest. Not only does this give us an idea of why music sounds different at different volumes, it also gives us a pretty accurate idea of what playback level will give us the flattest frequency response. While 90 dB is probably way too loud for your average bedroom studio setup, understanding the principle will greatly improve how you work.
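For a numerical feel of how skewed the ear’s response becomes at lower levels, here’s a small sketch that evaluates the standard A-weighting curve, a rough approximation of the inverse of a low-level (~40-phon) equal-loudness contour; the constants come from the IEC 61672 definition:

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting in dB (IEC 61672): a rough stand-in for how
    much the ear discounts each frequency at modest levels."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.0

# 100 Hz reads roughly 19 dB quieter than 1 kHz at low levels,
# while the 2-4 kHz region actually reads slightly hotter.
for freq in (100, 1000, 3500):
    print(f"{freq:>5} Hz: {float(a_weighting_db(freq)):+.1f} dB")
```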
If you take all of this into account, for both your own ears and the ears of your listeners, it will help you create a better mix and understand what listeners are most likely to hear. It also helps you predict how your mix will translate to a larger, louder system.
Frequency Masking
Frequency masking deals with how the ear groups close frequencies into bands, and more specifically how a louder sound can render quieter sounds within the same narrow frequency band inaudible. Compression codecs like MP3 take advantage of frequency masking by removing frequencies the algorithm deems inaudible in the given material. The result is obviously lossy, but for online and streaming platforms, MP3 is one of the formats we have to deal with.
In mixing, understanding frequency masking helps create separation. The trick is to let sounds occupy their own space: if you have a kick and a bass sound hitting at the same time, you may want to EQ some low end out of one of them, or apply sidechain compression to make sure they’re not occupying the same space at the same moment. Conversely, if you want to layer sounds together and have them feel homogenized, it may be a good idea to group them within a specific frequency band.
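Here’s a minimal sketch of the sidechain idea, assuming the kick and bass are equal-length mono NumPy arrays at a known sample rate (the function and its parameters are illustrative, not any particular compressor):

```python
import numpy as np

def sidechain_duck(bass, kick, sr=44100, depth=0.7, release_ms=80.0):
    """Duck the bass by up to `depth` while the kick is loud.

    A one-pole envelope follower tracks the kick's level; the bass
    gain drops as the envelope rises, so the two sounds don't fight
    over the same low-frequency space at the same moment.
    """
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(kick))
    level = 0.0
    for i, x in enumerate(np.abs(kick)):
        level = max(x, level * coeff)    # instant attack, slow release
        env[i] = level
    env /= max(env.max(), 1e-9)          # normalise envelope to 0..1
    return bass * (1.0 - depth * env)
```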
Missing Fundamental
There’s a phenomenon in the study of Psychoacoustics referred to as the “missing fundamental”. The theory states that the fundamental tone of a harmonic series does not have to be present for our ears to perceive its pitch. It is widely accepted that our brains process the harmonic information in the overtones and calculate the fundamental frequency of a sound based on the spacing between those overtones. While everyone’s ears perceive sound slightly differently, it is widely regarded that the more harmonic content is presented to the ears, the easier it is for the brain to calculate the fundamental, even in situations where the fundamental is removed completely.
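You can demonstrate this to yourself in a few lines of code. The sketch below (plain NumPy; write the array to a WAV file with whatever library you prefer) builds a tone from the 2nd through 6th harmonics of 110 Hz while never generating the 110 Hz partial itself, yet most listeners will still hear a pitch of 110 Hz:

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr     # two seconds of samples
f0 = 110.0                     # the "missing" fundamental

# Sum harmonics 2..6 only: 220, 330, 440, 550, 660 Hz. The 110 Hz
# spacing between the partials still implies a 110 Hz pitch.
tone = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(2, 7))
tone /= np.abs(tone).max()     # normalise to avoid clipping
```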
Thankfully, most real-world sounds are made up of complex arrays of harmonics, so removing a single harmonic won’t affect the perceived pitch. When you take this phenomenon into account, it opens up interesting ways of placing your sounds in the frequency spectrum. You may not need to worry too much about removing some sub-bass from a bass guitar in order to get the kick to sit better, because the listener’s ear will still infer the bass guitar’s fundamental from its remaining harmonics anyway.
In synthesis and sound design, this opens up interesting ways of adjusting the timbre of your sounds, especially if you’re using additive or spectral synthesis. A number of subharmonic synthesis plugins work on a similar concept, although after removing the fundamental tone they replace it with a clean sine wave. This is a great way to revitalize the low end of a signal that may be a bit weak.
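As a closing sketch, here’s one naive way such a subharmonic effect could be approximated, using an autocorrelation pitch estimate plus a clean sine an octave below it. Real plugins are far more sophisticated, and every name and parameter here is an assumption for illustration:

```python
import numpy as np

def add_subharmonic(signal, sr=44100, amount=0.5):
    """Estimate the (single, static) fundamental of a mono signal by
    autocorrelation, then blend in a clean sine one octave below it."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = sr // 1000, sr // 50          # search pitches of 50-1000 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag                          # estimated fundamental
    t = np.arange(len(signal)) / sr
    sub = np.sin(2 * np.pi * (f0 / 2.0) * t)
    return signal + amount * np.abs(signal).max() * sub
```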