My Take on Music Recording with Doug Fearn

Audiophiles Guide to Music Recording - Part 1

January 18, 2024 Doug Fearn Season 1 Episode 87

My Take on Music Recording is primarily aimed at people in the professional recording world, but there are a significant number of listeners who are music lovers and audiophiles. This episode provides an overview of the recording process for them. However, I think even people in our profession might enjoy how I attempt to explain the recording studio process in layman’s terms.

This reflects my experience and how I work as a producer and engineer. I tend to carry over the tools and techniques that I have learned over the last five decades. They work best for me and my style of recording. I know that there are other approaches, and I try to acknowledge and explain those, too. But my focus is on what I do, which isn’t always mainstream.

There is a lot to cover, so this topic is split into two episodes. I will publish the second half next week.

Your feedback on these episodes is especially interesting to me. Tell me what you think.

As always, thanks for listening, commenting, and subscribing. I can always be reached at





I’m Doug Fearn and this is My Take on Music Recording


This podcast is usually made for people who are in the professional recording industry, but today’s episode is mostly for audiophiles. It takes a look at what goes on in the recording studio. I believe this is somewhat mysterious to many music lovers, who may know a huge amount about the listening side of things, but have never really understood the process that creates the recordings they love.

Most music lovers have an image of the recording studio from what they see in movies and in videos. The reality is often quite different.

The recording process is a complex topic, so my explanation is going to be somewhat simplified and incomplete. If listeners want to know more about some aspect of this, please drop me an email telling me what you would like to learn more about. You can send it to

And you might find some of the previous 86 episodes interesting, especially those that go into more depth on many of the topics covered in this episode.


First, a little history.

We think of audio recording as starting with Thomas Edison’s invention of the phonograph. Edison was great at turning ideas into commercial products, but like most inventions, the phonograph was based on years of prior research and experimentation.

Edison’s phonograph used cylinders as the recording and playback medium. The recording medium was made from wax or metal foil. In either case, a stylus engraved a continuous spiral groove around and across the cylinder. Sound was collected by a large horn with a diaphragm at the narrow end, and this was physically coupled to a cutting stylus. It was an entirely mechanical system. It never sounded very good.

The cylinder was soon replaced by a disc, which had certain advantages, like using less shelf space for storage of the records, and easier replication, but also introduced some new shortcomings, like decreasing linear speed as the stylus moved towards the center of the disc.

The crudeness of the early recording systems didn’t stop them from being a commercial success. People raved about the phonograph’s “perfect” reproduction of the original music.

It wasn’t perfect, of course. And I would suggest that we still have a long way to go to achieve perfect reproduction of music. However, we can create effective recordings with the technology we have today.

The system remained entirely mechanical until the late 1920s when it became electro-mechanical. A microphone picked up the sound instead of a horn. The microphone signal was amplified by vacuum tube amplifiers so that it could drive an electromagnetic cutting head coupled to the cutting stylus.

That system has remained essentially the same for nearly 100 years, although there have been significant improvements in the sound quality. Stereo records became practical in the late 1950s, for example.

In 1982 the compact disc brought digital audio to the consumer. CDs had several advantages over vinyl discs and quickly replaced vinyl as the physical medium of choice. Still, many people love the sound of vinyl, and it can sound great when done well. But for those of us who can compare the original recording to the LP, there is a world of difference. This is a topic for a future episode.

Today, the CD format has passed its peak of popularity, and vinyl albums actually outsell CDs now. Despite that, the sale of music on physical audio formats is only a fraction of what it was 30 years ago.

Today, listeners can access millions of recordings via streaming services. It’s not high-quality audio because of the data-reduction compression schemes. Those compressed formats are surprisingly good, considering how much of the original data is discarded, but still far from high resolution.


On the recording side, the technology of 120 years ago started with a bunch of musicians huddled around a large horn. The quietest instruments were placed closest to the horn with the louder ones in the back. There was no other way to control the balance. Musicians had to be close to the horn or else the recording level would be too low. This was improved by the “electrical” method of recording.

All recording was monaural, of course. Although there were some fascinating experiments with stereo, it was not an option with the early recording systems.

With the development of vacuum tube electronics, the option to use more than one microphone emerged. This was largely driven by radio broadcasting, and, in fact, through the 1950s most recordings were made with modified broadcast mixing consoles, or with consoles that a studio’s engineering department built itself.

The pre-vinyl records were made of shellac and rotated at 78 RPM, which limited the time per side to under 4 minutes. The long-playing vinyl record was introduced in 1948 and rotated at 33 and 1/3 RPM. It could accommodate about 22 minutes per side, or a bit more with reduced fidelity.

Tape recording came along in the early 1950s and changed the approach to recording entirely. Now tape machines could have multiple independent recording tracks, which opened the recording process up to more versatile and creative ways of recording. And tape could be edited, assembling the best parts from several performances.

The era of multitrack tape recorders, and mixing consoles developed specifically for the recording studio, started in earnest in the 1960s and reached its peak in the 1990s. After that, most studios converted to digital recorders, which were crude in the beginning, but gradually improved as computer technology allowed higher sample rates and greater bit depth.

Today, most recordings are made using a computer that captures the digital audio on a hard drive, typically 24-bit at a sample rate of 96kHz.
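For readers who like to see the arithmetic, those numbers translate directly into storage requirements. A minimal sketch in plain Python; the 24-track session count is just an illustrative assumption:

```python
# Uncompressed PCM data rate: samples/sec * bytes/sample * channels.
def pcm_bytes_per_second(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    return sample_rate_hz * (bit_depth // 8) * channels

# 24-bit / 96 kHz stereo, as mentioned above:
stereo = pcm_bytes_per_second(96_000, 24, 2)
print(stereo)                  # 576,000 bytes every second
print(stereo * 60 / 1e6)       # about 34.6 MB per stereo minute

# A hypothetical 24-track session multiplies accordingly:
print(pcm_bytes_per_second(96_000, 24, 24) * 60 / 1e6)  # about 415 MB per minute
```

Even a modest multitrack session at these rates fills storage quickly, which is part of why data-reduced formats dominate on the delivery side.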

Higher-resolution digital formats exist. DSD captures music in an improved way. However, DSD has little traction in the recording world due to its severe limitations on how the sound can be manipulated. For example, it is not possible to mix the tracks of various recorded instruments and voices in the DSD format. Mixing requires either converting the DSD files to PCM, that is, WAV files, or converting the files to analog and combining them with a conventional analog mixer.


In the Recording Studio

Today, most recording happens in recording studios of various sizes and quality. Let’s look at what actually happens in a recording session.

What I describe here is a traditional way of recording music. There are other approaches that work, too.

First there is a lot of prep work necessary to determine what pieces of music will be recorded. I will call them “songs,” but it applies to classical music as well.

The musicians who will play on the recording are chosen and booked. That might be simple if the artist is a solo act, or if it is a band of players who work together all the time. Or if it’s an orchestra. Other sessions, more than you might imagine, use musicians who have specialized in recording. They are the “studio musicians.”

A studio musician is a unique breed of player, adept at quickly understanding what a song needs, based on years of experience. They often read music, playing parts that are written by an arranger. Other times, the musicians are hired for their creative input. They know how to make a song sound professional and effective.

A producer oversees the entire process. That role varies quite a bit, but the full range of the producer’s input might start with discovering the artist, picking the songs to be recorded, and determining the key, structure, and tempo, if the artist needs help with that. The producer might hire an arranger, if one is needed, and hire the studio musicians if that is necessary. They book the recording studio and work with the musicians to achieve the goals of the artist and producer. The producer supervises and directs the recording session, decides when the best performance has been captured, determines what additional parts might be added, and books the players and studio time for that. The producer is also involved in combining the finished multitrack parts into a final mix, and perhaps in overseeing any additional processes needed to make the music available to the variety of outlets that deliver it to the listener.

In the recording world, the role of the producer is often similar to the director of a movie, but without the large supporting staff.

Some artists have a strong idea of what their song should sound like, and convey that to the producer. They work together to achieve that vision of the final product. And some artists act as their own producer, just as an actor in a movie might also be the director.

Some producers do their own engineering, which might include selecting the microphones for each instrument, vocal, or section, placing those microphones for optimum pickup, and doing any sonic modification they think is necessary during the recording process.

Usually, however, there is a separate engineer to do the technical part of the recording. Being a producer, or engineer, requires a certain amount of innate talent that is honed by experience working on hundreds, or thousands, of recording sessions. Many producers specialize in specific musical genres, but some producers love all kinds of music and are comfortable working in multiple music styles. Many are musicians themselves.

Some degree of musical background is practically a requirement for a producer and for an engineer.

In some situations, the producer makes all the decisions during the recording process. But usually decisions are a collaboration between the producer, artist, musicians, and the engineer. Recording works best when the creative team is all on the same page. That requires that everyone understands the goal for the recording. The producer and artist have the responsibility of making sure everyone involved can see the entire scope of the project.


Technical Details

Let’s take a look at the technical details of the recording studio. First, it has to be a space adequate to accommodate all the musicians and their instruments. The studio could be small, the size of a living room, if the artist is the sole player. Or it could be huge, for recording a symphony orchestra.

But even a solo performer can benefit from a larger space to perform, if the sound of the studio enhances the music.

Typical studios for recording pop, rock, jazz, acoustic, or small chamber ensembles have a minimum long dimension of 35 feet. Many are much larger. The length, width, and ceiling height are chosen to minimize room resonances that coincide. Ceiling heights are a minimum of 14 feet and can be much higher.

A studio for classical music recording might be 90 by 60 feet, with a ceiling height of up to 50 feet or more. 

Many classical recordings are done in concert halls, which have good acoustics, but noise intrusion may be a problem.

A studio has to be isolated from the outside world. It must never allow noise from traffic, airplanes, or subway systems to be audible in the recording. Blocking outside noise also means that the room is soundproof enough that the music cannot be heard by any neighbors.

Sound is only stopped by mass, so the walls of a recording studio are typically made from very heavy materials like brick or concrete. The studio has to be airtight, too, since any gap in the solid walls, floor, and ceiling will compromise the isolation.

The design and construction problems are multiplied if there is more than one studio in the same building.

Sound absorbing materials, like fiberglass insulation or acoustic foam, do not block sound. They attenuate the high frequencies effectively, but it is the low frequencies that get through. You need heavy walls to stop low frequencies.

You experience this phenomenon when you are in a different room from where music is being performed or reproduced. What do you hear? The bass drum and bass. Or timpani and double basses. The sound has been low-pass filtered by the walls. All the high frequencies are gone.


Acoustical Requirements for a Studio

A studio also needs to have good acoustical properties. We all know what bad acoustics sound like. The sound is ill-defined, usually with serious peaks and nulls in frequency response, and there may be inappropriate echoes. The music, or speech, is muddy and indistinct. Smaller rooms will likely have an unpleasant boost in the mid-range frequencies.

The design of a studio starts with choosing dimensions that minimize the inescapable room resonances. These resonances occur as sound is reflected between the walls, floor, and ceiling. All rooms have resonances – they are an inevitable consequence of having room boundaries. The goal is to spread the resonances evenly across the frequency spectrum, so that no two coincide at the same frequency. This is the job of the acoustician who designed the space. Failure to control the room resonances gives the room a very uneven frequency response. It is not unusual to find 20dB peaks and nulls in a poorly designed room.
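The axial resonances between any pair of parallel surfaces fall at f = n × c / (2 × L), where c is the speed of sound and L is the spacing. A small sketch of that arithmetic; the room dimensions below are illustrative assumptions, not measurements of any real studio:

```python
# Axial room-mode frequencies between parallel surfaces: f = n * c / (2 * L)
C_FT_PER_S = 1130.0  # approximate speed of sound, in feet per second

def axial_modes(length_ft: float, count: int = 4) -> list[float]:
    return [round(n * C_FT_PER_S / (2 * length_ft), 1) for n in range(1, count + 1)]

# A hypothetical 35 x 25 x 14 ft room: each dimension produces its own
# mode series, and the three series interleave instead of stacking up.
for dim_ft in (35.0, 25.0, 14.0):
    print(dim_ft, axial_modes(dim_ft))

# A cube-shaped 20 x 20 x 20 ft room puts all three series at the SAME
# frequencies -- exactly the coincidence a studio designer avoids.
print(axial_modes(20.0))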

There may be ways to control some of the resonances resulting from poor room proportions, like “bass traps” or other resonators tuned to absorb the offending frequencies. But it will always sound better if those problems are addressed in the studio design.

The smaller the room, the more challenging this becomes.

Other considerations include the HVAC system and lighting. The HVAC must not be audible on the recording, which requires careful design of the system. Large ducts and outlets, operating at low velocity, are the basic requirements.

The lighting must be adequate so that all the musicians can see any printed music. The lights are usually incandescent, which are much quieter, both electrically and audibly, than LED or fluorescent lights. They are usually on dimmers, so that the atmosphere in the studio is compatible with the mood. Special silent dimmers are used.


Studio Layout

In a professional studio, the performance space is separated from the control room with sound isolating walls, windows, and doors that provide at least 60dB of isolation between the rooms. This is necessary so that the engineer and producer only hear the sound from their monitor speakers, and not through the walls. Only then can they make good decisions about the sound they are capturing.

The control room is usually quite a bit smaller than the studio, but it still must meet stringent acoustical requirements if the engineer and producer are to have an accurate idea of the music they are recording.

Most control rooms are set up for conventional stereo monitoring, perhaps augmented by a sub-woofer. In the last decade, many control rooms have been designed for immersive formats, like Dolby Atmos, which can use speakers around the room to provide the listener with a more dramatic and revealing sound. Those control rooms require even more stringent control of acoustics.

The speakers used as control room monitors are much like high-end consumer speakers. There is even some overlap between the categories. But the requirements for a monitor speaker are different, since the goal is to hear every detail, every problem. That often makes monitor speakers sound harsh and unforgiving compared to consumer speakers. Otherwise, the requirements for flat frequency response and low distortion are similar.


Out in the studio, the musicians are arranged to meet several requirements. They must be separated enough that the sound of one instrument does not “bleed” excessively into the microphones on other instruments. This can be done by careful placement in the room, which may be augmented by using moveable baffles placed between the instruments or sections, which attenuate the sound.

An ensemble that is well-balanced acoustically may not need much to provide good isolation. A certain amount of bleed from one instrument into other microphones may actually enhance the sound. But in modern recording, most engineers abhor any sound leakage between instruments. Complete isolation allows later manipulation of the sound without affecting the sound of other instruments. It also allows the option to entirely replace part or all of the performance of an instrument at a later time, if that becomes necessary.


Often, studios have one or more separate isolation booths, which are small rooms sealed off from the main studio. These are helpful in a situation such as recording loud drums along with quiet instruments like an acoustic guitar. Or perhaps a vocalist might be placed in an iso booth so that his or her vocal is free from any instruments. Or, conversely, so that a vocalist will not be picked up by other microphones. This is especially important if the vocal track is going to be replaced with a more carefully produced version later.

For the ultimate in isolation, every instrument and voice can be recorded separately, one at a time. Some people make this work, but I believe that the recording, and the performance, is better if everyone plays at one time.

Another consideration for the studio setup is making it comfortable for the musicians to see each other. This requirement is more serious in some recording situations than in others. The isolation booths usually have large windows, so everyone can see each other to some extent. But they won’t be able to hear each other, so headphones are used to allow the musicians to properly hear one another.

Personally, I prefer to have as many of the players on a session in the room together, in more natural and comfortable locations. I think they make better music that way, even if it makes the engineering task more difficult. I only use isolation booths when there is no other solution.



Microphones are one of the most important tools that engineers have. It would seem that a microphone with the flattest frequency response and lowest distortion would be ideal for every sound. But that is usually not true.

There are hundreds of different professional recording microphone models available, and they all have different characteristics. This is a complex topic, but here are some of the variables that make microphones sound different.

First is the method the microphone uses to convert acoustical energy into electrical energy. Different types of microphones have different inherent characteristics that must be understood if a quality recording is to be achieved.

Studio microphones fall into three main classifications: ribbon, condenser, and dynamic.

The first truly high-fidelity microphone invented was the RCA 44, which was introduced in the early 1930s. It remained in production until the late 1970s, shortly before the demise of the RCA company. It was a ribbon microphone, which relies on a very thin strip of aluminum foil suspended in a focused magnetic field. The sound vibrates the aluminum ribbon, which generates a vanishingly small electric current.

Ribbon microphones have the advantage of having a very lightweight diaphragm, the ribbon, which responds rapidly to the changing pressure of the sound waves hitting it. That provides a transparent version of the sound.

The ribbon microphone principle is as simple as it gets. I am a strong believer in simplicity whenever possible in the recording process, which explains why most of my recording is done with ribbon mics. And that includes any genre of music, from classical, to pop, R&B, jazz, rock, hip-hop, punk, and folk.

The sound of a ribbon microphone is difficult to describe, but I think the best adjectives are “warm” and “natural.” There is a unique quality in the sound of a ribbon microphone that many people love, but they can initially sound dull and unexciting to those accustomed to the sound of condenser microphones.


The second major type of mic, and the kind used on most recording sessions, is the condenser mic. This system uses a very thin Mylar diaphragm, ranging in size from a quarter inch to about an inch, coated with an extremely thin coating of metal, usually gold. The diaphragm forms one of the two plates of a capacitor. “Condenser” is the obsolete term for a capacitor. This is the “capsule” of a condenser mic. Varying sound pressure on the diaphragm causes the capacitance of the capsule to vary, and this is converted through electronic circuitry inside the mic into a varying voltage that follows the waveform of the music.

The electronics of a condenser microphone can be either vacuum tube or solid-state. The output level of a condenser mic is very high, which reduces the need for a lot of preamplification in the recording system.

Almost all condenser mics have a “presence peak,” which boosts frequencies in the 6 to 10kHz range by a few dB. This increases the apparent loudness and adds sparkle to the sound, which may be needed in some recording situations. For example, the presence peak makes a vocal stand out against a background of many instruments. However, the presence peak is often inappropriate for some instruments. It makes them sound harsh.

There are several vintage microphones that have been used for 70 years on many recordings. They are made by companies like Neumann and AKG. They have been used for so long because of the beautiful quality of the sound.


The third main type is called a “dynamic” microphone. It works on the same general principle as the ribbon mic, and, in fact, a ribbon mic is technically considered a type of dynamic mic. But instead of a strip of aluminum foil, the dynamic mic has a plastic diaphragm that supports multiple turns of very fine wire. The coil of wire is immersed in a strong magnetic field, which generates the electrical signal.

A dynamic mic is essentially a tiny loudspeaker, operated in reverse. Instead of converting electrical energy into sound, the mic converts sound energy into electrical energy.

Most dynamic mics have limited high-frequency response, so they lack the sparkle of a condenser microphone. But they are tolerant of higher sound levels than any of the other types of mics, so they can be used very close to high-intensity sounds like drums and electric guitars.

Dynamic mics are most often used in live performance, due to their small size and ruggedness. But they have some limited application in the studio.


In addition to the method a microphone uses, there is also the characteristic of its pickup pattern. This can range from omnidirectional, where sound from all directions is picked up roughly equally; to bi-directional, where the mic is sensitive to sound from the front and back but not from the sides; to unidirectional, where the mic picks up sound only from in front of it.

Each pattern has its advantages and disadvantages. Omnidirectional mics are noted for their even frequency response. Bi-directional mics have extremely deep nulls off the sides, which can be useful to reduce the pickup of unwanted sounds. Unidirectional mics, often called “cardioid” mics due to their heart-shaped pickup pattern, are good for suppressing unwanted sounds. But all unidirectional mics have some degree of anomalous off-axis frequency response, which can make the unwanted sounds sound strange and unpleasant. Many directional mics become almost omnidirectional at higher frequencies. This can make any off-axis pickup of other instruments sound shrill and unnatural.

The bi-directional, or figure-8, pattern has some unique characteristics, notably the deep nulls off the sides, top, and bottom of the mic. That is a useful tool. But perhaps the best attribute of the bi-directional mic is its consistent frequency response to sound coming from any direction. This makes any pickup of other instruments, and the acoustics of the recording space, much more natural sounding.
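The basic first-order patterns described above have simple idealized formulas: omni is uniform in all directions, figure-8 follows cos θ, and cardioid is the average of the two. A sketch of the idealized math only; as noted above, real microphones deviate from these patterns, especially off-axis and at high frequencies:

```python
import math

# Idealized first-order polar patterns, theta in degrees (0 = on-axis):
#   omni: 1, figure-8: cos(theta), cardioid: 0.5 * (1 + cos(theta))
def pattern_gain(pattern: str, theta_deg: float) -> float:
    t = math.radians(theta_deg)
    if pattern == "omni":
        return 1.0
    if pattern == "figure8":
        return math.cos(t)            # negative values mean reversed polarity
    if pattern == "cardioid":
        return 0.5 * (1.0 + math.cos(t))
    raise ValueError(pattern)

# Figure-8 nulls hard at 90 degrees -- the deep side null described above.
# Cardioid nulls at 180 degrees, directly behind the mic.
print(round(pattern_gain("figure8", 90), 6))    # ~0
print(round(pattern_gain("cardioid", 180), 6))  # ~0
print(pattern_gain("cardioid", 90))             # 0.5 -- sides only partly rejected
```

Note that the cardioid only attenuates the sides by half, while the figure-8 rejects them almost completely, which is why the side nulls are such a useful placement tool.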

Many condenser microphones have adjustable pickup patterns, switchable between omni, bi, and unidirectional. Some allow continuous adjustment between the basic patterns. I often use my condenser mics in the bi-directional position. I discovered decades ago that I prefer this sound.

Directional mics, including ribbon mics, have what is called “proximity effect,” which emphasizes the low frequencies when the mic is placed close to the sound source. The closer the mic is, the more the low frequencies, below about 300Hz, are emphasized. The lowest frequencies are boosted the most. Different microphone designs have different degrees of proximity effect, but some, like the RCA 44, do not achieve their flattest low-frequency response until they are at least 10 feet from the sound source. To compensate, some ribbon mics have switchable low-cut filters. Others, like the 44, do not and often an external equalizer is needed.

Proximity effect can be a benefit in some situations, but often it is not desirable, so equalization is used to tailor the bass response as necessary.
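The low-cut filters used to tame proximity effect are often simple first-order high-pass filters. A small sketch of such a filter's magnitude response; the 100 Hz corner frequency is purely an illustrative assumption, as real mic low-cut switches vary:

```python
import math

# Magnitude response of a first-order high-pass ("low-cut") filter
# with corner frequency fc: |H(f)| = (f/fc) / sqrt(1 + (f/fc)^2)
def highpass_gain_db(freq_hz: float, corner_hz: float) -> float:
    ratio = freq_hz / corner_hz
    return 20 * math.log10(ratio / math.sqrt(1 + ratio * ratio))

# With a hypothetical 100 Hz corner: -3 dB at the corner, steeper cut
# below it, and essentially flat an octave or two above it.
for f in (50, 100, 300, 1000):
    print(f, "Hz:", round(highpass_gain_db(f, 100), 1), "dB")
```

A gentle first-order slope like this roughly mirrors the low-frequency rise of proximity effect, which is why such simple filters work reasonably well as compensation.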

Even seemingly minor things like the shape of the grill of a microphone, and the size and arrangement of the holes to allow sound in, can profoundly affect the sound of a mic, especially to off-axis sounds.

The engineer has to understand the characteristics of the microphones available and choose the best one for the application. There are many considerations, and the same mic on the same type of instrument played by a different musician may necessitate a change in the microphone used. The mic choice is also often dictated by the type of music.

The distance the mic is from the instrument and voice also affects the ratio between direct and reflected sound. The sense of space is often an important part of a recording. The mic distance is also helpful in creating a feeling of depth in the stereo image.

With more distance, the mic will pick up more of the room sound, for better or worse, and more of any other instruments playing in the studio.

The engineer and producer have to make many decisions about what mic to use, what microphone pattern will work best, how to maintain the necessary isolation between instruments, the amount of proximity effect, and how to create an appropriate sense of space in the recording. Many times, compromises are necessary to address conflicting requirements.



All signals in a recording studio are run on low-impedance, balanced lines. The source impedance of a microphone is typically 150 ohms. The shielded, low-impedance, balanced mic lines are relatively loss-free over hundreds of feet. The path from the microphone to the preamplifier can exceed 100 feet in many studios, so if a high-impedance system were used, it would suffer severe high-frequency attenuation.
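The high-frequency loss comes from the low-pass filter formed by the source impedance and the cable's capacitance. A rough sketch of that calculation; the cable capacitance, run length, and the 50 kΩ high-impedance figure are all illustrative assumptions:

```python
import math

# -3 dB corner of the RC low-pass formed by source impedance and cable
# capacitance: fc = 1 / (2 * pi * R * C)
def cable_corner_hz(source_ohms: float, cable_pf_per_ft: float, length_ft: float) -> float:
    c_farads = cable_pf_per_ft * length_ft * 1e-12
    return 1.0 / (2 * math.pi * source_ohms * c_farads)

# Assumed values for illustration: 30 pF/ft cable, 100 ft run.
print(round(cable_corner_hz(150, 30, 100)))      # low-Z mic: corner far above audio
print(round(cable_corner_hz(50_000, 30, 100)))   # high-Z source: corner inside audio band
```

With a 150-ohm source the rolloff sits far above the audio band even on a long run, while a high-impedance source into the same cable starts losing treble at frequencies well within the audible range.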

The balanced cables intrinsically reject interfering signals, which could be from electrical circuits, or crosstalk from adjacent lines. This is especially important in our electrically noise-polluted environment. The noise is produced mostly by switching power supplies, which are ubiquitous because of their light weight and low cost. They are found in the power supplies of almost all consumer equipment, from phone chargers, to LED lights, to appliances.

Variable speed motors and consumer-grade light dimmers are other sources of noise pollution.

Keeping that electrical noise out of a recording is a challenge, but properly used balanced cables rarely experience any interference. This type of cable is also largely immune to interference from the AC power line frequencies, and from radio frequency signals.

RF interference from a cell phone is a special problem, because the phone might be very close to the equipment and interconnecting cables. Most studios require every phone be turned off completely. The noise from a cell phone may not get into the recording as obvious interference, but it can raise the noise floor. Microphones are particularly susceptible to cell phone interference.


From the mic, the audio signal goes to a microphone preamplifier, which could be incorporated into a mixing console, or it could be a standalone unit. Mic preamps are critical because if the signal is degraded at that stage, there is no getting it back. For that reason, many recordings use standalone mic preamps, which could be tube or solid-state.

From the 1960s through the 1990s, virtually all recordings went through a recording console on their way to the recorder. The console provided a way to combine and mix the various microphone signals, and to route them as necessary. It also provided adjustable auxiliary outputs that could be used to feed a mix to the studio musicians via headphones.

Providing the best mix for the musicians’ headphones is an art in itself. An improper mix can result in sloppy playing, or unnecessary stress for the players. A skilled engineer can detect when the headphone mix is not optimum and make changes to help the players. Ideally, the musicians do not even know what changed, but they know they feel better about their playing.


The large recording console of years past is becoming obsolescent in modern recording. These behemoths, which take up considerable space and can cost well into six figures, have been replaced with a software equivalent, known as a digital audio workstation, or DAW. The graphics on the computer screen emulate the appearance of the recording console with its hundreds of knobs and controls. The DAW can be scaled to the exact size needed for the session, and expanded later if the need arises.

The DAW software costs only a tiny fraction of its hardware equivalent. This has made recording affordable to just about anyone. With an analog to digital converter incorporating mic preamps, a home computer, and the DAW software, anyone can assemble a system capable of excellent audio quality for a small fraction of the cost of the console and tape machines of the past. My studio in the 1980s had a 24-track tape machine that cost more than my house.

Having the equipment is only one part of the art of recording. It takes talent and experience to record well.

Tape is rarely used these days, and the skills for maintaining those machines are largely gone. But some studios still record to tape, for its unique sound. Personally, after a lifetime of recording, half to tape and half to digital, I prefer digital. But I still have a tape machine, should I ever feel the need to use it on a session.


Whether using a large console or its software equivalent, the signals from the various microphones in the studio are available for extensive manipulation. One function of the mixing console is to provide an individual equalizer for every input.

The term “equalizer” comes from the telephone system of the 1920s. When radio networks were established, they needed a way to get their program material to their affiliated stations throughout the country. Ordinary telephone lines were used, but they had to be “conditioned” to pass full audio bandwidth material. The phone company called the devices to do this “equalizers” because they made the frequency response equal across the spectrum. It didn’t take long for these devices to find their way into recording studios as a corrective or creative tool to modify the frequency response of one or more audio signals.

For better or worse, it became standard that the audio from each microphone would be “equalized” to fit the needs of the producer and engineer. Often, a better solution is to change the microphone, or its position, but a bunch of knobs right at your fingertips is easier.

There are many types of equalizers used in recording. Some are simple, much like the bass and treble controls on a home system. Others are very complex, with a dozen or more controls to shape the sound. Equalization is not always the best solution, because of artifacts that are introduced. For example, modifying the sound in the frequency domain always introduces a frequency-dependent change in the phase relationship of the sound. Good equalizer design strives to make the phase shift sound natural to our ears.

Some equalizers have been engineered to introduce zero phase shift, but I find that sound to be very unnatural, and I have never found a situation where that type of equalizer sounded right.
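For listeners who like to see the math behind this, here is a minimal Python sketch of a single peaking equalizer band, built from the widely published "Audio EQ Cookbook" biquad formulas. The function names and parameter values are my own illustration, not taken from any particular console or plug-in. Evaluating the filter's response shows that a 6 dB boost at 1 kHz also shifts the phase of nearby frequencies, which is the side effect described above:

```python
import cmath, math

def peaking_biquad(f0, gain_db, q, fs):
    # Peaking EQ coefficients from the RBJ "Audio EQ Cookbook"
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def response(b, a, f, fs):
    # Evaluate the filter's complex response at frequency f
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
    return abs(h), math.degrees(cmath.phase(h))

b, a = peaking_biquad(f0=1000, gain_db=6, q=1.0, fs=48000)
for f in (250, 1000, 4000):
    mag, ph = response(b, a, f, fs=48000)
    print(f"{f} Hz: {20 * math.log10(mag):+.1f} dB, phase {ph:+.1f} deg")
```

At the center frequency the boost is the full 6 dB with no phase shift, but frequencies above and below the center are shifted in phase even though their level barely changes.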

Today, most of the equalizers are virtual, created by software and packaged as a “plug-in.” These cost a fraction of what their hardware antecedents do. The ability to recreate the characteristics of many different kinds of equalizers, and other effects, is improving all the time as computing power and design algorithms improve.

Still, many engineers prefer the sound of the actual hardware. As close as the plug-in can get to perfectly emulating the hardware, it is still never quite the same.


The recording level of each microphone can be adjusted with a linear volume control called a fader. This allows precise adjustment of the level. In many circumstances, a 1dB change in level of an instrument or voice can have a significant audible effect on the recording during the process of mixing all the sounds for the final product.
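To give a sense of what a 1 dB fader move means, here is a short Python sketch of the standard decibel-to-amplitude conversion (the function name is my own, but the formula is the universal definition). A 1 dB change is only about a 12 percent change in amplitude, yet it is clearly audible in a mix:

```python
import math

def db_to_gain(db):
    # Convert a fader change in decibels to a linear amplitude multiplier
    return 10 ** (db / 20)

for db in (-3, -1, 0, 1, 3):
    print(f"{db:+d} dB -> amplitude x{db_to_gain(db):.3f}")
```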

The large faders on a physical console are one of its main advantages. They allow the engineer to rapidly change the level of one, or a dozen, inputs. That is far more cumbersome with a mouse.

There are hybrid systems that use software along with a simple hardware interface to provide a few physical faders.


The engineer has other sound modification tools available. A “pan pot” allows the sound of any microphone to be positioned anywhere between extreme left and right in the stereo image. Immersive formats, like Dolby Atmos, extend the pan pot concept to permit placing the sound anywhere in the sound field around the listener.
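A common way a pan pot divides a signal between the two channels is the constant-power pan law, which keeps the apparent loudness steady as the sound moves across the stereo image. This Python sketch is one simple illustration of that idea, not the circuit of any particular console:

```python
import math

def constant_power_pan(sample, pan):
    # pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right
    angle = (pan + 1) * math.pi / 4   # maps pan to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# At center, each channel gets about 0.707 of the signal (-3 dB),
# so the combined acoustic power is the same as hard left or right.
left, right = constant_power_pan(1.0, 0.0)
print(f"center: L={left:.3f}, R={right:.3f}")
```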

A compression amplifier may be used to reduce the dynamic range of a sound, or the entire mix. It does this by reducing the level of any sound that exceeds an adjustable threshold level. It is often necessary to reduce the full dynamic range of a piece of music to make an effective recording. Although studio and quality home equipment are capable of over 100dB of dynamic range, that full range is impractical in most listening situations. If the music is heard in a car, for example, the ambient noise of a moving car limits the usable range to about 20dB, even if the original music had a range of 100dB. The quiet parts would be inaudible, and the loud parts too loud otherwise.
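The threshold-and-ratio behavior of a compressor can be sketched in a few lines of Python. This models only the static gain calculation, ignoring the attack and release timing of a real unit, and the threshold and ratio values are arbitrary examples:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    # Below the threshold, the signal passes unchanged.
    # Above it, the output rises only 1 dB for every `ratio` dB of input.
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

for lvl in (-40, -20, -10, 0):
    print(f"in {lvl:+d} dB -> out {compress_db(lvl):+.1f} dB")
```

With these settings, a signal peaking at 0 dB comes out at -15 dB, while quiet passages are untouched, squeezing the overall dynamic range.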

Like everything in recording, we must often make compromises to accommodate all the different listeners and listening situations.

In the next episode, I will continue to explore the recording process, starting with what actually happens in a typical recording session. I will talk about different techniques to create a finished recording, and look at the mixing and mastering process. I will introduce the concept of subjective loudness.

Thanks for listening, commenting, and subscribing. I can be reached at


This is My Take on Music Recording. I’m Doug Fearn. See you next time.