(To read this series from the beginning, start here.)
So far, this discussion of the recording chain has taken us up to the Microphone Preamp. The next device in the simplified recording chain is the Analogue-to-Digital Converter, often called the “A-to-D” or just the “Converter”. The job of this device is to interpret the electrical signal (analogue) which represents our recorded sound, and generate the equivalent digital representation for use within a computer.
Pay No Attention to the Man Behind the Curtain
It is useful, but not critical, to understand what is actually happening in this conversion.
An analogue sound signal is an electrical signal with continuously changing voltage that represents the changing nature of the sound that has been captured. Just as a sound in the air is carried through compressions and rarefactions (vibrations or “oscillations”) of particles in the air, an analogue sound signal carries a wave form of changing voltages representing the changing amplitude of these oscillations. This kind of signal is useful for triggering the physical movement of a speaker in order to project the captured sound, but does not equate to the language of computer data.
A digital (computer) representation of the same sound is an approximation of the continuous wave form, created by repeatedly sampling the changing voltages over tiny fractions of a second. By measuring the voltage in the analogue signal thousands of times each second, and by representing each sampled voltage using a huge scale of possible values, the computer can store a very close approximation to the original analogue sound wave.
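If you're curious what "sampling thousands of times each second" looks like in practice, here is a tiny illustrative sketch in Python (the tone frequency, duration and variable names are just made up for the example). It captures one hundredth of a second of a 440 Hz sine wave – the note A above middle C – at the CD-standard rate of 44,100 samples per second:

```python
import math

# Illustrative values only: a 440 Hz tone, sampled at the CD rate.
SAMPLE_RATE = 44_100   # samples taken per second
FREQUENCY = 440        # pitch of the tone, in Hz
DURATION = 0.01        # seconds of sound to capture

# Each sample is one measurement of the wave's amplitude at an instant.
samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION))
]

print(len(samples))  # 441 measurements represent just 10 ms of sound
```

Stacked side by side, those hundreds of measurements per hundredth of a second trace out a very close approximation of the original continuous wave.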
Itty-Bitty Big Numbers
The Compact Disc standard of audio sampling – the first commercial digital audio standard – uses 16 “bits” of computer storage for each sample, and takes 44,100 voltage samples each second to build the wave form approximation. With 16 bits, each sample can take any of 65,536 possible values for the analogue voltage at a given moment in time. This standard was designed around the properties of human hearing: it was judged to produce a digital version of sound that most people could not distinguish from the analogue source, while requiring the smallest practical amount of computer data to represent the audio.
Today, modern recording uses 24 “bits” of computer storage for each sample, and while many studios still operate at 44,100 samples per second, some recordists prefer higher sample rates – sometimes up to 192,000 samples per second – for a more precise approximation of the analogue sound. Using 24 bits per sample allows 16,777,216 possible values for the analogue voltage at a given moment in time. The extra bits are preferred because they provide a much greater signal-to-noise ratio, and allow far more computer manipulation of the wave form by effects algorithms (in-computer calculations) before significant unwanted distortion creeps into the sound.
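The numbers above come straight from the arithmetic of bits. Each bit doubles the number of possible values, and – as a standard rule of thumb – each bit adds roughly 6 dB of available dynamic range. A quick back-of-envelope check in Python:

```python
import math

# 2 ** bits gives the number of distinct amplitude values per sample;
# 20 * log10 of that count gives the theoretical dynamic range in dB.
for bits in (16, 24):
    values = 2 ** bits
    dynamic_range_db = 20 * math.log10(values)
    print(f"{bits}-bit: {values:,} values, ~{dynamic_range_db:.0f} dB")
```

So 16 bits yields 65,536 values (about 96 dB of dynamic range), while 24 bits yields 16,777,216 values (about 144 dB) – which is why the extra headroom of 24-bit recording is so forgiving of level-setting and processing.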
The Only Bit You Need to Know
Do you need to understand all of that? Of course not, but you will want your converter to operate at industry standards. Be sure you are using a converter which operates at a bit depth of 24 bits, and at a sample rate of 44,100 samples per second (often notated as 44.1k) or higher.
Most A-to-D converters on the market today, as long as they are designed for studio recording, will do a more-than-adequate job of converting analogue signals to digital. There are more expensive converters which will perform to an even higher standard. The converters built into computers, tablets and phones are generally NOT adequate for studio purposes, so be sure you are not relying on the microphone jack in your laptop, tablet or phone for your critical recordings.
In the next post, we’ll look at the device used to actually store the captured sound.
Next Instalment: Part Six – Saving the Day with the DAW