Limitations of a home studio analog-to-digital converter
Professional music studios, when working with tapes, use 24 bit / 96 kHz precision. Later, when they master a compact disc, the standard is 16 bit / 44.1 kHz, or 16 bit / 48 kHz for DVD. That is considered good enough for the human ear.
In a home studio, however, we won't use that microscopic resolution, first because of our equipment's limitations. Our turntable's built-in microchip converts the analog signal to 16 bit / 44.1 kHz; it is obviously designed for transferring LPs to CDs. If we bypass this chip by sending the signal through another cable directly to the sound card, we meet similar limitations, along with other sound-card quality issues. And finally Audacity, the software we record with, will manage only 16 bit unless specially compiled and upgraded to 24 bit.
Given these limitations, best practice is to record at 16 bit depth. For the sample rate, 48 kHz, which is nearly the maximum available to us, is a good practical choice. Another good practice is to save the raw data to the lossless FLAC format as a backup before doing any editing. For mp3, our final, most accessible and portable format, the one that will store our digitally remastered, filtered, equalized, noise-cleaned music, we therefore decide on 16 bit, 48 kHz, and a 320 kbps bit rate. There, we've written these numbers down as a reminder of a sensible configuration.
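To get a feel for what these numbers mean in terms of data, here is a quick back-of-the-envelope calculation (a sketch in pure Python; the stereo assumption is mine, not from the article):

```python
# Back-of-the-envelope data rates for the recording settings above.
bit_depth = 16          # bits per sample
sample_rate = 48_000    # samples per second
channels = 2            # assuming a stereo LP rip

# Raw PCM rate: this is the stream we capture and later back up as FLAC
# (FLAC compresses it losslessly, so the backup will be somewhat smaller).
pcm_kbps = bit_depth * sample_rate * channels / 1000
print(f"raw PCM: {pcm_kbps:.0f} kbps")                     # 1536 kbps

# The final 320 kbps mp3 is therefore roughly a 4.8x reduction.
mp3_kbps = 320
print(f"mp3 is {pcm_kbps / mp3_kbps:.1f}x smaller than raw PCM")
```

This also shows why FLAC is the right choice for the backup and mp3 only for the final, portable copy: the mp3 step throws away about four fifths of the data.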
Why is 16 bit good enough?
Although a home studio is not as advanced as a professional one, the question is whether the difference matters. Here is an interesting argument for why 16 bit resolution is no different from 32 bit when ripping vinyl records.
Bit depth is simply the accuracy of each dot (sample), that is, how precise the number describing the dot's vertical position is. You can imagine 32 bit resolution as simply having more decimal places when describing location coordinates. In the picture that follows, if you estimate the vertical scale, you'll notice that the difference between 16 and 32 bit is not visible; you'd have to zoom in to individual pixels, and even then it would hardly matter whether a number is more precise at the n-th decimal place.
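The "more decimal places" idea can be made concrete with a small sketch (pure Python; the `quantize` helper and the sample value are illustrative, not part of the article):

```python
import math

def quantize(x, bits):
    """Round a sample in [-1.0, 1.0) to the nearest level of a signed
    integer grid with the given bit depth (illustrative helper)."""
    levels = 2 ** (bits - 1)        # 32768 steps per half scale at 16 bit
    return round(x * levels) / levels

x = math.sin(1.0)                   # an arbitrary "analog" sample value

err16 = abs(quantize(x, 16) - x)    # at most 2**-16 of full scale
err32 = abs(quantize(x, 32) - x)    # at most 2**-32 of full scale
print(err16, err32)
```

The worst-case rounding error at 16 bit is about 0.0000153 of full scale; at 32 bit it is 65536 times smaller still. Both errors are far below anything visible on the plotted waveform.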
In order to test whether such a minuscule difference matters at all, the following experiment was designed.
The picture shows a signal recorded from a vinyl record. The same loud pop was recorded six times: three times at 16 bit and three times at quasi 32 bit precision. In this particular case the pop is a scratch, or perhaps a soft piece of dust, but for the purposes of this example it could just as well be a drummer's hit. Each time, the needle hits this sharp obstacle at a slightly different speed and angle, and it bounces. Given those small differences in initial conditions, the gramophone needle bounces very differently: in the first take it bounces a lot, in the second a little, in the fourth a medium amount, each time following a completely unpredictable path. The 'tiny' differences seen between the recordings can also partly be caused by many other factors (wow and flutter, air movement, internal or external rumble, etc.), since the needle's pressure against the vinyl is quite low.
Therefore, since the differences between those recordings are far larger than the difference between a 16 bit and a 32 bit recording, the argument concludes that in this case 16 bit depth is of equal quality to 32 bit.
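The same conclusion can be put in decibels. A sketch using the standard signal-to-quantization-noise formula for PCM (the vinyl noise figure quoted in the comment is a typical published range, not a measurement from this experiment):

```python
def quantization_snr_db(bits):
    """Theoretical signal-to-quantization-noise ratio for a full-scale
    sine wave: the standard PCM result 6.02 * bits + 1.76 dB."""
    return 6.02 * bits + 1.76

print(f"16 bit quantization floor: {quantization_snr_db(16):.1f} dB")
print(f"32 bit quantization floor: {quantization_snr_db(32):.1f} dB")

# A clean vinyl pressing typically manages only about 60-70 dB of
# signal-to-noise, so the record's own surface noise (and the chaotic
# needle bounces above) sit far above the ~98 dB quantization floor
# of a 16 bit recording, let alone the 32 bit one.
```

In other words, the medium's noise dominates long before quantization noise could become audible, which is the quantitative form of the argument above.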