Steve Deckert
I have always said that in the analogue format there is a thread connecting one note to the next. That is what makes it seem more real sounding. In the digital format this thread seems simply absent unless artificially installed. Without the thread, a part of the mind (mine, at least) wants to explode. The thread is, for lack of a more eloquent term, noise, as Dave points out in his article.
That said, I have never heard live music sound like a digital recording of live music. True, there is always noise during live music, a thread if you will, but why is it not captured in digital? The very fact that we have to put stuff back in is evidence that stuff is missing. If digital were really as perfect as claimed, it wouldn't need so much fluffing, right?
Before we shoot the messenger, I suspect much of what is missing is really a time-alignment issue: the time alignment of all the microphones relative to each other is murdered during the recording process whenever there are more than two equidistantly spaced microphones and a mixer.
Having done some home studio recording myself (520 sessions over 10 years), I discovered some interesting things about timing...
Consider that any time a live gig is recorded with multiple microphones there will be crosstalk. For example, the lead guitar microphone might be placed 11 feet away from the rhythm guitar microphone to keep the two sounds as separate as possible. You can be sure that if the lead guitar stops playing and there are no gates on the track, you will hear not only the rhythm guitar on the lead track but everyone else as well, especially the drums, which raises the question:
How do you get the sound from the rhythm guitar, 11 feet away, that leaks into the lead microphone to be in phase when it's now coming from two places at the same time?
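The timing involved can be sketched with some back-of-the-envelope arithmetic. This is only an illustration, assuming a speed of sound of roughly 1125 ft/s at room temperature and a few made-up frequencies:

```python
# Back-of-the-envelope: how far out of phase is an 11 ft microphone leak?
# Assumption: speed of sound ~1125 ft/s; the frequencies are illustrative.

SPEED_OF_SOUND_FT_S = 1125.0

def leak_delay_seconds(distance_ft: float) -> float:
    """Time for sound to travel the leak path between two mic positions."""
    return distance_ft / SPEED_OF_SOUND_FT_S

def phase_shift_degrees(distance_ft: float, freq_hz: float) -> float:
    """Phase offset of the leaked copy relative to the direct pickup."""
    return (leak_delay_seconds(distance_ft) * freq_hz * 360.0) % 360.0

delay_ms = leak_delay_seconds(11.0) * 1000.0
print(f"11 ft leak delay: {delay_ms:.2f} ms")   # about 9.78 ms
for f in (100, 440, 1000):
    print(f"{f:>4} Hz: leak arrives {phase_shift_degrees(11.0, f):6.1f} deg late")
```

The point is that the same 11-foot leak path lands at a different phase angle at every frequency, so the leaked copy can never be pulled into phase across the whole spectrum at once.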
Compared to your head, which was in one spot during the live performance, a recording is very different. A recording is like cloning yourself, putting your heads in several spots at the same time, and then overlapping the sound each of you hears. There is virtually no way that wouldn't sound smeared, with all spatial cues totally molested.
This is why each microphone for each track is set to try to capture only that instrument, with very little sound from the room, or at least from other instruments in different parts of the room. I emphasize 'try'. This is where panning and setting the levels for each track becomes extra complicated relative to sound quality. Because of the distance between each of these microphones there is a given delay, and therefore a frequency-dependent phase angle, associated with each track relative to the others. Sometimes, in fact often, and perhaps without even knowing it, adjusting the pan and levels can push some of that partially out-of-phase content further out of phase, to the point where it begins to cancel itself out.
When you are mixing you listen to the two-channel send from the board, so by adjusting the level on, say, track 6 up a bit, you just changed the relationship between in-phase and out-of-phase information across all the tracks, causing some things in the mix to get louder and some to get softer purely based on phase angle.
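The mechanism behind that fader move can be shown with elementary phasor math. This is a hedged sketch, not anything from an actual console: a direct signal at gain 1.0 is summed with a leaked copy of itself arriving at some phase offset with gain g.

```python
import math

def summed_gain(leak_gain: float, phase_deg: float) -> float:
    """Magnitude of a direct signal (gain 1.0) summed with a leaked
    copy of itself arriving at the given phase offset."""
    phi = math.radians(phase_deg)
    real = 1.0 + leak_gain * math.cos(phi)
    imag = leak_gain * math.sin(phi)
    return math.hypot(real, imag)

# One fader move, two different phase relationships:
for g in (0.3, 0.6):
    print(f"leak at gain {g}: in-phase sum {summed_gain(g, 0):.2f}, "
          f"out-of-phase sum {summed_gain(g, 180):.2f}")
```

Raising the leak from gain 0.3 to 0.6 pushes the in-phase sum from 1.3 up to 1.6 while the out-of-phase sum drops from 0.7 down to 0.4: one fader move makes some content louder and other content softer at the same time, purely on phase angle.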
In my experiments over the years, I have been able to take two-track masters and rotate the phase angles of each track independently of each other, and at specific bands of frequencies. Yes, this is why I don't have a social life. Anyway, what I found is that there are things buried in two-track recordings that you cannot hear until you do this, and then suddenly they appear. Predominantly this hidden information is ambience, presence and detail, but it can also be a background singer who was 20 feet away and barely noticed in the music suddenly stepping out away from the group, moving 10 feet closer to you and singing twice as clearly.
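I have no idea what hardware or software Steve actually used for this, but the idea of rotating phase in a specific frequency band while leaving magnitudes alone can be sketched with a naive DFT. Every parameter here (the 32 Hz sample rate, the 4 Hz tone, the 3-5 Hz band, the 90-degree twist) is purely illustrative:

```python
import cmath
import math

def dft(x):
    """Naive DFT, fine for a short illustrative signal."""
    n_len = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_len)
                for n in range(n_len)) for k in range(n_len)]

def idft(bins):
    """Inverse DFT back to a real signal."""
    n_len = len(bins)
    return [sum(bins[k] * cmath.exp(2j * math.pi * k * n / n_len)
                for k in range(n_len)).real / n_len for n in range(n_len)]

def rotate_band_phase(x, sample_rate, lo_hz, hi_hz, degrees):
    """Rotate the phase of every bin inside [lo_hz, hi_hz] by `degrees`,
    leaving all magnitudes untouched (DC and Nyquist are skipped)."""
    n_len = len(x)
    bins = dft(x)
    rot = cmath.exp(1j * math.radians(degrees))
    for k in range(1, n_len // 2):
        f = k * sample_rate / n_len
        if lo_hz <= f <= hi_hz:
            bins[k] *= rot
            bins[n_len - k] = bins[k].conjugate()  # mirror bin keeps the output real
    return idft(bins)

# A 4 Hz sine sampled at 32 Hz, phase-rotated 90 degrees in the 3-5 Hz band:
sr = 32
x = [math.sin(2 * math.pi * 4 * n / sr) for n in range(sr)]
y = rotate_band_phase(x, sr, 3.0, 5.0, 90.0)
print(round(x[0], 3), round(y[0], 3))   # the sine has become a cosine: 0.0 1.0
```

Because only phases move and magnitudes stay put, nothing gets louder or softer in the frequency domain; what changes is how the bands line up in time, which is where, on Steve's account, the buried ambience hides.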
Basically I am saying, as Dave McNair does, that the flaw is the recording process itself and the subsequent engineering and electronics, not the media. I have a whole collection of 'master tapes and outtakes' that sound disappointing, and I've been in enough live sessions to know that those sessions almost certainly didn't sound like that live. The disappointing sound is always the same in both formats, btw... a hardness to the sound and a lack of dimensionality and texture.
In the end, the only way to really compare is live two-track analogue vs. live two-track digital, done with a single stereo pair of microphones straight into the recorder. In this experiment it is exponentially harder to hear the difference between the two formats. The tape usually wins because it is in and of itself a beautiful-sounding mechanical/analogue (mechanalog) effect that can offset the dryness of the microphones and/or preamps.
If we want music to sound natural all the time, without a lot of work when it's recorded, we should probably try to re-invent the microphone. A weighted organic biological diaphragm like the human ear's would have the same compression and ability to fix things as tape does, but would catch it at the source, the way the human ear does. From there we just have to focus on not screwing it up, which is kind of a flip from today's mastering, where the assumption is that it is screwed up and the focus is on how to fix it.
Steve