Sunday, March 15, 2015

The Brain Hears More than the Ear; Why Digital Audio Really Does Suck and You Should Not Listen to (or Produce) Factory-Made Music


Some of my friends lately have been recommending a lot of contemporary music to me that they think I’ll like, and are astonished when I tell them it’s terrible.

“Why?” they ask. “Just because it’s digitally produced?”

Yes, precisely because of that.

“Aren’t you just arbitrarily assigning a value to analog vs. digital and deciding to like one and not the other? Doesn’t that go against your own object-oriented philosophy, you essentialist organicist hippie?”

It certainly would go against my philosophy if that were what I was doing, but it’s not. Actually, the only reason you think you like anything that’s digital audio is that you persuade yourself into some kind of hype based on the idea that since the musicians are playing well (assuming it is, for example, a real drummer and not a drum machine of some kind) and since the music is “about” something you like (ecology, left social politics, etc.), then it must be good.

Well, that idea is wrong…not simply a different opinion, but wrong. As a musician and amateur sound-nerd, I thought I should write a post to explain why that is. When you’ve read it, I hope you will understand that digital audio is the biggest artistic cul-de-sac the human species has ever gotten itself stuck in, how enormous a waste of time and resources it has been, why people doing it should make no money at all, and why they should stop. Now.

Let’s begin with some basics. The most important thing that no one realizes about sound is that music, as such, is a non-repetitive waveform. If you’re going to hear something your central nervous system can identify as music, it can’t repeat!

Yes, of course I know that when you look at a waveform diagram, you’ll get the idea that it’s repetitive, but it’s not. Music is not a repetitive waveform! Once the brain hears a repetitive waveform, like a doorbell or a drum machine pattern, it immediately categorizes that as not being music: it’s a mechanical pattern, and you are wired to recognize it. You can sit on a train and hear its rhythm: no one has to tell you that this rhythm is mechanical; you already know. When you listen to a dance record, for example, you immediately figure out what’s done by a machine and what isn’t. You will categorize a repetitive, mechanical pattern as non-musical, whether you want to or not.

“But Nick, why don’t all electronically produced sounds register as non-musical? If you’re against digital technology, why not just be an acoustic-Nazi and throw away your electric guitar, your amp and your effects pedals?”

I’m actually not against digital technology in general, and I’m not an organicist. I love amplified and effect-laden electric music (I build effects pedals both as a hobby and a job), and as a musician, it’s the instantaneous dynamic response of the amplifier or effect that I’m interested in. Everything a musician does changes in real time: how hard or how soft you play or sing, and all the subtleties and nuances, are constantly changing. It’s impossible for someone to analyze all that enough to technologically reproduce it, because they would have to know what you’re doing before you do it in order to even begin to decipher it, and you can’t do that.

Even a simple analog circuit, if well designed, becomes a very complex thing analytically. It really becomes like a living thing, so the actual sound itself begins to take on a human quality, because you are taking information from the musician’s technique or from the input, and that in itself controls and determines the output. It hasn’t had an algorithm or a set of parameters placed on it to predetermine the sound, and that’s where your dynamics and “touch sensitivity” come from, and those are the sounds people really like, even if they don’t know exactly why.

The reason is, once again, that these sounds are not repetitive, but constantly changing, and so they take on a lifelike, very human type of sound, even if it’s very distorted. Old Jimi Hendrix records were very human sounding, and even people who think they don’t like electric music will admit that.

Now we can get to why I’m against digital sound.

There are several major differences between analog and digital. Number one is the sampling rate: it will never be high enough to capture the analog sound, period. No matter how you advance the technology, it will never work. Another major difference is that the resolution of a digital signal gets worse and worse as the sound gets softer (the “decay” at the end of the note, where it fades into quiet). It might start off at 24 bits, but you can’t use all 24 bits, because then you get digital distortion. So you take it down to 20, and as the signal decays you might get down to 10, and it sounds crappy. This is why I hate digital echoes, delays and reverb. They are an absolute waste of money, time and materials. My tube amps can’t stand them either. The decay at the end of a digital reverb pedal going into a tube amp is horribly grating on the ears.
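
If you want to see the resolution point in numbers, here’s a rough Python sketch (the sample rate, bit depth and decay rate are just assumed values for illustration, not measurements of any real converter): quantize a decaying note at a fixed bit depth and watch how few bits are actually left in use, and how far the signal-to-quantization-noise ratio falls, by the time the tail fades out.

```python
import numpy as np

# Rough sketch with assumed numbers: a decaying 440 Hz note quantized at 16 bits.
fs = 48_000                       # sample rate (assumed)
bits = 16                         # converter bit depth (assumed; the same logic applies at 24)
t = np.arange(0, 2.0, 1 / fs)
signal = np.exp(-4 * t) * np.sin(2 * np.pi * 440 * t)

# Uniform quantizer: full scale is +/-1, so one step is 2 / 2**bits.
step = 2 / 2**bits
quantized = np.round(signal / step) * step

# Look at 100 ms windows along the decay.
for start in (0.0, 0.5, 1.0, 1.5):
    i = slice(int(start * fs), int((start + 0.1) * fs))
    peak = np.max(np.abs(signal[i]))
    bits_in_use = np.log2(2 * peak / step)    # how much of the bit depth the tail actually uses
    err = quantized[i] - signal[i]
    snr_db = 10 * np.log10(np.mean(signal[i] ** 2) / np.mean(err ** 2))
    print(f"t={start:.1f}s  ~{bits_in_use:4.1f} bits in use  SNR ~ {snr_db:5.1f} dB")
```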

Now, you can get good digital echo and/or reverb using Pro Tools or something similar, but that’s using a very fast processor, and it’s not in real time. When it comes to playing with digital equipment in real time, you simply can’t put enough processing into it for the money. The manufacturers would all like you to believe you can, but you can’t. The amount of processing you need to do the job properly is enormous. The difference with Pro Tools, etc., is that you’re not doing it in real time: you can put in a half-second buffer wherever you want, for example, because you’ve got this latency in the processing which makes these hard disk systems work. But when you’re playing live, you can’t play with 30 or 40 milliseconds of latency: it’s just not quick enough.
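
To put the real-time point in numbers (the buffer sizes and sample rate below are just assumed examples, not anyone’s actual settings), the latency of a buffer is simply its length divided by the sample rate:

```python
# Assumed example numbers: buffer latency = buffer length / sample rate.
sample_rate = 44_100   # Hz

for frames in (64, 256, 2048, 22_050):   # 22,050 frames is the half-second buffer mentioned above
    latency_ms = 1000 * frames / sample_rate
    print(f"{frames:6d}-frame buffer -> {latency_ms:6.1f} ms")
```

A 2,048-frame buffer already lands in the 40-odd millisecond range I’m complaining about, and the half-second of lookahead a mixing rig can afford is simply not on the table when you’re playing live.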

“But Nick, why don’t you also hate digital video?”

Because video is totally different. The reason digital works with video is that each pixel is surrounded by information from the pixels around it, so you can make an educated guess as to what the picture should be. You can’t do that with audio, because by the time you’ve guessed, the signal has already changed. Again, music is non-repetitive!

An even worse problem is that the artifacts in digital sound, once it becomes quieter at a certain point in the decay, become very grainy, and this annoys your ears. People call it “coldness”, “hardness”, “brittleness”, “glassiness”, “listening fatigue”, and other tactile, haptic names. This is precisely because the brain is having to do too much processing. If you are listening to a telephone conversation (or even someone right next to you talking) in a dance club, and it’s very loud, and you have to keep straining to hear what they’re saying, after a couple of minutes you’re going to give up and wait until you can hear one another before you start talking again. This is because the brain is using so much processing power to try to screen out the background noise and isolate the meaningful information. You don’t realize it’s happening, but it is, and it’s very tiring on your brain, and after a while it goes “Oh, fuck it!” It can’t be bothered to deal with it…and shouldn’t be.

I used to have a Fender Mustang digital modeling guitar amp. I think it had about 24 amp models and 30 effects on it, and you’d think a guitarist would want to play with that all day. Yeah? Try it some time! Your ears get tired, you feel like you’re getting a headache, and you’re just not enjoying playing, and you won’t even be able to practice a full hour. But plug into a cheap little 5W tube amp, and you’ll play it till your fingers fall off. This is because your brain is totally dissatisfied with the work it has to do in listening to the digital amp.

So this is not some socially inspired prejudice of mine. Not at all! Your actual brain is worn out by it.

Imagine walking in a forest. You are conditioned, through millions of years of evolution, to detect a stick breaking, in amongst all the other noises, as a danger signal. Say you’re washing the dishes in your kitchen and a piece of glass breaks behind you: you turn around. This is an inbuilt danger signal, and that’s what is happening, without you realizing it. When you hear sounds that are artificial, they stand out to you. When you hear a synthesizer trying to sound like an actual instrument, your brain rejects it. Even if you say it’s real, your brain says “No it’s not.” I’m not saying it’s right; I’m saying you can’t fool it.

And I am by no means against synths either, but no matter how much I want to like Björk’s new album (the music and what she’s singing about are great, or should be), I just can’t listen to it. When I listen to Stratosfear by Tangerine Dream, however, something very different happens. This is because in 1976, TD were using analog synths that were not trying to sound like something, but generating an original sound in their own right. The most important part of that is that the actual sound source was analog. If experimental electronica and prog-rock aren’t your thing, then think about Stevie Wonder’s Music of My Mind or Innervisions. Sounds great, right? Those types of analog sounds, even though they’re synthetic, sit in the mix. You can mix them in. When digital synthesizers such as the DX7 first came out, on the other hand, you could identify them immediately, no matter what instrument they were set to emulate. You’d go, “That’s a DX7. That’s an M1.” Amazing but true, the brain can identify the algorithm the fucking thing was based on and categorize it, just like that! The brain has a huge processing ability, but it doesn’t need to work very hard to know what it likes.
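
For the curious, the DX7 family was built on FM synthesis; here’s a toy two-operator sketch in Python of the kind of rule-governed sideband structure I’m talking about (the real instrument uses six operators and 32 routing “algorithms”, and every number below is just an assumed illustration):

```python
import numpy as np

# Toy two-operator FM voice; all values are assumed for illustration.
fs = 48_000
t = np.arange(0, 1.0, 1 / fs)

carrier_hz = 440.0                    # the pitch you hear
ratio = 2.0                           # modulator tuned to a harmonic of the carrier
index = 3.0 * np.exp(-4 * t)          # modulation index with its own envelope

modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
tone = np.sin(2 * np.pi * carrier_hz * t + index * modulator)

# The result is a carrier plus sidebands spaced at multiples of the modulator
# frequency, with levels set entirely by the index envelope: a very regular,
# rule-governed spectrum.
```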
           
The best sounds are the ones that sound “human” or “living”, and by those words I simply mean whatever you mean when you say you hear or feel those things. Reverb should sound like “space”; it should not sound like a glass marble breaking against a marble pillar and showering the marble floor with pixelated bits of microscopic glass in a room that’s pretending to be the size of the Sistine Chapel but you know (because you can hear it) that it’s about 4 cubic inches (roughly the size of your “reverb” pedal). Sound frequencies reflecting against each other behave much like sound reflecting off the walls and objects in a room. True reverb (even using metal plates or a spring unit) can make a sound seem louder even when it isn’t. When you mix two signals together, you’ve got two mathematical variables: one is the sum of the signals, the other is the difference. That’s a pretty big number of sonic variations…virtually infinite, in fact.
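
Here’s a rough Python sketch of that sum-and-difference point (my own toy reading of it, with assumed frequencies): a linear mix carries the sum, the difference is the complementary variable, and anything even slightly nonlinear also generates new components at the sum and difference frequencies.

```python
import numpy as np

# Assumed frequencies; a toy reading of the sum-and-difference point.
fs = 48_000
t = np.arange(0, 1.0, 1 / fs)

a = np.sin(2 * np.pi * 440 * t)    # first signal
b = np.sin(2 * np.pi * 523 * t)    # second signal, roughly a C above

mix_sum = a + b                    # what a linear mix bus carries
mix_diff = a - b                   # the complementary "difference" variable

# A stage that multiplies the two (as any slightly nonlinear circuit effectively
# does) produces components at the sum and difference frequencies:
# 440 + 523 = 963 Hz and 523 - 440 = 83 Hz.
product = a * b
```

With a whole mix of constantly shifting signals instead of two steady tones, those interactions pile up fast, which is the “virtually infinite” variation I mean.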

To fully enjoy such possibilities, you need a huge bandwidth, one you will never be able to get with digital tech. If you’re making an analog mixing console, the actual mixing amplifier needs to have a frequency response of up to 300 kilocycles, because it’s mixing together all the tracks, and it couldn’t mix them together properly unless it had that kind of bandwidth. You would lose all the “air” (another spatial, tactile phrase) and the definition between the signals.
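
A standard back-of-the-envelope way to see why that bandwidth matters (my illustration, assuming a simple single-pole response rather than any particular console): the 10-90% rise time of such a stage is roughly 0.35 divided by its bandwidth, so the wider the bandwidth, the faster it can follow a transient.

```python
# Assumed single-pole response: 10-90% rise time ~ 0.35 / bandwidth.
for bandwidth_hz in (20_000, 100_000, 300_000):
    rise_time_us = 0.35 / bandwidth_hz * 1e6
    print(f"{bandwidth_hz / 1000:5.0f} kHz bandwidth -> ~{rise_time_us:4.1f} us rise time")
```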

“But Nick, everyone knows you can’t hear anything higher than 20kHz!”

You don’t know what you’re talking about. They say that, yeah, but what they’re talking about is a sine wave. Music is not a sine wave! Music is non-repetitive! What I’m talking about are the subtle variations in the music, and you can hear those at frequencies far higher and lower than you even know you can. And that’s an important point that doesn’t get looked at much: things happening in a sonic realm beyond the capabilities of our hearing can still affect the reproduction of the parts of the sound we do hear. I can’t say it enough: it’s amazing what you perceive. One of the most frequent comments you get from engineers when they use digital on a microphone is that they lose what they call the “air” around a recording. You hear the actual sound, but you don’t hear the ambience (even in so-called “ambient” music, ironically enough), and one of the basic tricks of good engineering is capturing the ambience. Once you’ve lost detail in an audio signal, it’s gone forever. And maintaining that definition is the whole point of electronic engineering. Digital kills the whole ballgame.
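
One concrete, easy-to-check sense in which content above the audible band changes what reaches your ears (my own illustration with assumed numbers, and only one of the mechanisms in play): anything above half the sample rate that isn’t filtered out folds back into the audible band when it’s sampled.

```python
import numpy as np

# Assumed numbers: a 30 kHz component sampled at 44.1 kHz without filtering
# folds back ("aliases") to 44,100 - 30,000 = 14,100 Hz, which is clearly audible.
fs = 44_100
n = np.arange(fs)                                # one second of samples
aliased = np.sin(2 * np.pi * 30_000 * n / fs)

spectrum = np.abs(np.fft.rfft(aliased))
freqs = np.fft.rfftfreq(fs, 1 / fs)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~14100 Hz
```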

Since our brains have so much ability to process music, and yet are so easy to figure out in terms of what works for them, we should respect them and not feed them crap. All you experimental sound artists out there who are using digital, please do us all a favor and throw that rubbish away. Or sell it, if you must (I certainly can’t afford not to sell things I can’t use anymore), and invest the proceeds in stuff that makes real sound. No sound produced by digital processing is “music”. It’s noise. If that’s what you want to do, fine, but call it what it is, and also be aware that that noise is not good for your central nervous system or for those of the people you’re inflicting it on. If you want noise that’s actually musical, try listening to the Grateful Dead making “Feedback” in 1968, or the aforementioned Hendrix.

As musicians, we all got sucked in by the promise of awesome technology that turned out to be not that cool. We spent a lot of money, invested a lot of time and energy, and it can be hard to let go. It doesn’t get easier the longer you hold on. Let go NOW.