Dynamix Productions, Inc.

A Sound Education

2-Bits, 4-Bits, 6-Bits...

Low Bit Rate Sucks

"People don't appreciate music any more. They don't adore it. They don't buy vinyl and just love it. They love their laptops like their best friend, but they don't love a record for its sound quality and its artwork."

Laura Marling, musician


We love convenience. Drive thrus, same-day delivery, automatic transmissions, instant coffee. Uh, maybe not that last one. Convenience often drives technology. And when it does, something has to go. What are you willing to give up for convenience? Taste, comfort, money, quality?

Convenience also influences new audio technology, and the result is portability, because we are a society on the go. So what did we give up to take Elvis along for the ride? In the early days of records, players got smaller and smaller so they could be moved from room-to-room, house-to-house, and even house-to-car. As the players got smaller, so did the sound. In the 1950's, engineers threw away the large vacuum tubes (and the warm sound) in radios for the minuscule transistor. Now you could hold Elvis in your hand. In the 60's, a wonderful little pocket-sized storage unit called the cassette tape came along that allowed you to take 2 or 3 records' worth of Elvis with you - but not the big sound.

And then came the iPod. Apple wasn't the first portable digital file player, but they made it a household name. Small device, small earbuds, small audio files - what's not to love? I admit that as a fan of convenience, I'm a huge fan of the iPod. The mp3 had been around for a while when the iPod came to town. This unique way of compressing large audio files down to smaller ones was created to speed up file transfers (remember, we were still using pokey dial-up modems at the time). So we sacrificed audio quality for speed.

At least Apple tried to address the loss-of-quality issue by championing a newer codec (coder-decoder). The AAC (Advanced Audio Coding) codec offers much better audio quality at even smaller file sizes, plus it does much more than the mp3. If it's superior to an mp3, why isn't it more popular? Because Apple wants to sell Apple products. They usually keep tight control on their technology, but have relaxed a little on AAC. You'll find it on YouTube, Nintendo and PlayStation consoles, Wiis, and many smartphones and car stereos. But it still isn't as popular as the venerable mp3. Sorta sounds like the old VHS vs. Betamax war, doesn't it?

The reason for all the audio codec wars is to save time and space. Not something Arthur C. Clarke would lay out in a textbook, but something of convenience - faster downloads and more tunes in your pocket. At ground zero in this war is the bit.

More bits in a digital media signal generally means higher fidelity. But convenience wins when you have fewer bits. Fewer bits, less time and space. The digital audio CD spits out 1.4 million bits per second of data (1,411 kbit/s). The highest quality mp3 produces 320 kbit/s - or 23% of what a CD does. Does that mean the sound quality is only 23% as good? It all depends upon your perception.
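
If you want to check that math yourself, here's a quick back-of-the-envelope sketch in Python (illustrative only; real encoders add framing and metadata overhead, so actual file sizes differ a little):

```python
# Rough bit-rate math for an audio CD vs. a 320 kbit/s mp3 (raw PCM bits only)
sample_rate = 44_100      # CD samples per second
bit_depth = 16            # bits per sample
channels = 2              # stereo

cd_bitrate = sample_rate * bit_depth * channels   # 1,411,200 bit/s
mp3_bitrate = 320_000                             # highest common mp3 rate

print(f"CD:  {cd_bitrate:,} bit/s")
print(f"mp3: {mp3_bitrate:,} bit/s ({mp3_bitrate / cd_bitrate:.0%} of the CD rate)")
```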

Mp3, AAC, Dolby AC3, and all the rest use perceptual coding technologies. Basically, what's really important gets less compression, and what isn't gets hit heavily or thrown out altogether. Think of it as a stage play with real props on stage and a painted scenery backdrop. We trick the mind into thinking something faked is real. In audio codecs such as the mp3, the parts of the sound that take up the most file space (like the bass) are highly compressed. When played back, those parts are faked, just like that backdrop. A long time ago, someone in a computer lab decided how much bass you won't really hear.
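
To make the stage-prop analogy a little more concrete, here's a toy Python sketch. It is not the mp3 psychoacoustic model, just the general idea: spend the "bit budget" on the strongest parts of the spectrum and quietly drop the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs                        # one second of audio

# A loud 440 Hz tone, a much quieter tone nearby, and faint background noise
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.02 * np.sin(2 * np.pi * 450 * t)
          + 0.005 * rng.standard_normal(fs))

spectrum = np.fft.rfft(signal)
keep = 8                                      # "bit budget": keep the 8 strongest bins
threshold = np.sort(np.abs(spectrum))[-keep]
compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

reconstructed = np.fft.irfft(compressed, n=len(signal))
kept = np.count_nonzero(compressed)
error = np.max(np.abs(signal - reconstructed))
print(f"kept {kept} of {spectrum.size} bins, max reconstruction error ≈ {error:.3f}")
```

Almost all of the spectrum gets discarded, yet the reconstruction stays close to the original - the loud tone dominates what you'd perceive, so that's where the "bits" go.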

When you stick that audio CD into your computer to make an mp3, you must make a few decisions that will affect the quality of your future entertainment. Do you want small size, or big sound? Choosing a small bit rate (like 64k, 96k, or even 128k) will reduce the file size considerably, but throw out a lot of those important stage props. Detail is lost. When it's played back, it may sound watery, jingly, or muffled - not quite the real thing. It's kind of like a sloppy paint-by-numbers scenery backdrop. But if you use a higher bit rate like 320 kbit/s, more detail is preserved. Better yet, use a modern codec like Apple's AAC to preserve even more.

Of course bit rate isn't the only deciding factor in audio quality, but it's the biggest. Consider this. A full-fledged cinematic motion picture is recorded and mixed at 96 kHz, 24-bit, 7.1 surround - 18.4 million bits per second. An mp3 on your iPod is probably encoded at 128 thousand bits per second. That's less than seven-tenths of a percent of that movie sound. That's like Weird Al Yankovic vs. The Avengers. Bits will be flyin'!
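
The same quick math for the movie comparison (again a sketch, counting raw PCM bits only):

```python
# 96 kHz, 24-bit, 7.1 surround (8 channels) vs. a 128 kbit/s mp3
movie_bitrate = 96_000 * 24 * 8      # 18,432,000 bit/s
mp3_bitrate = 128_000

print(f"Movie mix: {movie_bitrate:,} bit/s")
print(f"mp3:       {mp3_bitrate:,} bit/s "
      f"({mp3_bitrate / movie_bitrate:.2%} of the movie mix)")
```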


Did You Know?

The mp3 format is, unfortunately, a standard file format for sending audio over the internet. Even with blazingly fast internet connections, many radio broadcast facilities still prefer commercials and programs to be sent in the mp3 format. Once these files are downloaded, they are often ingested into the station's audio file server, which recompresses them into a new compressed format. The original audio has been compressed twice at this point.

If the radio station's transmitter is at a remote location, the main audio signal is often digitally compressed over a transmission line from the studios to the transmitter site. The original audio has now been compressed three times.

If the radio station is transmitting a digital signal such as HD Radio or satellite, the original audio has now been compressed four times. If the original audio program or commercial contained any material that was in mp3 format, such as the voice-over or music, it has now been compressed five times.

This is a lot like playing "telephone" in grade school - but in different languages for each person. Each interpretation and retelling depends on who is hearing and retelling the story. A lot can be misinterpreted.

Tech Notes

  • The mp3 codec is formally MPEG-1 Audio Layer III, introduced in 1993; an extension, MPEG-2 Audio Layer III, followed in 1995.
  • The mp3 format was developed by the Fraunhofer Institute (Fraunhofer IIS) in Erlangen, Germany. It is actually a brand, and for many years including it in software or devices required a paid license.
  • The mp3, AAC, and AC3 use "lossy" compression, meaning audio information is "lost" when encoded.
  • There are "lossless" codecs that successfully reduce file size but retain 100% of the audio information. Some of these codecs are Apple Lossless (ALAC), FLAC, ATRAC, HD-AAC, and WMA Lossless.
  • Compression codecs take advantage of "perceptual" coding, first described in 1894 by American physicist Alfred M. Mayer, who found that a tone could be rendered inaudible by another tone of lower frequency.
  • Small file size is the "pro" of an mp3. Decoding is the "con." It takes a lot more processing power to decode and play an mp3 than playing the original uncompressed audio format.
  • Suzanne Vega's "Tom's Diner" was chosen as a benchmark during the development of the mp3. It is considered the "Mother of the mp3."
Neil Kesterson

Recording History


“There is a time for many words, and there is also a time for sleep.”
Homer, The Odyssey



If you've read The Odyssey or The Iliad, then you know why they've been literary classics for almost 3,000 years. But did you know they date to the earliest origins of the alphabet? It's believed that Homer's poems and speeches were so revered that early scribes dedicated themselves to writing them down. In fact, half of all Greek papyrus discoveries contain Homer's works. Homer must have been one cool dude to influence all of Western literature.

Now put on your time-travel caps and flash forward to 1933. It's the early years of recording audio. Huddie Ledbetter was incarcerated in a prison in Louisiana when a father and son recording team came by. John and Alan Lomax were traveling the South on the dime of the Library of Congress to record and document African-American folk and blues musicians. John had found the LOC collections woefully inadequate and got funding to buy a "portable" (315 lbs.) disc recorder. There in the Angola Prison was the singer and 12-string guitar player better known as "Lead Belly." Their collaboration over the next several years cemented Lead Belly's place as a folk and blues legend.

Humans have long been documenting events with paintings on cave walls; sculptures; writing on papyrus; photographs; records and tapes; and film and video. One way we're doing it today is with oral histories. "Oral history" is a term used to describe recording someone relating personal experiences on audio or video. It's often used to supplement documents, pictures, artifacts, and visuals about an event or time period. Scholars say oral histories are a unique part of understanding history. Just hearing speech inflections and emotion in one's voice speaks volumes that printed text cannot.

Searching for key words in printed or digitized text is easy. Searching recordings can be extremely time-consuming. In the past, recordings were made on disc or tape, so one had to listen to the whole recording. If they were lucky, a transcript existed. Recordings were usually only transcribed if funds, personnel, and time were available. But most recordings sit in boxes collecting dust.

These days, oral histories are recorded digitally, an obvious quality advantage over analog. But the not-so-obvious advantage is searchability. Doug Boyd of the Louis B. Nunn Center for Oral History at the University of Kentucky has developed a novel search method called OHMS (Oral History Metadata Synchronizer). Any content in the Nunn Center's online files can be searched using current speech recognition algorithms. It's not perfect, as Boyd points out, but it's a step in the right direction.

Anybody that's used Siri or Google voice search will understand that speech recognition is not perfect. OCR (Optical Character Recognition) was at this stage about a decade ago. Now, document scanners can scan, ingest, and convert paper text to a digital file in seconds with few errors. But the complexity of speech patterns, accents, and recording quality will demand more intricate software solutions. This will come, and oral history repositories will reap the benefits.

All this bleeding-edge technology like the alphabet and records leads me to wonder what the next thing is. Thought recording? Memory mining? Oh boy, now everyone will know I really wasn't that cool in high school. I was really a nerd. Oh wait, you already figured that out.

Did You Know?


Four generations of the Lomax family have contributed immensely to American music through recordings, archives, productions, management, and journalism.
  • John Lomax grew up in Texas in the late 1800's and was influenced by cowboy folklore and songs.
  • John worked variously as an English professor, college administrator, and banker.
  • John co-founded the Texas Folklore Society, a chapter of the American Folklore Society.
  • During John's travels through the South recording folksongs in the 1930's, the entire Lomax family was heavily involved in the recording and research.
  • Alan Lomax, John's son, continued his father's legacy of archiving folksongs. He was also an ethnomusicologist, writer, and filmmaker.
  • Grandson John Lomax III is a music journalist and artist manager, having represented Townes Van Zandt and Steve Earle.
  • Great-grandson John Nova Lomax is a music journalist and author.

Tech Notes


  • John Lomax's first field recordings were on wax cylinders. Fidelity was inferior to disc recordings, but disc recorders were not yet portable. Instead of a microphone, performers played into a bell or horn.
  • John and Alan Lomax used some of the earliest disc recorders in the field. These were uncoated aluminum, in which the heavy vibrating needle would etch the surface. The discs were robust, but the grooves were shallow, and thus noisy.
  • Alan started using lacquer-coated aluminum discs in the mid-30's. Fidelity was better, but the recording process was difficult. Spirals of shed lacquer and aluminum had to be continually brushed and blown away from the needle.
  • Much like the immediate feedback that digital cameras give us today, disc recording let Lomax play the record back to the musician immediately.
  • These early disc recorders were so heavy that recordists often installed them in the back of old ambulances. They required alternating current (AC), so Lomax often used his car battery in conjunction with a portable transformer to power the recorder.

Neil Kesterson

Star Wars With One Major Piece Missing

This is a great example of how important sound is in film.

Circuit-Bending


"I don't appreciate avant-garde, electronic music. It makes me feel quite ill."

Ravi Shankar

When you think of electronic music, you often think of the straightforward synthesizer, electric piano, or loops and samples. But some musicians like to rewire, alter, or downright reconstruct electronic equipment to make sounds it was never intended to make. At the forefront of these experiments was the BBC's Radiophonic Workshop, a special music lab that gave us unique sounds and music for hit TV shows such as Doctor Who.

Read More...

A Sound Education

“Education is the kindling of a flame, not the filling of a vessel.”
Socrates

What young person really knows what they want to be when they grow up? Very few of my childhood friends are still on the path they laid out early in life. Most of us have zig-zagged through careers, including me. Unlike today, there were very limited educational opportunities if you wanted to be an audio engineer in the 70's like I did. Most recording engineers started as musicians or disc jockeys and fell into the job. As a teenager in the late 70's, I was into music more than anything. I hung out in radio and TV stations and got my first exposure to a "real" recording studio in a friend's basement. I was a child of tape. In fact, as a child I ran around my house with a cassette recorder taping anything that I found interesting. I would often shove a microphone into the face of a shy family member, who would naturally be at a loss for words. But when a teen nears graduation, the pressure builds to make that big life decision - "what will I grow up and be?"


I had many interests, but I gravitated towards the music option because music was fun and I was a decent trombone player. My grandmother, however, was blunt about my choice, "You'll never make it in music!" One day during this teen-fueled, soul-searching time, I saw an ad in a magazine touting a new recording school in Chillicothe, Ohio called The Recording Workshop (RECW). It was near us, so Mom and Dad took me up to see it. I was in heaven. I distinctly remember the crystal clear sounds being pumped from the speakers in the carefully soundproofed studio. RECW was the first of its kind, a concentrated curriculum aimed at the art of recording. Several options were available, but the core schedule was only six weeks long and cost about as much as a few semesters at a college. Money was always tight at my house, so my parents and I decided that a college education would be a better choice.

So, with student loans and a scholarship in hand, off to college I went. I majored in music, going to three different schools. After a while I realized that my grandmother might have been right. I wasn't a virtuoso, trombone gigs were rare, and I didn't want to teach music. So I seized the first job opportunity to work in a recording studio. I was mentored "on-the-job," as many of us were then. I just forged ahead, soaking up all I could by asking questions, reading books, and making a lot of mistakes. Do that year after year after year, and the schools that didn't understand our craft back then start asking you to teach their students. I've been very fortunate to share my knowledge with many young people that are eager to learn the art of recording.

Recording was not part of any college program when I started school. RECW was the only option, other than broadcast and engineering schools that focused more on broadcasting, announcing, and electronics training. It wasn't until the mid-1980s that some music schools started to add programs in music management, with some training in recording arts. Slowly, schools started to view our industry less as "button pushers" and more as a discipline. This is when the giant in the audio education world started to stomp on the naysayers. Full Sail University brought credibility to recording arts by offering 2-year programs, accreditation, and certificates. They pushed standardized testing, much like the medical and law communities demand. Their graduates began working in the best studios, usually able to hit the ground running. They now offer 48 programs, including recording, video, media, entertainment, and marketing. Credentials range from certificates to associate, bachelor's, and master's degrees. And get this - right now, there are almost 16,000 students. Many graduates are at the top of their field and are highly respected.

Full Sail and RECW aren't the only schools out there now. I'm grateful that young people have so many options when it comes to audio education. Many also supplement audio with video, game design, web design, and announcing. Plus, many schools almost demand that students learn the basics of business. The well-rounded recording engineer is not a fantasy anymore, it's a reality.

What did all those years in music school teach me about recording? I learned how to listen and differentiate tones from each other. I concentrated on music theory and arranging, so I learned where to place tones against each other. I learned how to perform in front of people and how to continue playing when you make a mistake. I learned the discipline of practice, practice, practice, and that perfect practice makes perfect. I learned how to conduct and direct. I learned how to follow. I learned how to cooperate and blend with other players. I also learned that notes on a page are just lines on a map. It's the musician that turns the notes into music.

Did You Know?

  • There are nearly 200 schools for recording in 34 states in the U.S. Most are accredited programs offering certification or bachelor's degrees.
  • The Recording Workshop has had students from more than 70 countries attend its programs.
  • Many recording engineers use their studio experience as a springboard to the more lucrative position as record producer.
  • Some engineer/producers of note include: Phil Ramone (Paul Simon, Billy Joel, Ray Charles); Roger Nichols (Steely Dan); Al Schmitt (Jefferson Airplane, Jackson Browne, Neil Young); and Eddie Kramer (Jimi Hendrix, Kiss, Carly Simon).
  • The Audio Engineering Society (AES) has a strong audio education foundation that provides learning, networking, and other opportunities for students around the world.
  • Sound mixing and engineering awards are presented by all the major entertainment organizations, including the Academy Awards (Oscars), Grammy Awards, and Emmy Awards.

Tech Notes

There is no certification required to work as an audio engineer, although certain training often helps in securing a new position. For example, one can be "Pro Tools certified" by taking a class or workshop that teaches the basic use of this most common audio software (Avid).

Some audio engineering job opportunities, like those in broadcasting, government or other industries with standards and accountability, require testing. These tests are generally technical in nature, but may include situational problems to solve.

Some organizations and societies, such as the Society of Broadcast Engineers (SBE), offer certification for their members to facilitate their job search. Most of these are technical certifications.

Audio engineer can mean many different things. It may include recording, radio deejaying, live sound, and even broadcast transmitter design. Many unofficial labels for specialists have been coined in recent decades to better describe their function.
  • Sound Designer usually describes someone who creates soundscapes for film, video, or stage performances.
  • FOH ("front of house") Engineer is a live sound engineer responsible for the overall amplification to the audience of a live event.
  • Monitor (or Foldback) Engineer is a live sound engineer responsible for what the stage performers and musicians hear.
  • Broadcast Engineer is responsible for anything from circuit design to complete radio/TV facility design.
  • Acoustic Engineer troubleshoots and designs spaces for recording, broadcasting, or performing. Many are also architectural acoustic engineers that help design public, commercial and living spaces.
  • Electroacoustic Engineers design microphones, loudspeakers, headphones, and mobile technology.


Neil Kesterson

3D Sound on the Right Track


“Even if you're on the right track, you'll get run over if you just sit there.”

Will Rogers




It's said that when an early motion picture was first shown to the public, women fainted and men ducked from an approaching train. The director made a bold new decision that would alter the course of filmmaking for the next century. Instead of just placing the camera in front of all the action like an audience watching a stage, the director moved the camera to a new position - within the action - to create perspective. That's been happening in filmmaking ever since. But the same has been happening in sound as well. And with emerging technologies, virtual 3D sound is now here.

Read More...

Get in the Groove!


The new generation is discovering what the old generation stopped loving - LPs. LP sales are the highest they've been in 22 years. Records aren't just for hipsters anymore; everyone, including the older generation that gave them up, is groovin' to them.

Read More...

The Color of Sound

“Within You Without You,” The Beatles
1967



How would you describe a sound to someone without using descriptors that are unique to sound, like: loud, bassy, shrill, whining, atonal, or noisy?

Not a problem, because we most often describe a sonic experience with words related to our other senses: sharp, warm, angular, raspy, piercing, even, warbling, soft, smooth, or flat.

What about blue? I think of that as more of a style of music or mood instead of a type of sound. Why don't we use more colors to describe what we hear? Probably because a "yellow" sound could be cowardly. A "green" sound may be eco-friendly. A "purple" sound is probably regal. A "brown" sound - well, we'll leave that one alone.

What if we could see sound? Aside from graphical representations of sound like waveforms and meters, we can't just look at an orchestra and see sounds flying out of the trombones. I wish we could watch the beautiful tones flow from Itzhak Perlman's Stradivarius.

But we can - sort of. As reported by NPR, we can see certain sounds using a technique invented in the mid-19th century. Click on the link above to read about and watch a short video describing this process to get a clearer picture. To simplify, scientists watch the disturbance of heat waves by sound. Ever look down a highway on a hot summer day and see the heat creating wavy images? Scientists have used this phenomenon to "see" sneezes and aircraft wing turbulence. But Michael Hargather at New Mexico Tech uses it to study explosives.

So what's next? I would love to be able to put on some goggles and see sounds and where they're coming from. Loud sounds would be bright. Bass would be blue, treble would be white, and green, red, and yellow would fill in the gaps. Imagine seeing green waves and ripples emanating from the violas, bubbles of blue from the tuba, and distinct columns of yellow and white from the violins. It would be like Peter Max was the conductor. With technology advancing at such a rapid rate, this may not be so far-fetched in our lifetimes. Color me crazy.

Did You Know?

  • "White noise" in sound engineering describes randomly generating all the sounds in the frequency spectrum. SInce the sounds aren't generated at the same time, they are measured over a period of time. Each sound is at a consistent level.
[Figure: white noise frequency spectrum]
  • White noise sounds similar to a radio that is tuned to no station.
  • White noise is often used in large offices to mask sounds from workers, computers, and other office machinery. People also use white noise generators to aid in sleeping.
  • "Pink noise" is similar to white noise, but decreases in intensity each ascending octave.

[Figure: pink noise frequency spectrum]


  • Pink noise is primarily used to measure the output of an audio device.
  • Sound engineers play pink noise over monitor systems to check frequency response and level of speakers. If measurements show that a speaker produces some frequencies differently than the pink noise (more bass for example), then it is considered to have a "colored" response. Pro audio speaker manufacturers strive for a "flat" response from their products. This way an engineer isn't fooled into compensating for the difference while mixing.
  • Live sound engineers use pink noise to reduce feedback and get maximum performance from speakers.
  • Other types of noise used in analysis are violet, brown(ian), gray, and blue. Other informal names for noise used in measurement are red, green, black, noisy black, and noisy white.
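
For the curious, here's a minimal Python sketch of both kinds of test noise: white noise straight from a random generator, and an approximate pink noise made by shaping the white spectrum so its power falls off about 3 dB per ascending octave. It assumes NumPy and is not a calibrated test-signal generator.

```python
import numpy as np

fs = 48_000                          # sample rate
n = fs * 2                           # two seconds of noise
rng = np.random.default_rng(0)

# White noise: equal average power at every frequency
white = rng.standard_normal(n)

# Pink noise (approximate): scale the white spectrum by 1/sqrt(f),
# so power falls roughly 3 dB per ascending octave
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])   # leave the DC bin alone
pink = np.fft.irfft(spectrum * scale, n=n)
pink /= np.max(np.abs(pink))         # normalize to avoid clipping on playback

print(f"white RMS: {np.std(white):.2f}, pink RMS: {np.std(pink):.3f}")
```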


Tech Notes

Reducing feedback in a live sound situation is very tricky, especially if good sound performance is desired. The "squeal" you hear when a microphone is turned on is from a buildup of a certain frequency. It's usually the point at which the microphone and speaker are the most efficient. If one points a microphone at the same speaker that is amplifying it, then serious feedback occurs. Most speakers are placed in front of or beside performers so there is no direct bleed back into the microphone. If speakers are placed behind the performers (think The Who), then eliminating feedback is a bigger chore.

How do you eliminate feedback? Let's use the simplest set-up as an example: one microphone and one speaker. A graphic equalizer (GEQ), a device that boosts or cuts the signal in fixed frequency bands (typically octave or third-octave bands), is inserted after the microphone channel and just before the amplifier. The engineer slowly raises the amplifier level until the first inkling of feedback. Using the GEQ, the engineer locates the offending band (say, 630 Hz) by briefly boosting it, which makes the feedback worse and confirms the culprit. That band is then cut until the feedback goes away.

Next, the amplifier is turned up a little more until the next inkling of feedback occurs, usually at another frequency, which is then reduced. These steps are repeated over and over until the amplifier is at a suitable level without feedback. Of course, computer technology has greatly simplified this process with devices that rapidly reduce feedback "on-the-fly." And with software, engineers designing permanent PA systems for large venues can even predict where feedback will occur before installation. They can then program in filtering or make changes to the architecture, equipment, or speaker placement.
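
As a minimal sketch of the idea (not a production feedback suppressor), here's how one could notch out the 630 Hz example above in Python, assuming NumPy and SciPy are available. A real system finds the ringing frequency automatically rather than hard-coding it.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48_000            # sample rate
f_feedback = 630.0     # offending frequency found with the GEQ sweep
Q = 30.0               # narrow notch so neighboring frequencies are left alone

b, a = iirnotch(f_feedback, Q, fs)

# Demo signal: program material (a 200 Hz tone) plus a 630 Hz "squeal"
t = np.arange(fs) / fs
program = np.sin(2 * np.pi * 200 * t)
squeal = 0.5 * np.sin(2 * np.pi * f_feedback * t)
cleaned = lfilter(b, a, program + squeal)

print(f"peak before notch: {np.max(np.abs(program + squeal)):.2f}")
print(f"peak after notch:  {np.max(np.abs(cleaned[fs // 10:])):.2f}")  # skip filter settle time
```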

Neil Kesterson

Leftover Beethoven

With the recent news that the Library of Congress is inducting 25 entries into its National Recording Registry, I was excited to see U2, Linda Ronstadt, and Isaac Hayes get their due. Perusing the list, I saw a very influential (at least personally) album - Copland Conducts Copland: Appalachian Spring (1974).

I was a music major in college and always found Aaron Copland to be the quintessential American composer. He seemed to capture what Americans idolize about America: hope, boldness, charm, intrepidness, looking forward but not forgetting the past. Read More...

The Birth of Recording

When Recording Writes the Music

When commercial radio really took off in the 1920's and 30's, it was fueled by advances in recording. You could even say that each drove the other. Early music recordings were mostly documents of what was already being played to live audiences - classical, early jazz, folk, etc. As bands got bigger and louder, the music got more exciting. Dixieland was new, records were all the rage, and radio was just beginning to transport the new sounds across the country, just like the transcontinental railway brought the ideas of the Gilded Age to America a half-century earlier. Read More...