Dynamix Productions, Inc.

A Sound Education

I, Robot


There was a recent AES (Audio Engineering Society) presentation at McGill University in Montreal, Quebec titled "We Are the Robots: Developing the Automatic Sound Engineer." Brecht De Man from the Centre for Digital Music, Queen Mary University of London discussed the state of automatic mixing. I don't know whether to be happy that some automation is on the way, or alarmed that I may become obsolete.

We have had some form of automatic mixing since the 1970s, but no system has been great. Automatic mixing is useful for live events, such as seminars and panel discussions. For amateur podcasters, it would be a helpful plug-in. For broadcasters, it would be welcome, since stations are so understaffed these days. But is a simple one-button device or program still a long way off?

If you look at the rapid development of software in the last 25 years, it doesn't seem too far-fetched. Digital cameras have "scenes" programmed in that contain thousands of examples of lighting combinations. The CPU compares the matrix metering it is seeing with those examples and chooses the closest one. Voilà, your picture is perfectly exposed. Well, in theory at least.

Google now has cars that drive themselves. The military has bombs that guide themselves to their target. I guess you could have an audio engineer plug-in that could give you that perfectly sterile mix.

We engineers have been using tricks for years to improve a mix, or in truth, make things easy. Thank God for automation. Before motorized faders and digital workstations, we had to mix and remix until we got it right. Of course, mixing requires more than just moving faders. It involves controlling peaks, raising low levels, balancing frequencies, gating, dynamic control, and so on. We use plug-ins for these. And many of these plug-ins have hidden features that allow the signal from one source to automatically adjust the signal of another when it gets in the way. So that's a form of robot control, I suppose.

I don't think I'll be replaced in the near future, but it's possible. After all, who reading this thought it would be impossible to have a car that drives itself? I didn't; I've seen Knight Rider. What's next? Robot copywriters? Robot narrators? Robot actors? Maybe even robot clients?

Did You Know?


  • "I, Robot," was Isaac Asimov's groundbreaking short story series in the 1940's. The Three Laws of Robotics forever changed science fiction's view of the robot.
  • The 2004 movie "I, Robot" featured autonomous cars that drove themselves at higher speeds for safety, but could be operated manually.
  • Marco Beltrami composed the soundtrack for the movie "I, Robot" in only 17 days. The music featured 95 orchestral musicians, 25 choral performers, and 0 robots.
  • Modem manufacturer U.S. Robotics derived its name from the Isaac Asimov "I, Robot" short story series (United States Robots and Mechanical Men).
  • The Alan Parsons Project's album "I Robot" was originally intended to follow Asimov's concepts. The comma after "I" was dropped to avoid copyright infringement.

Dynamix Tech Notes

Compression is a secret weapon for audio engineers. A compressor controls dynamic range. That is, it reduces loud signals so the low levels can be brought up. When a signal is sent through a standard compressor, levels above a threshold are reduced by a certain ratio, depending on how aggressive your adjustment is. This reduces the "dynamic range." To understand dynamic range, think of a Jacuzzi that is 3/4 full. When the jets are at full speed, the water splashes up and almost onto the floor. If you reduce the jets to half, the water only splashes up a little bit. Now you can put more water in, but the circulation is reduced.
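For the technically curious, here is a minimal Python sketch of that threshold-and-ratio idea. The threshold and ratio values are made up for illustration, and there is no attack, release, or makeup gain; it shows the math, not a production compressor.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Reduce levels above the threshold by the given ratio.

    A bare-bones gain computer: for a 4:1 ratio, every 4 dB over
    the threshold comes out as only 1 dB over. No attack/release
    smoothing or makeup gain, just the core math.
    """
    eps = 1e-12                                   # avoid log of zero
    level_db = 20.0 * np.log10(np.abs(signal) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)      # amount of gain reduction
    return signal * (10.0 ** (gain_db / 20.0))

# A loud peak (0.9) gets pulled down hard; a quiet passage (0.05) passes untouched.
audio = np.array([0.05, 0.2, 0.9, -0.7, 0.1])
print(compress(audio))
```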

There are many ways to use a compressor. One of my favorites is the "sidechain" mode. Instead of reducing the dynamic range of a track the entire time, it is only reduced when another signal or track is competing with it. Disc jockeys call this "ducking." When they are speaking into the microphone, the music underneath is automatically ducked using sidechain compression. I use sidechain compression on music and sound effects tracks when I'm mixing something that has a ton of tracks competing with each other. I usually make the voice track the "master" that keys the music and effects tracks. When those tracks reach a certain level and the voice is also in that range, they get reduced.

You can also use this technique for an interview. The host can act as the trigger for the other guests' microphones. When the host is talking and someone else speaks, the other microphones get lowered. I wish you could use this in real life.
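Here is a rough Python sketch of the ducking idea, assuming a crude peak follower on the voice (key) signal and placeholder threshold, gain, and smoothing values; a real sidechain compressor would give you proper attack and release controls.

```python
import numpy as np

def duck(music, voice, threshold=0.05, duck_gain=0.25, smooth=0.999):
    """Sidechain-style ducking sketch.

    The voice track acts as the key: when its envelope crosses the
    threshold, the music gain glides down toward duck_gain, then
    glides back up when the voice goes quiet. A single one-pole
    smoother stands in for separate attack and release times.
    """
    out = np.empty_like(music)
    env = 0.0    # crude peak-follower envelope on the key (voice) signal
    gain = 1.0   # current gain applied to the music
    for i in range(len(music)):
        env = max(abs(voice[i]), env * smooth)            # track voice level
        target = duck_gain if env > threshold else 1.0    # duck or recover
        gain = smooth * gain + (1.0 - smooth) * target    # glide toward target
        out[i] = music[i] * gain
    return out
```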


Neil Kesterson
