Posted on May 15th, 2011 at 10:51 PM by aki

An article by Magnús Jensson and Áki Ásgeirsson.
Written in February 2011

A musical instrument has three different components: the controller, the sound generator and the resonance. The controller component is the instrument’s user interface and is the area of human interaction. The sound generator is, for example, a vibrating string, an air column in wind instruments, the membrane of a drum or an electronic oscillator. Resonance is the amplification component, for example the sounding body of string instruments or a loudspeaker.

Note that sometimes the human player himself becomes part of the instrument, either partially, as in the case of brass instruments, where the lips vibrate to produce the sound, or completely, as with singers, where the interface, sound production and resonance are all inside the human body. The role of external acoustics varies in importance, but is in some cases essential to the instrument.

All three parts can be powered by the performer or external sources of energy.

Historically, these components were close together. Ancient instruments are usually self-contained; their interface, sound generators and amplifying bodies are closely connected in space, and there is not much distinction between these parts. For example, a string instrument like the guitar or lute has an interface (the strings) which also generates the sound. The strings are connected directly to a resonating body which amplifies the sound. Drums, voices, ancient flutes and horns show the same clear connection of interface, sound generation and amplification.

With further technical development, these parts have grown apart. One of the most advanced instruments of the 15th-18th centuries was the church organ. Its components are quite separated, with a unified keyboard interface remotely connected to different sets of pipes, creating a distinction between the performance and the audible result. One goal of the organ was to imitate, replace or accompany the instruments of the ensemble, as was later the case with many synthesizers of the 1980s. Organs have also been externally powered since antiquity.

During the 19th century, many musical instruments went through a technical reconstruction. To meet the increasing size of concert halls, they were made more resonant, with a brighter sound spectrum that could cut through the growing orchestras. In addition, 19th century harmony demanded chromatic instruments that could play evenly sounding scales in any key. Non-chromatic, soft, unevenly sounding instruments did not survive.

An instrument like the piano has, like the organ, a clear distinction between interface and sound generation. The keyboard is what concerns the performer; the mechanics of the string-hitting hammers are not relevant to anyone enjoying the music and are hidden from view, inside a black box. There is no longer a visual connection to the strings, only to the interface.

Twentieth-century electronic instruments continued this separation. The Theremin (1920) moves the performer away from the instrument, creating an invisible connection across a distance. The Ondes Martenot (1928) uses a familiar keyboard interface but adds resonating gongs (and strings) to the amplification stage in the speaker cabinet. The speaker is regarded as the body of the instrument, not as a separate “neutral” unit. The electric guitar (1931) likewise uses an amplifier as an indispensable part of the instrument. The power and size of guitar amplifiers and amplification systems have grown considerably over the decades, opening up the possibility of very loud styles of music performed for large audiences. The separation of the amplification component thus became a prerequisite for the ritual of massive musical events.

Besides using external amplification, MIDI synthesizers usually separate the interface component from the sound generator. A synthesizer controller, usually a keyboard, sends note messages to a sound module which renders them as musical sound. With MIDI, it is possible to send musical information over long distances in real time, or as a MIDI file to be played back by different sound generators.
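
To make this separation concrete, here is a minimal sketch in Python of the raw three-byte “note on” and “note off” messages a MIDI controller sends. The helper functions are purely illustrative (not part of any particular library); whatever sound module receives these bytes decides on its own what they will sound like.

    def note_on(note, velocity, channel=0):
        # Status byte 0x90 plus the channel number, then key number and velocity (0-127).
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(note, channel=0):
        # Status byte 0x80 plus the channel number ends the note started above.
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    # Middle C (note number 60) at moderate velocity: three bytes, 90 3C 40.
    print(note_on(60, 64).hex())   # '903c40'
    print(note_off(60).hex())      # '803c00'

The same three bytes can travel down a cable in real time or sit time-stamped inside a MIDI file, which is what allows the interface and the sound generator to be separated in space and in time.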

Since electronic music is today mostly made with computers, much development has gone into new interfaces and controllers. Distribution of music has also shifted from physical objects (vinyl, CDs / concert amplification) to digital form (mp3, wav / headphones and computer speakers). Combined, this further increases the abstraction of musical material in the creative process, as well as detaching the ‘end-user’ amplification stage. The world’s loudspeakers are now open to the creative musician. Perhaps more importantly for creative development, the world’s computational hardware is interconnected and ready to act as sound controllers and generators.