electronic music

electronically produced sounds recorded on tape and arranged by the composer to form a musical composition.
[1930-35]

* * *

Any music involving electronic processing (e.g., recording and editing on tape) and whose reproduction involves the use of loudspeakers.

In the late 1940s, magnetic tape began to be used, especially in France, to modify natural sounds (playing them backward, at different speeds, etc.), creating the genre known as musique concrète. By the early 1950s, composers in Germany and the U.S. were employing assembled conglomerations of oscillators, filters, and other equipment to produce entirely new sounds. The development of voltage-controlled oscillators and filters led, in the 1950s, to the first synthesizers, which effectively standardized the assemblages and made them more flexible. No longer relying on tape editing, electronic music could now be created in real time. Since their advent in the late 1970s, personal computers have been used to control the synthesizers. Digital sampling
composing with music and sounds electronically extracted from other recordings
has largely replaced the use of oscillators as a sound source.

* * *

Introduction

      any music involving electronic processing, e.g., recording and editing on tape, and whose reproduction involves the use of loudspeakers.

      Although any music produced or modified by electrical, electromechanical, or electronic means can be called electronic music, it is more precise to say that for a piece of music to be electronic its composer must anticipate the electronic processing subsequently applied to his musical concept, so that the final product reflects in some way his interaction with the medium. This is no different from saying that a composer should have in mind an orchestra when he composes a symphony and a piano when he composes a piano sonata. A conventional piece of popular music does not become electronic music by being played on an electronically amplified guitar, nor does a Bach fugue become electronic music if played on an electronic organ instead of a pipe organ. Some experimental compositions, often containing chance elements and perhaps of indeterminate scoring, permit but do not necessarily demand electronic realization, but this is a specialized situation.

      Electronic music is produced from a wide variety of sound resources—from sounds picked up by microphones to those produced by electronic oscillators (generating basic acoustical wave forms such as sine waves, square waves, and sawtooth waves), complex computer installations, and microprocessors—that are recorded on tape and then edited into a permanent form. Generally, except for one type of performed music that has come to be called “live electronic music” (see below), electronic music is played back through loudspeakers either alone or in combination with ordinary musical instruments.

      This article covers both early experimentation with electronic sound-producing devices and composers' subsequent exploitation of electronic equipment as a technique of composition. Throughout the discussion it should be clear that electronic music is not a style but rather a technique yielding diverse results in the hands of different composers.

      Historically, electronic music is one aspect of the larger development of 20th-century music, strongly characterized by a search for new technical resources and modes of expression. Before 1945 composers sought to liberate themselves from the main Classical–Romantic tradition of tonal thinking and to reconstruct their thinking along new lines, for the most part either Neoclassical or atonal and 12-tone (a technique in which a composition is built up entirely from a tone row consisting of all 12 notes of the ordinary chromatic scale).

      This pre-World War II period was accompanied by substantial experimentation with electrical and electronic devices. The most important outcome for the composer was the development of a number of electronic musical instruments (such as the Hammond organ and the theremin) that provided new timbres and that laid the technical foundations for the future development of electronic music proper from about 1948 onward. The rapid development of computer technology has had its effect in music, too, so much so that the term computer music is replacing electronic music as the more accurate description of the most significant interaction between the composer and the electronic medium.

      Electronic music is represented not only by a wide variety of 20th-century concert works but also by a substantial literature of theatre, film, and television scores and by multimedia works that use all types of audiovisual techniques. Electronic music seems especially appropriate for theatre and films, where the orchestra is in any case a disembodied, unseen presence heard from a tape or a sound track. Electronic popular music has also won adherents. It has consisted mostly of arrangements of standard popular music for electronic synthesizers, tentative use of electronic alterations by some of the more ambitious and experimental rock groups, and the preparation of recordings by innovative studio techniques.

History and stylistic development

Beginnings
      During the 19th century attempts were made to produce and record sounds mechanically or electromechanically. For example, the German scientist Hermann von Helmholtz traced the wave forms of regular sounds to check the results of his acoustical researches. An important event was the invention of sound recording by Thomas Edison (the phonograph, 1877) and Emile Berliner (the disc gramophone, 1887). These inventions not only marked the beginning of the recording industry but also showed that all the acoustical content of musical sounds could be captured (in principle, if not in actuality at that time) and be faithfully retained for future use.

      The first major effort to generate musical sounds electrically was carried out over many years by an American, Thaddeus Cahill, who built a formidable assembly of rotary generators and telephone receivers to convert electrical signals into sound. Cahill called his remarkable invention the telharmonium, which he started to build about 1895 and continued to improve for years thereafter. The instrument failed because it was complex and impractical and because it could not produce sounds of any magnitude: amplifiers and loudspeakers had not yet been invented. Nevertheless, Cahill's concepts were basically sound. He was a visionary who lived ahead of his time, and his instrument was the ancestor of present-day electronic music synthesizers.

      The Italian Futurist painter Luigi Russolo was another early exponent of synthesized music. As early as 1913 Russolo proposed that all music be destroyed and that new instruments reflecting current technology be built to perform a music expressive of industrialized society. Russolo subsequently did build a number of mechanically activated intonarumori (noise instruments) that grated, hissed, scratched, rumbled, and shrieked. Russolo's instruments and most of his music apparently vanished during World War II.

Impact of technological developments
      Between World War I and World War II developments occurred that led more directly to modern electronic music, although most of them were technically, rather than musically, important. First was the development of audio-frequency technology. By the early 1920s basic circuits for sine-, square-, and sawtooth-wave generators had been invented, as had amplifiers, filter circuits, and, most importantly, loudspeakers. (Sine waves are signals consisting of “pure tones”—i.e., without overtones; sawtooth waves comprise fundamental tones and all related overtones; square waves consist only of the odd-numbered partials, or component tones, of the natural harmonic series.) Also, mechanical acoustical recording was replaced by electrical recording in the late 1920s.
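
      The harmonic recipes given in the parenthetical can be made concrete in a few lines of code. The following Python sketch builds each wave form additively from sine-wave partials; the sample rate, fundamental, and number of partials are arbitrary choices for illustration, not values from the text.

```python
import numpy as np

rate, f0, n = 48_000, 440.0, 30       # assumed sample rate, fundamental, partial count
t = np.arange(rate) / rate            # one second of sample times

sine = np.sin(2 * np.pi * f0 * t)     # pure tone: fundamental only, no overtones

# Sawtooth: fundamental plus every overtone, the kth partial at amplitude 1/k.
saw = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n + 1))

# Square: only the odd-numbered partials of the harmonic series, also at 1/k.
square = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n + 1, 2))
```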

      Second was the development of electromechanical and electronic musical instruments designed to replace existing musical instruments—specifically, the invention of electronic organs. This was a remarkable achievement and one that absorbed the attention of many ingenious inventors and circuit designers. It should be stressed, however, that it was the objective of these organ builders to simulate and replace pipe organs and harmoniums, not to provide novel instruments that would stimulate the imaginations of avant-garde composers.

      Most electromechanical and electronic organs employ subtractive synthesis, as do pipe organs. Signals rich in harmonic partials (such as sawtooth waves) are selected by the performer at the keyboard and combined and shaped by filter circuits that simulate the formant, or resonant-frequency, spectra—i.e., the acoustical components—of conventional organ stops. The formant depends on the filter circuit and does not relate to the frequency of a tone being produced. A low tone shaped by a given formant (a given stop) is normally rich in harmonics, while a high tone normally is poor in them. Psychologically, one expects this from all musical instruments, not only organs but also orchestral instruments.
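
      The fixed-formant behaviour just described is easy to imitate digitally. The sketch below is a minimal stand-in for one organ formant, assuming an invented two-pole resonator with a made-up centre frequency and bandwidth; it is not a model of any actual organ circuit. Whatever pitch is fed in, the resonance stays put, so a low tone keeps many partials inside it and a high tone keeps few.

```python
import numpy as np

def formant_filter(x, f_res=800.0, q=5.0, rate=48_000):
    """Two-pole resonator standing in for one organ formant; the
    resonance stays at f_res no matter what pitch is fed in."""
    w = 2 * np.pi * f_res / rate
    r = np.exp(-np.pi * f_res / (q * rate))    # pole radius from bandwidth f_res/q
    a1, a2 = -2 * r * np.cos(w), r * r
    y1 = y2 = 0.0
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        yi = xi - a1 * y1 - a2 * y2            # unnormalized difference equation
        out[i] = yi
        y1, y2 = yi, y1
    return out

# The same fixed filter applied to a low and a high sawtooth-like input.
t = np.arange(48_000) / 48_000
saw = lambda f: 2 * ((f * t) % 1.0) - 1.0      # naive sawtooth at frequency f
low, high = formant_filter(saw(110.0)), formant_filter(saw(880.0))
```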

      Some electronic organs operate on the opposing principle of additive synthesis, whereby individually generated sine waves are added together in varying proportions to yield a complex wave form. The most successful of these is the Hammond organ, patented by Laurens Hammond in 1934. The Hammond organ has odd qualities because the richness of its harmonic content does not diminish as the player goes up the keyboard. The German composer Karlheinz Stockhausen (in Momente, 1961–62), the Norwegian composer Arne Nordheim (in Colorazione, 1968), and a few others have scored specifically for this instrument.

      Third was the development of novel electronic musical instruments designed to supply timbres not provided by ordinary musical instruments. During the 1920s there was a burst of interest in building an extraordinary variety of such instruments, ranging from practical to absurd. The most successful of these were relatively few in number, were monophonic (i.e., could play only one melodic line at a time), and survive chiefly because some important music has been scored for them. These are the theremin, invented in 1920 by a Russian scientist, Leon Theremin; the ondes martenot, first built in 1928 by a French musician and scientist, Maurice Martenot; and the trautonium, designed by a German, Friedrich Trautwein, in 1930.

      The theremin is a beat-frequency audio oscillator (sine-wave generator) that has two condensers placed not inside the circuit chassis but, rather, outside, as antennas. Because these antennas respond to the presence of nearby objects, the pitch and amplitude of the output signal of the theremin can be controlled by the manner in which a performer moves his hands in its vicinity. A skilled performer can produce all sorts of effects, including scales, glissandi, and flutters. A number of compositions have been written for this instrument since the 1920s.
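
      The beat-frequency principle can be shown with two made-up numbers: both oscillators run at radio frequencies far above hearing, and the ear receives only their difference.

```python
# Hypothetical radio-frequency values; real theremins vary.
f_fixed = 170_000.0         # Hz, fixed oscillator
f_hand = 169_560.0          # Hz, variable oscillator detuned by hand capacitance
audible = abs(f_fixed - f_hand)
print(audible)              # 440.0 Hz: the pitch the listener hears
```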

      The ondes martenot consists of a touch-sensitive keyboard and a slide-wire glissando generator that are both controlled by the performer's right hand, as well as some stops controlled by the left hand. These, in turn, activate a sawtooth-wave generator that delivers a signal to one or more output transducers. The instrument has been used extensively by several French composers, including Olivier Messiaen and Pierre Boulez, and by the French-American composer Edgard Varèse.

      The trautonium, like the ondes martenot, uses a sawtooth-wave generator as its signal source and a keyboard of novel design that permits not just ordinary tuning but unusual scales as well. Most of the music composed for this instrument is of German origin, an example being the Concertino for Trautonium and Strings (1931) by Paul Hindemith. In about 1950 a polyphonic version (capable of playing several voices, or parts, simultaneously) of this instrument was built by Oskar Sala, a former student of Trautwein and Hindemith, for preparing sound tracks in a Berlin film studio. These instruments have become virtually obsolete, however, because all the sounds they produce can easily be duplicated by electronic music synthesizers.

Tape music
      With tape music the history of electronic music in the narrower sense begins. This history falls into three main periods: an early (by now classical) period lasting from the commercial introduction of the tape recorder immediately following World War II until about 1960; a second period that featured the introduction of electronic music synthesizers and the acceptance of the electronic medium as a legitimate compositional activity; and a third period, in which computer technology is rapidly becoming both the dominant resource and the dominant concern.

      The invention of the tape recorder gave composers of the 1950s an exciting new musical instrument to use for new musical experiences. Fascination with the thing itself was the dominant motivation for composing electronic tape music. Musically, the 1950s, in contrast to the 1960s, were relatively introverted years: in all kinds of music, the focus of interest was technique and style, especially with the avant-garde. In time, the medium became fairly well understood, the techniques for handling it became increasingly standardized, and a repertory of characteristic and historically important compositions came into being. The burning issues were whether tape would replace live musicians; whether the composer was at last freed from the humiliations so often endured to get his music into the concert hall; and whether a new medium of expression had been created, quite different from and independent of instrumental music, analogous, say, to photography as opposed to traditional painting.

      It became increasingly evident, however, that there was no reason to think that the electronic tape medium would eliminate instrumental performance by live musicians. Tape was increasingly regarded as something that could be—but did not need to be—treated as a unique medium. Thus the notion that the tape recorder could function as one instrument in an ensemble grew more and more popular. This conception obviated the visual monotony of an evening in an auditorium with nothing to look at but a loudspeaker. To this has been added a further stage of evolution, namely, live electronic music, in which the tape recorder and its tape are eliminated or greatly restricted in function, and transformations of the sounds of musical instruments are effected at the concert with electronic equipment. Not infrequently, this kind of performance also involves scores in which aleatory (chance, or random), improvisatory, or quasi-improvisatory guidelines for the manipulation of such equipment are supplied by a composer who prefers to let what happens just happen. Actually, it is open to question whether live electronic music is really an advance or a reversion to a more primitive state of the art, in the sense that it enhances the timbres of familiar instruments rather than being music conceived totally in terms of electronic media per se.

Establishment of electronic studios
      The first period of development was certainly the one into which Europeans put the most consistent work. Tape music quickly gained recognition and financial support in Europe, and, before long, a number of well-equipped electronic music studios were established there, primarily in government-supported broadcast facilities. Some important work was also done in the United States, but it was much more fragmentary, and not until after 1958 did Americans begin to catch up, either technically or artistically.

      In 1948 two French composers, Pierre Schaeffer and Pierre Henry, and their associates at Radiodiffusion et Télévision Française in Paris began to produce tape collages (analogous to collages in the visual arts), which they called musique concrète. All the materials they processed on tape were recorded sounds—sound effects, musical fragments, vocalizings, and other sounds and noises produced by man, his environment, and his artifacts. Such sounds were considered “concrete,” hence the term musique concrète. To this Paris group certainly belongs the credit both for originating the concept of tape music as such and for demonstrating how effective certain types of tape manipulation can be in transforming sounds. These transformations included speed alteration, variable speed control, playing tapes backward, and signal feedback loops. Schaeffer, however, opposed the use of electronic oscillators as sound sources, claiming that these were not “concrete” sound sources, not “real,” and hence artificial and anti-humanistic.

      Two of the most successful and best known musique concrète compositions of this early period are Schaeffer and Henry's Symphonie pour un homme seul (1950; Symphony for One Man Only) and Henry's Orphée (1953), a ballet score written for the Belgian dancer Maurice Béjart. These and similar works created a sensation when first presented to the public. Symphonie pour un homme seul, a descriptive suite about man and his activities, is an extended composition in 11 movements. Orphée is concerned with the descent of Orpheus into Hades.

      The second event of significance was the formation of an electronic music studio in Cologne by Herbert Eimert, a composer working for Nordwestdeutscher Rundfunk (now Westdeutscher Rundfunk), who was advised in turn by Werner Meyer-Eppler, an acoustician from the University of Bonn. Eimert was soon joined by Karlheinz Stockhausen, who composed the first really important tape composition from this studio, the now-famous Gesang der Jünglinge (1956; Song of Youth). The Cologne studio soon became a focal point of the reemergence of Germany as a dominant force in new music.

      At Cologne emphasis was immediately placed on electronically generated sounds rather than concrete sounds and on electronic sound modifications such as filtering and modulating rather than tape manipulation. Eimert and Stockhausen also published a journal, Die Reihe (“The Row”), in which appeared articles emphasizing the “purity” of electronic sounds and the necessity of coupling electronic music to serial composing (using ordered groups of pitches, rhythms, and other musical elements as compositional bases), which made no more sense than the Paris group's insistence on using only nonelectronic, nonserial material. This activity was part of the campaign of the 1950s that brought about the collapse of Neoclassicism (a style that drew equally on 20th-century musical idioms and earlier, formal types); the emergence of the Austrian composer Anton von Webern as the father figure of the new music; the development of total serialism, pointillism (a style making use of individual tones placed in a very sparse texture), and intellectualism; and an emphasis on technique. The examples set by these two studios were soon widely imitated in Europe. This trend continued in the 1960s, with many more studios, from modest to elaborate, being set up in almost every major urban centre in Europe. As time passed, the techniques and equipment in the newer studios became more standardized and reliable, and the rather peculiar issue of concrete versus electronic sounds ceased to concern anyone.

      In the United States the production of electronic music, until 1958, was much more sporadic. The only continuing effort of this sort was the project undertaken by two composers at Columbia University, Otto Luening and Vladimir Ussachevsky, to create a professional tape studio and to compose music illustrating the musical possibilities of the tape medium. Luening and Ussachevsky often collaborated on joint compositions. They gained particular attention for the composition of several concerto-like works for tape recorder and orchestra. In 1959 Luening and Ussachevsky joined with another U.S. composer, Milton Babbitt, to organize, on a much larger scale, the Columbia–Princeton Electronic Music Center, in which an impressive number of composers of professional repute have worked.

      Other tape compositions in the early 1950s in the United States were largely those of individual composers working as best they could under improvised circumstances. One major composer who did so was Varèse, who completed Déserts, for tape and instrumental ensemble, in 1954, and Poème électronique, for the Philips Pavilion at the 1958 Brussels World's Fair. Another was John Cage, who completed Williams Mix in 1952 and Fontana Mix in 1958. Both Varèse and Cage had anticipated the electronic medium; Cage's Imaginary Landscape No. 1 (1939) for RCA test records and percussion can well be regarded as a forerunner of current live electronic music.

      With the establishment of the Experimental Music Studio at the University of Illinois in 1958 by Lejaren Hiller and the University of Toronto studio in 1959 by Myron Schaeffer, the formation of facilities for both production and teaching began to move forward. The number of studios in university music departments grew rapidly, and they soon became established as essential in teaching as well as composing.

      The individual components may vary in a well-designed “classic” studio, but basically the equipment may be divided into five categories: sound sources (sine-wave, square-wave, sawtooth-wave, and white-noise generators; and microphones for picking up concrete sounds); routing and control circuitry (patch panels, switching boards, and mixers for coupling components together; amplifiers; and output connections); signal modifiers (modulators, frequency shifters, artificial reverberators, filters, variable-speed tape recorders, and time compression–expansion devices); monitors and quality-control equipment (frequency counter, spectrum analyzer, VU metres that monitor recording levels, oscilloscope, power amplifiers with loudspeakers and headsets, and workshop facilities); and recording and playback equipment, including high-quality tape recorders.

      With this equipment the composer records sounds, both electronic and microphoned; modifies them singly or in montages by operations such as modulation, reverberation, and filtering; and finally re-records them in increasingly complex patterns. Inevitably, a major part of the composer's effort is tape editing, unless he is satisfied with the crudest string of effects merely linked together in sequence. As in any other kind of music, the aesthetic merit of an electronic music composition seems to depend not only on musical ideas as such but also on the way in which they relate to one another and how they are used to build up a musical structure.

      The combination of tape with live instruments has become a rather popular form of chamber music, if not of symphonic music. Varèse's Déserts is an early example of this. It is scored for a group of 15 musicians and a two-channel tape and consists of four instrumental episodes interrupted by three tape interludes. In other works the tape recorder is “performed” together with the remaining instruments rather than merely in contrast to them. The problems of coordination, however, can become overriding, for it is difficult for a group of performers to follow a tape exactly. Obviously, the tape dominates the situation, remorselessly moving along no matter what happens in the rest of the group.

      Thousands of electronic tape compositions were in existence by the early 1970s, many of ephemeral interest. It is relatively rare for a composer to have established a reputation solely as a composer of tape music. Pierre Henry perhaps is an example, but in general the important names in instrumental music of the 1950s and 1960s are the significant contributors in electronic music, too.

      Stockhausen remained in the forefront of electronic music composers with several important pieces following Gesang der Jünglinge. These included Kontakte (1959–60; Contacts), for tape, piano, and percussion, and Telemusik (1966), for tape alone. Luciano Berio and Bruno Maderna, both Italians, worked for a while at the Radio Audizioni Italia (now Radiotelevisione Italiana) studio in Milan. Besides Différences (1958–60), a composition for tape and chamber group, Berio's tape pieces include Thema-Omaggio a Joyce (1958; Homage to Joyce) and Visage (1961), which exploited the unusual voice of the American singer Cathy Berberian.

      In the United States the Columbia–Princeton Electronic Music Center has had the greatest output, a long list of composers besides Luening and Ussachevsky having used its facilities. Tape music from the University of Illinois studio includes Salvatore Martirano's L's GA (1967), a savage political satire for tape, films, helium bomb, and gas-masked politico. The University of Toronto studio, in spite of its technical excellence, has not been well represented on discs. One Canadian piece that is very amusing, however, is Hugh LeCaine's Dripsody (1955), all the sounds of which are derived from the splash of a single drop of water.

Music synthesizers
      Composing tape music by the classic method was neither easy nor free of technical pitfalls. A complex piece had to be assembled from hundreds or even thousands of fragments of tape. Splicing these sounds together consumed a vast amount of time and could also lead to an accumulation of errors and deterioration of the sound. Consequently, substantial efforts were expended to reduce this work load and at the same time improve quality. Music synthesizers were the first product of these efforts. They cannot, however, be regarded as more than an intermediate technological development because of later computer technology (see below).

      In contrast to Cahill's period, by the 1950s the means finally existed to construct full-scale music synthesizers, starting with the RCA Electronic Music Synthesizers, designed by Harry Olson and Herbert Belar, research scientists working at the RCA Laboratories at Princeton, New Jersey. The first machine was introduced in 1955; a second, improved model was turned over to the Columbia–Princeton Electronic Music Center in 1959.

      The basic advance of the RCA synthesizer was its information input mechanism, a device for punching sets of instructions into a wide roll of paper tape. The composer could at any time during the programming process interrupt this activity to listen to what had been punched, to make corrections, and to edit the material before making a final paper tape that then constituted the “master score” of the composition.

      The composer whose name became particularly associated with the RCA synthesizer was Milton Babbitt. He had developed a precisely defined compositional technique involving total serialization (i.e., of every musical element). When he became aware of the synthesizer, he was anxious to use it, because it gave him the opportunity to realize his music more precisely than had hitherto been the case. Among Babbitt's compositions created with this machine were Composition for Synthesizer (1961), Vision and Prayer (1961), Ensembles for Synthesizer (1963), Philomel (1964), and Phonemena (1974).

      In about 1960 a new circuit, the voltage-controlled oscillator (VCO), attracted the attention of engineers interested in electronic music because the frequency of its output signal is proportional to an independently generated input voltage rather than being internally set. The response is immediate because no mechanical couplings or controls are required. Robert Moog was the first to design several types of compact synthesizers of moderate price that supplied an extended range of possibilities for sound manipulation. In addition to VCO's, which produce sine, square, sawtooth, and triangular waves, the Moog synthesizer contained white-noise generators, attack and decay generators (controlling a sound's onset and fading), voltage-controlled amplifiers, band-pass filters, and sequencers.
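
      The control law is simple enough to state in code. The sketch below assumes the one-volt-per-octave convention adopted by Moog-style hardware (the article itself says only that frequency follows the control voltage), under which each added volt doubles the output frequency.

```python
# Volt-per-octave VCO control law (an assumed convention, not from the article).
def vco_freq(cv_volts, f_base=261.63):   # f_base: an arbitrary reference pitch, Hz
    return f_base * 2.0 ** cv_volts

print(vco_freq(0.0))      # 261.63 Hz, the reference
print(vco_freq(1.0))      # 523.26 Hz, one octave up per added volt
print(vco_freq(1 / 12))   # one semitone up: 1/12 of a volt
```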

      One major advance in sound manipulation provided by VCO's is frequency modulation; if the input is a periodic function, the output frequency will vary periodically to provide tremolos, trills, and warble tones. Moog's synthesizer soon had to compete with several other synthesizers of essentially the same design: the Buchla Electronic Music Box, the ARP, and the later, more sophisticated Prophet 10.
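
      A minimal sketch of the periodic control input just described, assuming an invented vibrato rate and depth: a slow sine wave applied to the oscillator's frequency produces a warble around A440.

```python
import numpy as np

rate = 48_000
t = np.arange(2 * rate) / rate                  # two seconds of sample times
# Instantaneous frequency: A440 swept one semitone either way, six times a second.
f_inst = 440.0 * 2.0 ** (np.sin(2 * np.pi * 6.0 * t) / 12)
phase = 2 * np.pi * np.cumsum(f_inst) / rate    # integrate frequency to get phase
warble = np.sin(phase)
```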

      These popular synthesizers eliminate much of the drudgery of tape splicing, but at a price. The range of timbres and processes is more limited because they operate by subtractive synthesis and impose transients that affect all partials (component vibrations) of a complex wave identically. An advantage of a harmonic tone generator built in 1962 by James Beauchamp at the University of Illinois, also from VCO's, was that it used additive synthesis—i.e., it created sound by combining signals for pure tones (sine waves)—instead of removing partials from a complex signal. It was designed so that each partial of a sound could have its own entry point, its own rise time, and its own decay time. The improvement in tone quality was enormous, because the ear normally expects nuances such as higher partials that decay faster than lower ones. Salvatore Martirano's Underworld (1965) is a good example of music in which the tape was made largely by additive synthesis.
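
      The per-partial envelopes that distinguished the Beauchamp generator can be imitated digitally. In this sketch each partial has its own decay, with higher partials dying away faster, as the paragraph says the ear expects; every constant is invented for the example.

```python
import numpy as np

rate, f0 = 48_000, 220.0
t = np.arange(2 * rate) / rate                   # two seconds
tone = np.zeros_like(t)
for k in range(1, 13):                           # twelve partials
    attack = 1.0 - np.exp(-t / 0.01)             # shared fast rise
    decay = np.exp(-1.5 * k * t)                 # higher partials decay faster
    tone += (attack * decay / k) * np.sin(2 * np.pi * k * f0 * t)
```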

      A composer closely associated with synthesizers is Morton Subotnick, who has produced a series of extended electronic music compositions, starting with Silver Apples of the Moon (1967). These pieces were created on the Buchla synthesizer, and any one of them demonstrates in relatively unmodified form the types of sounds one may obtain with these instruments.

      A word should be said about realizations of instrumental music through synthesizers, notably an early, commercially successful album called Switched-on Bach (1968), arrangements made by Walter (later Wendy) Carlos on a Moog synthesizer. The record displayed technical excellence in the sounds created and made the electronic synthesis of music more intelligible to the general listening public. This is useful so long as it is realized that the materials on the record are arrangements of familiar music, not original compositions. (Carlos later created an original electronic score for the science fiction film Tron.)

Computer music
      Perhaps the most important development in electronic music is the use of digital computers. The kinds of computers employed range from large mainframe, general-purpose machines to special-purpose digital circuits expressly designed for musical uses. Musical applications of digital computers can be grouped into five basic categories: data processing and information retrieval, including library applications and abstracting; processing of music notation and music printing; acoustical, theoretical, and musicological research; music composition; and sound synthesis. In all these fields considerable research and experimentation is being carried out, with sound synthesis perhaps being the most widespread and advanced activity. Dramatic illustrations of the growth of this work include the appearance of the periodical Computer Music Journal, the formation of the Computer Music Association, made up of hundreds of members, and the holding each year of the International Computer Music Conference. The 1982 conference dominated the Venice Biennale—one of the major festivals of contemporary music.

Computer composition
      Composition and sound synthesis are complementary processes because the first may lead to the second. A composer may elect to use a set of compositional programs to produce a composition. He may then stop using a computer and print his results for transcription to instrumental performance. Alternatively, he may transfer his results directly into electronic sounds by means of a second set of programs for sound synthesis. Finally, he may desire only to convert an already composed score into sound. When he does this, he translates his score into a form that can be entered into a computer and uses the computer essentially as a data translator.

      The first point to understand about computer composition is that, like electronic music, it is not a style but a technique. In principle, any kind of music, from traditional to completely novel, can be written by these machines. For a composer, however, the main appeal consists not in duplicating known styles of music, but, rather, in seeking new modes of musical expression that are uniquely the result of interaction between man and this new type of instrument.

      At present, composers above all need a compiling language comprising musical or quasi-musical statements and a comprehensive library of basic compositional operations written as closed subroutines—in effect, a user's system analogous to computer languages (such as Fortran) used by mathematicians. Two major obstacles stand in the way of building up an effective musical computer language. The first is the obvious one of allocating sufficient time, money, and other resources. The second is defining what goes into the subroutine library: that is, stating with precision the smallest units of activity or decision making that enter into the process of musical composition. Unlike mathematics, in which traditional modes of thinking prepared the way for such a definition of subroutines, in music the defining of “modules” of composition leaves even sophisticated thinkers much more at sea.

      The earliest example of computer-composed music is the Illiac Suite for String Quartet (1957) by two Americans, the composer Lejaren Hiller and the mathematician Leonard Isaacson. It was a set of four experiments in which the computer was programmed to generate random integers representing various musical elements, such as pitches, rhythms, and dynamics, which were subsequently screened through programmed rules of composition.
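
      The generate-and-test procedure described here can be caricatured in a few lines. The screening rule below (no melodic leap larger than a fifth) and the MIDI-style note numbers are inventions for the sketch, not rules taken from the Illiac Suite.

```python
import random

def admissible(melody, note):
    # One made-up screening rule: reject leaps larger than a fifth (7 semitones).
    return not melody or abs(note - melody[-1]) <= 7

random.seed(1957)                        # year of the Illiac Suite
melody = []
while len(melody) < 16:
    candidate = random.randint(60, 84)   # random integer standing for a pitch
    if admissible(melody, candidate):
        melody.append(candidate)
print(melody)
```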

      Two very different compositions, ST/10-1,080262 (1962), by Iannis Xenakis, and HPSCHD (1968), by John Cage and Hiller, are illustrative of two later approaches to computer composition. ST/10-1,080262 is one of a number of works realized by Xenakis from a Fortran program he wrote in 1961 for an IBM 7090 computer. Several years earlier, Xenakis had composed a work called Achorripsis by employing statistical calculations and a Poisson distribution to assign pitches, durations, and playing instructions to the various instruments in his score. He redid the work with the computer, retitled it, and at the same time produced a number of other, similar compositions. HPSCHD, by contrast, is a multimedia work of indeterminate length scored for one to seven harpsichords and one to 51 tape recorders. For HPSCHD the composers wrote three sets of computer programs. The first, for the harpsichord solos, solved Mozart's Musical Dice Game (K. 294d), an early chance composition in which successive bars of the music are selected by rolling dice, and modified it with other compositions chosen with a program based on the Chinese oracle I Ching (Book of Changes). The second set of programs generated the 51 sound tracks on tape. These contained monophonic lines in microtone tunings based upon speculations by the composers regarding Mozart's melodic writing. The third program generated sheets of instructions to the purchasers of a record of the composition.
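
      The dice-game mechanism handled by the first set of programs can be sketched as a lookup table indexed by the roll of two dice; the table entries below are placeholders, not Mozart's bars.

```python
import random

# Eleven possible totals (2-12), each pointing at a few interchangeable bars.
table = {total: [f"bar_{total}{letter}" for letter in "abc"] for total in range(2, 13)}

piece = []
for _ in range(16):                      # sixteen bars of "music"
    roll = random.randint(1, 6) + random.randint(1, 6)
    piece.append(random.choice(table[roll]))
print(piece)
```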

      Hiller has continued to develop compositional programming techniques in order to complete a two-hour cycle of works entitled Algorithms I, Algorithms II, and Algorithms III. Otherwise, interest in computer composition has continued to grow gradually. For example, Gottfried Michael Koenig, director of the Instituut voor Sonologie of the University of Utrecht in The Netherlands, has, after a lapse of several years, written new computer music such as Segmente 99-105 (1982) for violin and piano. Related to Koenig's work is an extensive literature on theoretical models for music composition developed by the American composer Otto Laske. Charles Ames, another American, has written several works for piano or small ensemble that are less statistical and more deterministic in approach than most of the above. Clarence Barlow has written a prize-winning composition, Çoğluotobüsişletmesi (1978), that exists in two versions—for piano or for solo tape. A different, but nevertheless important, example of computer music composition is Larry Austin's Phantasmagoria: Fantasies on Ives' Universe Symphony (1977). This is a realization, heavily dependent on computer processing, of Charles Ives's last and most ambitious major composition, which he left in a diverse assortment of some 45 sketch pages and fragments.

      The borderline between composition and sound synthesis is becoming increasingly blurred as sound synthesis becomes more sophisticated and as composers begin to experiment with compositional structures that are less related to traditional musical syntax. An example of this is Androgyny, written for tape in 1978 by the Canadian composer Barry Truax.

Computer sound synthesis
      The production of electronic sounds by digital techniques is rapidly replacing the use of oscillators, synthesizers, and other audio components (now commonly called analogue hardware) that have been the standard resources of the composer of electronic music. Not only are digital circuitry and digital programming much more versatile and accurate, but they are also much cheaper. The advantages of digital processing are manifest even to the commercial recording industry, where digital recording is replacing long-established audio technology.

      The three basic techniques for producing sounds with a computer are sign-bit extraction, digital-to-analogue conversion, and the use of hybrid digital–analogue systems. Of these, however, only the second process is of more than historical interest. Sign-bit extraction was occasionally used for compositions of serious musical intent—for example, in Computer Cantata (1963), by Hiller and Robert Baker, and in Sonoriferous Loops (1965), by Herbert Brün. Some interest persists in building hybrid digital–analogue facilities, perhaps because some types of signal processing, such as reverberation and filtering, are time-consuming even in the fastest of computers.

      Digital-to-analogue conversion has become the standard technique for computer sound synthesis. This process was originally developed in the United States by Max Mathews and his colleagues at Bell Telephone Laboratories in the early 1960s. The best known version of the programming that activated the process was called Music 5.

      Digital-to-analogue conversion (and the reverse process, analogue-to-digital conversion, which is used to put sounds into a computer rather than getting them out) depends on the sampling theorem. This states that a wave form must be sampled at a rate at least twice the bandwidth of the system if the samples are to capture it without foldover, or aliasing, distortion. Because the auditory bandwidth is 20–20,000 hertz (Hz), this specifies a sampling rate of 40,000 samples per second, though, practically, 30,000 is sufficient, because tape recorders seldom record anything significant above 15,000 Hz. Also, instantaneous amplitudes must be specified to at least 12 bits so that the quantizing noise produced by the jumps from one amplitude to the next is low enough for the signal-to-noise ratio to exceed commercial standards (55 to 70 decibels).
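
      Both constraints reduce to simple arithmetic, carried out below. The 6-dB-per-bit figure is the standard rule of thumb for uniform quantization, not a number from the text.

```python
bandwidth_hz = 20_000
min_rate = 2 * bandwidth_hz         # sampling theorem: 40,000 samples per second

bits = 12
snr_db = 6.02 * bits + 1.76         # rule-of-thumb SNR for uniform quantization
print(min_rate, round(snr_db, 1))   # 40000 and about 74 dB, above the 55-70 dB standard
```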

      Music 5 was more than simply a software system, because it embodied an “orchestration” program that simulated many of the processes employed in the classical electronic music studio. It specified unit generators for the standard wave forms, adders, modulators, filters, reverberators, and so on. It was sufficiently generalized that a user could freely define his own generators. Music 5 became the software prototype for installations the world over.
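
      The unit-generator idea survives in every later synthesis language. The Python analogue below is not Music 5 syntax; it merely shows two generators, an oscillator and an envelope, being patched together by the user into a simple instrument.

```python
import numpy as np

RATE = 48_000

def osc(freq, dur):
    """Unit generator: a sine oscillator."""
    t = np.arange(int(RATE * dur)) / RATE
    return np.sin(2 * np.pi * freq * t)

def env(sig, attack=0.05, release=0.3):
    """Unit generator: a linear attack/release envelope."""
    a, r = int(RATE * attack), int(RATE * release)
    shape = np.ones(len(sig))
    shape[:a] = np.linspace(0.0, 1.0, a)
    shape[-r:] = np.linspace(1.0, 0.0, r)
    return sig * shape

note = env(osc(440.0, 1.0))   # a one-second note from the patched generators
```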

      One of the best of these was designed by Barry Vercoe at the Massachusetts Institute of Technology during the 1970s. This program, called Music 11, runs on a PDP-11 computer and is a tightly designed system that incorporates many new features, including graphic score input and output. Vercoe's instructional program has trained virtually a whole generation of young composers in computer sound manipulation. Another important advance, discovered by John Chowning of Stanford University in 1973, was the use of digital FM (frequency modulation) as a source of musical timbre. The use of graphical input and output, even of musical notation, has been considerably developed, notably by Mathews at Bell Telephone Laboratories, by Leland Smith at Stanford University, and by William Buxton at the University of Toronto.
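
      Chowning's technique fits in a single expression: the phase of a carrier sine wave is itself modulated by a second sine wave, which sprays partials around the carrier at multiples of the modulating frequency. The carrier-to-modulator ratio and modulation index below are illustrative choices.

```python
import numpy as np

rate = 48_000
t = np.arange(rate) / rate
fc, fm, index = 440.0, 616.0, 5.0    # non-integer ratio gives a bell-like timbre
bell = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```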

      There are also other approaches to digital sound manipulation. For example, there is a growing interest in analogue-to-digital conversion as a compositional tool. This technique allows concrete and recorded sounds to be subjected to digital processing, and this, of course, includes the human voice. Charles Dodge, a composer at Brooklyn College, has composed a number of scores that incorporate vocal sounds, including Cascando (1978), based on the radio play of Samuel Beckett, and Any Resemblance Is Purely Coincidental (1980), for computer-altered voice and tape. The classic musique concrète studio founded by Pierre Schaeffer has become a digital installation, under François Bayle. Its main emphasis is still on the manipulation of concrete sounds. Mention also should be made of an entirely different model for sound synthesis first investigated in 1971 by Hiller and Pierre Ruiz; they programmed differential equations that define vibrating objects such as strings, plates, membranes, and tubes. This technique, though forbidding mathematically and time-consuming in the computer, nevertheless is potentially attractive because it depends neither upon concepts reminiscent of analogue hardware nor upon acoustical research data.
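
      The Hiller–Ruiz approach can be suggested by a finite-difference solution of the ideal string's wave equation, the sort of computation the paragraph calls time-consuming. Grid size, damping, and the initial shape are all invented for this sketch.

```python
import numpy as np

n, steps, c = 100, 2000, 0.95                 # grid points, time steps, Courant number
y = np.sin(np.pi * np.arange(n) / (n - 1))    # pluck-like initial displacement
y_prev = y.copy()                             # start at rest (zero initial velocity)
samples = []
for _ in range(steps):
    y_next = np.zeros(n)                      # endpoints stay 0: fixed string ends
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + c**2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_next *= 0.9995                          # light damping
    y_prev, y = y, y_next
    samples.append(y[n // 2])                 # record the midpoint as the output signal
```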

      Another important development is the production of specialized digital machines for use in live performance. All such instruments depend on newer types of microprocessors and often on some specialized circuitry. Because these instruments require real-time computation and conversion, however, they are restricted in versatility and variety of timbres. Without question, though, these instruments will be rapidly improved because there is a commercial market for them, including popular music and music education, that far exceeds the small world of avant-garde composers.

      Some of these performance instruments are specialized in design to meet the needs of a particular composer—an example being Salvatore Martirano's Sal-Mar Construction (1970). Most of them, however, are intended to replace analogue synthesizers and therefore are equipped with conventional keyboards. One of the earliest of such instruments was the “Egg” synthesizer built by Michael Manthey at the University of Århus in Denmark. The Synclavier later was put on the market as a commercially produced instrument that uses digital hardware and logic. It represents for the 1980s the digital equivalent of the Moog synthesizer of the 1960s.

      The most advanced digital sound synthesis, however, is still done in large institutional installations. Most of these are in U.S. universities, but European facilities are being built in increasing numbers. The Instituut voor Sonologie in Utrecht and LIMB (Laboratorio Permanente per l'Informatica Musicale) at the University of Padua in Italy resemble U.S. facilities because of their academic affiliation. Rather different, however, is IRCAM (Institut de Recherche et de Coordination Acoustique/Musique), part of the Centre Georges Pompidou in Paris. IRCAM, headed by Pierre Boulez, is an elaborate facility for research in and the performance of music. Increasingly, attention there has been given to all aspects of computer processing of music, including composition, sound analysis and synthesis, graphics, and the design of new electronic instruments for performance and pedagogy. It is a spectacular demonstration that electronic and computer music has come of age and has entered the mainstream of music history.

      In conclusion, science has brought about a tremendous expansion of musical resources by making available to the composer a spectrum of sounds ranging from pure tones at one extreme to random noise at the other. It has made possible the rhythmic organization of music to a degree of subtlety and complexity hitherto unattainable. It has brought about the acceptance of the definition of music as “organized sound.” It has permitted the composer, if he chooses, to have complete control over his own work. It permits him, if he desires, to eliminate the performer as an intermediary between himself and his audience. It has placed the critic in a problematic situation, because his analysis of what he hears must frequently be carried out solely by ear, unaided by any written score.

Lejaren Hiller

Additional Reading
The history of electronic music through the 1950s is covered in Abraham A. Moles, Les Musiques expérimentales (1960); and Fred K. Prieberg, Musica ex Machina: Über das Verhältnis von Musik und Technik (1960). Later books with an emphasis on history or the music itself include Herbert Russcol, The Liberation of Sound: An Introduction to Electronic Music (1972); Elliott Schwartz, Electronic Music: A Listener's Guide, rev. ed. (1975); Jon H. Appleton and Ronald C. Perera (eds.), The Development and Practice of Electronic Music (1975); and David Ernst, The Evolution of Electronic Music (1977). Many how-to books and manuals have been published, most of them emphasizing the use of synthesizers. Typical examples include Gilbert Trythall, Principles and Practices of Electronic Music (1973); and Allen Strange, Electronic Music: Systems, Techniques and Controls, 2nd ed. (1983). A French publication, Michel Chion and Guy Reibel, Les Musiques électroacoustiques (1976), is valuable because its emphasis is on European rather than American practice. Books that deal primarily with relevant aesthetic problems include John Cage, Silence (1961, reissued 1973); Luigi Russolo, The Art of Noises (1967; originally published in Italian, 1913); Karlheinz Stockhausen, Texte zur elektronischen und instrumentalen Musik, 2 vol. (1963–64); and Iannis Xenakis, Formalized Music (1971). Electronic musical instruments and their components, many of which were at one time used for electronic music, are discussed in Richard H. Dorf, Electronic Musical Instruments, 3rd ed. (1968); Alan L. Douglas, The Electronic Musical Instrument Manual: A Guide to Theory and Design, 6th ed. (1976), and The Electrical Production of Music (1957); and Werner Meyer-Eppler, Elektrische Klangerzeugung (1949). Discussions of computer music may be found in Herbert Brün, Über Musik und zum Computer (1971); Heinz Von Foerster and James W. Beauchamp (eds.), Music by Computers (1969); Lejaren A. Hiller, Informationstheorie und Computermusik (1964), and, with L.M. Isaacson, Experimental Music (1959, reprinted 1979); Harry B. Lincoln (ed.), The Computer and Music (1970); Max V. Mathews et al., The Technology of Computer Music (1969, reissued 1974); Hubert S. Howe, Electronic Music Synthesis (1975); Christopher P. Morgan (ed.), The Byte Book of Computer Music (1979); Wayne Bateman, Introduction to Computer Music (1978, reissued 1980); and Hal Chamberlin, Musical Applications of Microprocessors (1980). For current articles see Source, Perspectives of New Music, and Journal of Music Theory (all semiannual); Audio Engineering Society Journal (monthly); and Computer Music Journal and Interface (both quarterly). For recordings of electronic and computer music, the reader is referred to Hugh Davies (comp.), International Electronic Music Catalog (1968); Schwann-1: Record and Tape Guide and Schwann-2 supplements; and Ernst (cited above). Sandra L. Tjepkema, A Bibliography of Computer Music (1981), is a valuable reference tool.
