motion-picture technology

Introduction

      Motion-picture technology comprises the means for the production and showing of motion pictures. It includes not only the motion-picture camera and projector but also such technologies as those involved in recording sound, in editing both picture and sound, in creating special effects, and in producing animation.

      Motion-picture technology is a curious blend of the old and the new. In one piece of equipment state-of-the-art digital electronics may be working in tandem with a mechanical system invented in 1895. Furthermore, the technology of motion pictures is based not only on the prior invention of still photography but also on a combination of several more or less independent technologies; that is, camera and projector design, film manufacture and processing, sound recording and reproduction, and lighting and light measurement.

History
      Motion-picture photography (cinematography) is based on the phenomenon that the human brain will perceive an illusion of continuous movement from a succession of still images exposed at a rate above 15 frames per second. Although posed sequential pictures had been taken as early as 1860, successive photography of actual movement was not achieved until 1877, when Eadweard Muybridge used 12 equally spaced cameras to demonstrate that at some time all four hooves of a galloping horse left the ground at once. In 1877–78 an associate of Muybridge devised a system of magnetic releases to trigger an expanded battery of 24 cameras.

 The Muybridge pictures were widely published in still form. They were also made up as strips for the popular parlour toy the zoetrope “wheel of life,” a rotating drum that induced an illusion of movement from drawn or painted pictures (see Figure 1). Meanwhile, Émile Reynaud in France was projecting sequences of drawn pictures onto a screen using his Praxinoscope, in which revolving mirrors and an oil-lamp “magic lantern” were applied to a zoetrope-like drum, and by 1880 Muybridge was similarly projecting enlarged, illuminated views of his motion photographs using the Zoöpraxiscope, an adaptation of the zoetrope.

      Although a contemporary observer of Muybridge's demonstration claimed to have seen “living, moving animals,” such devices lacked several essentials of true motion pictures. The first was a mechanism to enable sequence photographs to be taken within a single camera at regular, rapid intervals, and the second was a medium capable of storing images for more than the second or so of movement possible from drums, wheels, or disks.

      A motion-picture camera must be able to advance the medium rapidly enough to permit at least 16 separate exposures per second as well as bring each frame to a full stop to record a sharp image. The principal technology that creates this intermittent movement is the Geneva watch movement, in which a four-slotted star wheel, or “Maltese cross,” converts the tension of the mainspring to the ticking of toothed gears. In 1882 Étienne-Jules Marey employed a similar “clockwork train” intermittent movement in a photographic “gun” used to “shoot” birds in flight. Twelve shots per second could be recorded onto a circular glass plate. Marey subsequently increased the frame rate, although for no more than about 30 images, and employed strips of sensitized paper (1887) and paper-backed celluloid (1889) instead of the fragile, bulky glass. The transparent material trade-named celluloid was first manufactured commercially in 1872. It was derived from collodion, that is, nitrocellulose (gun cotton) dissolved in alcohol and dried. John Carbutt manufactured the first commercially successful celluloid photographic film in 1888, but it was too stiff for convenient use. By 1889 the George Eastman company had developed a roll film of celluloid coated with photographic emulsion for use in its Kodak still camera. This sturdy, flexible medium could transport a rapid succession of numerous images and was eventually adapted for motion pictures.

      Thomas Edison is often credited with the invention of the motion picture in 1889. The claim is disputable, however, specifically because Edison's motion-picture operations were entrusted to an assistant, W.K.L. Dickson, and generally because there are several plausible pre-Edison claimants in England and France. Indeed, a U.S. Supreme Court decision of 1902 concluded that Edison had not invented the motion picture but had only combined the discoveries of others. His systems are important, nevertheless, because they prevailed commercially. The heart of Edison's patent claim was the intermittent movement provided by a Maltese cross synchronized with a shutter. The October 1892 version of Edison's Kinetograph camera employed the format essentially still in use today. The film, made by Eastman according to Edison's specifications, was 35 millimetres (mm) in width. Two rows of sprocket holes, each with four holes per frame, ran the length of the film and were used to advance it. The image was 1 inch wide by 3/4 inch high.

      At first Edison's motion pictures were not projected. One viewer at a time could watch a film by looking through the eyepiece of a peep-show cabinet known as the Kinetoscope. This device was mechanically derived from the zoetrope in that the film was advanced by continuous movement, and action was “stopped” by a very brief exposure. In the zoetrope, a slit opposite the picture produced a stroboscopic effect; in the Kinetoscope the film traveled at the rate of 40 frames per second, and a slit in a 10-inch-diameter rotating shutter wheel afforded an exposure of about 1/6,000 of a second. Illumination was provided by an electric bulb positioned directly beneath the film. The film ran over spools. Its ends were spliced together to form a continuous loop, which was initially 25 to 30 feet long but later was lengthened to almost 50 feet. A direct-current motor powered by an Edison storage battery moved the film at a uniform rate.

      The Kinetoscope launched the motion-picture industry, but its technical limitations made it unsuitable for projection. Films may run continuously when a great deal of light is not crucial, but a bright, enlarged picture requires that each frame be arrested and exposed intermittently as in the camera. The adaptation of the camera mechanism to projection seems obvious in retrospect but was frustrated in the United States by Dickson's establishment of a frame rate well above that necessary for the perception of continuous motion.

      After the Kinetoscope was introduced in Paris, Auguste and Louis Lumière produced a combination camera/projector, first demonstrated publicly in 1895 and called the Cinématographe. The device used a triangular “eccentric” (intermittent) movement connected to a claw to engage the sprocket holes. As the film was stationary in the aperture for two-thirds of each cycle, the speed of 16 frames per second allowed an exposure of 1/25 second. At this slower rate audiences could actually see the shutter blade crossing the screen, producing a “flicker” that had been absent from Edison's pictures. On the other hand, the hand-cranked cinématographe weighed less than 20 pounds (Edison's camera weighed 100 times as much). The Lumière units could therefore travel the world to shoot and screen their footage. The first American projectors employing intermittent movement were devised by Thomas Armat in 1895 with a Pitman arm or “beater” movement taken from a French camera of 1893. The following year Armat agreed to allow Edison to produce the projectors in quantity and to market them as Edison Vitascopes. In 1897 Armat patented the first projector with four-slot star and cam (as in the Edison camera).

 One limitation of early motion-picture filming was the tearing of sprocket holes. The eventual solution to this problem was the addition to the film path of a slack-forming loop that restrained the inertia of the take-up reel. When this so-called Latham loop was applied to cameras and projectors with intermittent movement, the growth and shrinkage of the loops on either side of the shutter adjusted for the disparity between the stop-and-go motion at the aperture and the continuous movement of the reels (see Figure 6).

      When the art of projection was established, the importance of a bright screen picture was appreciated. Illumination was provided by carbon arc lamps, although flasks of ether and sticks of unslaked lime (“limelight”) were used for brief runs.

Introduction of sound
      The popularity of the motion picture inspired many inventors to seek a method of reproducing accompanying sound. Two processes were involved: recording and reproducing. Further, the reproduced sound had to fill an auditorium and be of good quality, which could not be achieved without a good amplifier of electrical signals. In 1907 Lee De Forest invented the Audion, a three-element vacuum tube, which provided the basis in the early 1920s for a feasible amplifier that produced an undistorted sound of sufficient loudness.

      Next came the problem of synchronization of the sound with the picture. A major difficulty turned out to be the securing of constant speed in both the recorder and reproducer. Many ingenious ideas were tried. In 1918 in Germany, the use of a modulated glow lamp in photographically recording sound and a photocell for reproduction were studied. In Denmark in 1923, an oscillograph light modulator and selenium-cell reproducer were developed. De Forest tried a gas-filled glow discharge operated by a telephone transmitter to record a synchronized sound track on the film. For loudspeakers he experimented with a variety of devices but finally chose a horn loudspeaker. The operating signal was obtained from a light shining through the film sound track and detected by a light-sensitive device (photoelectric cell). These were used in a system called Phonofilm, which was tried experimentally in a number of theatres. In 1927 the Fox Film Corporation utilized some of these principles in the showing of Fox Movietone News.

      Meanwhile, the Western Electric Company laboratories in the United States had been making extensive studies on the nature of speech and other sounds and on techniques for recording and reproducing such sounds. They experimented with recording on a phonograph disc and developed a 16-inch (40.6-centimetre) disc rotated at 33 1/3 revolutions per minute; they improved loudspeakers, introduced the moving-coil type of speaker, and generally improved the entire electronic amplification system. The Warner Bros. movie studio became interested in all these developments and formed the Vitaphone Corporation to market the complete system.

      Warner Bros. premiered Vitaphone in 1926 with a program featuring short musical performances and a full-length picture, Don Juan, which had synchronized music and effects but no speech. In 1927 it brought out The Jazz Singer, which was essentially a silent picture with Vitaphone score and sporadic episodes of synchronized singing and speech. Warners presented the first “100-percent talkie,” Lights of New York, in 1928.

      Although the Vitaphone system offered fidelity superior to sound-on-film systems at this stage, it became clear that recording on film would be much more convenient. Among other disadvantages, it was extremely difficult with the wax discs to shoot outdoors or to edit sound. By 1931 Warner Bros. ceased production of sound-on-disc and adopted the sound-on-film option preferred by the other studios.

      Sound-on-film, a system that in various guises had enjoyed several periods of popularity, underwent constant improvements in the 1910s and 1920s. Although a sound track on the picture negative was used for Movietone News, Fox's dramatic productions used a separate sound film on fine-grain print stock that could be edited apart from the picture yet in synchronism with it. One serious problem of sound-on-film systems had been the distortion of the signal introduced by the glow lamp when recording the sound track on film. The Western Electric Company devised a “double-string” light valve. A wire was looped around a post and parallel to itself. When speech current was applied to the wire in a magnetic field, the wire vibrated toward and away from itself according to the applied electrical waveform. A steady beam of white light shining through the loop was modulated in intensity by the varying gap between the wires; the modulated beam was photographed while masked by a slit perpendicular to the edge of the film. The resulting sound track appeared as darker or fainter parallel lines on the edge of the film. Known as the variable density system, this method of optically recording sound was originally used by all but one of the major Hollywood studios.

      The Radio-Keith-Orpheum Corporation (RKO) was created in 1928 to showcase the Radio Corporation of America (RCA) Photophone system of variable area recording. With this system, the sound recording was modulated by a rotating mirror and the slit was parallel to the edge of the film; reproduction employed the perpendicular slit of the variable density sound track. Minor problems of incompatibility between recording and reproduction were solved in late 1928 when the track was narrowed down to stay safely within the area scanned by the beam. Identical side-by-side tracks were employed to compensate for lateral misalignment. Initially inferior in quality, the variable area system gradually drew even with the quality of the density system and supplanted it altogether in the 1950s.

      Whereas there was wide variation in the speed at which silent films were photographed and projected, sound necessitated standardization of the frame rate. In 1927 the speed was standardized at 24 frames per second, or 90 feet per minute for 35-mm film.
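      As a check on these figures, a brief calculation (assuming the standard 4-perforation 35-mm frame, i.e., 16 frames per foot) relates the frame rate to film consumption:

```python
# Film consumption of 35-mm film at a given frame rate (illustrative sketch).
FRAMES_PER_FOOT = 16  # standard 4-perforation 35-mm frame

def feet_per_minute(frames_per_second):
    """Feet of 35-mm film that pass through the mechanism each minute."""
    return frames_per_second * 60 / FRAMES_PER_FOOT

print(feet_per_minute(24))  # 90.0 -- the standardized sound speed
print(feet_per_minute(16))  # 60.0 -- a typical silent-era speed
```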

      The development of sound technology in the first years of talking pictures focused on two areas. One involved the development of blimped cameras, directional microphones, microphone booms, and quieter lights, so that sound could be recorded more cleanly at the time of shooting. The other technologies involved the ability to add, edit, and mix sound separately from the time the picture was recorded.

Pierre Mertz Elisabeth Weis Stephen G. Handzo

Introduction of colour
      From their earliest days, silent films could be coloured using nonphotographic methods. One means was to hand-colour frames individually. Another method made it possible to use monochrome sections for mood (e.g., blue for night scenes or red for passionate sequences). Monochrome stock was created by “tinting” the film base or “toning” the emulsion (by bathing the film in chemical salts).

      The photography of colour was theorized decades before it was developed for motion pictures. In 1855 the British physicist James Clerk Maxwell argued that a full-colour photographic record of a scene could be made by photographing three separate black-and-white negatives through filters coloured, respectively, red, green, and blue, the three primary colours. When converted to positives, the transparent exposed areas of the three films could pass light through the appropriate filter to produce three images, one red, one green, and one blue. Superimposing the three images would “rebuild” the image in its original colours.

      In 1868 Louis Ducos du Hauron identified the additive and subtractive systems of colour. Both systems originate as red, green, and blue negative records. The difference occurs in the positive image, which may be composited from either the additive or subtractive primaries. The subtractive primaries—cyan, magenta, and yellow—are the complements of the additive primaries and can be obtained by subtracting, respectively, red, green, and blue from white. (Subtracting all three additive primaries yields black; adding all three yields white.)
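      The complementary relationship between the two sets of primaries can be illustrated in simple RGB terms; the following sketch uses full-intensity values purely for illustration and is not a model of actual film dyes:

```python
# Subtractive primaries as complements of the additive primaries (illustrative only).
WHITE = (255, 255, 255)
additive = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def complement(colour):
    """Subtracting an additive primary from white leaves its subtractive complement."""
    return tuple(w - c for w, c in zip(WHITE, colour))

print(complement(additive["red"]))    # (0, 255, 255) -> cyan
print(complement(additive["green"]))  # (255, 0, 255) -> magenta
print(complement(additive["blue"]))   # (255, 255, 0) -> yellow
```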

      In motion-picture prints, overlapping dye layers in the three subtractive primaries are simultaneously present on a clear, transparent base, and the image is projected with an exposure of white light. The dark areas of the cyan layer subtract all red colour, permitting only cyan (the mixture of blue and green) to pass through; the transparent areas pass all the white light. The magenta and yellow layers act similarly, and the original colour image is reproduced. The fineness of resolution is limited only by the structure of photographic grain or dye globules.

      The first film colour systems were additive, but they were confronted by insurmountable limitations. In an additive system, the three colour records remain discrete and meet only as light rays on the screen. The best picture results when a separate film is made for each colour; however, each colour can occupy alternating frames or small, alternating portions of each frame of a single film. (A contemporary example of additive colour can be seen in projection television, in which red, green, and blue lenses converge to produce an image so enlarged that the separate colour areas, or dots, become discernible.)

      The best known of the early additive processes was Kinemacolor (1906), which, for manageability, reduced the three colour records to two: red-orange and blue-green. A single black-and-white film was photographed and projected at 32 frames per second (twice the normal silent speed) through a rotating colour filter. The two colour records occupied alternate frames and were integrated by the retention characteristic of the human eye. As there were no separate red-orange and blue-green records for each image, displacement from frame to frame was visible during rapid movement, so that a horse might appear to have two tails. Inventors tried to increase the film speed, reduce the frame size, or combine two films with mirrored prisms, but additive systems continued to be plagued by excessive film consumption, poor resolution, loss of light, and registration problems.

      The first subtractive process employing a single film strip in an ordinary projector without filters was Prizma Color in 1919. (Prizma Color had been introduced as an additive process but was soon revised.) The basis was an ingenious “duplitized” film with emulsion on both sides. One side was toned red-orange and the other blue-green. The stock long outlasted the Prizma company and was in use as late as the early 1950s in such low-cost systems as Cinecolor.

      Similar enough to provoke litigation was an early (1922) process by Technicolor in which separate red and green films were cemented back-to-back, resulting in a thick and stiff print that scratched easily. Although only four two-colour Technicolor features were produced by the end of the silent era, Technicolor sequences were a highlight of several big-budget pictures in the mid-to-late 1920s, including The Phantom of the Opera (1923–25) and Ben Hur (1925). Technicolor devised the first of its dye-transfer, or imbibition, processes in 1928. Red and green dye images were printed onto the same side of clear film containing a black silver sound track.

      When Technicolor's appeal seemed on the wane, it devised a greatly improved three-register process (1932). The perfected Technicolor system used a prism/mirror beam-splitter behind a single lens to record the red, green, and blue components of each image on three strips of black-and-white film. Approximately one-third of the light was transmitted to the film behind a green filter in the direct path of the lens; the film was sensitized to green light by special dyes. A partially silvered mirror (initially flecked with gold) directed the remainder of the light through a magenta (red plus blue) filter to a bi-pack of orthochromatic and panchromatic films with their emulsion surfaces in contact. The orthochromatic film became the blue record. As it was insensitive to red light, the orthochromatic film passed the red rays to the panchromatic film. A 1938 improvement added red-orange dye to the orthochromatic film so that only red light reached the panchromatic layer. In 1941 Monopack Technicolor was introduced. This was a three-layer film from which separation negatives were made for the Technicolor dye-transfer printing process.

      In the dye-transfer method, it was necessary to make gelatin positives that contained the image in relief. Dye filled the recesses while the higher areas remained dry. Each gelatin matrix thus imprinted its complement onto the film base. As in the two-colour process, a black silver sound track was printed first on clear film. When magnetic sound became popular, the oxide strips were embossed after printing. Technicolor gave excellent results but was very expensive.

      In 1936 Germany produced Agfacolor, a single-strip, three-layer negative film and accompanying print stock. After World War II Agfacolor appeared as Sovcolor in the Eastern bloc and as Anscocolor in the United States, where it was initially used for amateur filmmaking. The first serious rival to Technicolor was the single-strip Eastmancolor negative, which was introduced in 1952 by the Eastman Kodak Company but was often credited under a studio trademark (e.g., Warnercolor). Eastmancolor did not require special camera or processing equipment and was cheaper than Technicolor. Producers naturally preferred the less expensive Eastmancolor, especially since they had, in response to the perceived threat of television, increased production of colour films. (After the 1960s black-and-white films were so rare that they cost more to print than colour films.) The 1950s vogue for CinemaScope and three-dimensional productions, both incompatible with the Technicolor camera, also hastened the demise of Technicolor photography.

      Dye-transfer printing remained cost-effective somewhat longer, but Technicolor was forced to abandon the process in the 1970s. This has created a significant problem for film preservationists because only Technicolor film permanently retains its original colours. Other colour prints fade to magenta within seven years, yet the hard gelatin dyes of a Technicolor print remain undimmed even after the film's nitrate base has begun to decompose.

      In the 1980s computerized versions of the hand-stenciled colour films of the silent era were developed to rejuvenate old black-and-white films for video.

Elisabeth Weis Stephen G. Handzo

Wide-screen and stereoscopic pictures
      Until the early 1950s, the screen shape, or aspect ratio (expressed as the ratio of frame width to frame height), was generally 1.33 to 1, or 4 to 3. In the mid-1950s the ratio became standardized at 1.85 to 1 in the United States and 1.66 or 1.75 to 1 in Europe. These slightly wider images were accomplished by using the same film but smaller aperture plates in the projector and by using shorter-focal-length lenses.

      Many people have felt that, although the extreme edges of the field of vision do not usually contribute much information, their presence adds substantially to the illusion of reality. Hence, there have been periods when film producers have attempted to introduce extremely wide formats. As early as 1929, Grandeur films were presented using 70-mm instead of the standard 35-mm film to give a wider field of view.

      In 1952 a radically new approach to wide-screen projection appeared in the form of Cinerama, which used three projectors and a curved screen. The expanded field of view gave a remarkable increase in the illusion of reality, especially with such exciting and spectacular subjects as a ride down a toboggan slide. There were technical problems, including the necessity of carrying three cameras bolted together at the correct angles on the toboggan or other carrier, synchronization of the three separate films, and matching of the image structure and brightness at the joining edges on the screen. After 1963 Cinerama replaced its three-film process with a 70-mm anamorphic system with an aspect ratio of 2.75 to 1.

      The use of anamorphic lenses for wide-screen projection was introduced by CinemaScope in 1953. An anamorphic optical system photographs with a different magnification horizontally than it does vertically. The lens seems to squeeze the image so that on the film itself figures appear tall and thin. A lens on the projector reverses the effect, so that the images on the screen reacquire normal proportions.
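      The effect of the anamorphic squeeze can be sketched numerically. The 2-to-1 squeeze factor below is the one associated with CinemaScope; the frame dimensions are merely illustrative assumptions:

```python
# Projected aspect ratio of an anamorphically photographed frame (illustrative sketch).
def projected_aspect_ratio(frame_width, frame_height, squeeze_factor):
    """The projector's anamorphic lens re-expands the image horizontally."""
    return (frame_width * squeeze_factor) / frame_height

# An assumed camera aperture of roughly 1.175:1 combined with a 2x squeeze:
print(round(projected_aspect_ratio(0.864, 0.735, 2.0), 2))  # about 2.35
```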

      In 1955 Todd-AO introduced a wider film (photographed on a 65-mm negative and printed on a 70-mm positive for projection), with several stereophonic sound tracks added. Like anamorphic systems, the wider format could be achieved with a single projector. The first two Todd-AO productions, Oklahoma! (1955) and Around the World in 80 Days (1956), were made at 30 frames per second for a nearly flicker-free image; 70-mm films are now photographed and projected at 24 frames per second.

      Amusement parks and world's fairs have often featured 360-degree projection. The first system was presented at the Disneyland amusement park in 1955. At first, the projection involved eleven 16-mm projectors and screens and, later, nine 35-mm projectors. The audience stood on a low platform in the middle. The result was extremely realistic. In one scene, showing the view from a cable car in San Francisco, viewers were seen to lean involuntarily into the curves, as if they were actually on the cable car. The format, however, has limited uses for general storytelling.

      In the 1980s, efforts to improve picture quality took two routes: increase in frame rate (Showscan operates at 60 frames per second) or increase in overall picture size—height as well as width (IMAX and Futurevision). In these formats the sound tracks are usually printed on a separate, magnetic strip of film.

      Another project intended to improve the illusion of reality in motion pictures has been stereoscopic, or three-dimensional, cinematography. “3-D” films use two cameras or one camera with two lenses. The centres of the lenses are spaced 2 1/2 to 2 3/4 inches apart to replicate the displacement between a viewer's left and right eyes. Each lens records a slightly different view corresponding to the different view each eye sees in normal vision.

      Despite many efforts to create “3-D without glasses” (notably in the U.S.S.R., where a screen of vertical slats was used for many years), audience members have had to wear one of two types of special glasses to watch 3-D films. In the early anaglyph system, one lens of the glasses was red and the other green (later blue). The picture on the screen viewed without glasses appeared as two slightly displaced images, one with red lines, the other with green. Each lens of the glasses darkened its opposite colour so that each eye would see only the image intended for it.

      The Polaroid system, used for commercial 3-D movies since the early 1950s, is based on a light-polarizing material developed by the American inventor Edwin H. Land in 1932. In this method, known as Natural Vision, two films are recorded with lenses that polarize light at different angles. The lenses on the glasses worn by spectators are similarly polarized so that each admits its corresponding view and blocks the other. Early versions of Polaroid 3-D used two interlocked projectors to synchronize the two pictures. A later system, revived in the 1970s and 1980s, stacked the left and right components vertically on half-frames two sprocket holes high. The images were converged by means of a mirror and/or prism.

Professional motion-picture production

Cameras
      The principles of operation of modern professional motion-picture cameras are much the same as those of earlier times, although the mechanisms have been refined. A film is exposed behind a lens and is moved intermittently, with a shutter to stop the light while the film is moving. In the process, the film is unrolled from a supply reel, through the intermittent to the gate where the exposure takes place, and then on to the take-up reel.

      Lenses have gone through a continuous evolution in the last half century, for both still and motion-picture photography. The two major objectives have been to focus properly all the colours of the image at the film plane (i.e., to make the lens achromatic) and to focus portions of a beam coming from different portions of the lens, the centre or the edges, at the same point on the film (i.e., to make it anastigmatic). Both objectives must be met for as large a lens opening as possible, in order to capture maximum light for the exposure, and for as wide a field of view as the use of the lens requires. In order to solve these problems, lenses have been made with more and more components. Also, more types of glass have been discovered and developed, to give better achromatic performance. It was found, about 1939, that a special coating of the glass-to-air surface of a lens component could greatly diminish reflections from this surface without affecting other properties of the lens. The use of such coatings improved image contrast by reducing the stray rays that were produced by reflections in a multiple-component lens. Coatings also reduce loss of light by reflection in the desired rays. Coating developments have permitted the manufacture of lenses with many more components than had previously been possible.

      Long experience with both motion-picture and still cameras has shown the need for a variety of focal lengths (ranging from ultrawide angle to telephoto) to photograph scenes under the best conditions. To make changing focal lengths more convenient, the lenses have sometimes been mounted on a turret, so that one out of a set of three lenses may be quickly selected. For motion pictures this would mean an interruption in the action depicted. A continuous change would be more desirable.

      When two lenses are used in a tandem combination, the focal length of the combination varies according to the separation between the two components. For example, when two thin converging lenses are mounted close together, the combined focal length is shorter than when they are separated a certain distance. Thus, the focal length of the combination can be continuously varied over a range merely by changing the separation.
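      This behaviour follows from the thin-lens combination formula: for two thin lenses of focal lengths f1 and f2 separated by a distance d, the combined focal length f obeys 1/f = 1/f1 + 1/f2 - d/(f1*f2). A brief numerical sketch, with arbitrary illustrative focal lengths:

```python
# Combined focal length of two thin lenses in tandem (thin-lens approximation).
def combined_focal_length(f1, f2, separation):
    """1/f = 1/f1 + 1/f2 - d/(f1*f2); all values in millimetres."""
    return (f1 * f2) / (f1 + f2 - separation)

# Two converging lenses of 100 mm each (illustrative values):
print(combined_focal_length(100, 100, 0))   # 50.0 mm when mounted close together
print(combined_focal_length(100, 100, 50))  # ~66.7 mm when separated by 50 mm
```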

      This observation led to the conception of camera lenses of variable focal length in which the variation is obtained by moving one or more elements. One simple design consists of two fixed convex (converging) lenses of unequal power with a movable concave (diverging) lens between them. When the central concave lens is located close to the front convex element, the combination focal length can be shorter (and the image therefore smaller) than when it is located close to the rear convex element. The design can be made such that, with the two convex elements remaining fixed, a distant view can remain almost in focus on the film as the middle element is moved. Exact focus for this arrangement, however, could not be attained. Thus, for lenses of this design, a cam device has in the past been provided to move the front convex element a short distance as the middle element is moved over its range, to keep the focus exact. This kind of lens has come to be called a zoom lens.

      By increasing the number of elements, the focus can be kept exact without the need of a correcting cam. Other improvements include increasing the range of focal lengths covered, increasing the effective lens aperture, increasing the angular field of view seen by the film, and improving the colour correction with radically new glass materials.

      For a long time the change in focal lengths was carried out manually. More recently, the use of an electric motor drive has allowed a smoother change, with less distraction to the cameraman.

      The general principles utilized in the film transport system have remained much the same over recent years, at least for the 35-mm film. The films are usually preloaded in lighttight reel cases (called magazines), with an exposed loop between the supply and take-up reels. This loop is quickly fitted into the camera mechanism when loading.

      The intermittent is usually a claw-type mechanism, sometimes a “dual-fork” claw that pulls down four sprocket holes at a time. The fork protrudes and recedes to engage the sprocket holes. Some cameras are equipped with pin-registering mechanisms, which hold the film firmly in place in the exposure gate, with the pins engaging sprocket holes.

      In the early days of sound films, the noise made by the intermittent and other moving parts in the camera was loud enough to interfere with the sound picked up by the microphone. Cameras were sheathed (“blimped”) with outer, separate sound-absorbing materials. The sound insulation is now usually self-contained in the camera.

      Before the introduction of sound, the film and intermittent were driven by a crank operated by the cameraman. With sound, considerably more uniformity in the speed of the film drive became necessary. For this and other reasons, the film drive in modern cameras is provided by an accurately controlled electric motor, which maintains the standardized sound speed of 24 frames per second.

      The shutter keeps light from striking the film while it is moving from one frame to the next. A variable shutter opening can also be used to reduce exposure when it is necessary or desirable to do this without reducing the lens aperture. The shutter is in most cases rotary and is synchronized with the intermittent.
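      The exposure each frame receives from a rotary shutter is simply the open fraction of the cycle multiplied by the duration of one frame; the 180- and 90-degree openings below are illustrative values, not figures from the text:

```python
# Exposure time per frame for a rotary shutter.
def exposure_time(shutter_opening_degrees, frames_per_second):
    """Open fraction of one revolution times the duration of one frame cycle."""
    return (shutter_opening_degrees / 360.0) / frames_per_second

print(exposure_time(180, 24))  # ~0.0208 s, i.e. roughly 1/48 second
print(exposure_time(90, 24))   # ~0.0104 s; narrowing the opening halves the exposure
```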

      Viewfinding for motion pictures is especially critical: whereas still photographs can be cropped during enlargement or printing, the film image must be framed as it will appear on the screen. Older cameras employed a mechanical “rack-over” that enabled the camera operator to sight directly through the aperture with the film transport out of the way. When an external viewfinder is used, the image seen through it is not exactly the same as that photographed. The viewfinder must be angled so that it and the taking lens both point at the centre of the subject. A system of cams in the focus mechanism of the camera keeps the viewfinder image free of parallax (viewpoint difference) by adjustment from infinity to the near-point of the lens with a separate cam for each focal length.

      Most cameras used today are of the reflex type. A partially reflecting mirror (beam splitter) is positioned in the door of the camera body or built into the lens itself with a parallel viewing tube. The mirror diverts to the viewfinder some of the light rays coming through the lens. This method's major drawback is that it takes away part of the light that would otherwise be used for the exposure. A much-admired viewing system that allows the full amount of light to reach the film is the rotating mirror shutter employed in the Arriflex camera. Light is reflected into the viewfinder only when the shutter blade covers the film as it advances to the next frame. This arrangement, however, is not wholly free from objections. Chief among these is that the arrangement opens a return path for light from the viewer's eyepiece to reach the film. The eyepiece must fit snugly around the eye while the viewfinder is in use, and the finder must be closed completely while it is not in use. In addition, since the camera shutter is closed only once per frame, the image will be subject to a distinct flicker, to which the cameraman must adjust himself. Some cameras incorporate a “video assist” or “video tap” wherein the viewfinder image is electronically fed to a video monitor or video recorder, thus allowing evaluation of the take by videotape replay.

      Focusing has also been a perennial problem for the motion-picture camera. On the camera the position of the lens is precisely indicated on a calibrated scale. The actor's location on the set was formerly marked on the floor and the exact distance to the camera measured with a tape. The actor moved to previously marked places, and an assistant to the cameraman, called a focus puller, or follow-focus assistant, kept the lens in adjustment. Various electrical devices have now been introduced for remote adjustment by the assistant. Where a through-the-lens finder is used, focusing can be done directly, using the viewfinder image. Also, experienced cameramen can estimate distances quite closely.

      It is usual to generate some kind of signal in synchronism with the intermittent when an auxiliary, magnetic-tape sound recorder is used, so that the sound record can later be synchronized exactly with the picture. The sync-generator provides a record of the speed of the camera motor; each frame of picture causes 2.5 cycles of a 60-hertz pulse to be recorded on the sync-track of the sound tape. A newer system is based on the “time code” originally developed for videotape. A separate generator uses a digital audio signal to provide each frame of film with its own number. For each take the time code generator is set to zero; when the camera and film are running, the generator starts to emit numbers that represent “real-time” in hours, minutes, seconds, and frames. In one system, a light-emitting diode next to the camera aperture records the information as ordinary numbers that can be read by the eye; in others, the binary numbers are contained in a control surface of magnetic particles on the base side of the film. One hundred feet of 35-mm film would be rendered in time code as 00:01:06:16, or one minute, six seconds, sixteen frames. Corresponding information is recorded on the “address” track of the audio tape. The time code's last two digits, which represent frames, go up to either 24 or 30. Material intended for theatres is photographed at the international sound projection speed of 24 frames per second. Material filmed for American television is often shot at 30 frames per second (in countries with 50 hertz AC power, the rate is 25 frames per second).
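      The footage-to-time-code conversion mentioned above can be sketched as follows, assuming 4-perforation 35-mm film (16 frames per foot) and the 24 frames-per-second sound speed:

```python
# Convert a length of 35-mm film into a time code of the form hh:mm:ss:ff.
FRAMES_PER_FOOT = 16  # 4-perforation 35-mm film

def footage_to_timecode(feet, fps=24):
    total_frames = feet * FRAMES_PER_FOOT
    hours, rem = divmod(total_frames, fps * 3600)
    minutes, rem = divmod(rem, fps * 60)
    seconds, frames = divmod(rem, fps)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(footage_to_timecode(100))  # 00:01:06:16
```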

      The camera is often supplied with electric motors to perform miscellaneous functions, such as to provide smooth rotation (panning) of the camera or to change the magnification in a zoom lens (or change lenses in a turret). The camera is normally provided with footage indicators to indicate the amount of film left unexposed and with frame counters used when it is desired to superimpose a second exposure. There can also be an “inching knob” to reposition the film to a given frame for multiple exposures. When the camera is used at a speed different from standard, a tachometer may be provided to indicate the actual speed.

      The cameras that have so far been described are for the standard 35-mm film. Cameras for 65-mm film are generally quite similar, though heavier. The 16-mm professional camera may differ from the 35-mm in the form of its case, in its use of a spring-operated film drive, and in its method of film loading, as a result of its development from a former amateur camera. On the other hand it may be a smaller version and have the same features as the 35-mm model by the same manufacturer.

Camera supports
      The camera must be mounted on a substantial support to avoid extraneous movements while film is being exposed. In its simplest form this is a heavy tripod structure, with sturdy but smooth-moving adjustments and casters, so that the exact desired position can be quickly reached. Often a heavy dolly, holding both the camera and a seated cameraman, is used. This can be pushed or driven around the set. When shots from elevated positions are to be used, both camera and cameraman are carried on the end of a crane, also on a dolly. In some cases the assemblage is smoothly driven to follow the action being pictured, such as movement along a street. If the surface being traversed is not smooth, rails, resembling train tracks, must be laid on the floor or ground for the dolly. The camera may be freed from the tripod or dolly and carried by the operator by means of a body brace and gyroscope stabilizer. One such support is the Steadicam, which eliminates the tell-tale motions of the hand-held camera.

Film
 Film types are usually described by their gauge, or approximate width. The 65-mm format is used chiefly for special effects and for special systems such as IMAX and Showscan. It was formerly used for original photography in conjunction with 70-mm release prints; now 70-mm theatrical films are generally shot in 35-mm and blown up in printing. With some exceptions the 35-mm format is for theatrical use, 16-mm for institutional applications, and 8-mm for home movies. The more frequently encountered film formats are illustrated in Figure 2. There are some minor differences in the shape of the sprocket holes in 35-mm film between negative and positive film. The first 8-mm film was made by using 16-mm film, punched with twice as many sprocket holes of the same size and shape. One side, to the middle line, was exposed in one direction. The supply and take-up reels were then interchanged in the camera, and the other side was exposed in the other direction. After processing, the film was split into two strips, which were spliced into one. An improved version of 8-mm stock, called Super-8 film, was designed with the idea of reducing the sprocket-hole size and employing the space thus made available for a larger picture area.

      Originally, the film base was some form of celluloid or cellulose nitrate (nitrocellulose). This material is highly flammable, and extensive precautions were required in projection rooms to avoid film ignition because of the proximity of the projector arc lamp to the film. In 1923, when 16-mm amateur film was introduced, cellulose acetate (or safety film), much less flammable than the nitrate, was used. It was not considered desirable to adopt it for professional 35-mm film, largely because it was inferior in strength and dimensional stability. By the late 1930s an improved cellulose acetate safety film was introduced, and by the early 1950s it had generally replaced the nitrate film. Since 1956 acetate has lost ground to polyester- or mylar-based film, which is thinner, less brittle, and more resistant to tearing.

    The film base is coated with a light-sensitive layer of silver halide emulsion; multiple layers are used for colour film. Emulsion manufacture is quite complicated and delicate. The earlier emulsions were most sensitive to violet and blue light, as shown schematically in Figure 3, curve a. Toward the cyan and green, sensitivity drops rapidly. Such an emulsion is called natural, or ordinary. The result of such a characteristic is that in a natural scene reds and yellows appear black in the positive, and green appears too dark. As early as 1873 it was found that dyes introduced into the emulsion could increase the sensitivity in the yellow and green (Figure 3, curve b). The change increased the natural appearance of the reproduced picture, and the emulsion was called orthochromatic. Later (1904) dyes were found to prolong the sensitivity into the red, and this emulsion is called panchromatic (Figure 3, curve c). The dates are fairly early for motion-picture application, but the development had importance in the general technology.

      The overall sensitivity for picture taking has been increased greatly, from below about 10 ASA before 1930 to several hundred and even several thousand. The ASA (American Standards Association) scale is an arbitrary rating of film speed; that is, the sensitivity of the film to light. If everything else is kept constant, the required exposure time is inversely proportional to the ASA rating. Negative films designed for original picture exposure are usually faster (i.e., have higher ASA ratings) than those for prints and are apt to be somewhat coarser grained.
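      The inverse relation between film speed and required exposure can be illustrated with arbitrary figures:

```python
# Required exposure time scales inversely with the ASA rating, all else being equal.
def scaled_exposure(base_exposure_seconds, base_asa, new_asa):
    return base_exposure_seconds * base_asa / new_asa

# If a 100 ASA stock needs 1/50 second, a 400 ASA stock needs one-quarter of that:
print(scaled_exposure(1 / 50, 100, 400))  # 0.005 s, i.e. 1/200 second
```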

      Current technology has made use of a flatter crystal or “T-grain” that exposes more readily to light without an increase in the visible dimension of the grain. This enables use of very low light levels, especially when the film is “pushed” (given extended development) or “flashed” (prestruck with white light to accelerate exposure). When extreme sensitivity to light is not required, finer grain film may be used, particularly when it is intended to enlarge a 16-mm negative for 35-mm release or a 35-mm negative for 70-mm release.

      There are two major steps involved in making a dye image on motion-picture film. The first is to convert the negative silver image that is obtained from a normally exposed film into a positive dye image. The clue to how this can be done came from experience with a developer known as pyro (pyrogallol), once very popular with still photographers. A negative developed with pyro developer has not only a silver image but also a brown stain. Study of the process showed that the stain was caused by oxidation products given off locally by the developer in the development process. A substance in the developer reacts with these oxidation products to give an insoluble brown dye. The substance is called a dye coupler. Since the dye is not soluble, it does not wash off in the subsequent film treatment.

      This suggested the possibility of bleaching to take away the silver image, leaving the dye image on the film. The first step was to find a developer and dye couplers that would produce the three dye colours that give a faithful three-colour picture rendering. The second step was to carry out the process in the film coating with three separate colours and keep them separate, all the way from exposure to the final three-colour image on the completed film.

      The first portion of this second step is carried out by obtaining three emulsions that can be laid on top of one another and are sensitive, respectively, to the red, green, and blue of the exposing image without interfering with each other and that give corresponding silver layers that similarly do not interfere with each other.

 It has been observed above that normal silver halide photographic emulsion is particularly sensitive to blue light and that one of the early problems was to obtain a more natural pictorial rendering by extending the sensitivity of the emulsion to green and finally to red light. The problem was solved by inserting appropriate dyes in the emulsion. The dye adds a peak of increased sensitivity, respectively, to green and red light, as in Figure 3. The triple-layer film then consists of, on the top, an ordinary blue-sensitive emulsion; below this, a yellow filter to cut off blue light; next below this, an emulsion with a sensitivity peak in the green, with the yellow filter cutting off blue sensitivity; and, finally, an emulsion with a peak sensitivity in the red, a valley in the green, and blue sensitivity cut off by the yellow filter. The sensitization can be chosen to locate and enhance the sensitivity peaks.

      Thus, the blue layer responds to the blue light in the original, the green layer to the green light, and the red layer to the red light. These can be given a first development together, so that the individual responses will be indicated as silver deposits in the respective layers. The developer used is one which leaves no dye-coupler stains.

      In what is called the nonsubstantive subsequent process, the dye couplers are introduced in a second development. Each colour layer is treated separately. Uniform red light is applied (from the bottom up) to expose the undeveloped silver halide in the red layer. It has no effect on the other layers because of their insensitivity to red. The film is processed with a developer containing a minus-red (or cyan) dye coupler. This leaves a silver and minus-red dye deposit wherever there was newly exposed silver halide in the red layer. Similarly, the blue layer, newly exposed with blue light from above and processed with a developer containing a minus-blue (or yellow) dye coupler, leaves a silver and minus-blue dye deposit wherever there was newly exposed silver halide in the blue layer. In the remaining green layer, a white-light exposure and development with a minus-green (or magenta) dye coupler converts the residual silver halide into a silver and minus-green dye deposit.

      All the silver deposits and the yellow filter are finally bleached out. The remaining dye deposits serve to subtract from white light, in the manner that was described earlier, the correct part of the spectrum to leave the colour of the initial exposing light. For example, where this light was red, the final dyes absorb blue and green. Of the spectrum, this therefore leaves red light to go through the film.

      In a modification called the substantive process, the appropriate dye couplers are suitably embedded in the emulsion in the appropriate colour layers to prevent their moving about during processing and contaminating the colours (an important problem). It is then possible to carry out the second exposure and development on all three layers in a single step with white light and with only one developer.

      Nonsubstantive film is essentially an amateur medium that enables the camera original to be processed as a projection print. Commercial theatrical motion pictures are photographed on a colour negative stock containing dye couplers (i.e., substantive type) from which prints can be made.

Lighting
      The art of cinematography is, above all, the art of lighting, and the British term for the chief of the camera crew, lighting cameraman, comes closer to the matter than the Hollywood director of photography. In motion-picture photography, decisions about exposure are governed by the overall style of film, and light levels are set to expose the particular film stock at the desired f-stop.

Light sources
      The earliest effective motion-picture lighting source was natural daylight, which meant that films at first had to be photographed outdoors, on open-roof stages, or in glass-enclosed studios. After 1903, artificial light was introduced in the form of mercury vapour tubes that produced a rather flat lighting. Ordinary tungsten (incandescent) lamps could not be used because the light rays they produced came predominantly from the red end of the spectrum, to which the orthochromatic film of the era was relatively insensitive. After about 1912, white flame carbon arc instruments, such as the Klieg light (made by Kliegl Brothers and used for stage shows), were adapted for motion pictures. After the industry converted to sound in 1927, however, the sputtering created by carbon arcs caused them to be replaced by incandescent lighting. Fresnel-lens spotlights then became the standard. Fresnel lenses concentrate the light beam somewhat and prevent excessive light loss around the sides. They can also, when suitably focused, give a relatively sharp beam. In the studio there are racks above and stands on the floor on which lamps can be mounted so that they direct the light where it is wanted. The advent of Technicolor led to a partial reversion to the carbon arc because incandescent light affected the colours recorded on the film. Around 1950, however, economic pressures caused Technicolor film to be rebalanced for incandescent light.

      The modern era in lighting began in the late 1960s when tungsten-halogen lamps with quartz envelopes came into wide use. The halogen compound is included inside the envelope, and its purpose is to combine with the tungsten evaporated from the hot filament. The resulting compound migrates back to the hot filament, where it decomposes and redeposits the tungsten. It thus prevents the evaporated tungsten from condensing on the envelope and darkening it, an effect that reduces the light output of ordinary gas-filled tungsten lamps. The return of the tungsten to the filament means that the incandescent lamp can be run with a long life at a higher filament temperature and, more important, remain at precisely the same colour temperature. These lamps are now sometimes provided with a special multilayered filter to give a bluish light that approaches the colour of daylight. Halogen lamps give brilliant light from a compact unit and are particularly well-suited to location filming.

      The principal light on a scene is called the key light. The position of the key light has often been conventionalized (e.g., aimed at the actors at an angle 45 degrees off the camera-to-subject axis). Another school of cinematographers prefers source lighting, in the tradition of Renaissance and Old Master paintings; that is, a window or lamp in the scene governs the angle and intensity of light. A fill light is used to provide detail in the shadow areas created by the key light. The difference in lighting level between the key plus the fill light versus the fill light alone yields the lighting contrast ratio. The “latitude” of the film, or the spread between the greatest and least exposure that will produce an acceptable image, governs the lighting contrast ratio. For many years, the latitude of colour films was so restricted that it was thought necessary to have numerically low lighting ratios, typically 2 to 1 (a very flat lighting) and never more than 3 to 1. The introduction of Eastman 5254 colour negative in 1968 and the even more sophisticated 5247 in 1974 opened a new era in which colour film was exposed with higher ratios approaching the previous subtleties of black-and-white.
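      A worked illustration of the lighting contrast ratio, using arbitrary footcandle figures rather than values from any particular production:

```python
# Lighting contrast ratio: key-plus-fill illumination compared with fill alone.
def contrast_ratio(key_footcandles, fill_footcandles):
    return (key_footcandles + fill_footcandles) / fill_footcandles

print(contrast_ratio(200, 100))  # 3.0 -> a 3:1 ratio
print(contrast_ratio(100, 100))  # 2.0 -> the very flat 2:1 ratio long used for colour
```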

Light measurement
      Precise control of exposure throughout filming is necessary to maintain consistent tones from shot to shot and to give an overall tenor of lighting that suits the pictorial style. To determine light levels in the studio and on interior locations, an incident light meter is primarily used. This type of meter is recognizable by a white plastic dome that collects light in a 180-degree pattern (the dome is an approximation of the shape of the human face). Because it measures the overall light (calibrated in footcandles) falling on the scene, it may be used without the actors present.

      Reflected light readings measure the average light coming toward the camera from the scene being photographed. This works well for average subjects but gives wrong exposures if the background contains either many bright areas, as in a beach scene, or very dark areas, as in front of a dark building. In such cases the photocell (photoelectric cell) must be held not at the camera but very close to the subject of interest, to eliminate the effect of the background. This is also the case when the scene contains a good deal of backlight. These shortcomings eventually led to the development of the spot meter.

      Spot measurement readings measure the light coming toward the camera from selected spots in the subject being photographed. The meter for this purpose has an optical system that covers measurement of a spot of about one degree, making it extremely useful on exterior locations.

      Light is also measurable in terms of colour temperature. Light rich in red rays has a low reading in kelvins. Ordinary household light bulbs produce light of about 2,800 K, while daylight, which is rich in rays from the blue end of the spectrum, may have readings from 5,000 to more than 20,000 K. The colour temperature meter uses a rotating filter to indicate a bias toward either red or blue; when red and blue rays are in balance, the needle does not move. Some meters also use red/blue and blue/green filters for fuller measurement.

      The general practice has been to shoot the entire picture on stock balanced for artificial light at 3,200 K. Lights for filmmaking generally range between 3,200 K and 3,400 K. For daylight shooting, an orange filter is employed to counter the film's sensitivity to blue light. Although colour-correcting filters are produced in a great many gradations, the No. 85 filter is generally used to shoot tungsten-balanced colour film outdoors. For mixed-light situations where daylight enters through windows but tungsten light is used for the interior, the practice has been to cover the windows with sheets of plastic similar in colour to the No. 85 filter. This reduces the colour temperature of the natural light to that of the artificial light. When the windows are very large, blue filters are sometimes placed on the lights and the No. 85 orange filter is used on the lens, as if filming in exterior daylight. Yet another approach is to supplement natural daylight with metal halide (daylight-balanced) lights. With the increase in location shooting, daylight-balanced high-speed films have been introduced to allow shooting in mixed-light situations without light loss due to filters.
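      Colour-conversion filters are often rated on the mired scale (one million divided by the colour temperature in kelvins), which makes the shift a filter produces roughly independent of the starting temperature. The following sketch uses nominal daylight and tungsten values as assumptions:

```python
# Mired values and the shift needed to bring daylight to tungsten balance.
def mired(kelvin):
    return 1_000_000 / kelvin

daylight = 5500  # assumed nominal daylight colour temperature, in kelvins
tungsten = 3200  # studio tungsten balance

print(round(mired(daylight)))                    # ~182 mired
print(round(mired(tungsten)))                    # ~313 mired
print(round(mired(tungsten) - mired(daylight)))  # ~131 mired, the order of shift an 85-series filter provides
```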

Film processing and printing
      In the early days of motion pictures, films were processed by winding on flat racks and then dipping in tanks of solution. As films became longer, such methods proved to be too cumbersome. It was recognized that the processing system should have the following characteristics: it must run continuously; it must be lighttight and yet capable of being loaded in daylight; and it must be as compact as possible to provide a minimum air surface for the processing solutions. A general form evolved that is still in use.

      For continuous operation the film must be passed continuously through the solutions and folded back over rollers that do not touch the emulsion surface. It must be handled very carefully, as the impregnation with solution weakens the support, and the sprocket holes should not be engaged. Drive should, therefore, be accomplished by a light friction force at the edges.

      Splicing on a fresh film without affecting the motion of the part of the film being processed is handled by using a storage unit or reservoir. This reservoir has a variable capacity so that the output end can be giving out film while the input end is stationary as the new film is spliced. Lighttight gates prevent all but a short length of film from being light-struck at the very beginning or end of the film (and leaders may be used). The take-up-reel case is fastened in a lighttight way to the storage unit so that after splicing, the film is unreeled into the storage and processing units until the other end is reached, ready for splicing to the next film after changing cases.

      Many tank shapes have been tried. Long vertical tanks provide for several passes of the film through each tank. The spools are designed so as to hold the film at the edges by friction. There are a number of types of drive, but all function gently to avoid strain. Sometimes the spools have multistepped edges to accommodate various film widths. The lower spools (or “diabolos”) are more or less free but guided in a loose fashion so that they will not jam or tangle. The long vertical tanks give a minimum of air surface to the solution. The motion of the film through the liquid can be sufficient for proper contact of the film with the solutions, but sometimes submerged sprays with small jets of fine nitrogen bubbles are provided to increase the agitation.

      The last receptacle in the processing sequence is a drying oven. There are several designs, some of which generally resemble the tank but without solution and are provided with heating elements. This receptacle does not need to be lighttight.

      The processing steps for the many different types of film are similar in principle, though there are variations in specific solutions and treatments. One variation is known as reversal processing. After partial development, the camera original is bleached and given a second exposure of uniform white light. This yields a positive rather than a negative image and thus saves the cost of an additional generation.

      In laboratory parlance, the major functions are divided into “front end” and “release print” work and may be performed at different facilities. Front end work begins even before shooting with tests by the cinematographer on the same film stocks that will be used for the production. These will be used as a guide when takes from the camera negative that come in from each day's shooting are printed. A colour video analyzer reads the red, blue, and green records of the tests over a range of six f-stops to establish “printer lights.” As desired, the work print may be “one light” (given uniform exposure) or “timed” (exposure corrected for scene-to-scene variations).

      The original negative is stored until postproduction is finished. Positive work print is furnished in 1,000-foot rolls for editing. When all editing, including the insertion of optical effects and titles, is completed, the negative cutter matches the original camera film frame by frame at each editing point. The edited camera negative is combined with the synchronized sound track negative into a composite print called the answer print. (The first answer print is rarely the same as the final release print.) After all colour correction and timing have taken place, the information is recorded on perforated paper tape that serves to control both the exposure for each shot and the louvered filters that add red, green, and blue values.

      For theatrical distribution, exhibition release prints are not normally struck from the original camera negative. The original negative is used to make a master positive, sometimes known as the protection positive, from which a printing negative is then made to run off the release prints. Alternatively, a “dupe” negative can be made by copying the original camera negative through the reversal process. This yields a colour reversal intermediate (CRI) from which prints can be struck.

      Printing takes a number of different forms. In contact printing, the master film (or negative) is pressed against the raw stock; this combination is exposed to light on the master film side. In optical printing, the master film is projected through a lens to expose the raw stock. In continuous printing, the master film and the raw stock both run continuously. Continuous printing is usually contact printing but can be optical, through a projected slit. In intermittent, or step-by-step, printing, each frame of the master film is exposed as a whole to a corresponding frame space on the raw film.

      It is possible to print from one size master film to another size raw stock, such as 35-mm to 16-mm, or vice versa. In such cases the printing must, of course, be optical, and in the examples cited must be intermittent if there is a sound track. This is because 35-mm sound film has a spacing between frames and 16-mm does not. The sound track must be printed separately. The preferred method for making 16-mm versions of 35-mm films is to make a 16-mm negative by reduction from the 35-mm negative. Sometimes a 35-mm release print is reduced and printed by reversal, but this yields a coarser image. When 16-mm film is “blown up,” the 16-mm negative is immersed in a solution that conceals scratches and grain as it is being rephotographed; this process is called wet-gate printing.

      Film prints to be used for projection are given a coat of wax over the sprocket-hole areas. This eases the film passage between the pressure plates at the projection aperture.

Sound-recording techniques
      The art of sound recording for motion pictures has developed dramatically. Most of the improvements fall into three areas: fidelity of recording; separation and then resynchronization of sound to picture; and ability to manipulate sound during the postproduction stage.

Optical recording (optical sound recording)
      Until the early 1950s the normal recording medium was film. Sound waves were converted into light and recorded onto 35-mm film stock. Today the principal use of optical recording is to make a master optical negative for final exhibition prints after all editing and rerecording have been completed.

Magnetic recording
      Magnetic recording offers better fidelity than optical sound, can be copied with less quality loss, and can be played back immediately without development. Magnetic tracks were first used by filmmakers in the late 1940s for recording music. The physical principles are the same as those of the standard tape recorder: the microphone output is fed to a magnet past which a tape coated with iron oxide runs at a constant speed. The changes in magnetic flux are recorded onto the tape as an invisible magnetic "picture" of the sound.

      At first the sound was recorded onto 35-mm film that had a magnetic coating. Today sprocketed 35-mm magnetic tape is used during the editing stages. For on-set recording, however, the film industry converted gradually to the same unperforated quarter-inch tape format widely used in broadcasting, the record industry, and even the home. Documentary and independent filmmakers were the first to develop and use the portable, more compact apparatus. Improvements in magnetic recording have paralleled those in the recording industry and include the development of multiple-track recording and Dolby noise reduction.

Double-system recording
      Although it is possible to reproduce sound, either optically or magnetically, in the same camera that is photographing a scene (a procedure known as single-system recording), there is greater flexibility if the sound track is recorded by a different person and on a separate unit. The main professional use for single-system recording is in filming news, where there is little time to strive for optimal sound or image quality. Motion-picture sound recording customarily uses a double system in which the sound track remains physically separate from the image until the very last stages of postproduction.

      Double-system shooting requires a means of rematching corresponding sounds and images. The traditional solution is to mark the beginning of each take with a “clapper,” or “clapstick,” a set of wooden jaws about a foot long, snapped together in the picture field. The instant of clacking then is registered on both picture and sound tracks. Each new take number is identified visually by a number on the clapper board and aurally by voice. A newer version of the clapper is a digital slate that uses light-emitting diodes and an audio link to synchronize film and tape.

      Precise synchronism must be maintained between camera and recorder so that sound can be kept perfectly matched to the visuals. (Lack of perfect synchronism is most conspicuous in close-up shots in which a speaker's lips do not match his voice.) On some occasions several cameras shoot a scene simultaneously from different points of view while only one sound recording is made, or several sound records may be taken of a single shot. Thus, to maintain synchronism, all sound and picture versions of a particular scene must be recorded at the same speed; the camera and the recorder cannot fluctuate in speed. One way to achieve this is to drive all cameras and recorders from a common power supply. Alternatively, synchronization may be achieved through the automatic, continual transmission from cameras to recorders of a sync-pulse signal sent by cable or wireless radio. More convenient yet is crystal sync, whereby the speed of both cameras and recorders is controlled through the use of the oscillation of crystals installed in each piece of equipment. The most advanced system uses a time-code generator to emit numbers in “real-time” on both film and tape.
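
      A time code is at bottom a running frame count expressed as hours, minutes, seconds, and frames. The sketch below shows that conversion in minimal form, assuming the 24-frame-per-second film rate and ignoring the drop-frame conventions of video time code.

# Minimal sketch: converting a running frame count into an
# hours:minutes:seconds:frames label, assuming 24 frames per second
# (video drop-frame conventions are ignored here).

def timecode(frame_count, fps=24):
    frames = frame_count % fps
    total_seconds = frame_count // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(timecode(24 * 60 * 90))  # 90 minutes of film -> "01:30:00:00"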

The sound recordist
      The main task of the recordist during live recording is to get “clean” dialogue that eliminates background noise and seems to correspond to the space between speaker and camera. Most of the nonsynchronous dialogue, sound effects, and music can be added and adjusted later. During shooting the sound recordist adjusts the sound by setting levels, altering microphone placement, and mixing (combining signals if there is more than one microphone). Major technical and aesthetic reshaping is left for the postproduction phase when overhead is lower, the facilities are more sophisticated, and alternative versions can be created. It is also the job of the sound personnel to record wild sound (important sound effects and nonsynchronous dialogue) and ambient sound (the inherent sound of the location). Ambient sound is added to the sound track during postproduction to maintain continuity between takes. Usually, wild sound and music are also adjusted and added then.

Microphones (microphone)
      Microphones of many different types have been used for sound recording. These may differ in sound quality, in directional characteristics, and in convenience of use. Conditions that may dictate the choice of a particular microphone include the presence of minor echoes from objects in the set or reproduction of speech in a small room, as distinct from that in a large hall. Painstaking adjustments are made by careful attention to the choice of microphones, by the arrangement and sound absorbency of walls and furniture on the set, and by the exact positioning of the actors. For recording a conversation indoors, the preferred microphone is sensitive in a particular direction in order to reduce extraneous noises from the side and rear. It is usually suspended from a polelike “boom” just beyond camera range in front of and above the actors so that it can be pivoted toward each actor as he speaks. Microphones can also be mounted on a variety of other stands. A second way to cut down background noise is to use a chest (or lavaliere) microphone hidden under the actor's clothing. For longer shots, radio microphones eliminate the wires connecting actors to recorders by using a miniature transistor radio to send sound to the mixer and recorder.

Pierre Mertz Elisabeth Weis Stephen G. Handzo

Editing
      The postproduction stage of professional filmmaking is likely to last longer than the shooting itself. During this stage, the picture and the sound tracks are edited; special effects, titles, and other optical effects are created; nonsynchronous sounds, sound effects, and music are selected and devised; and all these elements are combined.

Picture editing
      The developed footage comes back from the laboratory with one or more duplicate copies. Editors work from these copies, known as work prints, so that the original camera footage can remain undamaged and clean until the final negative cut. The work prints reproduce not only the footage shot but also the edge numbers that were photographically imprinted on the raw film stock. These latent edge numbers, which are imprinted successively once per foot on the film border, enable the negative matcher to conform the assembled work print to the original footage.

      Before a day's work, or rushes, is viewed, it is usual to synchronize those takes that were shot with dialogue or other major sounds. Principal sound is transferred from quarter-inch to sprocketed magnetic tape of the same gauge as the film (i.e., 16-mm or 35-mm) so that once the start of each shot is matched, sound and image will advance at the same rate, even though they are on separate strips. Once synchronism is established, the sound and image tracks can be marked with identical ink "rubber" numbers so that synchronism can be maintained or quickly reestablished by sight.

      The editor first assembles a rough cut, choosing with the director one version of each shot and providing one possible arrangement that largely preserves continuity and the major dialogue. The work print goes through many stages from rough to fine cut, as the editor juggles such factors for each shot and scene as camera placement, relation between sound and image, performance quality, and cutting rhythm. While the work print is being refined, decisions are made about additions or adjustments to the image that could not be created in the camera. These “opticals” range from titles to elaborate computer-generated special effects and are created in special laboratories.

Editing equipment
      Rushes are first viewed in a screening room. Once individual shots and takes have been separated and logged, editing requires such equipment as viewers, sound readers, synchronizers, and splicers to reattach the separate pieces of film. Most work is done on a console that combines several of the above functions and enables the editor to run sound and picture synchronously, separately at sound speed, or at variable speeds. For decades the Hollywood standard was the Moviola, originally a vertical device with one or more sound heads and a small viewplate that preserves much of the image brightness without damaging the film. Many European editors, from the 1930s on, worked with flatbed machines, which use a rotating prism rather than intermittent motion to yield an image. Starting in the 1960s flatbeds such as the KEM and Steenbeck versions became more popular in the United States and Great Britain. These horizontal editing systems are identified by how many plates they provide; each supply plate and its corresponding take-up plate transports one image or sound track. Flatbeds provide larger viewing monitors, much quieter operation, better sound quality, and faster speeds than the vertical Moviola.

      Despite the replacement of the optical sound track by sprocketed magnetic film and the introduction of the flatbed, the mechanics of editing did not change fundamentally from the 1930s until the 1980s. Each production generated hundreds of thousands of feet of work print and sound track on expensive 35-mm film, much of it hanging in bins around the editing room. Assistants manually entered scene numbers, take numbers, and roll numbers into notebooks; cuts were marked in grease pencil and spliced with cement or tape. The recent application of computer and video technology to editing equipment, however, has had dramatic results.

      The present generation of “random access” editing controllers makes it likely that physical cutting and splicing will become obsolete. In these systems, material originated on film is transferred to laser videodiscs. Videotape players may also be used, but the interactive disc has the advantage of speed. It enables editors to locate any single frame from 30 minutes of program material in three seconds or less. The log that lists each take is stored in the computer memory; the editor can call up the desired frame simply by punching a location code. The image is displayed without any distracting or obstructing numbers on a high-resolution video monitor. The editor uses a keypad to assemble various versions of a scene. There is neither actual cutting of film nor copying onto another tape or disc; computer numbers are merely rearranged. The end product is computer output in which the “edit decision” list exists as time code numbers (see above Cameras (motion-picture technology)).

      Electronic editing also simplifies the last stage in editing. Instead of assembling the camera negative with as many as 2,000 or more splices, an editor can match the time code information on a computer program against the latent edge numbers on the film. Intact camera rolls can then be assembled in order without cutting or splicing. Electronic editing equipment has been used primarily with material photographed at the standard television rate of 30 frames per second. Material shot at the motion-picture rate of 24 frames per second can be adapted for electronic editing by assigning each film frame three video fields, of which only two are used.
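
      One widely used transfer scheme, 2:3 pulldown, is sketched below purely as an illustration of how 24 film frames can be distributed over the 60 video fields of a 30-frame second; the exact field assignment varies among systems and is not taken from the description above.

# Illustrative sketch of a 2:3 ("three-two") pulldown table: 24 film frames
# fill the 60 video fields of one second of 30-frame video by alternately
# occupying two and three fields.  Offered only to show how an edit list in
# video time code can be traced back to individual film frames.

def pulldown_fields(film_frame):
    """Number of video fields assigned to a given film frame (0-based)."""
    return 2 if film_frame % 2 == 0 else 3

fields_per_second = sum(pulldown_fields(f) for f in range(24))
print(fields_per_second)   # 60 fields = 30 video frames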

Special effects
      Special effects embrace a wide array of photographic, mechanical, pyrotechnic, and model-making skills.

      The most important resource of the special effects department is the optical printer, essentially a camera and projector operating in tandem, which makes it possible to photograph a photograph. In simplest form this apparatus is little more than a contact printer with motorized controls to execute simple transitions such as fades, dissolves, and wipes. A 24-frame dissolve can be accomplished by copying the end of one film scene and the beginning of another onto a third film so that diminished exposure of the first overlaps increased exposure of the second. Slow motion can be created by reprinting each frame two or three times. Conversely, printing every other frame (skip printing) speeds up action to create a comic effect or to double the speed when filming action such as collisions. A freeze frame is made by copying one frame repeatedly.
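
      Each of these effects amounts to a rule for deciding which frame of the master is exposed onto each frame of the copy. The following schematic sketch is offered only to make those rules concrete; it does not describe the controls of any particular printer.

# Schematic sketch of optical-printer frame mapping (not a description of
# any real printer's controls): each entry in the returned list is the
# index of the master frame to expose onto that frame of the copy.

def stretch_print(n_frames, factor=2):
    """Slow motion: repeat each master frame `factor` times."""
    return [i for i in range(n_frames) for _ in range(factor)]

def skip_print(n_frames, step=2):
    """Speeded-up action: copy every `step`-th master frame."""
    return list(range(0, n_frames, step))

def freeze_frame(frame_index, length):
    """Hold one master frame for `length` copy frames."""
    return [frame_index] * length

print(stretch_print(4))      # [0, 0, 1, 1, 2, 2, 3, 3]
print(skip_print(8))         # [0, 2, 4, 6]
print(freeze_frame(10, 5))   # [10, 10, 10, 10, 10]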

      The optical printer can also be used to replace part of an image. For example, a high-angle long shot in a western may reveal what looks like an entire frontier town surrounded by wilderness. Rather than take the time and trouble to build such a town and film it on location for a shot that may last less than a minute, filmmakers can make the shot using standing sets on the studio backlot, even though skyscrapers and freeway traffic may be visible in the distance. One frame of the original scene is then enlarged so that a matte artist can trace the outline of the offending area on paper. When the copy negative is made, the offending area is masked and remains unexposed. The negative can then be rewound to film a matte painting of suitable location scenery. In addition to combining artwork with live action, optical printing can combine two or more live-action shots.

      In the aerial image optical printer, the camera is aimed straight down at a ground glass easel on which an image is projected from below. The large image allows the artist to make a very precise alignment of the artwork and live action so that they can be filmed in one pass.

      Optical printing can be combined with blue-screen photography to produce such effects as characters flying through the air. Ordinary superimposition cannot be used for this effect because the background will bleed through as the character moves. To create a traveling matte shot, it is necessary to obtain an opaque image of the foreground actors or objects against a transparent background. This is done by exploiting film's special sensitivity to blue light. In a traditional blue-screen process the actor is posed before a primary blue background, which, to avoid shadows, is illuminated from behind (see Figure 4A—>). Eastman No. 5247 colour negative is used to film the shot because its blue-sensitive layer yields a dense black-silver image in the area of the blue screen. On the positive print, the foreground action appears against a transparent field (see 4B—>). This image, printed with red light onto high-contrast panchromatic film, produces the action, or female, matte (see 4C—>). An additional generation yields a countermatte known as the background, or male, matte, on which the action appears as an opaque silhouette (see 4D—>). This silhouette is placed with a separately photographed background (see 4E—>) in an optical printer. In the first pass through the optical printer, the background is "printed in" (see 4F—>). In the second pass, the actor and action matte are combined and the foreground is printed in (see 4G—>). All the elements are thus composited on one film (see 4H—>). There are many variations using more or fewer generations. In some systems the foreground is printed first. With a negative, or reverse, matte, the action matte is made from the camera negative and is opaque against a transparent background. The blue-screen process, in a form more complex than that described here, was used to create many spectacular effects in such films as Star Wars (1977) and E.T.—The Extraterrestrial (1982). The term blue-screen need not be taken literally. Blue-garbed Superman required a differentiated backing, and sodium vapour (yellow) light was used on the screen to yield a transparent background for the flight scenes in Mary Poppins (1964).

      In the past, two actors talking in a car were likely to be filmed in the studio using rear projection (process) shots; that is, the actors were photographed in front of a translucent screen through which previously filmed footage of passing scenery was projected. Location shooting and lightweight sound equipment have all but eliminated this formerly common practice in feature films, although it survives in television. When routine background replacement is still used in expensive productions, it is more likely to be done with blue-screen than with rear projection.

      The light loss and lack of sharpness (especially noticeable in colour) that made rear projection shots obvious have also inspired some interest in front projection. The camera is placed facing the screen, and the background projector is positioned in front of and to the side of the camera so that the beam it projects is perpendicular to the camera's line of sight. A semitransparent mirror is angled at 45 degrees between camera and projector; the camera photographs the scene through the glass while the mirror particles reflect the projection beam onto the screen. The screen is made of Scotchlite, the trade name for a material that was originally devised to make road signs that would reflect light from a car's headlights to the driver's eyes. Because camera and projector are in the same optical axis in the front projection process, the background illumination is reflected directly to the camera lens so brilliantly that it is not washed out by the lighting on the actors. The actors also mask their own shadows. Front projection was used to great effect in "The Dawn of Man" sequence in 2001 (1968), wherein a leopard's eyes lit up when it faced the camera. Scotchlite screens have been used to reflect powerful lights that have been shone through tanks of dyed water to produce large-scale blue-screen effects.

      To reduce the graininess that each generation of film adds to the original, concerns such as George Lucas' (Lucas, George) Industrial Light and Magic produce their effects on 65-mm film. Others, notably Albert Whitlock, have revived the old practice of making matte effects on the camera negative. In the silent film days, this was achieved using a glass shot in which the actors were photographed through a pane of glass on which the background had been painted. The Whitlock method employs a black matte in front of the camera. A hole is cut in the matte to expose the live action, which may account for only a small portion of the image. The partially exposed negative is rewound, and the background is photographed from a matte painting on glass on which the corresponding area of live action is absent.

      Miniatures (scale models) are often used in special effects work because they are relatively inexpensive and easy to handle. Great care is needed to maintain smooth, proportionate movement to keep the miniatures from looking as small and insubstantial as they really are. Models may be filmed at speeds greater than 24 frames per second (i.e., in slow motion) to achieve more realistic-looking changes in perspective and time scale. John Dykstra's Apogee, Inc., is a leader in the field of motion control, the use of computer-controlled motors to regulate the movement of models and camera in relation to one another, thereby improving the illusion of motion. The model aircraft or spacecraft can even be made to swoop and turn as they approach the camera.

      Until recently it was difficult to introduce camera movement into special effects shots. Limited camera movement was achieved by moving the camera in the optical printer, thereby creating an optical zoom, but this method did not create a convincing illusion of three-dimensionality because the foreground and background elements, as well as the grain pattern in the film, were enlarged or reduced at the same rate. When a crane or dolly was used to shoot the live portion of the scene, the background had to be animated frame-by-frame, involving considerable expense in draftsmanship. Computer-enhanced animation has made it possible to store and recall the algorithms needed to model shapes and surfaces at varied perspectives.

      The increased interface of film and video techniques has great implications in the effects area. The ease with which colour components can be separated and reformed makes the electronic medium especially well suited to blue-screen and similar image replacement techniques. The creation of mattes through computer graphics rather than the laborious process of laboratory development is an obvious area of cost savings. Digital image storage on laser videodiscs, as in the Abekas system, enables images to be manipulated with ease.

Sound editing
      Less than 25 percent of the sound track of a feature film may have been recorded at the time of photography. Much of the dialogue and almost all of the sound effects and music are adjusted and added during postproduction. Most sound effects and music are kept on separate magnetic tracks and not combined until the rerecording session.

      Because of drastic changes in microphone placement from one shot to another, excessively "live" acoustics, background noise, and other difficulties, part or all of the dialogue in a scene may have to be added during postproduction. Production sound is used as a cue or guide track for replacing dialogue, a procedure commonly known as dubbing, or looping. Looping involves cutting loops out of identical lengths of picture, sound track, and blank magnetic film. The actor listens to the cue track while watching the scene over and over. The actor rehearses the line so that it matches the wording and lip movements, and then a recording is made. The cutting of loops has largely been replaced by automatic dialogue replacement (ADR). Picture and sound are interlocked on machines that can run forward or backward. In the 1980s digital systems were developed that could, with imperceptible changes in pitch, stretch or shrink the replacement dialogue to match the waveforms in the original for perfect lip sync.

      Dubbing also refers to the process of substituting one language for another throughout the entire picture. If this is to be done credibly, it is necessary to make the speech in the second language fit the character and cadence of the original. If the actor's face is visible in the picture it is also necessary to fit the words of the translation so that the lip movements are not too disparate. In the United States and England pictures intended for foreign distribution are prepared in a version with an M&E (music and effects) track separate from the dialogue to facilitate dubbing. In certain other countries, notably Italy, most dialogue recorded during production is meant merely to serve as a guide track, and nearly all sound is added during postproduction. One last form of speech recorded separately from photography is narration or commentary. Although images may be edited to fit the commentary, as in a documentary using primarily archival footage, most narration is added as a separate track and mixed like sound effects (sound effect) and music.

      All sounds other than speech, music, and the natural sounds generated by the actors in synchronous filming are considered sound effects, whether or not they are intended to be noticed by the audience. Although some sounds may be gathered at the time of shooting, the big studios and large independent services maintain vast libraries of effects. Still other effects may be generated by re-creating conditions or by finding or creating substitute noises that sound convincing.

      An expedient way of generating mundane effects is the “foley” technique, which involves matching sound effects to picture. For footsteps, a foley artist chooses or creates an appropriate surface in a studio and records the sound of someone moving in place on it in time to the projected image. Foleying is the effects equivalent of looping dialogue.

      Background noise (room tone or presence) from the original location must be added to all shots that were not recorded live so that there is continuity between synchronous and postsynchronized shots. Continuous noises, such as wind or waves, may be put on separate tracks that are looped (the beginning of a track is spliced to follow its end), so that the sound can be run continuously.

      Sound effects can be manipulated with the use of digital (digital sound recording) technology known as audio signal processing (ASP). The sound waveform is analyzed 44,000 times per second and converted into binary information. The pitch of a sound may be raised or lowered without altering the speed of the tape transport. Thus, engineers can simulate the changes in pitch perceived as an object, such as an arrow or vehicle, approaches and passes the camera. Sounds may be lengthened, shortened, or reversed without mechanical means. Some digital systems enable engineers not only to alter existing sounds but also to synthesize new sound effects or music, including full symphonic scores.
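
      Once sound exists as a stream of samples, manipulations of this kind become arithmetic on numbers. The toy sketch below shows only the simplest case, in which resampling shifts pitch and duration together; the pitch-only shifting described above requires considerably more elaborate processing (phase-vocoder techniques, for example), and the sketch is not a description of any particular commercial system.

# Toy illustration of digital audio manipulation: once a waveform is stored
# as samples, resampling it raises or lowers pitch.  Real audio signal
# processors use far more sophisticated methods to shift pitch without
# changing duration; this only shows the idea of treating sound as numbers.

import math

SAMPLE_RATE = 44_000   # samples per second, as cited in the text

def sine_wave(freq_hz, seconds):
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def resample(samples, ratio):
    """ratio > 1 raises pitch (and shortens the sound) by that factor."""
    return [samples[int(i * ratio)] for i in range(int(len(samples) / ratio))]

tone = sine_wave(440, 1.0)     # one second of A440
higher = resample(tone, 2.0)   # plays back an octave higher, half as long
print(len(tone), len(higher))  # 44000 22000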

      There are two basic kinds of music: underscoring, which is usually background orchestration motivated by dramatic considerations, and source music, which may be heard by the characters. Neither is likely to be recorded during shooting. Because a performance is usually divided into separate shots that take minutes or hours to prepare, it would be extremely difficult to produce a continuous musical performance. Thus, most musical numbers are filmed to synchronize with a playback track. The songs and accompaniment are prerecorded, so that during filming the musician is mouthing the words or faking the playing in time to the track recorded earlier.

      Whether music is chosen from music libraries or specially composed for the film, it cannot be prepared until the picture has been edited. The first step in scoring is spotting, or deciding which scenes shall have music and where it is to begin and end. The music editor then uses an editing console to break down each use of music, or cue, into fractions of seconds. Recording is done on a recording stage, with individual musicians or groups of instruments miked individually and separated from one another, sometimes by acoustical partitions. In this case the conductor's function of balancing the instrumentalists may be left to the scoring mixer, who can adjust each track later.

Mixing
      The final combination of tracks onto one composite sound track synchronous with the picture is variously known as mixing, rerecording, or dubbing. Mixing takes place at a special console equipped with separate controls for each track to adjust loudness and various aspects of sound quality. Although some of the new digital processes employ the record-industry technique of overdubbing, or building sound track-by-track onto a single tape, most mixing in films is still performed by the traditional practice of threading multiple dubbing units (sprocketed magnetic film containing separate music, dialogue, and sound effects elements) on banks of interlocked dubbers. The playback dubbers are connected by selsyn motors to one another, as well as to the rerecorders that produce the master, or parallel music/dialogue/effects (M/D/E), track on full-coat magnetic stock. Also in interlock are a projector that allows the mixer to work from the actual image and a footage counter that allows the mixer to follow cue sheets, or logs, which indicate by footage number when each track should be brought in and out.

      The mixer strives to strike the right dramatic balance between dialogue, music, and effects and to avoid monotony. Mixing procedures vary widely. Some studios use one mixer for each of the three main tracks, in which case the effects tracks have probably been mixed down earlier onto one combined track. In the early days of magnetic recording, stopping the rerecording equipment produced an audible click on the track; if a mistake were made, mixing would have to be redone from the beginning of the tape reel. The advent of back-up recording in the 1960s eliminated the click, making it possible for mixers to work on smaller segments and to correct mistakes without starting over. This enables the mix to be controlled by one person, who may be combining as many as 24 tracks. An even greater advance is the computerized console that enables the mixer to go back and correct any one track without having to remix the others.

      For monaural release, a composite music/dialogue/effects master on full-coat 35-mm magnetic film is converted to an optical sound negative. For stereo, four-track submasters for M/D/E are mixed down to a two-track magnetic matrix encoded to contain four channels of sound information. Optical sound negatives are copied from the magnetic master, and they are then composited with the picture internegative so that they are in projection sync (on 35-mm prints the sound (sound recording) is placed 21 frames in advance of its corresponding image; on 16-mm prints the sound is 26 frames in advance of the picture).

      Because of narrow track width, optical stereo sound tracks require a system of noise reduction such as Dolby Type A. The Dolby system works by responding to changing amplitudes in various regions of the frequency spectrum of an audio signal. The quieter passages are boosted to increase the spread between the signal (desired sound) and the unwanted ground noise. When played back, normal levels are restored, and the ground noise drops below the threshold of audibility.

Projection technology and theatre design
Projectors (projector)
      The projector is the piece of motion-picture equipment that has changed the least. Manufacturers produce models virtually identical to those of the 1950s, and even the 1930 model Super Simplex is still in wide use. The essential mechanism is still the four-slot Maltese cross introduced in the 1890s. The Maltese cross provides the intermittent Geneva (Geneva mechanism) movement that stops each frame of the continuously moving film in front of the picture aperture, where it can be projected (or, in a camera, exposed). The movement starts with a continuously rotating gear and cam (see Figure 5—>, left). Each 360-degree rotation of the gear and cam causes a pin to engage one of the slots of the Maltese cross. The pin rotates the cross, which in turn rotates a shaft, one quarter turn. As the shaft rotates, four of the 16 teeth on the intermittent sprocket advance and engage the perforations (sprocket holes) on one frame of the film. The sprocket moves only when the pin is fully engaged in the Maltese cross slot (see Figure 5—>, right). This is the "pull-down" phase; in the other phases the curved surfaces of the cam and the cross are in contact and the movement is in the "dwell" position. The Geneva movement is also called a 3:1 movement because there are three quarter-cycles of dwell for every one quarter-cycle of pull-down.
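
      The 3:1 proportion can be restated as a timing budget per frame. The small sketch below simply works out those numbers at the conventional sound speed of 24 frames per second.

# Small timing sketch of the Geneva ("Maltese cross") movement at sound
# speed: one quarter-cycle of pull-down followed by three quarter-cycles
# of dwell for every frame, restating the 3:1 proportion.

FRAMES_PER_SECOND = 24
frame_period = 1.0 / FRAMES_PER_SECOND    # seconds per frame
pull_down_time = frame_period * (1 / 4)   # film is actually moving
dwell_time = frame_period * (3 / 4)       # film is held at the aperture

print(round(frame_period * 1000, 2))      # ~41.67 ms per frame
print(round(pull_down_time * 1000, 2))    # ~10.42 ms of pull-down
print(round(dwell_time * 1000, 2))        # ~31.25 ms of dwell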

 Sound, unlike images, cannot be reproduced intermittently; sound must be continuous to be realistic. The optical-sound-reading equipment on a projector is therefore located below the picture aperture (see Figure 6—>), and the sound on an optical 35-mm print is located 21 frames ahead of its corresponding image. A light beam (supplied by a direct current for stability) is shone through a rectangular slit and focused by a lens to dimensions of .001 by .084 inch onto the sound track. The sound track's varying bands of light and dark then modulate the amount of light from the beam that is allowed to pass to the optical pickup. In older equipment this pickup was a photoelectric cell that changed electrical resistance under exposure to light. Newer designs employ a solar cell of photovoltaic material to convert light energy to electric energy.

      An important element of picture quality on the screen is brightness. For decades the standard light source was the carbon-arc lamphouse, which used disposable electrodes (positive and negative carbon-clad rods) that would be moved together as they burned; the rods needed to be replaced every hour or so. Xenon (electric discharge lamp) lamps were introduced in West Germany in the 1950s, and carbon-arc projection is now found only in older theatres. Both carbon-arc and xenon lamps are run off a direct-current power supply in order to minimize brightness variations due to fluctuations in voltage. The xenon bulb replaces the positive and negative carbons with a tungsten anode and cathode in a quartz envelope filled with xenon gas under pressure. Light from xenon bulbs has a colour temperature closer to that of daylight than carbon-arc light does; that is, it is bluer and is therefore particularly well suited to colour films.

Projection techniques
      A 35-mm exhibition print is furnished to the theatre mounted on 2,000-foot (22-minute) reels. Thus, a typical feature film consists of five or six reels. For decades, the 2,000-foot reel was the basic unit of projection, and each screening required four or five changes of projector. Circular cue marks printed in the upper right corner of the picture indicated when each changeover should take place. Today the 2,000-foot reel is used primarily in single-screen theatres and in archival and repertory theatres that may present only a single screening of a film. Theatrical exhibition increasingly requires the film to be “made up”—that is, reels must be spliced together to enable the projectionist to make a single changeover between large reels or to use external transports that contain an entire feature without changeovers. For the former, a feature film of six 2,000-foot reels would be reassembled onto two 6,000-foot reels with a running time of about an hour each. The changeover is made by the traditional switching method using the cues at the end of the reel or by attaching a strip of foil sensor tape to the edge of the film, where it activates the appropriate switching relays. Coming attractions (“trailers”) and announcements (“snipes”—e.g., “No Smoking” or “Starts Friday”) are spliced in sequence at the head of the first reel or may be on a separate reel. Up to three auditoriums (auditorium) may be served from a common booth when large reels are used.
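
      The footage-to-running-time figures quoted here follow from the fact that 35-mm film carries 16 frames per foot and runs at 24 frames per second, or 90 feet per minute; the quick check below works out the numbers.

# Quick check of the reel arithmetic: 35-mm film carries 16 frames per foot
# and runs at 24 frames per second, i.e. 90 feet per minute.

FRAMES_PER_FOOT = 16
FRAMES_PER_SECOND = 24
feet_per_minute = FRAMES_PER_SECOND * 60 / FRAMES_PER_FOOT   # 90.0

def running_time_minutes(feet):
    return feet / feet_per_minute

print(round(running_time_minutes(2000), 1))   # ~22.2 min per 2,000-ft reel
print(round(running_time_minutes(6000), 1))   # ~66.7 min per 6,000-ft reel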

      The advent of xenon lamps made it possible to reduce or eliminate changeovers to the point where a single projectionist could operate the equipment for several auditoriums. Although there was an occasional theatre with more than one screen in the days of carbon-arc projection, it is xenon projection that truly began the age of multiplex cinemas. With more than three screens, equipment popularly known as the flatbed, or platter, system is mandatory. The entire film is shown without changeovers and does not need to be rewound. The most advanced version of the platter eliminates the need for rethreading. The last frame of film is spliced to the first, as in the Edison Kinetoscope.

Sound reproduction
      Theatre sound systems are divided into the “A” chain and “B” chain. The “B” chain components are the power amplifiers and speakers (loudspeaker) that, although specially made, are not essentially different from those in other audio systems. The “A” chain components are the optical pickup and preamplifier and employ some principles unique to motion pictures.

      The simplest and most common sound system employs a single amplifier channel and one speaker behind the screen. Stereo variable area (SVA), popularly known as Dolby, though in fact made by several manufacturers, employs a split optical pickup feeding two sets of wires, one for the left and one for the right channel. Three stage speakers (left, right, and centre) are mounted behind the screen, and an array of speakers is spread along the side and rear of the auditorium for "surround" sound. Most feature films are prepared so that dialogue issues from the centre speaker, music and on-screen sound effects from the left and right, and off-screen sounds from the surrounds. A processor decodes the four channels from dual variable area tracks; information appearing on the left track is sent to the left speaker, on the right track to the right speaker, while information on both tracks is combined in the centre channel. The surround channel is derived from out-of-phase (phase-inverted) relationships between the left and right tracks.

      In monaural systems, a treble cut is employed in accordance with the Standard Electrical Characteristic of 1938, or Academy Curve, so that frequencies above 8,000 hertz (Hz) are “rolled off.” This practice dates from an era when sound tracks had a large degree of ground noise and vacuum tube amplifiers produced an audible hiss concentrated in the upper frequencies. A treble boost is added during rerecording so that monaural sound tracks sound shrill and sibilant when played without the Academy filter. The introduction of Dolby noise reduction in conjunction with optical tracks made it possible for frequencies to range up to about 12,000 Hz. With the replacement of tube power amplifiers by solid state ones, large wattages are easily obtainable, and theatre sound is generally louder than it was formerly. The normal level for dialogue in a monaural film is 80 decibels (dB) in the centre of the auditorium; the normal Dolby level is 85 dB, or nearly double that.

      SVA is a direct replacement for the four-track magnetic sound introduced in 1953 in conjunction with CinemaScope. Today, magnetic sound is used only with 70-mm prints where six tracks are contained in four stripes of magnetic oxide embossed on the film. The magnetic reproducer, called a penthouse, is mounted above the projector. On a magnetic print, the sound displacement is behind the picture (28 frames in 35 mm and 23 frames in 70 mm).

      Until recently, theatre speakers were not capable of reproducing sounds below 80 Hz. The standard theatre speaker was a two-way system with a high-frequency horn mounted atop a cabinet containing a wide, shallow paper cone woofer. The impetus given to 70-mm six-track sound by the great success of Star Wars led to the development of the THX system for exhibition. In the six-track system, five stage speakers are mounted in a flat baffle wall behind the screen; each has double 15-inch woofers for low-frequency reproduction down to 40 Hz. For frequencies down to 30 Hz, sub-woofers are connected to a bass extension module that augments signals below 100 Hz on the tracks. At this level, sound is not heard but felt as vibration in the viewer's diaphragm. The THX system delivers undistorted sound up to a level of 108 dB per channel.

Auditorium design
      The most crucial consideration of theatre design is the relationship of picture size to the seating area. In the 1940s the Society of Motion Picture Engineers propounded the “two and six rule,” which stated that the first row of seats should be at a distance from the screen equal to twice the picture width and the last row at six picture widths. This rule was based on the Academy picture ratio of 1.33 to 1, which is no longer used except for revival showings. The rule is still valid, however, because the wide-screen formats derive their impact from extension of the picture into the viewer's peripheral vision, and proper installation will maintain constant picture height through all formats.

      Depending upon the seating capacity of the auditorium, the image may be made larger or smaller by changing the focal length of the lens. The lens size is calculated by multiplying the "throw" (distance from lens to screen) by the width of the aperture and dividing the product by the picture width. Thus, producing a picture 18.5 feet wide in 1.85 format (aperture width .825 inch) in an auditorium having a 90-foot throw would require a 4-inch lens.
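
      The worked example can be written out directly once all dimensions are put in the same units; the short sketch below does the arithmetic in inches.

# The lens-size rule of thumb from the text, with the worked example:
# focal length = throw x aperture width / picture width (consistent units).

def lens_focal_length_inches(throw_feet, aperture_width_in, picture_width_feet):
    throw_in = throw_feet * 12
    picture_width_in = picture_width_feet * 12
    return throw_in * aperture_width_in / picture_width_in

# 90-foot throw, 0.825-inch aperture (1.85 format), 18.5-foot picture:
print(round(lens_focal_length_inches(90, 0.825, 18.5), 2))   # ~4.01 inches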

      The recommended level of screen brightness is 16 foot lamberts in the centre of the screen (with no film in the aperture), but a level of 12 to 14 foot lamberts is more typical for commercial cinemas. It is difficult to illuminate a large picture, because screen brightness decreases in proportion to the square of the increase in screen size; i.e., the light source used to produce a 30-foot-wide picture will have to be not twice but four times as bright as that for a 15-foot image.
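
      The square-law relation can be checked with the figures in the example; the one-line function below restates it.

# The square-law relation cited above: for constant screen brightness the
# required light grows with the square of the picture width.

def relative_light_needed(new_width, old_width):
    return (new_width / old_width) ** 2

print(relative_light_needed(30, 15))   # 4.0 -> four times as much light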

      Light from the screen is wasted if it comes back over the heads of the audience, is too low down, or is too far to the sides. Light may be conserved, at the expense of even illumination, by the use of various screen surfaces. The ordinary matte-white screen exhibits approximately the same level of brightness at wide angles as from the centre axis. It is possible to increase the light reflected to the centre axis by using pearlescent screen surfaces that contain a brightness enhancing agent. Such screens conserve light but cannot be used in a theatre with a wide audience area. Another screen surface is the aluminized, or silver, screen associated with old-style movie palaces with very long throws. This screen is even brighter than the pearlescent version but loses its brightness markedly if viewed from beyond 20 degrees from the centre axis. It is mandatory for 3-D presentation, however, because an ordinary white screen depolarizes the light.

      Theatre screens are perforated to allow transmission of sound from speakers behind the screen. The perforations account for only about 8 percent of the screen surface and do not substantially degrade the picture.

      Reverberation times in excess of one second degrade speech intelligibility from the speakers. Very large, old theatres built for vaudeville and live musical accompaniment of silent films have high ceilings and large interior volumes that produce reverberation times of two seconds or more. Well-designed theatres employ curved, often serrated walls and avoid parallel walls and right angles that can produce short-path reflections.

Elisabeth Weis Stephen G. Handzo

Motion pictures for scientific purposes
      As soon as motion pictures were invented, they were applied in the recording of scientific phenomena. An experiment in which a number of things happen at about the same time is especially appropriate for motion-picture recording.

      There are many occasions on which the cinematography can be carried out at normal speeds. There are other situations, however, in which the changes occur very slowly, so slowly that the eye does not discern the change. One example is the opening of a flower blossom. In such a situation the technique is to take successive pictures at intervals of, for example, an hour, taking great care not to move the camera or the plant, and then to project the resulting film at normal motion-picture speed. The projected picture will disclose many details in the development from the bud to the completed flower that are not apparent in ordinary visual observation. Other phenomena can be studied in this way.
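
      The apparent acceleration on the screen is simply the shooting interval multiplied by the projection rate; the sketch below works out the one-frame-per-hour example.

# Apparent speed-up in interval (time-lapse) cinematography: the number of
# seconds of real time compressed into each projected frame.

PROJECTION_FPS = 24

def speed_up_factor(seconds_between_exposures):
    return seconds_between_exposures * PROJECTION_FPS

# One exposure per hour, projected at 24 frames per second:
print(speed_up_factor(3600))   # 86400x faster than life
# A 48-hour blossom opening then plays back in:
print(48 * 3600 / speed_up_factor(3600), "seconds")   # 2.0 seconds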

      These techniques require merely a standard camera that can take single exposures, plus a timed triggering device that can take the exposures at the desired intervals. The rest of the technique is mostly a matter of preventing undesired motions of camera and subject.

High-speed cinematography
      Motion pictures have also been used to study phenomena that occur so fast that they cannot be recorded on normal cameras. An immense amount of ingenuity has been applied to the solution of many problems in this field.

Optical systems outside the camera
      Shadowgraph film can show sharp shadows of fast-moving bodies, indicating their speed and any changes of attitude they undergo. With a rapid enough exposure this procedure can also show the sharp shadow of a wave front in air.

 Schlieren optics show changes in the condition of a test area of space in the optical path, even when it remains fully transparent. A simplified outline of the arrangement is shown in Figure 7—>. A source sends out a beam of light through the apparatus to a film. Half the field is cut off by a knife-edged screen, K1, which is imaged by a lens, L1, to a position, K1′, in the same plane as a matching screen, K2. The knife-edges of image and screen exactly coincide. A test position, T, is sharply focused on the film. Thus the light flow past K1 to K2 is all cut off from the film if the space at T is completely uniform. Any nonuniformity, such as caused by a wave front in the air at T, causes a scattered light beam to evade the screen, K2 (path a), and reach the film.

 The Schlieren interferometer is a modification of the system shown in Figure 7—>. An apparatus that polarizes the light beam—separating it into two slightly different paths and then reintegrating it into one path—is inserted before and after the area T to bring out details of the phase of the light as it progresses from left to right. These details appear in the form of light and dark phase bands, or fringes. They outline the disturbance of the wave by any nonuniformity existing at T. Counting the number of fringes displaced gives a measure of how great the nonuniformity is. The arrangement is especially adaptable to detecting and measuring irregular flow phenomena and wave fronts in the air (or another fluid) at T.

      A spectrum of a self-luminous subject is often obtained. Sometimes the spectrum varies quickly, as in an explosion, and it is desired to study the variation. The light output from the spectrometer is led into a motion-picture camera that can handle the speed needed, and a film record is taken. The film can then be studied at leisure.

      The normal-speed camera records on the film a succession of frames, each showing a complete picture but taken at successive instants of time. Not all high-speed cameras record in this way, but some do, and the various methods of separating the frames and recording them represent key elements in the technology.

      When a cube (or many-sided prism) is rotated in the optical path in a camera, it moves the image periodically past the picture aperture. If the film is moved in the same direction at the same speed, the image, while it is appearing, is fixed on the film. The problems are that, during its appearance, the image does not move with an exactly constant velocity, and the prism introduces some optical distortion of the image. These can be partially remedied by a variety of modifications, but there are still residual imperfections. The rotating prism device is used, however, up to about 10,000 frames per second. A variation, in which an internal mirror drum with many facets is used instead of a glass prism, is usable up to about 40,000 frames per second. In evaluating the performance of the system, the size of the frames and sharpness of the images must be considered, as well as the number of frames per second.

 Rotating-mirror systems take a number of forms, but Figure 8—> shows a reasonably typical arrangement. The objective lens forms a primary image at the rotating-mirror position. The mirror reflects this image through an arc of fixed, individual relay lenses in succession on the stationary film. The relay lenses having the primary and final images both in focus keep the final image fixed at each place in succession on the film as the beam traverses each lens. With some rotating-mirror systems it is possible to go to some 5,000,000–10,000,000 frames per second (again keeping in mind that frame size and image sharpness are important as well). The arrangement wastes light, so that bright original subjects are necessary for a satisfactory exposure. The system has been used to study explosions and plasmas and to study subjects by reflected light with very high-intensity illuminants.

      The central idea of the image dissector technique is to cut up a picture in the same way that a television picture is cut into horizontal scanning lines. The bright lines are made very narrow, with black or empty spaces between them, which permits superimposing other, similar pictures—also cut into fine lines separated by empty spaces. A number of pictures can be superimposed in this way without the lines interfering with each other. Lenticulations (embossed lenslike shapes) on the film form the scanning lines and concentrate the incoming light into fine lines on the film, according to where the light comes from on the objective lens.

      The method will also work with two sets of lenticulations, one horizontal and the other vertical and superposed. This permits the storing of more pictures on the record film—in this case, as dots instead of thin lines. A number of devices based on these general ideas have been used. The frame speeds achievable vary with the specific arrangements but are generally lower than with the moving-mirror mechanisms.

 The number of frames in Figure 8—> may be increased to infinity by changing the optical system to put the primary image on the film and leaving out the relay lenses or, more simply, by replacing the mirror with the film on a rotating drum. This requires compressing the individual frame from a two-dimensional into a one-dimensional image or slit. The equipment becomes a "streak camera." It is useful when there are one or more distinct demarcations in the one-dimensional view of the subject and it is desired to record their motion along a more or less straight line. This is useful, for example, in recording the motion of the front of an explosion cloud or of a projectile. Unlike a framing record, it leaves no unexplored intervals of time.

      The need for sufficient light is a most important requirement in high-speed cinematography. This need is so great that it is often undesirable to keep the light on longer than absolutely necessary, so that in many cases the light being turned on and off acts as a shutter. High-intensity, short-time sources have been studied extensively. Most sources consist of electrical discharges or, in some cases, arcs in air or a gas. In most cases they involve sophisticated electronic control methods. In cases in which the original subject is sufficiently self-luminous, such as in explosions, a fast shutter may be required. Since mechanical shutters do not go to very high speeds, various types of electro-optical shutters are used. On some occasions it is necessary to amplify the light after it has left the subject and before it reaches the film.

Pierre Mertz

 The basis of all animation is the building up, frame by frame, of the moving picture by exact timing and choreography of both movement and sound. All film movement is achieved by projecting during every second a certain number of frames, normally 24, each a still photograph minutely varied from its predecessor and recording a successive phase of the subject's movement before the camera. The same motion, or a stylized or caricatured version of it, can be achieved by “stop-motion” or “stop-action” cinematography, the frame-by-frame photographing of a similarly phased series of drawings (see Figure 9—>) or the phased movement of such objects as puppets, marionettes, or commercial products. And, as in live filming, the camera itself can create movement by tracking into a scene or panning across it. The great majority of animated films are short and have always been so for obvious reasons: when each second of action requires, for the fullest animation, 24 adjustments of the image, a minute's action may call for many hundreds of drawings (drawing).
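
      The workload implied by that frame rate is simple arithmetic; the sketch below multiplies it out and also shows the saving when each drawing is held for two frames (shooting “on twos”), a common economy in drawn animation.

```python
# Drawing counts at the standard sound-film rate of 24 frames per second.
FRAMES_PER_SECOND = 24

def drawings_needed(seconds, frames_per_drawing=1):
    """Distinct drawings required for a given running time.

    frames_per_drawing=1 means a new drawing for every frame (full animation);
    frames_per_drawing=2 holds each drawing for two frames ("on twos").
    """
    total_frames = seconds * FRAMES_PER_SECOND
    return -(-total_frames // frames_per_drawing)   # ceiling division

print(drawings_needed(60))       # one minute of full animation -> 1440 drawings
print(drawings_needed(60, 2))    # one minute shot on twos      ->  720 drawings
```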

      The range of techniques in animation production is broad. The basic form, however, is the simple outlined figure that moves against a simple outlined background.

Figural basis of animation
      The development of cel (or cell) animation permitted the phased movements of the figures to be traced onto a succession of transparent celluloid sheets and superimposed, in turn, onto a single static drawing representing the background. With this technique the background could be drawn in somewhat greater detail and tonal qualities introduced through shading, while the figure itself became a black silhouette, blotting out the background when the cels were superimposed. Multiple cel animation—the superimposition of several cel layers, each carrying different figures or parts of figures requiring special care in animation—allowed increased complexity in the image with minimum work load for the artist-animators. With the more modern forms of colour film introduced in the early 1930s, opaque paints and coloured inks could be used on the cels. Cel animation required the use of a so-called rostrum camera, which photographs downward onto the background with its series of superimposed cel layers pegged into place to secure accurate registration.
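
      Expressed in modern digital terms, stacking cels over a background is a layer-compositing operation: wherever a cel carries opaque paint, it blots out whatever lies beneath it. The sketch below is only an analogy of the photographic process, with hypothetical arrays standing in for background and cels.

```python
import numpy as np

def composite(background, cels):
    """Superimpose transparent cels over a static background, bottom cel first.

    Each cel is a pair (image, mask); the mask is True wherever the cel
    carries opaque paint, which covers the background and any lower cels.
    """
    frame = background.copy()
    for image, mask in cels:
        frame[mask] = image[mask]
    return frame

# Hypothetical 4x4 grey background and one cel bearing a small painted figure.
background = np.full((4, 4), 128, dtype=np.uint8)
cel = np.zeros((4, 4), dtype=np.uint8)
cel[1:3, 1:3] = 255                     # the painted (opaque) area of the cel
frame = composite(background, [(cel, cel > 0)])
```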

Noncellular animation
      Other forms of animation include silhouette animation, developed by Lotte Reiniger in Germany during the 1920s. It uses jointed, flat-figure marionettes whose poses are minutely readjusted for each photographic frame. Movement is similarly simulated in puppet animation, which photographs solid three-dimensional figures in miniature sets. The puppets are often made of a malleable yet stable material, such as clay, so that the carefully phased movements may be adjusted between the exposures of successive frames. Even people may be photographed frame by frame, as in the so-called pixilation process used by the Canadian filmmaker Norman McLaren in his short film Neighbors (1952), which makes human beings look like automatons.

      Although abstract animation can be realized through orthodox animation techniques (as in parts of Fantasia, 1940), it may also be inked or painted directly onto the film. This form of abstract animation was pioneered in the 1920s with the individual and collaborative work of the German Hans Richter and the Swede Viking Eggeling and continued in the 1930s with the films of Len Lye, a New Zealander also known for his abstract sculpture. McLaren, too, experimented with a wide range of techniques for animating directly on film; he even created many of his scores by stenciling directly onto the sound track rather than recording in the traditional manner. Since the 1970s, computers have often been used to generate abstract or stylized patterns, and means were developed to circumvent photography by transferring the results directly to 35 mm.

Planning
      The preparation of these films, whatever their length or form, follows a similar process. First comes the story, plot, action, or situational idea, which may be a written treatment with or without supporting sketches. It describes the proposed continuity of what is to take place on the screen, the nature of the cartoon or puppet characters, the graphic stylization of the film as a whole, and similar considerations. Such a treatment, perhaps very brief, precedes any fuller scripting or other elaboration that may take place.

      Since visual emphasis is the key to animation, and sound its close counterpart, the sooner ideas are translated into pictures the better. The “storyboard” provides the continuity of the action, which is worked out scene by scene simultaneously with the animation script. In the storyboard the story is told and to some extent graphically styled in a succession of key sketches with captions and fragments of dialogue, much like a cartoon strip but with much fuller treatment. A feature-length film could easily require a final continuity of several hundred such sketches.

      Meanwhile, an animation director is also preparing modeling drawings for the principal characters and drawings establishing the backgrounds, or settings, for the film. These begin to indicate the general graphic style and, when colour is involved, the colour scheme and decor to be used. The modeling drawings must indicate the nature and temperament of the characters as well as their appearance when seen from a variety of angles and using a number of characteristic gestures. These will act as guides for the key animators, who with their assistants must bring the figures to dramatic life through the succession of final drawings created on the drawing board.

      Animated films are, in effect, choreographed; since mobility involves time, the movements must be exactly timed and deployed through the right number of successive drawings, like notes in music deployed through the bars of a score. When the characters speak or sing, their lip movements must be synchronized with the words they appear to utter. When the sound tracks, both dialogue and music, are prerecorded, the animators have an exact time scheme to follow; if the tracks are not prerecorded, then the “scoring” of the action will control the subsequent timing of the speech and music at the recording stage. The timing in either case is predetermined on paper in a workbook, which grades the progression of the animators' drawings frame by frame with the same precision as a musical score. A similar control, in the form of a time chart, may be created by the director as a guide for the composer. A third control, the so-called dope sheet or camera exposure chart, guides the rostrum cameraman in the frame-by-frame setups and the sequence of cels or backgrounds.
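
      In practice the dope sheet amounts to a frame-by-frame table: at 24 frames per second, any instant in the prerecorded track converts to a frame number, and the chart lists the background and cel levels to be set up for each exposure. A minimal, entirely hypothetical sketch of such a chart:

```python
FRAMES_PER_SECOND = 24

def frame_number(seconds):
    """Convert a time in the prerecorded sound track to a frame count."""
    return round(seconds * FRAMES_PER_SECOND)

# Hypothetical exposure (dope) sheet for one second of action: each entry
# tells the rostrum cameraman which background and cel levels to photograph.
dope_sheet = [
    {
        "frame": frame,
        "background": "BG-01",
        "cel_levels": [f"A-{frame // 2:03d}",   # body level held on twos
                       f"B-{frame:03d}"],        # mouth level redrawn every frame
    }
    for frame in range(frame_number(0.0), frame_number(1.0))
]

print(dope_sheet[0])
print(dope_sheet[23])
```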

Execution
      When the exacting labour of animation is under way, difficult moments in the choreography of the figures may be “line-tested”—that is, outlined in pencil, photographed, and tested out on the screen for rhythm and characterization. The key, or senior, animators draw, or “cartoon,” the highlights, or salients, of the movement, perhaps the five or more drawings out of the 24 per second that will give the special edge of liveliness or characterization to the movements. Assistant animators, sometimes called in-betweeners, close the gaps by completing the intermediate drawings. The smaller the animation unit, the greater the burden each artist has to bear in the preparation of final drawings. These drawings, the backgrounds of which remain on drawing paper, are transferred to the cels by specialized artists, who trace the animators' work and paint over it with opaque colouring. The work of tracing and painting can be saved when the animators draw directly on the cels with coloured chinagraph pencils, which they can rub out or correct without harm. When the picture track and the sound track with speech, sound effects, and music dubbed together are completed under the control of the director and the editor, a “married print” can be made, with the track recorded optically.
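
      The split between key animators and in-betweeners is, in effect, interpolation: the keys fix the extremes of a movement and the in-betweens fill the gaps. The sketch below interpolates only the position of a single point, and evenly at that; real in-betweens are whole drawings, and their spacing is usually eased rather than uniform.

```python
def inbetween(key_a, key_b, count):
    """Return `count` evenly spaced in-between positions between two keys.

    key_a and key_b are (x, y) positions of some feature in the key drawings.
    This is straight linear interpolation; animators normally ease the spacing
    for livelier movement.
    """
    positions = []
    for i in range(1, count + 1):
        t = i / (count + 1)
        positions.append((key_a[0] + t * (key_b[0] - key_a[0]),
                          key_a[1] + t * (key_b[1] - key_a[1])))
    return positions

# Roughly five keys per second at 24 frames/s leaves three or four
# in-betweens between successive keys.
print(inbetween((0.0, 0.0), (8.0, 4.0), 3))   # [(2.0, 1.0), (4.0, 2.0), (6.0, 3.0)]
```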

Newer techniques
      Efforts to lessen the extraordinary labour and costs of animation have taken two basic directions: simplification and computerization. Inexpensive cartoons made for television have often resorted to “limited animation,” in which each drawing is repeated anywhere from two to five times. The resultant movements are jerky, rather than smoothly gradated. Often only part of the body is animated, and the background and the remaining parts of the figure do not change at all. Another shortcut is “cycling,” whereby only a limited number of phases of body movement are drawn and then repeated to create more complicated movements such as walking or talking.
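
      Both shortcuts reduce to simple index arithmetic over a small stock of drawings: holding each drawing for several frames, and repeating a short cycle of phases for as long as a shot requires. A minimal sketch, with hypothetical drawing names:

```python
def held(drawings, hold):
    """Limited animation: repeat each drawing for `hold` consecutive frames."""
    frames = []
    for drawing in drawings:
        frames.extend([drawing] * hold)
    return frames

def cycled(drawings, total_frames):
    """Cycling: repeat a short cycle of phases (e.g. a walk) to fill a shot."""
    return [drawings[i % len(drawings)] for i in range(total_frames)]

walk_cycle = ["walk-1", "walk-2", "walk-3", "walk-4"]
print(held(walk_cycle, 3))      # 12 frames of screen time from 4 drawings
print(cycled(walk_cycle, 10))   # the 4-phase cycle repeated to fill 10 frames
```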

      Although computers can be used to create the limited animation described above, they can also be used in virtually every step of sophisticated animation. Computers have been used, for example, to automate the movement of the rostrum camera or to supply the in-between drawings for full animation. If a three-dimensional figure is translated into computer terms (i.e., digitized), the computer can move or rotate the object convincingly through space. Hence, computer animation can demonstrate highly complex movements for medical or other scientific researchers. Animators who work with computers usually distinguish between computer-assisted animation, which uses computers to facilitate some stages of the laborious production process, and computer-generated (computer) animation, which creates imagery through mathematical or computer language rather than through photography or drawing. Finally, computers may be used to modify or enhance a drawing that has been initiated in the traditional manner.
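
      The remark about rotating a digitized figure through space comes down to applying a rotation matrix to each of its stored points, once per frame. A minimal sketch, rotating a few hypothetical points about the vertical axis over one second of animation:

```python
import math

def rotate_y(points, degrees):
    """Rotate 3-D points (x, y, z) about the vertical (y) axis."""
    a = math.radians(degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a + z * sin_a, y, -x * sin_a + z * cos_a)
            for x, y, z in points]

# A "digitized" object: three corner points of some hypothetical figure.
figure = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# One second of animation: a full turn spread over 24 frames.
frames = [rotate_y(figure, 360.0 * f / 24) for f in range(24)]
print(frames[6][0])   # the first point after a quarter turn, roughly (0, 0, -1)
```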

Roger Manvell Elisabeth Weis

Additional Reading
The Focal Encyclopedia of Film & Television Techniques (1969), is a fairly complete reference source. Raymond Fielding (comp.), A Technological History of Motion Pictures and Television: An Anthology from the Pages of the Journal of the Society of Motion Picture and Television Engineers (1967, reprinted 1983), provides a remarkable survey. The society's own invaluable publications include Don V. Kloepfel (ed.), Motion-Picture Projection and Theatre Presentation Manual (1969); Frank P. Clark, Special Effects in Motion Pictures: Some Methods for Producing Mechanical Special Effects (1966); and Widescreen Motion-Picture Systems (1965). See also Raymond Fielding, The Technique of Special Effects Cinematography, 4th ed. (1985). Other works include Barry Salt, Film Style and Technology: History and Analysis (1983), on the evolution of film equipment; Steve Neale, Cinema and Technology: Image, Sound, Colour (1985), an economic and aesthetic context for the emergence of the motion-picture technologies; and R.W.G. Hunt, The Reproduction of Colour, 3rd ed. (1975), an extended discussion of specific technology. Charles G. Clarke, Professional Cinematography, rev. ed. (1968), is an older but still valuable brief summary; and Fred H. Detmers, American Cinematographer Manual, 6th ed. (1986), is a later informative handbook. Dominic Case, Motion Picture Film Processing (1985), is a definitive text. Paul M. Honoré, A Handbook of Sound Recording: A Text for Motion Picture and General Sound Recording (1980), covers sound production. Glen Ballou (ed.), Handbook for Sound Engineers: The New Audio Cyclopedia (1987), is an extended reference manual. Works on editing techniques include Karel Reisz and Gavin Millar, The Technique of Film Editing, 2nd enlarged ed. (1968, reprinted 1982); William B. Adams, Handbook of Motion Picture Production (1977); and Ernest Walter, The Technique of the Film Cutting Room, 2nd rev. ed. (1982). Detailed techniques in high-speed and scientific cinematography are discussed in J.S. Courtney-Pratt (ed.), Proceedings of the Fifth International Congress on High-Speed Photography (1962); and William G. Hyzer and William G. Chace (eds.), Proceedings of the Ninth International Congress on High-Speed Photography (1970). John Halas and Roger Manvell, The Technique of Film Animation, 4th ed. (1976), is the standard text on the subject; in Art in Movement: New Directions in Animation (1970), the same authors explore the link between animation and kinetic art forms. Thomas W. Hoffer, Animation, a Reference Guide (1981), is a scholarly guide with many bibliographic essays. For information on computer graphics, see Proceedings of the Conference of the National Computer Graphics Association (annual). Information on state-of-the-art technologies is provided in the following monthly periodicals: SMPTE Journal, BKSTS Journal, Millimeter, On Location, and American Cinematographer.
