optics

optics
/op"tiks/, n. (used with a sing. v.)
the branch of physical science that deals with the properties and phenomena of both visible and invisible light and with vision.
[1605-15; < ML optica < Gk optiká, n. use of neut. pl. of OPTIKÓS; see OPTIC, -ICS]

* * *

Science concerned with the production and propagation of light, the changes it undergoes and produces, and closely related phenomena.

Physical optics deals with the nature and properties of light; geometric optics deals with the formation of images by mirrors, lenses, and other devices that use light. Optical data processing involves manipulation of the information content of an image formed by coherent (one-wavelength) optical systems. The study of optics has led to the development of devices such as eyeglasses and contact lenses, telescopes, microscopes, cameras, binoculars, lasers, and optical fibres (see fibre optics).

* * *

Introduction

      science concerned with the genesis and propagation of light, the changes that it undergoes and produces, and other phenomena closely associated with it. There are two major branches of optics, physical and geometrical. Physical optics deals primarily with the nature and properties of light itself. Geometrical optics has to do with the principles that govern the image-forming properties of lenses, mirrors, and other devices that make use of light. It also includes optical data processing, which involves the manipulation of the information content of an image formed by coherent optical systems.

      Originally, the term optics was used only in relation to the eye and vision. Later, as lenses and other devices for aiding vision began to be developed, these were naturally called optical instruments, and the meaning of the term optics eventually became broadened to cover any application of light, even though the ultimate receiver is not the eye but a physical detector, such as a photographic plate or a television camera. In the 20th century optical methods came to be applied extensively to regions of the electromagnetic radiation spectrum not visible to the eye, such as X-rays, ultraviolet, infrared, and microwave radio waves, and to this extent these regions are now often included in the general field of optics.

      In the present article the image-forming properties of lenses, mirrors, and other devices that make use of light are considered. The wave and quantum nature of light, its velocity, wavelength, polarization, diffraction, and interference are treated in the article light. The analysis of light into its component colours by prisms and gratings forms the basis of the extensive field of spectroscopy, the principles of which are discussed in spectroscopy. For information about the reception of light by the retina of the eye and the interpretation of images by the brain, see eye: Human vision: Structure and function of the eye (eye, human).

Geometrical optics

General considerations
The optical image
      An optical image may be regarded as the apparent reproduction of an object by a lens or mirror system, employing light as a carrier. An entire image is generally produced simultaneously, as by the lens in a camera, but images may also be generated sequentially by point-by-point scanning, as in a television system or in the radio transmission of pictures across long distances in space. Nevertheless, the final detector of all images is invariably the human eye, and, whatever means is used to transmit and control the light, the final image must either be produced simultaneously or scanned so rapidly that the observer's persistence of vision will give him the mental impression of a complete image covering a finite field of view. For this to be effective the image must be repeated (as in motion pictures) or scanned (as in television) at least 40 times a second to eliminate flicker or any appearance of intermittency.

Historical background
      To the ancients, the processes of image formation were full of mystery. Indeed, for a long time there was a great discussion as to whether, in vision, something moved from the object to the eye or whether something reached out from the eye to the object. By the beginning of the 17th century, however, it was known that rays of light travel in straight lines, and in 1604 Johannes Kepler (Kepler, Johannes), a German astronomer, published a book on optics in which he postulated that an extended object could be regarded as a multitude of separate points, each point emitting rays of light in all directions. Some of these rays would enter a lens, by which they would be bent around and made to converge to a point, the “image” of the object point whence the rays originated. The lens of the eye was not different from other lenses, and it formed an image of external objects on the retina, producing the sensation of vision.

      There are two main types of image to be considered: real and virtual. A real image is formed outside the system, where the emerging rays actually cross; such an image can be caught on a screen or piece of film and is the kind of image formed by a slide projector or in a camera. A virtual image, on the other hand, is formed inside an instrument at the point where diverging rays would cross if they were extended backward into the instrument. Such an image is formed in a microscope or telescope and can be seen by looking into the eyepiece.

      Kepler's concept of an image as being formed by the crossing of rays was limited in that it took no account of possible unsharpness caused by aberrations, diffraction, or even defocussing. In 1957 the Italian physicist Vasco Ronchi went the other way and defined an image as any recognizable nonuniformity in the light distribution over a surface such as a screen or film; the sharper the image, the greater the degree of nonuniformity. Today, the concept of an image often departs from Kepler's idea that an extended object can be regarded as innumerable separate points of light, and it is sometimes more convenient to regard an image as being composed of overlapping patterns of varying frequencies and contrasts; hence, the quality of a lens can be expressed by a graph connecting the spatial frequency of a parallel line object with the contrast in the image. This concept is investigated fully under Optics and information theory (optics) below.

      Optics had progressed rapidly by the early years of the 19th century. Lenses of moderately good quality were being made for telescopes and microscopes, and in 1841 the great mathematician Carl Friedrich Gauss (Gauss, Carl Friedrich) published his classical book on geometrical optics. In it he expounded the concept of the focal length and cardinal points of a lens system and developed formulas for calculating the position and size of the image formed by a lens of given focal length. Between 1852 and 1856 Gauss's theory was extended to the calculation of the five principal aberrations of a lens (see below Lens aberrations (optics)), thus laying the foundation for the formal procedures of lens design that were used for the next 100 years. Since about 1960, however, lens design has been almost entirely computerized, and the old methods of designing lenses by hand on a desk calculator are rapidly disappearing.

      By the end of the 19th century numerous other workers had entered the field of geometrical optics, notably an English physicist, Lord Rayleigh (Rayleigh, John William Strutt, 3rd Baron) (John William Strutt), and a German physicist, Ernst Karl Abbe. It is impossible to list all their accomplishments here. Since 1940 there has been a great resurgence in optics on the basis of information and communication theory, which is treated at length below.

Light rays, waves (electromagnetic radiation), and wavelets
      A single point of light, which may be a point in an extended object, emits light in the form of a continually expanding train of waves, spherical in shape and centred about the point of light. It is, however, often much more convenient to regard an object point as emitting fans of rays, the rays being straight lines everywhere perpendicular to the waves. When the light beam is refracted by a lens or reflected by a mirror, the curvature of the waves is changed, and the angular divergence of the ray bundle is similarly changed in such a way that the rays remain everywhere perpendicular to the waves. When aberrations are present, a convergent ray bundle does not shrink to a perfect point, and the emerging waves are then not truly spherical.

      In 1690 Christiaan Huygens (Huygens, Christiaan), a Dutch scientist, postulated that a light wave progresses because each point in it becomes the centre of a little wavelet travelling outward in all directions at the speed of light, each new wave being merely the envelope of all these expanding wavelets. When the wavelets reach the region outside the outermost rays of the light beam, they destroy each other by mutual interference wherever a crest of one wavelet falls upon a trough of another wavelet. Hence, in effect, no waves or wavelets are allowed to exist outside the geometrical light beam defined by the rays. The normal destruction of one wavelet by another, which serves to restrict the light energy to the region of the rectilinear ray paths, however, breaks down when the light beam strikes an opaque edge, for the edge then cuts off some of the interfering wavelets, allowing others to exist, which diverge slightly into the shadow area. This phenomenon is called diffraction, and it gives rise to a complicated fine structure at the edges of shadows and in optical images.

The pinhole camera
      An excellent example of the working of the wavelet theory is found in the well-known pinhole camera. If the pinhole is large, the diverging geometrical pencil of rays leads to a blurred image, because each point in the object will be projected as a finite circular patch of light on the film. The spreading of the light at the boundary of a large pinhole by diffraction is slight. If the pinhole is made extremely small, however, the geometrical patch then becomes small, but the diffraction spreading is now great, leading once more to a blurred picture. There are thus two opposing effects present, and at the optimum hole size the two effects are just equal. This occurs when the hole diameter is equal to the square root of twice the wavelength (λ) times the distance (f) between the pinhole and film—i.e., √(2λf). For f = 100 millimetres and λ = 0.0005 millimetre, the optimum hole size becomes 0.32 millimetre. This value is not critical, and a 0.4-millimetre hole would probably be just as good in practice. A pinhole, like a camera lens, can be regarded as having an f-number, which is the ratio of focal length to aperture. In this example, the f-number is 100/0.32 = 310, designated f/310. Modern camera lenses have much larger apertures, typically around f/1.2 to f/5.6, in order to achieve greater light-gathering power.
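
      As a numerical check of the rule just stated, the following short calculation (a sketch in Python; the function name is an arbitrary choice) reproduces the figures in the example above.

```python
import math

def optimum_pinhole_diameter(distance_mm, wavelength_mm=0.0005):
    # Optimum hole diameter: square root of twice the wavelength times the
    # pinhole-to-film distance, i.e. sqrt(2 * lambda * f).
    return math.sqrt(2.0 * wavelength_mm * distance_mm)

f = 100.0                                    # pinhole-to-film distance in mm
d = optimum_pinhole_diameter(f)
print(f"optimum diameter: {d:.2f} mm")       # about 0.32 mm
print(f"working aperture: f/{f / d:.0f}")    # about f/316, in line with the f/310 quoted above
```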

Resolution and the Airy disk
      When a well-corrected lens is used in place of a pinhole, the geometrical ray divergence is eliminated by the focussing action of the lens, and a much larger aperture may be employed; in that case the diffraction spreading becomes small indeed. The image of a point formed by a perfect lens is a minute pattern of concentric and progressively fainter rings of light surrounding a central dot, the whole structure being called the Airy disk after George Biddell Airy (Airy, Sir George Biddell), an English astronomer, who first explained the phenomenon in 1834. The Airy disk of a practical lens is small, its diameter being approximately equal to the f-number of the lens expressed in microns (0.001 millimetre). The Airy disk of an f/4.5 lens is therefore about 0.0045 millimetre in diameter (ten times the wavelength of blue light). Nevertheless, the Airy disk formed by a telescope or microscope objective can be readily seen with a bright point source of light if a sufficiently high eyepiece magnification is used.

      The finite size of the Airy disk sets an inevitable limit to the possible resolving power of a visual instrument. Rayleigh found that two adjacent and equally bright stars can just be resolved if the image of one star falls somewhere near the innermost dark ring in the Airy disk of the other star; the resolving power of a lens can therefore be regarded as about half the f-number of the lens expressed in microns. The angular resolution of a telescope is equal to the angle subtended by the least resolvable image separation at the focal length of the objective, the light-gathering lens. This works out at about four and a half seconds of arc divided by the diameter of the objective in inches.
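
      These rules of thumb are easy to apply numerically. The sketch below (Python; the sample apertures and diameters are illustrative) evaluates the Airy-disk diameter, the least resolvable separation, and the angular resolution quoted above.

```python
def airy_disk_diameter_microns(f_number):
    # Rule of thumb from the text: the Airy disk diameter in microns is
    # approximately equal to the f-number of the lens.
    return float(f_number)

def least_resolvable_separation_microns(f_number):
    # Rayleigh's criterion as stated above: about half the f-number, in microns.
    return 0.5 * f_number

def telescope_resolution_arcsec(aperture_inches):
    # Angular resolution: about 4.5 seconds of arc divided by the aperture
    # of the objective in inches.
    return 4.5 / aperture_inches

print(airy_disk_diameter_microns(4.5))         # about 4.5 microns for an f/4.5 lens
print(least_resolvable_separation_microns(8))  # about 4 microns for an f/8 lens
print(telescope_resolution_arcsec(10))         # about 0.45 arc second for a 10-inch objective
```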

The Rayleigh limit
      As noted above, when a perfect lens forms an image of a point source of light, the emerging wave is a sphere centred about the image point. The optical paths from all points on the wave to the image are therefore equal, so that the expanding wavelets are all in phase (vibrating in unison) when they reach the image. In an imperfect lens, however, because of the presence of aberrations, the emerging wave is not a perfect sphere, and the optical paths from the wave to the image point are then not all equal. In such a case some wavelets will reach the image as a peak, some as a trough, and there will be much destructive interference leading to the formation of a sizable patch of light, much different from the minute Airy disk characteristic of a perfectly corrected lens. In 1879 Rayleigh studied the effects of phase inequalities in a star image and came to the conclusion that an image will not be seriously degraded unless the path differences between one part of the wave and another exceed one-quarter of the wavelength of light. As this difference represents only 0.125 micron (5 × 10⁻⁶ inch), it is evident that an optical system must be designed and constructed with almost superhuman care if it is to give the best possible definition.

Reflection and refraction
      The use of polished mirrors for reflecting light has been known for thousands of years, and concave mirrors have long been used to form real images of distant objects. Indeed, Isaac Newton greatly preferred the use of a mirror as a telescope objective to the poor-quality lenses available in his time. Because there is no limit to the possible size of a mirror, all large telescopes today are of this type.

      When a ray of light is reflected at a polished surface, the angle of reflection between ray and normal (the line at right angles to the surface) is exactly equal to the angle of incidence. It can be seen that a convex mirror forms a virtual image of a distant object, whereas a concave mirror forms a real image. A plane mirror forms a virtual image of near objects, as in the familiar looking glass. Plane mirrors are often used in instruments to bend a beam of light into a different direction.

The law of refraction (Snell's law)
      When a ray of light meets the surface of separation between two transparent media, it is sharply bent or refracted (refraction). Because rays are really only directions and have no physical existence, the passage of light waves through a surface must be considered if refraction is to be understood. Refraction effects are based on the fact that light travels more slowly in a denser medium. The ratio of the velocity of light in air to its velocity in the medium is called the refractive index of the medium for light of a particular colour or wavelength. The refractive index is higher for blue light than for light at the red end of the spectrum.

 In Figure 1, AA′ represents a plane wave of light at the instant that A′ meets the plane refracting surface AB separating two media having refractive indices n and n′, respectively. During the time taken by the light to travel from A to B in material n, light travels from A′ to B′ in material of refractive index n′, forming the new wave BB′ in the second material, proceeding in direction BC. Hence, the relationship n′/n = AB/A′B′ can be obtained; and dividing numerator and denominator by BA′ gives equation (1): sin I/sin I′ = n′/n.

      The angles I and I′ are called the angle of incidence and angle of refraction between the refracting surface and the incident and refracted waves, respectively.

      Returning now to the convention of considering the movement of light in terms of rays: because the entering and emerging rays are always perpendicular to the light waves they represent, the angles I and I′ also denote the angles between the entering and emerging rays and the normal (perpendicular) line to the refracting surface at B.

      Equation (1), known as the law of refraction, is generally written: n′ sin I′ = n sin I.
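
      Numerically, the law of refraction is applied as follows; the sketch (Python) assumes an illustrative crown-glass index of 1.52.

```python
import math

def refraction_angle_deg(n, n_prime, incidence_deg):
    # Law of refraction: n' sin I' = n sin I.  Returns None when no
    # refracted ray exists (total internal reflection).
    s = n * math.sin(math.radians(incidence_deg)) / n_prime
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# A ray passing from air (n = 1.0) into glass of index 1.52 at 30 degrees.
print(refraction_angle_deg(1.0, 1.52, 30.0))   # about 19.2 degrees
```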

      The difference between the refractive indices of a transparent material for a specific blue light and a specific red light is known as the dispersion of the material. The usual choices of blue and red lights are the so-called “F” and “C” lines of hydrogen in the solar spectrum, named by Fraunhofer (Fraunhofer lines), with wavelengths 4861 and 6563 angstroms (the angstrom unit, abbreviated Å, is 10⁻⁸ centimetre), respectively. It is generally more significant, however, to compare the dispersion with the mean refractive index of the material for some intermediate colour such as the sodium “D” Fraunhofer line of wavelength 5893 angstroms. The dispersive power (w) of the material is then defined as the ratio of the difference between the “F” and “C” indices and the “D” index reduced by 1, or w = (nF − nC)/(nD − 1).
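
      Given measured indices for the F, C, and D lines, the dispersive power follows directly; the index values in this sketch are typical of an ordinary crown glass and are illustrative only.

```python
# Dispersive power w = (nF - nC) / (nD - 1), using the Fraunhofer F, C, and D lines.
n_F, n_C, n_D = 1.5293, 1.5204, 1.5230    # illustrative crown-glass indices
w = (n_F - n_C) / (n_D - 1.0)
print(f"dispersive power w = {w:.4f}")    # about 0.017 (its reciprocal, ~59, is the Abbe number)
```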

 Hundreds of different types of optical glass are currently available from manufacturers. These may be represented graphically on a plot of mean refractive index against dispersive power (Figure 2).

      At first lenses were made from selected pieces of window glass or the glass used to make blown tableware. In the early 1800s, the manufacture of clear glass that was intended specifically for lenses began in Europe. The glass was slowly stirred in the molten state to remove striations and irregularities, and then the whole mass was cooled and broken up into suitable pieces for lens making. Subsequently, the pieces were placed in molds of the approximate size of the lens, slowly remelted to shape, and carefully annealed; i.e., allowed to cool slowly under controlled conditions to reduce strains and imperfections. Various chemicals were added in the molten state to vary the properties of the glass: addition of lead oxide, for example, was found to raise both the refractive index and the dispersive power. In 1884 it was discovered that barium oxide had the effect of raising the refractive index without increasing the dispersion, a property that proved to be of the greatest value in the design of photographic lenses known as anastigmats (lenses devoid of astigmatic aberration). In 1938 a further major improvement was achieved by the use of various rare-earth elements, and since 1950 lanthanum glass has been commonly used in high-quality photographic lenses.

      The cost of optical glass varies considerably, depending on the type of glass, the precision with which the optical properties are maintained, the freedom from internal striae and strain, the number of bubbles, and the colour of the glass. Many common types of optical glass are now available in quite large pieces, but as the specifications of the glass become more stringent the cost rises and the range of available sizes becomes limited. In a small lens such as a microscope objective or a telescope eyepiece, the cost of the glass is insignificant, but in large lenses in which every millimetre of thickness may represent an additional pound in weight, the cost of the glass can be very high indeed.

      Lenses can be molded successfully of various types of plastic material, polymethyl methacrylate being the most usual. Even multi-element plastic lenses have been manufactured for low-cost cameras, the negative (concave) elements being made of a high-dispersion plastic such as styrene.

      When a ray of light emerges obliquely from glass into air, the angle of refraction between ray and normal is greater than the angle of incidence inside the glass, and at a sufficiently high obliquity the angle of refraction can actually reach 90°. In this case the emerging ray travels along the glass surface, and the sine of the angle of incidence inside the glass, known as the critical angle, is then equal to the reciprocal of the refractive index of the material. At angles of incidence greater than the critical angle, the ray never emerges, and total internal reflection occurs, for there is no measurable loss if the glass surface is perfectly clean. Dirt or dust on the surface can cause a small loss of energy by scattering some light into the air.
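
      The critical angle follows at once from the reciprocal relation just given; the glass index in this sketch is an illustrative value.

```python
import math

def critical_angle_deg(n_glass, n_outside=1.0):
    # sin(critical angle) = n_outside / n_glass; for glass in air this is
    # simply the reciprocal of the refractive index of the glass.
    return math.degrees(math.asin(n_outside / n_glass))

print(critical_angle_deg(1.52))   # about 41.1 degrees for a glass of index 1.52
```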

      Light is totally internally reflected in many types of reflecting prism and in fibre optics, in which long fibres of high-index glass clad with a thin layer of lower index glass are assembled side-by-side in precise order. The light admitted into one end of each fibre is transmitted along it without loss by thousands of successive internal reflections at the interlayer between the glass and the cladding. Hence, an image projected upon one end of the bundle will be dissected and transmitted to the other end, where it can be examined through a magnifier or photographed. Many modern medical instruments, such as cystoscopes and bronchoscopes, depend for their action on this principle. Single thick fibres (actually glass rods) are sometimes used to transmit light around corners to an otherwise inaccessible location.

Ray-tracing methods
Graphical ray tracing
      In 1621 Willebrord Snell (Snell, Willebrord van Roijen), a professor of mathematics at Leiden, discovered a simple graphical procedure for determining the direction of the refracted ray at a surface when the incident ray is given. The mathematical form of the law of refraction (Snell's law), equation (1) above, was announced by the French mathematician René Descartes some 16 years later.

 Snell's construction is as follows: The line AP in Figure 3A represents a ray incident upon a refracting surface at P, the normal at P being PN. If the incident and refracted rays are extended to intersect any line SS parallel to the normal, the lengths PQ and PR along the rays will be proportional to the refractive indices n and n′. Hence, if PQ and the indices are known, PR can be found and the refracted ray drawn in.

 A convenient modification of Snell's construction can readily be used to trace the path of a ray through a complete lens. In Figure 3B, the incident ray BP strikes a refracting surface at P. The normal to the surface is PC. At any convenient place on the page two concentric circles are drawn about a point O with radii proportional to the refractive indices n and n′, respectively. A line OE is now drawn parallel to the incident ray BP extending as far as the circle representing the refractive index n of the medium containing the incident ray. From E a line is drawn parallel to the normal PC extending to F on the circle representing the refractive index n′. The line OF then represents the direction of the desired refracted ray, which may be drawn in at PB′. This process is repeated successively for all the surfaces in a lens. If a mirror is involved, the reflected ray may be found by drawing the normal line EF across the circle diagram to the incident-index circle on the other side.

Trigonometrical ray tracing
 No graphical construction can possibly be adequate to determine the aberration residual of a corrected lens, and for this an accurate trigonometrical computation must be made and carried out to six or seven decimal places, the angles being determined to single seconds of arc or less. There are many procedures for calculating the path of a ray through a system of spherical refracting or reflecting surfaces, the following being typical: The diagram in Figure 4 represents a ray lying in the meridian plane, defined as the plane containing the lens axis and the object point. A ray in this plane is defined by its slope angle, U, and by the length of the perpendicular, Q, drawn from the vertex (A) of the surface on to the ray. By drawing a line parallel to the incident ray through the centre of curvature C, to divide Q into two parts at N, the relation is stated as AN = r sin U, and NM = r sin I. Hence Q = AN + NM = r(sin U + sin I).

      From this the first ray-tracing equation can be derived,

      Applying the law of refraction, equation (2), gives the second equation

      Because the angle PCA = U + I = U′ + I′, the slope of the refracted ray can be written as

      and, lastly, by adding primes to equation (2),

      Having found the Q′ of the refracted ray, transfer to the next surface can be performed by

      in which d is the axial distance from the first to the second refracting surface. After performing this calculation for all the surfaces in succession, the longitudinal distance from the last surface to the intersection point of the emergent ray with the lens axis is found by

      Corresponding but much more complicated formulas are available for tracing a skew ray, that is, a ray that does not lie in the meridian plane but travels at an angle to it. After refraction at a surface, a skew ray intersects the meridian plane again at what is called the diapoint. By tracing the paths of a great many (100 or more) meridional and skew rays through a lens, with the help of an electronic computer, and plotting the assemblage of points at which all these rays pierce the focal plane after emerging from the lens, a close approximation to the appearance of a star image can be constructed, and a good idea of the expected performance of a lens can be obtained.

Paraxial, or first-order, imagery
      In a lens that has spherical aberration, the various rays from an axial object point will in general intersect the lens axis at different points after emerging into the image space. By tracing several rays entering the lens at different heights (i.e., distances from the axis) and extrapolating from a graph connecting ray height with image position, it would be possible to infer where a ray running very close to the axis (a paraxial ray) would intersect the axis, although such a ray could not be traced directly by the ordinary trigonometrical formulas because the angles would be too small for the sine table to be of any use. Because the sine of a small angle is equal to the radian measure of the angle itself, however, a paraxial ray can be traced by reducing the ray-tracing formulas to their limiting case for small angles and thus determining the paraxial intersection point directly. When this is done, writing paraxial-ray data with lowercase letters, it is found that the Q and Q′ above both become equal to the height of incidence y, and the formulas (3a), (3b), and (3c) become, in the paraxial limit:

      The longitudinal distance from the last surface to the intersection point of the emerging paraxial ray with the lens axis becomes l′ = y/u′.
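
      The paraxial procedure can be written out in a few lines of code. The sketch below (Python) is not the article's numbered formulas but one consistent rendering of them, using the relations i = y/r − u, i′ = (n/n′)i, u′ = u + i − i′, a transfer y → y − du′, and finally l′ = y/u′; radii are taken as positive when the centre of curvature lies to the right of the surface, and the lens data are illustrative.

```python
def paraxial_trace(surfaces, y=1.0, u=0.0):
    """Trace one paraxial ray through a list of surfaces.

    Each surface is (r, n, n_prime, d): radius of curvature, index before,
    index after, and axial distance to the next surface.
    """
    for r, n, n_prime, d in surfaces:
        i = y / r - u                  # paraxial angle of incidence
        i_prime = (n / n_prime) * i    # paraxial refraction, n i = n' i'
        u = u + i - i_prime            # slope of the refracted ray
        y = y - d * u                  # transfer to the next surface
    return y, u

# Illustrative equiconvex lens: radii +/-50 mm, 4 mm thick, glass of index 1.5,
# traced with a ray entering parallel to the axis at a height of 1 mm.
surfaces = [(50.0, 1.0, 1.5, 4.0), (-50.0, 1.5, 1.0, 0.0)]
y, u = paraxial_trace(surfaces)
print(f"back focal distance l' = y/u' = {y / u:.1f} mm")   # about 49.3 mm
```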

      Because all paraxial rays from a given object point unite at the same image point, the resulting longitudinal distance (l′) is independent of the particular paraxial ray that is traced. Any nominal value for the height of incidence, y, may therefore be adopted, remembering that it is really an infinitesimal and y is only its relative magnitude. Thus, it is clear that the paraxial angles in equation (4) are really only auxiliaries, and they can be readily eliminated, giving the object–image distances for paraxial rays:

      and

Magnification: the optical invariant
      It is frequently as important to determine the size of an image as it is to determine its location. To obtain an expression for the magnification—that is, the ratio of the size of an image to the size of the object—the following process may be used: If an object point B lies to one side of the lens axis at a transverse distance h from it, and the image point B′ is at a transverse distance h′, then B, B′, and the centre of curvature of the surface, C, lie on a straight line called the auxiliary axis. Then, by simple proportion,

Hence, hnu = h′n′u′ (equation 7),

and the product (hnu) is invariant for all the spaces between the lens surfaces, including the object and image spaces, for any lens system of any degree of complexity. This theorem has been named after the French scientist Joseph-Louis Lagrange, although it is sometimes called the Smith-Helmholtz theorem, after Robert Smith, an English scientist, and Hermann Helmholtz, a German scientist; the product (hnu) is often known as the optical invariant. As it is easy to determine the quantities h, n, and u for the original object, it is only necessary to calculate u′ by tracing a paraxial ray in order to find the image height h′ for any lens. If the lens is used in air, as most lenses are, the refractive indices are both unity, and the magnification becomes merely m = u/u′.
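
      A small numerical illustration of the invariant (with made-up paraxial values) shows how the image height follows from a single traced ray.

```python
# Lagrange (optical) invariant: h n u = h' n' u'.
n, u = 1.0, 0.05          # object space: refractive index and paraxial slope
n_p, u_p = 1.0, -0.10     # image space: refractive index and emerging slope
h = 2.0                   # object height in mm

h_p = h * (n * u) / (n_p * u_p)   # image height from the invariant
print(h_p, h_p / h)               # -1.0 mm and magnification -0.5 (inverted image)
```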

The Gauss (Gauss, Carl Friedrich) theory of lenses
      In 1841 Gauss published a now famous treatise on optics in which he demonstrated that, so far as paraxial rays are concerned, a lens of any degree of complexity can be replaced by two principal, or nodal, points and two focal points, the distances from the principal points to their respective focal points being the focal lengths of the lens, and, furthermore, that the two focal lengths are equal to one another when the refractive indices of object and image spaces are equal, as when a lens is used in air.

 The principal and focal points may be defined as follows: Figure 5 shows a lens system of any construction, with a bundle of rays entering from the left in a direction parallel to the lens axis. After refraction by the lens each ray will cross the axis at some point, and the entering and emerging portions of each ray are then extended until they intersect at a point such as Q. The locus of all the points Q is a surface of revolution about the lens axis known as the equivalent refracting locus of the lens. The point where this locus crosses the axis is called the principal point, P2, and the central portion of the locus in the neighbourhood of the axis, which is virtually a plane perpendicular to the axis, is called the principal plane. The point where the emerging paraxial ray crosses the axis is called the focal point F2, the distance from P2 to F2 being the (posterior) focal length f′. A similar situation exists for a parallel beam of light entering from the right, giving the anterior principal point P1, the anterior focal point F1, and the front focal length f. For a lens in air it can be shown that the two focal lengths are equal in magnitude but opposite in direction—i.e., if F2 is to the right of P2, then F1 must lie to the left of P1, as in the case of an ordinary positive lens (one that gives a real image). In a negative lens (one that gives a virtual image), F2 lies to the left of P2, and the posterior focal length f′ is negative.

      The relation between the distances of object and image from a lens can be easily stated if the positions of the two principal points and the two focal points are known. (In using these expressions, distances are considered positive or negative depending on whether they are measured to the right or to the left from their respective origins.) For a lens in air: (a) If the conjugate distances measured from the respective focal points are x and x′, and if m is the image magnification (height of image divided by height of object), then m = -x′/f′ = f′/x and xx′ = -f′². (b) If the conjugate distances measured from the respective principal points are p and p′ and if m is the image magnification, then m = p′/p and 1/p′ = 1/p + 1/f′. The Lagrange equation (7) requires modification for a distant object because in that case the object height h is infinite, and the slope angle u is zero. If the off-axis distance h is divided by the object distance L, and u is multiplied by L, equation (7) becomes h′ = (n/n′)f′ϕ, in which ϕ is the angle in radians subtended by the distant object at the lens. This formula provides a means for defining focal length and for measuring the focal length of an unknown lens.
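
      Either pair of conjugate equations is easy to evaluate; the focal length and object distance in this sketch are illustrative, with distances measured positive to the right as in the convention above.

```python
def conjugates_from_principal_points(p, f_prime):
    # Gaussian form: 1/p' = 1/p + 1/f', with magnification m = p'/p.
    p_prime = 1.0 / (1.0 / p + 1.0 / f_prime)
    return p_prime, p_prime / p

def conjugates_from_focal_points(x, f_prime):
    # Newtonian form: x x' = -f'**2, with magnification m = -x'/f' = f'/x.
    x_prime = -f_prime ** 2 / x
    return x_prime, f_prime / x

# A lens of focal length f' = 50 mm with the object 200 mm in front of the
# front principal point (p = -200), i.e. 150 mm in front of the front focal
# point (x = -150).
print(conjugates_from_principal_points(-200.0, 50.0))   # p' = +66.7 mm, m = -1/3
print(conjugates_from_focal_points(-150.0, 50.0))       # x' = +16.7 mm, m = -1/3
```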

The thin lens
      In a thin lens such as a spectacle (eyeglasses), the two principal planes coincide within the lens, and then the conjugate distances p and p′ in the formula above become the distances of object and image from the lens itself.

      The focal length of a thin lens can be computed by applying the surface-conjugate formula (6) to the two surfaces in succession, writing the l of the first surface as infinity and the l of the second surface equal to the l′ of the first surface. When this is done, the lens power (P) becomes P = 1/f′ = (n - 1)(1/r1 - 1/r2), in which n is the refractive index of the glass and r1 and r2 are the radii of curvature of the two surfaces.

      Because the refractive index of glass varies with wavelength, every property of a lens that depends on its refractive index also varies with wavelength, including the focal length, the image distance, and the image magnification. The change of image distance with wavelength is known as chromatic aberration, and the variation of magnification with wavelength is known as chromatic difference of magnification, or lateral colour. Chromatic aberration can be eliminated by combining a strong lens of low-dispersion glass (crown) with a weaker lens made of high-dispersion (flint) glass. Such a combination is said to be achromatic. This method of removing chromatic aberration was discovered in 1729 by Chester Hall (Hall, Chester Moor), an English inventor, and it was exploited vigorously in the late 18th century in numerous small telescopes. Chromatic variation of magnification can be eliminated by achromatizing all the components of a system or by making the system symmetrical about a central diaphragm. Both chromatic aberration and lateral colour are corrected in every high-grade optical system.
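
      The crown-and-flint combination can be expressed quantitatively by the standard thin-lens achromat relations (not spelled out in the text above): the two powers are split in proportion to the glasses' reciprocal dispersive powers (Abbe numbers) so that their chromatic contributions cancel. The glass values in this sketch are illustrative.

```python
def thin_achromat_powers(total_power, V_crown, V_flint):
    # Standard thin-contact-doublet split: P_crown/V_crown + P_flint/V_flint = 0
    # while P_crown + P_flint equals the required total power.
    P_crown = total_power * V_crown / (V_crown - V_flint)
    P_flint = -total_power * V_flint / (V_crown - V_flint)
    return P_crown, P_flint

# A 100-mm doublet made from a crown of Abbe number 60 and a flint of 36.
P_c, P_f = thin_achromat_powers(1.0 / 100.0, 60.0, 36.0)
print(1.0 / P_c, 1.0 / P_f)        # crown about +40 mm, flint about -66.7 mm
print(P_c / 60.0 + P_f / 36.0)     # chromatic term, essentially zero
```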

Longitudinal magnification
      If an object is moved through a short distance δp along the axis, then the corresponding image shift δp′ is related to the object movement by the longitudinal magnification (m̄). Succinctly, m̄ = δp′/δp = m²,

      in which m is the lateral magnification. The fact that the longitudinal magnification is equal to the square of the transverse magnification means that m̄ is always positive; hence, if the object is moved from left to right, the image must also move from left to right. Also, if m is large, then m̄ is very large, which explains why the depth of field (δp) of a microscope is extremely small. On the other hand, if m is small, less than one as in a camera, then m̄ is very small, and all objects within a considerable range of distances (δp) appear substantially in focus.

Image of a tilted plane
      If a lens is used to form an image of a plane object that is tilted relative to the lens axis, then the image will also be tilted in such a way that the plane of the object, the plane of the image, and the median plane of the lens all meet in a common line. This construction can be derived by the use of the lateral and longitudinal magnification relations just established above. With a tilted object the magnification at any point is given by the ratio of the distances of image and object from the lens at that point in the image, and, consequently, m varies progressively from one end of the image to the other. This arrangement is frequently used in view cameras equipped with “swings” to increase depth of field and in enlargers to rectify the convergence of parallel lines caused by tilting the camera, for example, in photographing tall buildings. The rule finds extensive application in photogrammetry and in the making of maps from aerial photographs.

Optical systems
System components
      An optical system consists of a succession of elements, which may include lenses, mirrors, light sources, detectors, projection screens, reflecting prisms, dispersing devices, filters and thin films, and fibre-optics bundles.

Lenses
      All optical systems have an aperture stop somewhere in the system to limit the diameter of the beams of light passing through the system from an object point. By analogy with the human eye, this limiting aperture stop is called the iris of the system, its images in the object and image spaces being called the entrance pupil and exit pupil, respectively. In most photographic lenses the iris is inside the objective, and it is often adjustable in diameter to control the image illumination and the depth of field. In telescope and microscope systems the cylindrical mount of the objective lens is generally the limiting aperture or iris of the system; its image, formed behind the eyepiece where the observer's eye must be located to see the whole area being observed, called the field, is then the exit pupil.

      The pupils of a lens system can be regarded as the common bases of oblique beams passing through the system from all points in an extended object. In most systems, however, the mounts of some of the lens elements cut into the oblique beams and prevent the beams from being perfectly circular, and the pupils are then not fully filled with light. This effect is known as vignetting and leads to a reduction in illumination in the outer parts of the field of view.

  A common feature of many optical systems is a relay lens, which may be introduced to invert an image or to extend the length of the system, as in a military periscope. An example of the use of a relay lens is found in the common rifle sight shown diagrammatically in Figure 6. Here the front lens A is the objective, forming an inverted image of the target on the cross wire or reticle at B. The light then proceeds to the relay lens C, which forms a second image, now erect, at D. Beyond this image is the eyepiece E to render the light parallel so that the image may be seen sharply by the observer. Unfortunately, the oblique beam from the objective will usually miss the relay lens, and so a field lens must be inserted at or near the first image B to bend the oblique beams around and redirect them toward the relay lens. The power of the field lens is chosen so that it will form an image of the objective lens aperture on the relay lens aperture. The iris and entrance pupil of this system coincide at the objective; there is an internal pupil at the relay lens, and the exit pupil lies beyond the eyepiece as shown in Figure 6.

Brian J. Thompson Ed.

Mirrors (mirror)
      Mirrors are frequently used in optical systems. Plane mirrors may be employed to bend a beam of light in another direction, either for convenience or to yield an image reversed left for right if required. Curved mirrors, concave and convex, may be used in place of lenses as image-forming elements in reflecting telescopes. All of the world's largest telescopes and many small ones are of the reflecting type. Such telescopes use a concave mirror to produce the main image, a small secondary mirror often being added to magnify the image and to place it in a convenient position for observation or photography. Telescope mirrors are commonly made parabolic or hyperbolic in section to correct the aberrations of the image. Originally telescope mirrors were made from polished “speculum metal,” an alloy of copper and tin, but in 1856 Justus von Liebig (Liebig, Justus, Freiherr von), a German chemist, invented a process for forming a mirror-like layer of silver on polished glass, which was applied to telescope mirrors by the German astronomer C.A. von Steinheil (Steinheil, Karl August). Today most mirrors are made of glass, coated with either a chemically deposited silver layer or more often one made by depositing vaporized aluminum on the surface. The aluminum surface is as highly reflective as silver and does not tarnish as readily.

      A large astronomical mirror presents many problems to the optical engineer, mainly because even a distortion of a few microns of the mirror under its own weight will cause an intolerable blurring of the image. Though many schemes for supporting a mirror without strain have been tried, including one to support it on a bag of compressed air, the problem of completely eliminating mirror distortion remains unsolved. A metal mirror, if well ribbed on the back, may be lighter than a glass mirror and therefore easier to handle, but most metals are slightly flexible and require just as careful support as glass mirrors. Since temperature changes can also cause serious distortion in a mirror, astronomers try to hold observatory temperatures as constant as possible.

Light sources
      Many types of optical instruments form images by natural light, but some, such as microscopes and projectors, require a source of artificial light. Tungsten filament lamps are the most common, but if a very bright source is required, a carbon or xenon arc is employed. For some applications, mercury or other gas discharge tubes are used; a laser beam is often employed in scientific applications. Laser light is brilliant, monochromatic, collimated (the rays are parallel), and coherent (the waves are all in step with each other), any or all of these properties being of value in particular cases.

Detectors
      The image formed by an optical system is usually received by the eye, which is a remarkably adaptable and sensitive detector of radiation within the visible region of the electromagnetic spectrum. A photographic film, another widely used detector, has the advantage of yielding a permanent record of events. Since about 1925 many types of electrical detectors of radiation, both within the visible region and beyond it, have been developed. These include photoelectric cells of various kinds in which either a voltage or a resistance is modified by light falling on the device. Many new types of detectors are sensitive far into the infrared spectrum and are used to detect the heat radiated by a flame or other hot object. A number of image intensifiers and converters, particularly for X-ray or infrared radiation, which have appeared since World War II, embody a radiation detector at one end of a vacuum tube and an electron lens inside the tube to relay the image on to a phosphor screen at the other end. This arrangement produces a visible picture that may be observed by eye or photographed to make a permanent record.

      Television camera tubes detect real images by electronic scanning, the picture on the viewing tube being a replica of the image in the original camera. The combined application of electronics and optics has become common. An extreme example of electro-optics appears in some space cameras, in which the film is exposed, processed, and then scanned by a tiny point of light; the light passing through the film is picked up by a photocell and transmitted to Earth by radio, where it is made to control the brightness of another point of light scanning a second piece of film in exact synchronism with the scanning spot in the camera. The whole system thus produces a picture on Earth that is an exact replica of the picture photographed in space a few minutes earlier.

Projection screens (projection screen)
      The simplest screen for the projection of slides or motion pictures is, of course, a matte white surface, which may be on a hard base as in outdoor theatres or on a stretched cloth indoors. A theatre screen is often perforated to transmit sound from loudspeakers placed behind it.

      Improved screen materials have been developed to increase the brightness of the picture to suit the particular shape of the auditorium. A screen covered with tiny beads tends to send the light back in the general direction of the projector, and is suitable for use at one end of a long, narrow auditorium. Another type of screen is covered with fine embossed vertical grooves; this tends to distribute the light in a horizontal band across the audience with little or no vertical spread. A real advantage of these highly reflective screens is that they tend to reflect ambient room light away from the viewer as by a mirror, so that the pictures appear almost as bright and clear by day as in a darkened room.

Reflecting prisms (prism)
      Reflecting prisms are pieces of glass bounded by plane surfaces set at carefully specified angles. Some of these surfaces transmit light, some reflect light, while some serve both functions in succession. A prism is thus an assembly of plane reflectors at relatively fixed angles, which are traversed in succession by a beam of light.

 The simplest prism is a triangular block of glass with two faces at right angles and one at an angle of 45°. The face at 45° deflects a beam of light through a right angle. The common Porro prism used in a pair of binoculars (binocular) contains four 45° reflecting surfaces, two to reverse the beam direction in the vertical plane and two in the horizontal plane (Figure 7). These reflecting faces could be replaced by pieces of mirror mounted on a metal frame, but it is hard to hold mirrors rigidly and harder still to keep them clean. Some microscopes are equipped with a 45° deflection prism behind the eyepiece; this prism may provide two or three reflections depending on the type of image inversion or left-for-right reversal required.

      Prisms containing a semireflecting, semitransmitting surface are known as beam splitters and as such have many uses. An important application is found in some colour television cameras, in which the light from the lens is divided by two beam splitters in succession to form red, green, and blue images on the faces of three image tubes in the camera.

Dispersing devices
      There are two forms of dispersing element used to spread out the constituent colours of a beam of light into a “spectrum,” namely a prism and a grating (diffraction grating). The prism, known to Newton, is the older; it separates the colours of the spectrum because the refractive index of the glass is lowest for red light and progressively increases through the yellow and green to the blue, where it is highest. Prism spectroscopes and spectrographs are made in a variety of forms and sizes, but in all cases the blue end of the spectrum is greatly spread out while the red end is relatively compressed.

      A diffraction grating is a ruled mirror or transparent plate of glass having many thousands of fine parallel grooves to the inch. It separates the colours of the spectrum by a process of diffraction. Each groove diffracts, or scatters, light in all directions, and in the case of light of one particular wavelength, there will be one direction in which the light wave from one groove lags behind the light wave from the next groove by precisely one or more whole wavelengths. This results in a strong beam of diffracted light in that direction and darkness in all other directions. Since each spectral colour corresponds to a different wavelength, the grating spreads out the spectrum into a fan where it can be observed or photographed. The red rays are bent most and the blue rays least, the opposite of the situation with a prism.
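
      The direction of the diffracted beam follows from the groove spacing and the wavelength through the grating relation d sin θ = mλ (normal incidence assumed); the grating in this sketch, 600 grooves per millimetre, is an illustrative choice.

```python
import math

def grating_angle_deg(wavelength_nm, spacing_nm, order=1):
    # d * sin(theta) = m * wavelength: the direction in which the wave from
    # each groove lags the wave from the next by m whole wavelengths.
    s = order * wavelength_nm / spacing_nm
    if abs(s) > 1.0:
        return None            # this order does not exist for the grating
    return math.degrees(math.asin(s))

spacing = 1.0e6 / 600.0        # 600 grooves per mm -> about 1667 nm spacing
print(grating_angle_deg(450.0, spacing))   # blue light: about 15.7 degrees
print(grating_angle_deg(650.0, spacing))   # red light: about 23.0 degrees (bent more)
```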

      Although a prism or grating is the essential dispersing element in a spectrograph, a fine slit and additional lenses or focussing mirrors must be used to form a sharply defined spectrum. Prism spectroscopes are, of course, limited to those wavelengths for which the prism material is transparent; a reflecting grating can be used for any wavelength that the material will reflect.

Filters and thin films
      A colour filter is a sheet of transparent material that modifies a light beam by selective absorption of some colours in relation to others. A neutral filter absorbs all wavelengths equally and merely serves to reduce the intensity of a beam of light without changing its colour.

      Filters may be made from sheets of coloured glass, plastic, or dyed gelatin, and in some cases glass cells filled with liquid have been used. Since World War II, another type of filter depending on the interference of light has been developed in which one or more metallic or other types of films of controlled thickness have been deposited on a glass plate, the layers being so thin as to cause selective interference of some wavelengths in relation to others and thus act as a nonabsorbing filter. In this case the rejected colours are reflected instead of being absorbed.

      Polarizing filters have the property of transmitting light that vibrates in one direction while absorbing light that vibrates in a perpendicular direction. These filters are used extensively in scientific instruments. In sunglasses and when placed over a camera lens, polarizing filters reduce unwanted reflections from nonmetallic surfaces. Polarizing spectacles have been used to separate the left-eye and right-eye beams in the projection of stereoscopic pictures or movies.

Fibre-optics bundles
      As noted earlier, a thin rod or fibre of glass or other transparent material transmits light by repeated internal reflections, even when the rod is somewhat curved. An ordered bundle of rods or fibres (fibre optics) is thus capable of taking an image projected upon one end of the bundle and reproducing it at the other end. A fibre-optics bundle can be fused together into a rigid channel, or it may be left flexible, only the ends being rigidly fastened together. Because a fibre bundle is exceedingly delicate, it must be handled with care; breaking a fibre would cause a black dot to appear in the reproduced image.

Rudolf Kingslake

Nonclassical imaging systems
      Besides the familiar optical systems cited above, there are many nonclassical optical elements that are used to a limited extent for special purposes. The most familiar of these is the aspheric (nonspherical) surface. Because plane and spherical surfaces are the easiest to generate accurately on glass, most lenses contain only such surfaces. It is occasionally necessary, however, to use some other axially symmetric surface on a lens or mirror, generally to correct a particular aberration. An example is the parabolic surface used for the primary mirror of a large astronomical telescope; another is the elliptic surface molded on the front of the little solid glass reflector units used on highway signs.

      Another commonly used optical surface is the side of a cylinder. Such surfaces have power only in the meridian perpendicular to the cylinder axis. Cylindrical lenses are therefore used wherever it is desired to vary the magnification from one meridian to a perpendicular meridian. Cylindrical surfaces are employed in the anamorphic lenses used in some wide-screen motion-picture systems to compress the image horizontally in the camera and stretch it back to its original shape in the projected image.

      To correct astigmatism in the eye, many spectacles are made with toric surfaces—i.e., with a stronger curvature in one meridian than in the perpendicular meridian, like the bowl of a teaspoon. These surfaces are generated and polished by special machines and are made by the million every year.

      Another nonclassical optical system is the bifocal or trifocal spectacle lens. Such lenses are made either by forming two or three separate surfaces on a single piece of glass or by obtaining additional power by fusing a piece of high-index glass onto the front of the main lens and then polishing a single spherical surface over both glasses.

      Two French scientists, Georges-Louis Buffon in the 18th century and Augustin-Jean Fresnel in the early 19th, suggested forming a lens in concentric rings to save weight, each ring being a portion of what would normally be a continuous spherical surface but flattened out. On a large scale, Fresnel lenses (Fresnel lens) have been used in lighthouses, floodlights, and traffic signals, and as cylindrical ship's lanterns. With fine steps a few thousandths of an inch wide, molded plastic Fresnel lenses are often used as condensers in overhead projectors and in cameras as a field lens in contact with a ground-glass viewing screen.

      Lenses have occasionally been made with one surface taking the form of a flattened cone. Such lenses produce a long, linear image of a point source, lying along the lens axis; for this reason they are commonly referred to as axicons. They have been used to produce a straight line of light in space for aligning machines and shafting, but since about 1965 the beam from a gas laser has generally been used instead.

Lens aberrations (aberration)
Seidel sums
      If a lens were perfect and the object were a single point of monochromatic light, then, as noted above, the light wave emerging from the lens would be a portion of a sphere centred about the ideal image point, lying in the paraxial image plane at a height above the axis given by the Lagrange theorem. In practice, however, this condition is most unlikely to occur; it is much more probable that the emerging wave will depart slightly from a perfect sphere, the departure varying from point to point over the lens aperture. This departure is extremely small, being of the order of the wavelength of light, which is only about half a micron, so it would be impossible to show this departure on a drawing. It can be represented mathematically, however, in the following way: The coordinates of a point in the exit-pupil aperture will be represented by x0 and y0, the y0 coordinate lying in the meridian plane containing the object point and the lens axis. The departure of the wave from the ideal sphere is generally called OPD, meaning optical path difference. It can be shown that OPD is related to x0 and y0 by five constants S1 through S5, and the quantity h0,

Each of these five terms is considered to be a separate “aberration,” the coefficients S1, . . . S5, being called Seidel sums after the 19th-century German scientist L.P. Seidel, who identified the imperfections. These aberrations are respectively spherical, coma, astigmatism, Petzval field curvature, and distortion. The symbol h0 refers to the height of the final image point above the lens axis, and hence it defines the obliquity of the beam.

      The five Seidel sums can be calculated by tracing a paraxial ray from object to image through the lens and by tracing also a paraxial principal ray from the centre of the aperture stop outward in both directions toward the object and image, respectively. The angle of incidence i and the ray slope angle u of each of these paraxial rays at each surface are then listed and inserted into the following expressions for the five sums. The angle u0 represents the final emerging slope of the paraxial ray.

      The calculation starts by determining the radius A of the exit pupil by A = √(x0² + y0²) and also the quantity K at each surface by

      The corresponding Kpr for the paraxial principal ray is also determined at each surface. Then, the five aberrations may be written

      To interpret these aberrations, the simplest procedure is to find the components x′, y′ of the displacement of a ray from the Lagrangian image point in the paraxial focal plane, by differentiating the OPD expression given above. The partial derivatives ∂OPD/∂x0 and ∂OPD/∂y0 represent respectively the components of the slope of the wave relative to the reference sphere at any particular point (x0, y0). Hence, because a ray is always perpendicular to the wave, the ray displacements in the focal plane can be found by

      in which f is the focal length of the lens. The aggregation of rays striking the focal plane will indicate the kind of image that is characteristic of each aberration.

      This procedure will be applied to each of the five aberration terms separately, assuming that all the other aberrations are absent. Obviously, in a perfect lens x′ and y′ are zero because OPD is zero. It must be remembered, however, that by using rays instead of waves, all fine-structure effects caused by diffraction will be lost, and only the macroscopic image structure will be retained.

      The first term in the OPD expression is OPD = S1(x0² + y0²)². Hence

 These displacements can both be eliminated simultaneously by applying a longitudinal shift L to the focal plane. This changes x′ by -Lx0/f and y′ by -Ly0/f; hence, if L is made equal to 4f²A²S1, both ray displacements vanish. The aberration, therefore, represents a condition in which each zone of the lens has a different focus along the axis, the shift of focus from the paraxial image being proportional to A². This is known as spherical aberration (see Figure 8).
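
      The cancellation by refocussing can be checked numerically. The sketch below uses x′ = f ∂OPD/∂x0 and y′ = f ∂OPD/∂y0 for the spherical-aberration term, with an illustrative coefficient S1 and focal length; shifting the focal plane by L = 4f²A²S1 removes the blur for the chosen zone.

```python
f, S1 = 100.0, 1.0e-7        # illustrative focal length (mm) and Seidel coefficient

def ray_error(x0, y0, defocus=0.0):
    # Transverse ray errors from OPD = S1 * (x0**2 + y0**2)**2, plus the
    # change -L*x0/f, -L*y0/f produced by a focal-plane shift L.
    A2 = x0 ** 2 + y0 ** 2
    return (4.0 * f * S1 * A2 * x0 - defocus * x0 / f,
            4.0 * f * S1 * A2 * y0 - defocus * y0 / f)

x0, y0 = 0.0, 10.0                        # a ray through the edge of a 10-mm zone
print(ray_error(x0, y0))                  # blurred: (0.0, 0.04)
L = 4.0 * f ** 2 * S1 * (x0 ** 2 + y0 ** 2)
print(ray_error(x0, y0, defocus=L))       # essentially (0.0, 0.0): this zone now focusses sharply
```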

      The S2 term in the OPD expression represents the aberration called coma, in which the image of a point has the appearance of a comet. The x′ and y′ components are as follows:

 When this aberration is present, each circular zone of the lens forms a small ringlike image in the focal plane, the rings formed by successive concentric zones of the lens fitting into two straight envelope lines at 60° to each other (Figure 8). Because the brightness of this image is greatest at the tip, coma tends to form a one-sided haze on images in the outer parts of the field.

      If only the S3 term is present, then

 
For any one zone of the lens, x′ and y′ describe a vertical ellipse with major axis three times the minor axis. The images formed by all the smaller zones of the lens fit into this ellipse and fill it out with a uniform intensity of light. If the image plane is moved along the axis by a distance L, as in focussing a camera, then, at L = 2f²h0²S3, the ellipse shrinks to a radial focal line (R). Twice this displacement yields a circle; three times this L gives a tangential focal line (T), which is followed by an ellipse with its major axis in the x direction, as in Figure 8, bottom. The usual effect of astigmatism in an image is the appearance of radial or tangential blurring in the outer parts of the field.

Petzval curvature
      For the S4 term taken alone,

      The image of a point is now a small circle that contracts to a point at a new focus situated at a longitudinal distance L = 2f²h0²S4 from the paraxial image. As the longitudinal displacement of the focus is proportional to the square of the image height h0′, this aberration represents a pure field curvature without any accompanying loss of definition (all lines remain sharp). It is named after the Hungarian mathematician József Petzval, who studied its properties in the early 1840s. The effect of Petzval curvature can be somewhat offset by the deliberate introduction of sufficient overcorrected astigmatism, as was done in all the pre-anastigmat photographic objectives. This added astigmatism is, of course, undesirable, and in order to design an anastigmat lens having a flat field free from astigmatism, it is necessary to reduce the Petzval sum S4 drastically.

      For a succession of thin lenses (1, 2, 3, . . . etc.) in a system, the Petzval sum becomes simply 1/f1n1 + 1/f2n2 + 1/f3n3 + . . . etc., in which f is the focal length of each element and n is its refractive index. Therefore, to reduce the sum and minimize this aberration, relatively strong negative elements of low-index glass can be combined with positive elements of high-index glass. The positive and negative elements must be axially separated to provide the lens with a useful amount of positive power. The introduction of high-index barium crown glass with a low dispersive power in the 1880s initiated the development of anastigmat lenses.
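
      A small Python sketch (with illustrative focal lengths and indices, not values from the article) shows how the thin-lens Petzval sum is evaluated and how pairing a positive high-index element with a negative low-index element reduces it:

    # Petzval sum for a succession of thin lenses, P = 1/(f1*n1) + 1/(f2*n2) + ...
    # The focal lengths and refractive indices below are illustrative only.
    def petzval_sum(elements):
        """elements: list of (focal_length, refractive_index) pairs, one per thin lens."""
        return sum(1.0/(f*n) for f, n in elements)

    single = petzval_sum([(+60.0, 1.52)])                  # one positive low-index lens
    pair = petzval_sum([(+60.0, 1.72), (-90.0, 1.52)])     # high-index positive + low-index negative
    print(round(single, 5), round(pair, 5))                # the pairing reduces the sum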

      For the S5 aberration,

      When this aberration is present, the entire image point is displaced toward or away from the axis by an amount proportional to the third power of the transverse distance h0′ of the image from the axis. This leads to the formation of an image of a square that is either a barrel-shaped or a cushion-shaped figure.

 It is to be noted that the five Seidel aberrations represent the largest and most conspicuous defects that can arise in an uncorrected optical system. Even in the best lenses in which these five aberrations have been perfectly corrected for one zone of the lens and for one point in the field, however, there will exist small residuals of these aberrations and of many other higher order aberrations also, which are significantly different from the classical types just described. The typical aberration figures shown in Figure 8 are, of course, grossly exaggerated, and actually it requires some magnification of a star image to render these appearances clearly visible. Nevertheless, they are important enough to require drastic reduction in high-quality lenses intended to make sharp negatives capable of considerable enlargement.

General relations
      All photometric concepts are based on the idea of a standard candle, lamps having accurately known candle power being obtainable from the various national standards laboratories. The ratio of the candle power of a source to its area is called the luminance of the source; luminances range from about 2,000 candles per square millimetre at the surface of the Sun down to about 3 × 10⁻⁶ candle per square centimetre (3 × 10⁻⁶ stilb) for the luminous paint on a watch dial. Ordinary outdoor scenes in daylight have an average luminance of several hundred candles per square foot. The quantity of light flux flowing out from a source is measured in lumens (lumen), the lumen being defined as the amount of flux radiated by a small “point” source of one candle power into a cone having a solid angle of one steradian. When light falls upon a surface it produces illumination (i.e., illuminance), the usual measure of illuminance being the foot-candle, which is one lumen falling on each square foot of receiving surface.

      It is often important to be able to calculate the brightness of an image formed by an optical system, because photographic emulsions and other light receptors cannot respond satisfactorily if the light level is too low. The problem is to relate the luminance of an object with the illuminance in the image, knowing the transmittance and aperture of the optical system. A small area A of a plane object having a luminance of B candles per square unit will have a normal intensity of AB candles. This source radiates light into a cone of semi-angle U, limited, for example, by the rim of a lens. The light flux (F) entering the cone can be found by integration to be

      If the object luminance is expressed as BL lamberts (lambert), the lambert being an alternative luminance unit equal to 1/π (i.e., 0.32) candle per unit area, the flux (F) is

      because there are π times as many lamberts in a given luminance as there are candles per unit area.

      A fraction t of this flux finds its way to the image, t being the lens transmittance, generally about 0.8 or 0.9 but less if a mirror is involved. The area of the image is Am², in which m, the magnification, is given by

      Hence, the image illuminance (E) is

      The image illuminance thus depends only on the luminance of the source and the cone angle of the beam proceeding from the lens to the image. This is a basic and most important relation underlying all calculations of image illuminance.

      It is often more convenient to convert the angle U′ into other better known quantities, such as the f-number of the lens and the image magnification. The relation here is

      The f-number (relative aperture) of the lens is defined as the ratio of the focal length to the diameter of the entrance pupil; m is the image magnification; and mp is the pupil magnification—i.e., the diameter of the exit pupil divided by the diameter of the entrance pupil. Combining equations (8) and (9) gives

      As an example in the use of this relation, suppose that an f/2 lens is being used to project an image of a cathode-ray tube at five times magnification, that the tube luminance is 5,000 foot-lamberts (1.7 candles per square centimetre), that the lens transmittance is 0.8, and that the pupil magnification is unity. Then the image illuminance will be

      The image is very much less bright than the object, a fact that becomes clear to anyone attempting to provide a bright projected image in a large auditorium.
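
      Because equations (8), (9), and (10) are not reproduced here, the following Python sketch of the projector example rests on an assumed but commonly used photographic convention: E = πtB sin²U′, with sin U′ approximated by 1/[2N(1 + m/mp)] for a real image at magnification m, N being the f-number. The numbers are those of the example above.

    import math

    # Assumed relations (equations (8)-(10) are not reproduced in this text):
    # E = pi*t*B*sin^2(U'), with sin U' taken as 1/(2*N*(1 + m/mp)) for a real
    # image at magnification m, N being the f-number of the lens.
    t, N, m, mp = 0.8, 2.0, 5.0, 1.0
    B = 5000.0 / math.pi                 # 5,000 foot-lamberts in candles per square foot
    sinU = 1.0 / (2.0*N*(1.0 + m/mp))
    E = math.pi * t * B * sinU**2        # image illuminance in foot-candles
    print(round(E, 1))                   # a few foot-candles, far dimmer than the tube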

Distribution of illumination over an image
      So far only the illumination at the centre of an image has been considered, but the distribution of illumination over a wide field is often important. In the absence of any lens, the small plane source already considered radiates in a direction inclined at an angle ϕ to the axis with an intensity AB cos ϕ. This light has to travel farther than the axial light to reach a screen, and it strikes the screen obliquely at the angle ϕ. The net result is that the oblique illumination on the screen is smaller than the axial illumination by the factor cos⁴ ϕ (one factor of cos ϕ from the reduced intensity of the inclined source, two factors from the increased distance to the screen, and one from the oblique incidence on the screen).

      The same law can be applied to determine the oblique illumination due to a lens, assuming a uniform extended diffusing source of light on the other side of the lens. In this case, however, the exit pupil will not in general be a perfect circle because of possible distortion of the iris by that part of the optical system lying between the iris and the image. Also, any mechanical vignetting in the lens will make the aperture noncircular and reduce still further the oblique illumination. In a camera this reduction in oblique illumination results in darkened corners of the picture, but, if the reduction in brightness is gradual, it is not likely to be detected because the eye adapts quickly to changing brightness as it scans over the picture area. Indeed, a 50 percent drop in brightness between the centre and corners of an ordinary picture is scarcely detectable.
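
      The cos⁴ ϕ falloff is easily tabulated; the short Python sketch below (angles chosen for illustration) shows that the scarcely detectable 50 percent drop mentioned above corresponds to a field angle of roughly 33°.

    import math

    # Relative illumination across the field from the cos^4 law (angles illustrative).
    for phi_deg in (0, 10, 20, 30, 33, 40):
        print(phi_deg, round(math.cos(math.radians(phi_deg))**4, 3))
    # the 50 percent drop mentioned above is reached at roughly 33 degrees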

Visual brightness
      The apparent brightness of things seen by the eye follows the same laws as any other imaging system, because the apparent brightness is measured by the illuminance in the image that is projected on the retina. The angle U′ in equation (8) inside the eye is determined by the size of the pupil of the eye, which varies from about one millimetre to about eight millimetres, depending on the brightness of the environment. Apart from this variation, retinal illuminance is directly proportional to object luminance, and objects having the same luminance appear equally bright, no matter at what distance they are observed.

      From this argument, it is clear that no visual instrument, such as a telescope, can possibly make anything appear brighter than when viewed directly. To be sure, a telescope having a large objective lens accepts more light from an object in proportion to the area of the lens aperture, but it magnifies the image area in the same proportion; so the increased light is spread over an increased area of the retina, and the illuminance remains unchanged. Actually, the telescopic view is always dimmer than the direct view because of light losses in the telescope due to glass absorption and surface reflections and because the exit pupil of the telescope may be smaller than the pupil of the eye, thus reducing the angle U′.

      The case of a star being observed through a telescope is quite different, because no degree of magnification can possibly make a star appear as anything other than a point of light. Hence, star images appear brighter in proportion to the area of the telescope objective (assuming that the exit pupil is larger than the eye pupil), and the visibility of a star against the sky background is thus improved in proportion to the square of the diameter of the telescope objective lens.

Rudolf Kingslake

General observations
      A new era in optics commenced in the early 1950s following the impact of certain branches of electrical engineering—most notably communication and information theory. This impetus was sustained by the development of the laser in the 1960s.

      The initial tie between optics and communication theory came because of the numerous analogies that exist between the two subjects and because of the similar mathematical techniques employed to formally describe the behaviour of electrical circuits and optical systems. A topic of considerable concern since the invention of the lens as an optical imaging device has always been the description of the optical system that forms the image; information about the object is relayed and presented as an image. Clearly, the optical system can be considered a communication channel and can be analyzed as such. There is a linear relationship (i.e., direct proportionality) between the intensity distribution in the image plane and that existing in the object, when the object is illuminated with incoherent light (e.g., sunlight or light from a large thermal source). Hence, the linear theory developed for the description of electronic systems can be applied to optical image-forming systems. For example, an electronic circuit can be characterized by its impulse response—that is, its output for a brief impulse input of current or voltage. Analogously, an optical system can be characterized by an impulse response that for an incoherent imaging system is the intensity distribution in the image of a point source of light; the optical impulse is a spatial rather than a temporal impulse—otherwise the concept is the same. Once the appropriate impulse response function is known, the output of that system for any object intensity distribution can be determined by a linear superposition of impulse responses suitably weighted by the value of the intensity at each point in the object. For a continuous object intensity distribution this sum becomes an integral. While this example has been given in terms of an optical imaging system, which is certainly the most common use of optical elements, the concept can be used independent of whether the receiving plane is an image plane or not. Hence, for example, an impulse response can be defined for an optical system that is deliberately defocussed or for systems used for the display of Fresnel or Fraunhofer diffraction patterns. (Fraunhofer diffraction occurs when the light source and diffraction patterns are effectively at infinite distances from the diffracting system, and Fresnel diffraction occurs when one or both of the distances are finite.)

Temporal frequency response
      A fundamentally related but different method of describing the performance of an electronic circuit is by means of its temporal frequency response. A plot is made of the response for a series of input signals of a variety of frequencies. The response is measured as the ratio of the amplitude of the signal obtained out of the system to that put in. If there is no loss in the system, then the frequency response is unity (one) for that frequency; if a particular frequency fails to pass through the system, then the response is zero. Again, analogously, the optical system may also be described by defining a spatial frequency response. The object to be imaged by the optical system then consists of a spatial distribution of intensity of a single spatial frequency—an object the intensity of which varies as (1 + a cos ωx), in which x is the spatial coordinate, a is a constant called the contrast, and ω is a variable that determines the physical spacing of the peaks in the intensity distribution. The image is recorded for fixed values of a and ω, and the contrast in the image is measured. The ratio of this contrast to a is the response for the particular spatial frequency defined by ω. If ω is now varied and the measurement repeated, a frequency response is obtained.
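
      The measurement just described can be simulated. In the Python sketch below, which is not from the original article, a Gaussian blur stands in for the (unspecified) incoherent imaging system, a single-frequency object of contrast a is imaged, and the output contrast b is measured; the ratio b/a is then compared with the known transfer factor of the assumed Gaussian blur.

    import numpy as np

    # A Gaussian blur stands in for the imaging system; object, contrast, and blur
    # width are illustrative.  The measured ratio b/a is compared with the known
    # transfer factor exp(-2*(pi*sigma*mu)**2) of the assumed Gaussian.
    x = np.linspace(0.0, 1.0, 4096, endpoint=False)
    a, mu, sigma = 0.6, 40.0, 0.004
    obj = 1.0 + a*np.cos(2*np.pi*mu*x)               # single-frequency test object

    kernel = np.exp(-0.5*((x - 0.5)/sigma)**2)
    kernel /= kernel.sum()                           # normalized point spread function
    img = np.real(np.fft.ifft(np.fft.fft(obj)*np.fft.fft(np.fft.ifftshift(kernel))))

    b = 0.5*(img.max() - img.min())                  # modulation amplitude in the image
    print(round(b/a, 3), round(np.exp(-2*(np.pi*sigma*mu)**2), 3))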

Nonlinear optical systems
      The analogies described above go even further. Many optical systems are nonlinear, just as many electronic systems are nonlinear. Photographic film is a nonlinear optical element in that equal increments of light energy reaching the film do not always produce equal increments of density on the film.

      A different type of nonlinearity occurs in image formation. When an object such as two stars is imaged, the resultant intensity distribution in the image is determined by first finding the intensity distribution formed by each star. These distributions must then be added together in regions where they overlap to give the final intensity distribution that is the image. This example is typical of an incoherent imaging system—i.e., the light emanating from the two stars is completely uncorrelated. This occurs because there is no fixed phase relationship between the light emanating from the two stars over any finite time interval.

      A similar nonlinearity arises in objects illuminated by light from the Sun or other thermal light source. Illumination of this kind, when there is no fixed relationship between the phase of the light at any pair of points in the incident beam, is said to be incoherent illumination. If the illumination of the object is coherent, however, then there is a fixed relationship between the phase of the light at all pairs of points in the incident beam. To determine the resultant image intensity under this condition for a two-point object requires that the amplitude and phase of the light in the image of each point be determined. The resultant amplitude and phase are then found by summation in regions of overlap. The square of this resultant amplitude is the intensity distribution in the image. Such a system is nonlinear. The mathematics of nonlinear systems was developed as a branch of communication theory, but many of the results can be used to describe nonlinear optical systems.
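
      The difference between the two rules of addition can be made concrete with a short Python sketch (not from the original article); a one-dimensional sinc-shaped amplitude impulse response is assumed purely for illustration.

    import numpy as np

    # A sinc-shaped amplitude impulse response is assumed purely for illustration.
    x = np.linspace(-10.0, 10.0, 2001)
    d = 1.0                                    # separation of the two image points
    amp1 = np.sinc(x - d/2)                    # amplitude response centred on point 1
    amp2 = np.sinc(x + d/2)                    # amplitude response centred on point 2

    incoherent = amp1**2 + amp2**2             # add intensities
    coherent = np.abs(amp1 + amp2)**2          # add amplitudes, then square
    print(round(incoherent[1000], 3), round(coherent[1000], 3))  # midpoint values differ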

      This new description of optical systems was extremely important to, but would not alone account for, the resurgence of optical research and development. The new approach resulted in the development of whole new branches of study, including optical processing and holography (see below Optical processing and Holography). It also had an effect, together with the development of digital computers, on the concepts and versatility of lens design and testing. Finally, the invention of the laser, a device that produces coherent radiation, and the development and implementation of the theory of partially coherent light gave the added impetus necessary to change traditional optics into a radically new and exciting subject.

Image formation
Impulse response
      An optical system that employs incoherent illumination of the object can usually be regarded as a linear system in intensity. A system is linear if the addition of inputs produces an addition of corresponding outputs. For ease of analysis, systems are often considered stationary (or invariant). This property implies that if the location of the input is changed, then the only effect is to change the location of the output but not its actual distribution. With these concepts it is then only necessary to find an expression for the image of a point input to develop a theory of image formation. The intensity distribution in the image of a point object can be determined by solving the equation relating to the diffraction of light as it propagates from the point object to the lens, through the lens, and then finally to the image plane. The result of this process is that the image intensity is the intensity in the Fraunhofer diffraction pattern of the lens aperture function (that is, the square of the Fourier transform of the lens aperture function; a Fourier transform is an integral equation involving periodic components). This intensity distribution is the intensity impulse response (sometimes called point spread function) of the optical system and fully characterizes that optical system.

      With the knowledge of the impulse response, the image of a known object intensity distribution can be calculated. If the object consists of two points, then in the image plane the intensity impulse response function must be located at the image points and then a sum of these intensity distributions made. The sum is the final image intensity. If the two points are closer together than the half width of the impulse response, they will not be resolved. For an object consisting of an array of isolated points, a similar procedure is followed—each impulse response is, of course, multiplied by a constant equal to the value of the intensity of the appropriate point object. Normally, an object will consist of a continuous distribution of intensity, and, instead of a simple sum, a convolution integral results.
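
      A one-dimensional Python sketch (illustrative, with an assumed slit-shaped aperture and arbitrary units) shows the two steps described above: the intensity impulse response obtained as the squared modulus of the Fourier transform of the aperture function, followed by convolution with a continuous object intensity distribution.

    import numpy as np

    # One-dimensional sketch with an assumed slit-shaped aperture (all units arbitrary).
    N = 2048
    aperture = np.zeros(N)
    aperture[N//2 - 64 : N//2 + 64] = 1.0                       # clear slit pupil

    psf = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture))))**2
    psf /= psf.sum()                                            # intensity impulse response

    x = np.arange(N)
    obj = 1.0 + 0.8*(np.abs(x - N/2) < 40)                      # bright bar on a background
    img = np.real(np.fft.ifft(np.fft.fft(obj)*np.fft.fft(np.fft.ifftshift(psf))))
    print(round(img[N//2], 2), round(img[N//2 + 40], 2))        # centre of the bar vs its edge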

Transfer function
      The concept of the transfer function of an optical system can be approached in several ways. Formally and fundamentally it is the Fourier transform of the intensity impulse response. Because the impulse response is related to the lens aperture function, so is the transfer function. In particular, the transfer function can be obtained from a knowledge of the aperture function by taking the function and plotting the resultant overlapping areas as the aperture function is slid over itself (i.e., the autocorrelation of the aperture function).

      Conceptually, however, the transfer function is best understood by considering the object intensity distribution to be a linear sum of cosine functions of the form (1 + a cos 2πμx), in which a is the amplitude of each component of spatial frequency μ. The image of a cosine intensity distribution is a cosine of the same frequency; only the contrast and phase of the cosine can be affected by a linear system. The image of the above object intensity distribution can be represented by [1 + b cos (2πμx + ϕ)], in which b is the amplitude of the output cosine of frequency μ and ϕ is the phase shift. The transfer function, τ(μ), for that frequency is then given by the ratio of the amplitudes:

      If μ is now varied, the spatial frequency response of the system is measured by determining τ(μ) for the various values of μ. It should be noted that τ(μ) is in general complex (containing a term with √(−1) ).

      The transfer function, like the impulse response, fully characterizes the optical system. To make use of the transfer function to determine the image of a given object requires that the object be decomposed into a series of periodic components called its spatial frequency spectrum. Each term in this series must then be multiplied by the appropriate value of the transfer function to determine the individual components of the series that is the spatial frequency spectrum of the image—a transformation of this series will give the image intensity. Thus, any components in the object spectrum that have a frequency for which τ(μ) is zero will be eliminated from the image.
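
      The following Python sketch, again with an assumed slit-shaped aperture, computes the transfer function in the two equivalent ways mentioned above, as the autocorrelation of the aperture function and as the Fourier transform of the intensity impulse response, and confirms that the two agree.

    import numpy as np

    # Same assumed slit-shaped aperture; the transfer function is obtained both as
    # the autocorrelation of the aperture function and as the Fourier transform of
    # the intensity impulse response, and the two results are compared.
    N = 2048
    aperture = np.zeros(N)
    aperture[N//2 - 64 : N//2 + 64] = 1.0

    tf_auto = np.correlate(aperture, aperture, mode='same')     # aperture slid over itself
    tf_auto /= tf_auto.max()

    psf = np.abs(np.fft.fft(np.fft.ifftshift(aperture)))**2     # intensity impulse response
    tf_fft = np.abs(np.fft.fftshift(np.fft.fft(psf)))           # its Fourier transform
    tf_fft /= tf_fft.max()
    print(round(float(np.max(np.abs(tf_auto - tf_fft))), 6))    # the two agree closely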

Partially coherent light
Development and examples of the theory
      The treatment of image formation above is concerned with incoherent object illumination, which results in an image formed by the addition of intensities. The study of diffraction and interference, on the other hand, requires coherent illumination of the diffracting object, the resulting diffracted optical field being determined by an addition of complex amplitudes of the wave disturbances. Thus, two different mechanisms exist for the addition of light beams, depending upon whether the beams are coherent or incoherent with respect to each other. Unfortunately, this is not the whole story; it is not sufficient to consider only the two situations of strictly coherent and strictly incoherent light. In fact, strictly incoherent fields are only approximately obtainable in practice. Furthermore, the possibility of intermediate states of coherence cannot be ignored; it is necessary to describe the result of mixing incoherent light with coherent light. It was to answer the question “How coherent is a beam of light?” (or the equivalent one, “How incoherent is a beam of light?”) that the theory of partial coherence was developed. Marcel Verdet, a French physicist, realized in the 19th century that even sunlight is not completely incoherent, and two objects separated by distances of less than approximately 1/20 millimetre will produce interference effects. The eye, operating unaided in sunlight, does not resolve this separation distance and hence can be considered to be receiving an incoherent field. Two physicists, Armand Fizeau (Fizeau, Armand-Hippolyte-Louis) in France and Albert Michelson (Michelson, A.A.) in the United States, were also aware that the optical field produced by a star is not completely incoherent, and hence they were able to design interferometers to measure the diameter of stars from a measurement of the partial coherence of the starlight. These early workers did not think in terms of partially coherent light, however, but derived their results by an integration over the source. At the other extreme, the output from a laser can produce a highly coherent field.

      The concepts of partially coherent light can best be understood by means of some simple experiments. A circular uniform distant source produces illumination on the front of an opaque screen containing two small circular apertures, the separation of which can be varied. A lens is located behind this screen, and the resultant intensity distribution in its focal plane is obtained. With either aperture open alone, the intensity distribution observed is such that it is readily associated with the diffraction pattern of the aperture, and it may thus be concluded that the field is coherent over the dimensions of the aperture. When the two apertures are opened together and are at their closest separation, two-beam interference fringes are observed that are formed by the division of the incident wave front by the two apertures. As the separation of the apertures increases, the observed interference fringes get weaker and finally disappear, only to reappear faintly as the separation is further increased. As the separation of the apertures is increased, these results show that (1) the fringe spacing decreases; (2) the intensities of the fringe minima are never zero; (3) the relative intensity of the maxima above the minima steadily decreases; (4) the absolute value of the intensity of the maxima decreases and that of the minima increases; (5) eventually, the fringes disappear, at which point the resultant intensity is just twice the intensity observed with one aperture alone (essentially an incoherent addition); (6) the fringes reappear with a further increase in separation of the aperture, but the fringes contain a central minimum, not a central maximum.

      If the intensities of the two apertures are equal, then the results (1) through (5) can be summarized by defining a quantity in terms of the maximum intensity (Imax) and the minimum intensity (Imin), called the visibility (V) of the fringes—i.e., V = (Imax - Imin)/(Imax + Imin). The maximum value of the visibility is unity, for which the light passing through one aperture is coherent with respect to the light passing through the other aperture; when the visibility is zero, the light passing through one aperture is incoherent with respect to the light passing through the other aperture. For intermediate values of V the light is said to be partially coherent. The visibility is not a completely satisfactory description because it is, by definition, a positive quantity and cannot, therefore, include a description of item (6) above. Furthermore, it can be shown by a related experiment that the visibility of the fringes can be varied by adding an extra optical path between the two interfering beams.

The mutual coherence function
      The key function in the theory of partially coherent light is the mutual coherence function Γ12(τ) = Γ(x1,x2,τ), a complex quantity, which is the time averaged value of the cross correlation function of the light at the two aperture points x1 and x2 with a time delay τ (relating to a path difference to the point of observation of the interference fringes). The function can be normalized (i.e., its absolute value set equal to unity at τ = 0 and x1 = x2) by dividing by the square root of the product of the intensities at the points x1 and x2 to give the complex degree of coherence, hence

      The modulus of γ12(τ) has a maximum value of unity and a minimum value of zero. The visibility defined earlier is identical to the modulus of the complex degree of coherence if I(x1) = I(x2).
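
      These statements follow from the standard two-beam interference law, I = I1 + I2 + 2√(I1I2)|γ12| cos (phase), which is assumed in the short Python sketch below (the intensity values are illustrative):

    import math

    # Two-beam interference law assumed: I = I1 + I2 + 2*sqrt(I1*I2)*g*cos(phase),
    # where g is the modulus of the complex degree of coherence.
    def visibility(I1, I2, g):
        Imax = I1 + I2 + 2*math.sqrt(I1*I2)*g
        Imin = I1 + I2 - 2*math.sqrt(I1*I2)*g
        return (Imax - Imin)/(Imax + Imin)

    print(round(visibility(1.0, 1.0, 0.7), 3))   # equal intensities: V equals |gamma| = 0.7
    print(round(visibility(1.0, 4.0, 0.7), 3))   # unequal intensities: V is less than |gamma|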

      Often the optical field can be considered to be quasimonochromatic (approximately monochromatic), and then the time delay can be set equal to zero in the above expression, thus defining the mutual intensity function. It is often convenient to describe an optical field in terms of its spatial and temporal coherence by artificially separating out the space- and time-dependent parts of the coherence function. Temporal coherence effects arise from the finite spectral width of the source radiation; a coherence time Δt can be defined as 1/Δν, in which Δν is the frequency bandwidth. A related coherence length Δl can also be defined as c/Δν = λ²/Δλ, in which c is the velocity of light, λ is the wavelength, and Δλ the wavelength bandwidth. Provided that the path differences in the beams to be added are less than this characteristic length, the beams will interfere.
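
      As a numerical illustration (the spectral line and bandwidth are chosen arbitrarily), a green line of 10-nanometre bandwidth gives a coherence time of roughly 10⁻¹³ second and a coherence length of a few tens of micrometres:

    # Coherence time dt = 1/dnu and coherence length dl = c/dnu = lambda**2/d_lambda.
    # The 546-nm line and 10-nm bandwidth are illustrative values.
    c = 3.0e8                              # velocity of light, metres per second
    lam, dlam = 546e-9, 10e-9              # wavelength and wavelength bandwidth, metres
    dnu = c*dlam/lam**2                    # frequency bandwidth, hertz
    print(f"{1.0/dnu:.1e} s", f"{lam**2/dlam:.1e} m")   # about 1e-13 s and 3e-5 m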

      The term spatial coherence is used to describe partial coherence arising from the finite size of an incoherent source. Hence, for the equipath position for the addition of two beams, a coherence interval is defined as the separation of two points such that the absolute value |γ12(0)| is some prechosen value, usually zero.

      The mutual coherence function is an observable quantity that can be related to the intensity of the field. The partially coherent field can be propagated by use of the mutual coherence function in a similar way to the solution of diffraction problems by propagation of the complex amplitude. The effects of partially coherent fields are clearly of importance in the description of normally coherent phenomena, such as diffraction and interference, but also in the analysis of normally incoherent phenomena, such as image formation. It is notable that image formation in coherent light is not linear in intensity but is linear in the complex amplitude of the field, and in partially coherent light the process is linear in the mutual coherence.

Optical processing
Coherent optical systems
      Optical processing, information processing, signal processing, and pattern recognition are all names that relate to the process of spatial frequency filtering in a coherent imaging system—specifically, a method in which the Fraunhofer diffraction pattern (equivalently the spatial frequency spectrum or the Fourier transform) of a given input is produced optically and then operated upon to change the information content of the optical image of that input in a predetermined way.

 The idea of using coherent optical systems to manipulate the information content of the image is not entirely new. The basic ideas are essentially included in Ernst Abbe's theory of vision in the microscope, first published in 1873; the subsequent illustrative experiments of this theory, notably by Albert B. Porter in 1906, are certainly simple examples of optical processing.

      Abbe's ideas can be interpreted as a realization that image formation in a microscope is more correctly described as a coherent image-forming process than as the more familiar incoherent process. Thus, the coherent light illuminating the object on the microscope stage would be diffracted by that object. To form an image, this diffracted light must be collected by the objective lens of the microscope, and the nature of the image and the resolution would be affected by how much of the diffracted light is collected. As an example, consider an object consisting of a periodic variation in amplitude transmittance—the light diffracted by this object will exist in a series of discrete directions (or orders of diffraction). This series of orders contains a zero order propagating along the optical axis and a symmetric set of orders on both sides of this zero order. Abbe correctly discerned what would happen as the microscope objective accepted different combinations of these orders. For example, if the zero order and one first order are collected, then the information obtained will be that the object consists of a periodic distribution, but the spatial location of the periodic structure is not correctly ascertained. If the other first order of diffracted light is included, the correct spatial location of the periodic structure is also obtained. As more orders are included, the image more closely resembles the object.

      Coherent optical data processing became a serious subject for study in the 1950s, partly because of the work of a French physicist, Pierre-Michel Duffieux, on the Fourier integral and its application to optics, and the subsequent use of communication theory in optical research. The work was initiated in France by André Maréchal and Paul Croce, and today a variety of problems can be attempted by the technique. These include removal of raster lines (as in a TV picture) and halftone dots (as in newspaper illustration); contrast enhancement; edge sharpening; enhancement of a periodic or isolated signal in the presence of additive noise; aberration balancing in which a recorded aberrated image can be somewhat improved; spectrum analysis; cross correlation of data; matched and inverse filtering in which a bright spot of light in the image indicates the presence of a particular object.

Filtering
 The basic system required for coherent optical processing consists of two lenses (Figure 9). A collimated beam of coherent light is used to transilluminate the object. The first lens produces the characteristic Fraunhofer diffraction pattern of the object, which is the spatial frequency distribution associated with the object. (Mathematically, it is the Fourier transform of the object amplitude distribution.) A filter that consists of amplitude (density) or phase (optical path) variations, or both, is placed in the plane of the diffraction pattern. The light passing through this filter is used to form an image, this step being accomplished by the second lens. The filter has the effect of changing the nature of the image by altering the spatial frequency spectrum in a controlled way so as to enhance certain aspects of the object information. Maréchal gave the descriptive title double diffraction to this type of two-lens system.

 The filters can be conveniently grouped into a variety of types depending upon their action. Blocking filters have regions of complete transparency and other regions of complete opacity. The opaque areas completely remove certain portions of the spatial frequency spectrum of the object. The removal of raster lines and halftone dots is accomplished with this type of filter. The object can be considered as a periodic function the envelope of which is the scene or picture—or equivalently the periodic function samples the picture. The diffraction pattern consists of a periodic distribution with a periodicity reciprocally related to the raster periodicity. Centred at each of these periodic locations is the diffraction pattern of the scene. Hence, if the filter is an aperture centred at one of these locations so that only one of the periodic elements is allowed to pass, then the raster periodicity is removed, but the scene information is retained (see Figure 9). The problem of the removal of halftone dots is the two-dimensional equivalent of the above process. Because the two-dimensional spatial frequency spectrum of an object is displayed in a coherent optical processing system, it is possible to separate out information by means of its orientation. Other applications of blocking filters include band-pass filters, which again have a direct relationship to the band-pass filters in electronic circuits.
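
      The raster-removal example can be imitated in one dimension with the Python sketch below. It is a schematic model rather than the original experiment: the scene, the raster frequency, and the width of the blocking aperture are all assumed for illustration.

    import numpy as np

    # One-dimensional model: scene, raster frequency, and aperture width are assumed.
    N = 4096
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    scene = 1.0 + 0.5*np.sin(2*np.pi*3*x)                 # slowly varying picture content
    raster = 0.5*(1.0 + np.cos(2*np.pi*200*x))            # raster of 200 lines
    obj = scene*raster                                    # amplitude leaving the input plane

    spectrum = np.fft.fftshift(np.fft.fft(obj))           # first lens: diffraction pattern
    freqs = np.fft.fftshift(np.fft.fftfreq(N, d=1.0/N))   # spatial frequencies in cycles
    passed = spectrum*(np.abs(freqs) < 100)               # aperture passing one island only
    image = np.fft.ifft(np.fft.ifftshift(passed))         # second lens: filtered image

    # the raster periodicity is removed while the scene information is retained (rescaled)
    residual = np.abs(image)/np.abs(image).mean() - scene/scene.mean()
    print(round(float(np.std(residual)), 6))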

      A second type of filter is an amplitude filter, which consists of a continuous density variation. These filters can be produced to achieve the enhancement of contrast of the object input or the differentiation of the object. They are often constructed by controlled exposure of photographic film or evaporation of metal onto a transparent substrate.

      Certain optical processing techniques require that the phase of the optical field be changed, and, hence, a filter with no absorption but varying optical thickness is required. Usually, both the amplitude and the phase have to be modified, however, thus requiring a complex filter. In simple cases the amplitude and phase portions can be made separately, the phase filter being manufactured by using an evaporated layer of transparent material, such as magnesium fluoride. Current practice is to fabricate the complex filter by an interferometric method in which the required complex amplitude function is recorded as a hologram (see below Holography (optics)).

 The phase-contrast microscope can be considered to be an example of an optical processing system, and the concepts can be understood by reference to Figure 9. Only the simplest form will be considered here. The spatial frequency spectrum of the phase object is formed and the phase of the central portion of that spectrum changed by π/2 or 3π/2 to produce positive or negative phase contrast, respectively. To improve the contrast of the image an additional filter covering the same area as the phase filter is used that is partially absorbing (i.e., an amplitude filter). The restriction on this process is that the variations of the phase ϕ(x) are small so that exp [iϕ(x)] ≅ 1 + iϕ(x). With incoherent light, phase information is not visible, but many biological samples consist only of variations of refractive index, which result in optical path, and hence phase, differences. The image in the phase-contrast microscope is such that the intensity in that image relates linearly to, and hence is a display of, the phase information in the object—e.g., I(x) ∝ 1 ± 2ϕ(x) for positive and negative phase contrast, respectively.
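
      The linear relation quoted above can be checked with a short Python sketch (an idealized model rather than a description of an actual instrument): a weak sinusoidal phase object is assumed, the zero order of its spectrum is shifted by π/2, and the resulting intensity is compared with 1 + 2ϕ(x).

    import numpy as np

    # Idealized model: a weak sinusoidal phase object, zero order shifted by pi/2.
    N = 4096
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    phi = 0.05*np.sin(2*np.pi*10*x)                  # small phase variations
    field = np.exp(1j*phi)                           # coherent field leaving the object

    spectrum = np.fft.fft(field)
    spectrum[0] *= np.exp(1j*np.pi/2)                # phase plate acting on the zero order
    image = np.abs(np.fft.ifft(spectrum))**2         # intensity in the image plane

    print(round(float(np.max(np.abs(image - (1 + 2*phi)))), 3))   # close to 1 + 2*phi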

      One of the important motivations for the study of optical processing methods is to achieve some correction of aberrated images. Considerable technological advantage can be gained if photographs taken with an aberrated optical system in incoherent light can be corrected by subsequent processing. Within definable limits this can be accomplished, but the impulse response or the transfer function of the aberrated system must be known. The recorded image intensity distribution is the convolution of the object intensity with the intensity impulse response of the aberrated system. This record is the input to the coherent optical processing system; the diffraction pattern formed in this system is the product of the spatial frequency spectrum of the object and the transfer function of the aberrated system. Conceptually, the filter has to be the inverse of the transfer function in order to balance out its effect. The final image would then ideally be an image of the object intensity distribution. A critical limitation, however, is that the transfer function is nonzero over only a limited frequency range, and only those frequencies that were recorded by the original aberrated system can be present in the processed image. Hence, for those spatial frequencies that were recorded, some processing can be carried out to get a flatter effective transfer function; both the contrast and the phase of the spatial frequency spectrum may have to be changed because the transfer function is, in general, a complex function. Prime examples are images aberrated by astigmatism, defocussing, or image motion.

Holography
      Holography is a two-step coherent image-forming process in which an intermediate record is made of the complex optical field associated with the object. The invention of the wave-front reconstruction process (now called holography) was first described in 1948 by Dennis Gabor (Gabor, Dennis), a Hungarian-born physicist, with a specific application in mind—to attempt to improve the resolution of images formed with electron beams. The technique has, however, had most of its success to date when laser beams are employed, particularly in the visible part of the spectrum. The first step in the process is to record (often on high-resolution film) the interference pattern produced by the interaction of the light diffracted by the object of interest and a coherent background or reference wave. In the second step, this record, which is the hologram, is illuminated coherently to form an image of the original object. In fact, two images are usually formed—a real image (often called the conjugate image) and a virtual image (often called the primary image).

      There are two basic concepts that underlie this process: first, the addition of a coherent background (or reference) beam. Two optical fields may be considered, the complex amplitudes of which vary as the cosine of an angle proportional to the space coordinate and as the modulus (absolute magnitude) of the cosine of the angle, respectively. From a measurement of the intensity of these fields it is impossible to distinguish them because both vary as the cosine squared of the space coordinate. If a second coherent optical field is added to each of these two fields, however, then the resultant fields become (1 + cos x) and (1 + |cos x|), respectively. The measured intensities are now different, and the actual fields can be determined by taking the square root of the intensity. The amplitude transmittance of a photographic record is, in fact, the square root of the original intensity distribution that exposed the film. In a more general sense, an optical field of the form a(x) exp [iϕ1(x)], in which a(x) is the amplitude and ϕ1(x) is the phase, can be distinguished from a field a(x) exp [iϕ2(x)] by adding a coherent background; the phases ϕ1(x) and ϕ2(x) are then contained as cosine variations of intensity in the resulting pattern. Hence, the problem of recording the phase information of the optical field is circumvented. When the hologram is illuminated, however, the optical field that originally existed in that plane is recreated.

      To apply the second basic concept—that of an image-forming property—it is necessary to determine what the hologram of a point object is—in actuality it is a sine-wave zone plate or zone lens. If a collimated beam of light is used to illuminate a zone lens, then two beams are produced; the first comes to a real focus, and the other is a divergent beam that appears to have come from a virtual focus. (By comparison, the more classical zone plate has a multitude of real and virtual focuses, and a real lens has but one.) When the object is other than a point, the zone lens is modified by the diffraction pattern of the object; i.e., each point on the object produces its own zone lens, and the resultant hologram is a summation of such zone lenses.
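
      The zone-lens character of the hologram of a point can be seen in a one-dimensional Python sketch (the wavelength, distance, and beam ratio are illustrative, and the diverging wave is treated in the paraxial, quadratic-phase approximation):

    import numpy as np

    # Paraxial (quadratic-phase) model of the object wave; wavelength, distance,
    # and beam ratio are illustrative.
    lam, z = 0.633e-6, 0.2                                 # metres
    x = np.linspace(-2e-3, 2e-3, 4001)                     # coordinate across the hologram
    object_wave = 0.3*np.exp(1j*np.pi*x**2/(lam*z))        # diverging wave from the point
    reference = 1.0                                        # collinear coherent background
    hologram = np.abs(reference + object_wave)**2          # recorded intensity

    # fringes vary as the cosine of a quadratic phase: a sine-wave zone lens
    print(round(float(hologram.min()), 2), round(float(hologram.max()), 2))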

  In Gabor's original system the hologram was a record of the interference between the light diffracted by the object and a collinear background. This automatically restricts the process to that class of objects that have considerable areas that are transparent (see Figure 10A). When the hologram is used to form an image, twin images are formed, as illustrated in Figure 10B. The light associated with these images is propagating in the same direction, and hence in the plane of one image light from the other image appears as an out-of-focus component. This type of hologram is usually referred to as an in-line Fresnel hologram because it is the Fresnel diffraction pattern of the object that interferes with the collinear coherent background. The deleterious effects of the second image can be minimized if the hologram is made in the far field of the object so that it is a Fraunhofer diffraction pattern of the object that is involved. This latter technique has found significant application in microscopy, particularly in the measurement of small particles, and in electron microscopy.

      A more versatile method of recording the hologram is to add a second beam of light as a reference wave to produce the hologram. The hologram is now the record of the interference pattern produced by the light diffracted by the object and this separate reference wave. The reference wave is usually introduced at an angle to the diffracted beam, hence this method is often called off-axis (or side-band) holography. When the hologram is illuminated, the image-forming beams do not propagate in the same direction but are inclined to each other with an angle twice that between the diffracted beam and the original reference beam. Hence, the light associated with an image is completely separated from the other image.

 A further technique that has some value and relates to the earlier discussion of optical processing is the production of the so-called generalized or Fourier transform hologram. Here the reference beam is added coherently to a Fraunhofer diffraction pattern of the object, either formed in the far field of the object or produced by a lens (as in the first stage of Figure 9).

      The process described so far has been in terms of transmitted light through the object. The methods involving the separate reference beam can be used in reflected light, and the virtual (primary) image produced from the hologram has all the properties of an ordinary image in terms of three-dimensionality and parallax. Normally, a recorded image is only a two-dimensional representation of the object. Full-colour holograms can be recorded by essentially recording three holograms simultaneously—one in red light, one in blue, and one in green.

Applications

Image-forming
      The applications mentioned here are in three groups: image-forming applications, non-image-forming applications, and the hologram as an optical element. It is notable that all three groups relate to the basic use of the process rather than specific holographic techniques. The first group involves those applications using image formation when, for a variety of reasons, normal incoherent or coherent image formation is not satisfactory. It is not sufficient merely to replace a normal image process by a holographic technique unless there is some significant gain—i.e., the required record can be obtained more easily or more accurately. Applications that fall into this category are holographic microscopy; particle-size analysis; high-speed photography of various types, particularly of gas flows; data storage and retrieval, including displays; image formation through a random medium; and non-optical holography, particularly acoustic holography.

Non-image-forming
      The second group of interest involves those applications that are not image-forming. One of the very real and exciting applications of holography is to the nondestructive testing of fabricated materials. An interesting example of this method is for the testing of tires for the detection of flaws (debonds) that exist between the plies of the tire. The realm of interferometry is thus extended to whole new classes of objects. In a similar but separate development, interference microscopy has been used successfully.

Optical elements
      The third and final group involves those applications that use the hologram as an optical element in its own right. This includes the building of accurate, specialized gratings and the application of holographic filters in coherent optical data processing.

      Holography has been adapted to the conventional microscope, which is modified by the inclusion of a separate reference beam so that the light diffracted by the object in the microscope is made to interfere with the light from the reference beam. An increase in the depth of field available is achieved by this type of recording process. The image is produced when the hologram is illuminated again by a coherent beam.

      The application of holography to particle-size analysis (e.g., to determine the size distribution of dust and liquid droplets) was really the first of the modern-day applications. In a sense, this, too, can be thought of as microscopy. The principles of Fraunhofer holography were developed to solve this particular problem. Because the particles are in motion, a hologram must be made instantaneously. A pulsed-ruby laser technique is therefore used. The hologram is formed between the light diffracted by the particles or droplets and the coherent background light that passes directly through the sample. In reconstruction, a series of stationary images are formed that can be examined at leisure. Hence, a transient event has been transformed into a stationary image for evaluation.

      Data storage and retrieval is perhaps one of the more important applications of holography, one that is still undergoing development and refinement. Because the information about the image is not localized in the hologram, it is not seriously affected by scratches or dust particles. Recent advances in materials, particularly those that might be erasable and reusable, have added further interest in holographic optical memories.

      Among the non-image-forming applications are interferometry, interference microscopy, and optical processing. Holographic interferometry can be done in several ways. The basic technique involves recording a hologram of the object of interest and then interfering the image produced from this hologram with the coherently illuminated object itself. A variation on this technique would be to form two holograms at different times of the same object as it undergoes testing. The two holograms can then be used together to form two images, which would again interfere. The interference fringes seen would be related to the changes in the object between the two exposures. A third technique uses a time-average hologram, which is particularly applicable to the study of vibrating objects.

      There are two applications that come under the heading holographic optical elements—the use of holographic gratings and the use of holographic filters for coherent optical data processing.

Nonlinear optics
      Nonlinear effects in optics are now quite readily observable using highly coherent and highly energetic laser beams. These effects occur when the output of a system is not linearly related to the input (e.g., a nonlinear electronic amplifier can be built with a gain that increases with signal intensity). The most important nonlinear effect is probably frequency doubling. Optical radiation of a given frequency is propagated through a crystalline material and interacts with that material to produce an output at twice the input frequency. For example, the 10,600-angstrom infrared output of a neodymium laser can, under suitable conditions, be converted into green light at 5,300 angstroms in a crystal of barium sodium niobate.

Brian J. Thompson

Additional Reading
There are many journals and hundreds of books covering the general field of optics; some of the more familiar books include Arthur C. Hardy and Fred H. Perrin, The Principles of Optics (1932); and Francis A. Jenkins and Harvey E. White, Fundamentals of Optics, 4th ed. (1976). At a more advanced level are R.S. Longhurst, Geometrical and Physical Optics, 3rd ed. (1973); L.C. Martin, Technical Optics, 2 vol. (1960–61); Max Born and Emil Wolf, Principles of Optics, 6th ed. (1980, reissued 1993); and Eugene Hecht and Alfred Zajac, Optics, 2nd ed. (1987). In the purely geometric field, the following can be recommended: W.T. Welford, Geometrical Optics (1962), which is fairly elementary and provides an excellent introduction to the subject; and L.C. Martin, Geometrical Optics (1955), which is somewhat more advanced. Other useful works include Applied Optics and Optical Engineering (irregular), with volumes treating optical devices, optical theory, and the principal types of optical instruments; and A.E. Conrady, Applied Optics and Optical Design, part 1 (1929, reissued 1992), and part 2, ed. and completed by Rudolf Kingslake (1960, reissued 1992), a full description of the theory and practice of lens design, providing detailed instructions for the design of many types of optical systems. Walter T. Welford, Useful Optics (1991), is a succinct review of principles basic to implementing optical tools.

Recommended books on the subject of information theory are Edward L. O'Neill, Introduction to Statistical Optics (1963, reissued 1992); Joseph W. Goodman, Introduction to Fourier Optics (1968, reissued 1988); Arnold Roy Shulman, Optical Data Processing (1970); Kendall Preston, Coherent Optical Computers (1972); H. Lipson (ed.), Optical Transforms (1972); Ajoy Ghatak and K. Thyagarajan, Optical Electronics (1989); George W. Stroke, An Introduction to Coherent Optics and Holography, 2nd ed. (1969); John B. DeVelis and George O. Reynolds, Theory and Applications of Holography (1967); Robert J. Collier, Christoph B. Burckhardt, and Lawrence H. Lin, Optical Holography (1971); and Robert Jones and Catherine Wykes, Holographic and Speckle Interferometry, 2nd ed. (1989). Topics of current research may be found in the periodicals Journal of the Optical Society of America (monthly); Applied Optics (semimonthly); and Progress in Optics (annual).

Rudolf Kingslake Brian J. Thompson Ed.
