physical science, principles of

Introduction

      Principles of physical science are the procedures and concepts employed by those who study the inorganic world.

      Physical science, like all the natural sciences, is concerned with describing and relating to one another those experiences of the surrounding world that are shared by different observers and whose description can be agreed upon. One of its principal fields, physics, deals with the most general properties of matter, such as the behaviour of bodies under the influence of forces, and with the origins of those forces. In the discussion of this question, the mass and shape of a body are the only properties that play a significant role, its composition often being irrelevant. Physics, however, does not focus solely on the gross mechanical behaviour of bodies but shares with chemistry the goal of understanding how the arrangement of individual atoms into molecules and larger assemblies confers particular properties. Moreover, the atom itself may be analyzed into its more basic constituents and their interactions.

      The present opinion, rather generally held by physicists, is that these fundamental particles and forces, treated quantitatively by the methods of quantum mechanics, can reveal in detail the behaviour of all material objects. This is not to say that everything can be deduced mathematically from a small number of fundamental principles, since the complexity of real things defeats the power of mathematics or of the largest computers. Nevertheless, whenever it has been found possible to calculate the relationship between an observed property of a body and its deeper structure, no evidence has ever emerged to suggest that the more complex objects, even living organisms, require that special new principles be invoked, at least so long as only matter, and not mind, is in question. The physical scientist thus has two very different roles to play: on the one hand, he has to reveal the most basic constituents and the laws that govern them; and, on the other, he must discover techniques for elucidating the peculiar features that arise from complexity of structure without having recourse each time to the fundamentals.

      This modern view of a unified science, embracing fundamental particles, everyday phenomena, and the vastness of the Cosmos, is a synthesis of originally independent disciplines, many of which grew out of useful arts. The extraction and refining of metals, the occult manipulations of alchemists, and the astrological interests of priests and politicians all played a part in initiating systematic studies that expanded in scope until their mutual relationships became clear, giving rise to what is customarily recognized as modern physical science.

      For a survey of the major fields of physical science and their development, see the articles physical science and Earth sciences.

The development of quantitative science
      Modern physical science is characteristically concerned with numbers—the measurement of quantities and the discovery of the exact relationship between different measurements. Yet this activity would be no more than the compiling of a catalog of facts unless an underlying recognition of uniformities and correlations enabled the investigator to choose what to measure out of an infinite range of choices available. Proverbs purporting to predict weather are relics of science prehistory and constitute evidence of a general belief that the weather is, to a certain degree, subject to rules of behaviour. Modern scientific weather forecasting attempts to refine these rules and relate them to more fundamental physical laws so that measurements of temperature, pressure, and wind velocity at a large number of stations can be assembled into a detailed model of the atmosphere whose subsequent evolution can be predicted—not by any means perfectly but almost always more reliably than was previously possible.

      Between proverbial weather lore and scientific meteorology lies a wealth of observations that have been classified and roughly systematized into the natural history of the subject—for example, prevailing winds at certain seasons, more or less predictable warm spells such as Indian summer, and correlation between Himalayan snowfall and intensity of monsoon. In every branch of science this preliminary search for regularities is an almost essential background to serious quantitative work, and in what follows it will be taken for granted as having been carried out.

      Compared to the caprices of weather, the movements of the stars and planets exhibit almost perfect regularity, and so the study of the heavens (celestial mechanics) became quantitative at a very early date, as evidenced by the oldest records from China and Babylon. Objective recording and analysis of these motions, when stripped of the astrological interpretations that may have motivated them, represent the beginning of scientific astronomy. The heliocentric planetary model (c. 1510) of the Polish astronomer Nicolaus Copernicus, which replaced the Ptolemaic geocentric model, and the precise description of the elliptical orbits of the planets (1609) by the German astronomer Johannes Kepler, based on the inspired interpretation of centuries of patient observation that had culminated in the work of Tycho Brahe of Denmark, may be regarded fairly as the first great achievements of modern quantitative science.

      A distinction may be drawn between an observational science like astronomy, where the phenomena studied lie entirely outside the control of the observer, and an experimental science such as mechanics or optics, where the investigator sets up the arrangement to his own taste. In the hands of Isaac Newton not only was the study of colours put on a rigorous basis but a firm link also was forged between the experimental science of mechanics and observational astronomy by virtue of his law of universal gravitation and his explanation of Kepler's laws of planetary motion. Before proceeding as far as this, however, attention must be paid to the mechanical studies of Galileo Galilei, the most important of the founding fathers of modern physics, insofar as the central procedure of his work involved the application of mathematical deduction to the results of measurement.

Examples of the scientific method
      It is nowadays taken for granted by scientists that every measurement is subject to error so that repetitions of apparently the same experiment give different results. In the intellectual climate of Galileo's time, however, when logical syllogisms that admitted no gray area between right and wrong were the accepted means of deducing conclusions, his novel procedures were far from compelling. In judging his work one must remember that the conventions now accepted in reporting scientific results were adopted long after Galileo's time. Thus if, as is said, he stated as a fact that two objects dropped from the leaning tower of Pisa reached the ground together with not so much as a hand's breadth between them, it need not be inferred that he performed the experiment himself or that, if he did, the result was quite so perfect. Some such experiment had indeed been performed a little earlier (1586) by the Flemish mathematician Simon Stevin, but Galileo idealized the result. A light ball and a heavy ball do not reach the ground together, nor is the difference between them always the same, for it is impossible to reproduce the ideal of dropping them exactly at the same instant. Nevertheless, Galileo was satisfied that it came closer to the truth to say that they fell together than that there was a significant difference between their rates. This idealization of imperfect experiments remains an essential scientific process, though nowadays it is considered proper to present (or at least have available for scrutiny) the primary observations, so that others may judge independently whether they are prepared to accept the author's conclusion as to what would have been observed in an ideally conducted experiment.

      The principles may be illustrated by repeating, with the advantage of modern instruments, an experiment such as Galileo himself performed—namely, that of measuring the time taken by a ball to roll different distances down a gently inclined channel. The following account is of a real experiment designed to show in a very simple example how the process of idealization proceeds, and how the preliminary conclusions may then be subjected to more searching test.

      Lines equally spaced at six centimetres (2.4 inches) were scribed on a brass channel, and the ball was held at rest beside the highest line by means of a card. An electronic timer was started at the instant the card was removed, and the timer was stopped as the ball passed one of the other lines. Seven repetitions of each timing showed that the measurements typically spread over a range of 1/20 of a second, presumably because of human limitations. In such a case, where a measurement is subject to random error, the average of many repetitions gives an improved estimate of what the result would be if the source of random error were eliminated; the factor by which the estimate is improved is roughly the square root of the number of measurements. Moreover, the theory of errors attributable to the German mathematician Carl Friedrich Gauss allows one to make a quantitative estimate of the reliability of the result, as expressed in the table by the conventional symbol ±. This does not mean that the first result in column 2 is guaranteed to lie between 0.671 and 0.685 but that, if this determination of the average of seven measurements were to be repeated many times, about two-thirds of the determinations would lie within these limits.
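
      The arithmetic of this averaging procedure can be illustrated by a short Python sketch. The seven timing values below are hypothetical, chosen only to be consistent with the spread described in the text, not taken from the original table; the point is to show how the mean and its estimated uncertainty (the standard error, which shrinks roughly as the square root of the number of repetitions) might be computed.

import math
import statistics

# Seven repeated timings in seconds (hypothetical values, for illustration only).
timings = [0.658, 0.705, 0.671, 0.698, 0.666, 0.685, 0.663]

mean = statistics.mean(timings)
sigma = statistics.stdev(timings)                 # spread of a single timing
standard_error = sigma / math.sqrt(len(timings))  # uncertainty of the mean

print(f"average = {mean:.3f} ± {standard_error:.3f} s")   # -> about 0.678 ± 0.007 s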

   The representation of measurements by a graph, as in Figure 1, was not available to Galileo but was developed shortly after his time as a consequence of the work of the French mathematician-philosopher René Descartes. The points appear to lie close to a parabola, and the curve that is drawn is defined by the equation x = 12t². The fit is not quite perfect, and it is worth trying to find a better formula. Since the operations of starting the timer when the card is removed to allow the ball to roll and stopping it as the ball passes a mark are different, there is a possibility that, in addition to random timing errors, a systematic error appears in each measured value of t; that is to say, each measurement t is perhaps to be interpreted as t + t0, where t0 is an as-yet-unknown constant timing error. If this is so, one might look to see whether the measured times were related to distance not by x = at², where a is a constant, but by x = a(t + t0)². This may also be tested graphically by first rewriting the equation as √x = √a(t + t0), which states that when the values of √x are plotted against measured values of t they should lie on a straight line. Figure 2 verifies this prediction rather closely; the line does not pass through the origin but rather cuts the horizontal axis at -0.09 second. From this, one deduces that t0 = 0.09 second and that √x/(t + 0.09) should be the same for all the pairs of measurements given in the accompanying table. The third column shows that this is certainly the case. Indeed, the constancy is better than might have been expected in view of the estimated errors. This must be regarded as a statistical accident; it does not imply any greater assurance in the correctness of the formula than if the figures in the last column had ranged, as they might very well have done, between 0.311 and 0.315. One would be surprised if a repetition of the whole experiment again yielded so nearly constant a result.
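
      The graphical argument of the preceding paragraph is easily reproduced numerically. The sketch below fits a straight line to √x against t with the least-squares routine of the numpy library (assumed to be available); the distance-time pairs are hypothetical stand-ins for the article's table, chosen only to be consistent with the constants quoted in the text (√a near 0.31 in SI units and t0 near 0.09 second). A fit of this kind yields both the constant a and the systematic timing error t0 at once.

import numpy as np

# Hypothetical measurements: distances x in metres, measured times t in seconds.
x = np.array([0.06, 0.12, 0.18, 0.24, 0.30, 0.36, 0.42])
t = np.array([0.69, 1.01, 1.27, 1.47, 1.66, 1.83, 1.98])

# If x = a*(t + t0)**2, then sqrt(x) = sqrt(a)*(t + t0): a straight line in t.
slope, intercept = np.polyfit(t, np.sqrt(x), 1)

sqrt_a = slope
t0 = intercept / slope          # the fitted line cuts the t-axis at t = -t0
print(f"sqrt(a) = {sqrt_a:.3f}  t0 = {t0:.3f} s  a = {sqrt_a**2:.4f} m/s^2")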

      A possible conclusion, then, is that for some reason—probably observational bias—the measured times underestimate by 0.09 second the real time t it takes a ball, starting from rest, to travel a distance x. If so, under ideal conditions x would be strictly proportional to t². Further experiments, in which the channel is set at different but still gentle slopes, suggest that the general rule takes the form x = at², with a proportional to the slope. This tentative idealization of the experimental measurements may need to be modified, or even discarded, in the light of further experiments. Now that it has been cast into mathematical form, however, it can be analyzed mathematically to reveal what consequences it implies. Also, this will suggest ways of testing it more searchingly.

 From a graph such as Figure 1, which shows how x depends on t, one may deduce the instantaneous speed of the ball at any instant. This is the slope of the tangent drawn to the curve at the chosen value of t; at t = 0.6 second, for example, the tangent as drawn describes how x would be related to t for a ball moving at a constant speed of about 14 centimetres per second. The lower slope before this instant and the higher slope afterward indicate that the ball is steadily accelerating. One could draw tangents at various values of t and come to the conclusion that the instantaneous speed was roughly proportional to the time that had elapsed since the ball began to roll. This procedure, with its inevitable inaccuracies, is rendered unnecessary by applying elementary calculus to the supposed formula. The instantaneous speed v is the derivative of x with respect to t; if x = at², it follows that v = dx/dt = 2at, so that the speed is strictly proportional to the elapsed time.

      The implication that the velocity is strictly proportional to elapsed time is that a graph of v against t would be a straight line through the origin. On any graph of these quantities, whether straight or not, the slope of the tangent at any point shows how velocity is changing with time at that instant; this is the instantaneous acceleration f. For a straight-line graph of v against t, the slope and therefore the acceleration are the same at all times. Expressed mathematically, f = dv/dt = d²x/dt²; in the present case, f takes the constant value 2a.
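
      For readers who wish to check the calculus symbolically, the following sketch (assuming the sympy library is available) differentiates the supposed formula twice, confirming that the speed grows in proportion to the elapsed time while the acceleration is the constant 2a.

import sympy as sp

t, a = sp.symbols('t a', positive=True)
x = a * t**2                 # the idealized distance-time formula

v = sp.diff(x, t)            # instantaneous speed
f = sp.diff(v, t)            # instantaneous acceleration

print(v)                     # -> 2*a*t   (proportional to elapsed time)
print(f)                     # -> 2*a     (constant)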

 The preliminary conclusion, then, is that a ball rolling down a straight slope experiences constant acceleration and that the magnitude of the acceleration is proportional to the slope. It is now possible to test the validity of the conclusion by finding what it predicts for a different experimental arrangement. If possible, an experiment is set up that allows more accurate measurements than those leading to the preliminary inference. Such a test is provided by a ball rolling in a curved channel so that its centre traces out a circular arc of radius r, as in Figure 3. Provided the arc is shallow, the slope at a distance x from its lowest point is very close to x/r, so that acceleration of the ball toward the lowest point is proportional to x/r. Introducing c to represent the constant of proportionality, this is written as the differential equation d²x/dt² = -cx/r.

 Here it is stated that, on a graph showing how x varies with t, the curvature d²x/dt² is proportional to x and has the opposite sign, as illustrated in Figure 4. As the graph crosses the axis, x and therefore the curvature are zero, and the line is locally straight. This graph represents the oscillations of the ball between extremes of ±A after it has been released from x = A at t = 0. The solution of the differential equation of which the diagram is the graphic representation is x = A cos ωt,

      where ω, called the angular frequency, is written for √(c/r). The ball takes time T = 2π/ω = 2π√(r/c) to return to its original position of rest, after which the oscillation is repeated indefinitely or until friction brings the ball to rest.
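
      That x = A cos ωt, with ω = √(c/r), really does satisfy the differential equation d²x/dt² = -cx/r may be verified symbolically with the same library as before; the residual printed below reduces to zero, and the period 2π/ω agrees with 2π√(r/c).

import sympy as sp

t, A, c, r = sp.symbols('t A c r', positive=True)
omega = sp.sqrt(c / r)                    # angular frequency
x = A * sp.cos(omega * t)                 # proposed solution, released from x = A at t = 0

residual = sp.simplify(sp.diff(x, t, 2) + (c / r) * x)
print(residual)                           # -> 0, so the equation is satisfied

period = 2 * sp.pi / omega
print(sp.simplify(period - 2 * sp.pi * sp.sqrt(r / c)))   # -> 0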

      According to this analysis, the period, T, is independent of the amplitude of the oscillation, and this rather unexpected prediction is one that may be stringently tested. Instead of letting the ball roll on a curved channel, the same path is more easily and exactly realized by making it the bob of a simple pendulum. To test that the period is independent of amplitude two pendulums may be made as nearly identical as possible, so that they keep in step when swinging with the same amplitude. They are then swung with different amplitudes. It requires considerable care to detect any difference in period unless one amplitude is large, when the period is slightly longer. An observation that very nearly agrees with prediction, but not quite, does not necessarily show the initial supposition to be mistaken. In this case, the differential equation that predicted exact constancy of period was itself an approximation. When it is reformulated with the true expression for the slope replacing x/r, the solution (which involves quite heavy mathematics) shows a variation of period with amplitude that has been rigorously verified. Far from being discredited, the tentative assumption has emerged with enhanced support.
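
      The slight lengthening of the period at large amplitude can also be seen numerically. The sketch below integrates the exact pendulum equation step by step (a simple Euler-Cromer scheme; the length, gravity, and step size are arbitrary choices) and compares the period for a small swing with that for a swing of one radian.

import math

def pendulum_period(theta0, g=9.81, length=1.0, dt=1e-5):
    """Period of a simple pendulum released from rest at angle theta0 (radians),
    found by integrating the exact equation of motion until the angle first
    crosses zero; that time is a quarter of the full period."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0.0:
        omega -= (g / length) * math.sin(theta) * dt   # Euler-Cromer step
        theta += omega * dt
        t += dt
    return 4.0 * t

small = pendulum_period(0.05)                       # close to 2*pi*sqrt(L/g)
large = pendulum_period(1.0)                        # measurably longer
print(small, large, 2 * math.pi * math.sqrt(1.0 / 9.81))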

      Galileo's law of acceleration, the physical basis of the expression 2π√(r/c) for the period, is further strengthened by finding that T varies directly as the square root of r, i.e., the length of the pendulum.

      In addition, such measurements allow the value of the constant c to be determined with a high degree of precision, and it is found to coincide with the acceleration g of a freely falling body. In fact, the formula for the period of small oscillations of a simple pendulum of length r, T = 2π√(r/g), is at the heart of some of the most precise methods for measuring g. This would not have happened unless the scientific community had accepted Galileo's description of the ideal behaviour and did not expect to be shaken in its belief by small deviations, so long as they could be understood as reflecting inevitable random discrepancies between the ideal and its experimental realization. The development of quantum mechanics in the first quarter of the 20th century was stimulated by the reluctant acceptance that this description systematically failed when applied to objects of atomic size. In this case, it was not a question, as with the variations of period, of translating the physical ideas into mathematics more precisely; the whole physical basis needed radical revision. Yet, the earlier ideas were not thrown out—they had been found to work well in far too many applications to be discarded. What emerged was a clearer understanding of the circumstances in which their absolute validity could safely be assumed.
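
      The way such a measurement of g proceeds is easily sketched: the formula T = 2π√(r/g) is inverted to give g = 4π²r/T². The length and period below are assumed values, not real laboratory data.

import math

r = 0.9940       # length of the pendulum in metres (assumed value)
T = 2.0007       # measured period in seconds (assumed value)

g = 4 * math.pi**2 * r / T**2     # from T = 2*pi*sqrt(r/g)
print(f"g = {g:.3f} m/s^2")       # -> about 9.80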

Criterion for scientific theory
      The experiments just described in detail as examples of scientific method were successful in that they agreed with expectation. They would have been just as successful if, in spite of being well conducted, they had disagreed because they would have revealed an error in the primary assumptions. The philosopher Karl Popper's widely accepted criterion for a scientific theory is that it must not simply pass such experimental tests as may be applied but that it must be formulated in such a way that falsification is in principle possible. For all its value as a test of scientific pretensions, however, it must not be supposed that the experimenter normally proceeds with Popper's criterion in mind. Normally he hopes to convince himself that his initial conception is correct. If a succession of tests agrees with (or fails to falsify) a hypothesis, it is regarded as reasonable to treat the hypothesis as true, at all events until it is discredited by a subsequent test. The scientist is not concerned with providing a guarantee of his conclusion, since, however many tests support it, there remains the possibility that the next one will not. His concern is to convince himself and his critical colleagues that a hypothesis has passed enough tests to make it worth accepting until a better one presents itself.

The Newtonian paradigm
      Up to this point the investigation has been concerned exclusively with kinematics—that is to say, providing an accurate mathematical description of motion, in this case of a ball on an inclined plane, with no implied explanation of the physical processes responsible. Newton's general dynamic theory, as expounded in his Philosophiae Naturalis Principia Mathematica of 1687, laid down in the form of his laws of motion, together with other axioms and postulates, the rules to follow in analyzing the motion of bodies interacting among themselves. This theory of classical mechanics is described in detail in the article mechanics, but some general comments may be offered here. For the present purpose, it seems sufficient to consider only bodies moving along a straight line and acted upon by forces parallel to the motion. Newton's laws are, in fact, considerably more general than this and encompass motion in curves as a result of forces deflecting a body from its initial direction.

Laws of motion
      Newton's first law may more properly be ascribed to Galileo. It states that a body continues at rest or in uniform motion along a straight line unless it is acted upon by a force, and it enables one to recognize when a force is acting. A tennis ball struck by a racket experiences a sudden change in its motion attributable to a force exerted by the racket. The player feels the shock of the impact. According to Newton's third law (action and reaction are equal and opposite), the force that the ball exerts on the racket is equal and opposite to that which the racket exerts on the ball. Moreover, a second balanced action and reaction acts between player and racket.

      Newton's second law quantifies the concept of force, as well as that of inertia. A body acted upon by a steady force suffers constant acceleration. Thus a freely falling body or a ball rolling down a plane has constant acceleration, as has been seen, and this is to be interpreted in Newton's terms as evidence that the force of gravity, which causes the acceleration, is not changed by the body's motion (motion, equation of). The same force (e.g., applied by a string which includes a spring balance to check that the force is the same in different experiments) applied to different bodies causes different accelerations; and it is found that, if a chosen strength of force causes twice the acceleration in body A as it does in body B, then a different force also causes twice as much acceleration in A as in B. The ratio of accelerations is independent of the force and is therefore a property of the bodies alone. They are said to have inertia (or inertial mass) in inverse proportion to the accelerations. This experimental fact, which is the essence of Newton's second law, enables one to assign a number to every body that is a measure of its mass. Thus a certain body may be chosen as a standard of mass and assigned the number 1. Another body is said to have mass m if the body shows only a fraction 1/m of the acceleration of this standard when the two are subjected to the same force. By proceeding in this way, every body may be assigned a mass. It is because experiment allows this definition to be made that a given force causes every body to show acceleration f such that mf is the same for all bodies. This means that the product mf is determined only by the force and not by the particular body on which it acts, and mf is defined to be the numerical measure of the force. In this way a consistent set of measures of force and mass is arrived at, having the property that F = mf. In this equation F, m, and f are to be interpreted as numbers measuring the strength of the force, the magnitude of the mass, and the rate of acceleration; and the product of the numbers m and f is always equal to the number F. The product mv, called motus (motion) by Newton, is now termed momentum. Newton's second law states that the rate of change of momentum equals the strength of the applied force.

      In order to assign a numerical measure m to the mass of a body, a standard of mass must be chosen and assigned the value m = 1. Similarly, to measure displacement a unit of length is needed, and for velocity and acceleration a unit of time also must be defined. Given these, the numerical measure of a force follows from mf without need to define a unit of force. Thus, in the Système Internationale d'Unités (International System of Units) (SI), in which the units are the standard kilogram, the standard metre, and the standard second, a force of magnitude unity is one that, applied to a mass of one kilogram, causes its velocity to increase steadily by one metre per second during every second the force is acting.
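
      The operational content of these definitions can be put in a few lines of Python. The numbers are purely illustrative: a force is applied first to the standard kilogram and then to an unknown body, the two accelerations fix the unknown mass, and the product mf then measures the force in newtons.

f_standard = 3.0     # m/s^2, acceleration of the 1 kg standard under the applied force
f_body = 1.5         # m/s^2, acceleration of the unknown body under the same force

m = f_standard / f_body      # the body shows only 1/m of the standard's acceleration
F = m * f_body               # mf, the numerical measure of the force (newtons)

print(m, F)                  # -> 2.0 (kilograms), 3.0 (newtons)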

Law of gravitation
      The idealized observation of Galileo that all bodies in free-fall accelerate equally implies that the gravitational force causing acceleration bears a constant relation to the inertial mass. According to Newton's postulated law of gravitation, two bodies of mass m1 and m2, separated by a distance r, exert equal attractive forces on each other (the equal action and reaction of the third law of motion) of magnitude proportional to m1m2/r². The constant of proportionality, G, in the gravitational law, F = Gm1m2/r², is thus to be regarded as a universal constant, applying to all bodies, whatever their constitution. The constancy of gravitational acceleration, g, at a given point on the Earth is a particular case of this general law.
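
      As a numerical illustration (with rounded textbook values for G, the Earth's mass, and its radius), the law reproduces the familiar gravitational acceleration at the Earth's surface, independent of the mass of the falling body.

G = 6.674e-11        # m^3 kg^-1 s^-2, the universal gravitational constant
M_earth = 5.972e24   # kg, mass of the Earth (approximate)
R_earth = 6.371e6    # m, mean radius of the Earth (approximate)

m = 1.0                                  # kg, any test body
F = G * M_earth * m / R_earth**2         # attractive force at the surface
g = F / m                                # acceleration, the same for every m

print(f"g = {g:.2f} m/s^2")              # -> about 9.82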

Application of Newton's laws
      In the same way that the timing of a pendulum provided a more rigorous test of Galileo's kinematical theory than could be achieved by direct testing with balls rolling down planes, so with Newton's laws the most searching tests are indirect and based on mathematically derived consequences. Kepler's laws of planetary motion are just such an example, and in the two centuries after Newton's Principia the laws were applied to elaborate and arduous computations of the motion of all planets, not simply as isolated bodies attracted by the Sun but as a system in which every one perturbs the motion of the others by mutual gravitational interactions. (The work of the French mathematician and astronomer Pierre-Simon, Marquis de Laplace, was especially noteworthy.) Calculations of this kind have made it possible to predict the occurrence of eclipses many years ahead. Indeed, the history of past eclipses may be written with extraordinary precision so that, for instance, Thucydides' account of the lunar eclipse that fatally delayed the Athenian expedition against Syracuse in 413 BC matches the calculations perfectly (see eclipse). Similarly, unexplained small departures from theoretical expectation of the motion of Uranus led John Couch Adams of England and Urbain-Jean-Joseph Le Verrier of France to predict in 1845 that a new planet (Neptune) would be seen at a particular point in the heavens. The discovery of Pluto in 1930 was achieved in much the same way.

      There is no obvious reason why the inertial mass m that governs the response of a body to an applied force should also determine the gravitational force between two bodies, as described above. Consequently, the period of a pendulum is independent of its material and governed only by its length and the local value of g; this has been verified with an accuracy of a few parts per million. Still more sensitive tests, as originally devised by the Hungarian physicist Roland, Baron von Eötvös (1890), and repeated several times since, have demonstrated clearly that the accelerations of different bodies in a given gravitational environment are identical within a few parts in 1012. An astronaut in free orbit can remain poised motionless in the centre of the cabin of his spacecraft, surrounded by differently constituted objects, all equally motionless (except for their extremely weak mutual attractions) because all of them are identically affected by the gravitational field in which they are moving. He is unaware of the gravitational force, just as those on the Earth are unaware of the Sun's attraction, moving as they do with the Earth in free orbit around the Sun. Albert Einstein made this experimental finding a central feature of his general theory of relativity (see relativity).

Ensuing developments and their ramifications
      Newton believed that everything moved in relation to a fixed but undetectable spatial frame so that it could be said to have an absolute velocity. Time also flowed at the same steady pace everywhere. Even if there were no matter in the universe, the frame of the universe would still exist, and time would still flow even though there was no one to observe its passage. In Newton's view, when matter is present it is unaffected by its motion through space. If the length of a moving metre stick were compared with the length of one at rest, they would be found to be the same. Clocks keep universal time whether they are moving or not; therefore, two identical clocks, initially synchronized, would still be synchronized after one had been carried into space and brought back. The laws of motion take such a form that they are not changed by uniform motion. They were devised to describe accurately the response of bodies to forces whether in the heavens or on the Earth, and they lose no validity as a result of the Earth's motion at 30 kilometres per second in its orbit around the Sun. This motion, in fact, would not be discernible by an observer in a closed box. The supposed invariance of the laws of motion, and of standards of measurement, under uniform translation was called “Galilean invariance” by Einstein.

      The impossibility of discerning absolute velocity led in Newton's time to critical doubts concerning the necessity of postulating an absolute frame of space and universal time, and the doubts of the philosophers George Berkeley and Gottfried Wilhelm Leibniz, among others, were still more forcibly presented in the severe analysis of the foundations of classical mechanics by the Austrian physicist Ernst Mach in 1883. James Clerk Maxwell's theory of electromagnetic phenomena (1865), including his description of light as electromagnetic waves, brought the problem to a state of crisis. It became clear that if light waves were propagated in the hypothetical ether that filled all space and provided an embodiment of Newton's absolute frame (see below), it would not be logically consistent to accept both Maxwell's theory and the ideas expressed in Galilean invariance, for the speed of light as it passed an observer would reveal how rapidly he was traveling through the ether.

      Ingenious attempts by the physicists George FitzGerald of Ireland and Hendrik A. Lorentz of The Netherlands to devise a compromise to salvage the notion of ether were eventually superseded by Einstein's special theory of relativity (see relativity). Einstein proposed in 1905 that all laws of physics, not solely those of mechanics, must take the same form for observers moving uniformly relative to one another, however rapidly. In particular, if two observers, using identical metre sticks and clocks, set out to measure the speed of a light signal as it passes them, both would obtain the same value no matter what their relative velocity might be; in a Newtonian world, of course, the measured values would differ by the relative velocity of the two observers. This is but one example of the counterintuitive character of relativistic physics, but the deduced consequences of Einstein's postulate have been so frequently and so accurately verified by experiment that it has been incorporated as a fundamental axiom in physical theory.

      With the abandonment of the ether hypothesis, there has been a reversion to a philosophical standpoint reluctantly espoused by Newton. To him and to his contemporaries the idea that two bodies could exert gravitational forces on each other across immense distances of empty space was abhorrent. However, attempts to develop Descartes's notion of a space-filling fluid ether as a transmitting medium for forces invariably failed to account for the inverse square law. Newton himself adopted a pragmatic approach, deducing the consequences of his laws and showing how well they agreed with observation; he was by no means satisfied that a mechanical explanation was impossible, but he confessed in the celebrated remark, “Hypotheses non fingo” (Latin: “I frame no hypotheses”), that he had no solution to offer.

      A similar reversion to the safety of mathematical description is represented by the rejection, during the early 1900s, of the explanatory ether models of the 19th century and their replacement by model-free analysis in terms of relativity theory. This certainly does not imply giving up the use of models as imaginative aids in extending theories, predicting new effects, or devising interesting experiments; if nothing better is available, however, a mathematical formulation that yields verifiably correct results is to be preferred over an intuitively acceptable model that does not.

Interplay of experiment and theory (scientific theory)
      The foregoing discussion should have made clear that progress in physics, as in the other sciences, arises from a close interplay of experiment and theory. In a well-established field like classical mechanics, it may appear that experiment is almost unnecessary and all that is needed is the mathematical or computational skill to discover the solutions of the equations of motion. This view, however, overlooks the role of observation or experiment in setting up the problem in the first place. To discover the conditions under which a bicycle is stable in an upright position or can be made to turn a corner, it is first necessary to invent and observe a bicycle. The equations of motion are so general and serve as the basis for describing so extended a range of phenomena that the mathematician must usually look at the behaviour of real objects in order to select those that are both interesting and soluble. His analysis may indeed suggest the existence of interesting related effects that can be examined in the laboratory; thus, the invention or discovery of new things may be initiated by the experimenter or the theoretician. To employ terms such as this has led, especially in the 20th century, to a common assumption that experimentation and theorizing are distinct activities, rarely performed by the same person. It is true that almost all active physicists pursue their vocation primarily in one mode or the other. Nevertheless, the innovative experimenter can hardly make progress without an informed appreciation of the theoretical structure, even if he is not technically competent to find the solution of particular mathematical problems. By the same token, the innovative theorist must be deeply imbued with the way real objects behave, even if he is not technically competent to put together the apparatus to examine the problem. The fundamental unity of physical science should be borne in mind during the following outline of characteristic examples of experimental and theoretical physics.

Characteristic experimental procedures
Unexpected observation
      The discovery of X rays (1895) by Wilhelm Conrad Röntgen of Germany was certainly serendipitous. It began with his noticing that when an electric current was passed through a discharge tube a nearby fluorescent screen lit up, even though the tube was completely wrapped in black paper.

      Ernest Marsden, a student engaged on a project, reported to his professor, Ernest Rutherford (then at the University of Manchester in England), that alpha particles from a radioactive source were occasionally deflected more than 90° when they hit a thin metal foil. Astonished at this observation, Rutherford deliberated on the experimental data to formulate his nuclear model of the atom (1911).

      Heike Kamerlingh Onnes of The Netherlands, the first to liquefy helium, cooled a thread of mercury to within 4 K of absolute zero (4 K equals -269° C) to test his belief that electrical resistance would tend to vanish at zero. This was what the first experiment seemed to verify, but a more careful repetition showed that instead of falling gradually, as he expected, all trace of resistance disappeared abruptly just above 4 K. This phenomenon of superconductivity, which Kamerlingh Onnes discovered in 1911, defied theoretical explanation until 1957.

The not-so-unexpected chance
      From 1807 the Danish physicist and chemist Hans Christian Ørsted came to believe that electrical phenomena could influence magnets, but it was not until 1819 that he turned his investigations to the effects produced by an electric current. On the basis of his tentative models he tried on several occasions to see if a current in a wire caused a magnet needle to turn when it was placed transverse to the wire, but without success. Only when it occurred to him, without forethought, to arrange the needle parallel to the wire did the long-sought effect appear.

      A second example of this type of experimental situation involves the discovery of electromagnetic induction by the English physicist and chemist Michael Faraday. Aware that an electrically charged body induces a charge in a nearby body, Faraday sought to determine whether a steady current in a coil of wire would induce such a current in another short-circuited coil close to it. He found no effect except in instances where the current in the first coil was switched on or off, at which time a momentary current appeared in the other. He was in effect led to the concept of electromagnetic induction by changing magnetic fields.

Qualitative tests to distinguish alternative theories
      At the time that Augustin-Jean Fresnel presented his wave theory of light to the French Academy (1815), the leading physicists were adherents of Newton's corpuscular theory. It was pointed out by Siméon-Denis Poisson, as a fatal objection, that Fresnel's theory predicted a bright spot at the very centre of the shadow cast by a circular obstacle. When this was in fact observed by François Arago, Fresnel's theory was immediately accepted.

      Another qualitative difference between the wave and corpuscular theories concerned the speed of light in a transparent medium. To explain the bending of light rays toward the normal to the surface when light entered the medium, the corpuscular theory demanded that light go faster while the wave theory required that it go slower. Jean-Bernard-Léon Foucault showed that the latter was correct (1850).

      The three categories of experiments or observations discussed above are those that do not demand high-precision measurement. The following, however, are categories in which measurement at varying degrees of precision is involved.

Direct comparison of theory and experiment
      This is one of the commonest experimental situations. Typically, a theoretical model makes certain specific predictions, perhaps novel in character, perhaps novel only in differing from the predictions of competing theories. There is no fixed standard by which the precision of measurement may be judged adequate. As is usual in science, the essential question is whether the conclusion carries conviction, and this is conditioned by the strength of opinion regarding alternative conclusions.

      Where strong prejudice obtains, opponents of a heterodox conclusion may delay acceptance indefinitely by insisting on a degree of scrupulosity in experimental procedure that they would unhesitatingly dispense with in other circumstances. For example, few experiments in paranormal phenomena, such as clairvoyance, which have given positive results under apparently stringent conditions, have made converts among scientists. In the strictly physical domain, the search for ether drift provides an interesting study. At the height of acceptance of the hypothesis that light waves are carried by a pervasive ether, the question of whether the motion of the Earth through space dragged the ether with it was tested (1887) by A.A. Michelson and Edward W. Morley (Michelson-Morley experiment) of the United States by looking for variations in the velocity of light as it traveled in different directions in the laboratory. Their conclusion was that there was a small variation, considerably less than the Earth's velocity in its orbit around the Sun, and that the ether was therefore substantially entrained in the Earth's motion. According to Einstein's relativity theory (1905), no variation should have been observed, but during the next 20 years another American investigator, Dayton C. Miller, repeated the experiment many times in different situations and concluded that, at least on a mountaintop, there was a real “ether wind” of about 10 kilometres per second. Although Miller's final presentation was a model of clear exposition, with evidence scrupulously displayed and discussed, it has been set aside and virtually forgotten. This is partly because other experiments failed to show the effect; however, their conditions were not strictly comparable, since few, if any, were conducted on mountaintops. More significantly, other tests of relativity theory supported it in so many different ways as to lead to the consensus that one discrepant set of observations cannot be allowed to weigh against the theory.

      At the opposite extreme may be cited the 1919 expedition of the English scientist-mathematician Arthur Stanley Eddington (Eddington, Sir Arthur Stanley) to measure the very small deflection of the light from a star as it passed close to the Sun—a measurement that requires a total eclipse. The theories involved here were Einstein's general theory of relativity and the Newtonian particle theory of light, which predicted only half the relativistic effect. The conclusion of this exceedingly difficult measurement—that Einstein's theory was followed within the experimental limits of error, which amounted to ±30 percent—was the signal for worldwide feting of Einstein. If his theory had not appealed aesthetically to those able to appreciate it and if there had been any passionate adherents to the Newtonian view, the scope for error could well have been made the excuse for a long drawn-out struggle, especially since several repetitions at subsequent eclipses did little to improve the accuracy. In this case, then, the desire to believe was easily satisfied. It is gratifying to note that recent advances in radio astronomy have allowed much greater accuracy to be achieved, and Einstein's prediction is now verified within about 1 percent.

      During the decade after his expedition, Eddington developed an extremely abstruse fundamental theory that led him to assert that the quantity hc/2πe² (h is Planck's constant, c the velocity of light, and e the charge on the electron) must take the value 137 exactly. At the time, uncertainties in the values of h and e allowed its measured value to be given as 137.29 ± 0.11; in accordance with the theory of errors, this implies that there was estimated to be about a 1 percent chance that a perfectly precise measurement would give 137. In the light of Eddington's great authority there were many prepared to accede to his belief. Since then the measured value of this quantity has come much closer to Eddington's prediction and is given as 137.03604 ± 0.00011. The discrepancy, though small, is 330 times the estimated error, compared with 2.6 times for the earlier measurement, and therefore a much more weighty indication against Eddington's theory. As the intervening years have cast no light on the virtual impenetrability of his argument, there is now hardly a physicist who takes it seriously.
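
      The comparison made above is simple arithmetic: the discrepancy from 137 is divided by the quoted uncertainty in each case.

early = (137.29 - 137) / 0.11            # about 2.6 times the estimated error
modern = (137.03604 - 137) / 0.00011     # roughly 330 times the estimated error
print(early, modern)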

Compilation of data
      Technical design, whether of laboratory instruments or for industry and commerce, depends on knowledge of the properties of materials (density, strength, electrical conductivity, etc.), some of which can only be found by very elaborate experiments (e.g., those dealing with the masses and excited states of atomic nuclei). One of the important functions of standards laboratories is to improve and extend the vast body of factual information, but much also arises incidentally rather than as the prime objective of an investigation or may be accumulated in the hope of discovering regularities or to test the theory of a phenomenon against a variety of occurrences.

      When chemical compounds are heated in a flame, the resulting colour can be used to diagnose the presence of sodium (orange), copper (green-blue), and many other elements. This procedure has long been used. Spectroscopic examination shows that every element has its characteristic set of spectral lines, and the discovery by the Swiss mathematician Johann Jakob Balmer of a simple arithmetic formula relating the wavelengths of lines in the hydrogen spectrum (1885) proved to be the start of intense activity in precise wavelength measurements of all known elements and the search for general principles. With the Danish physicist Niels Bohr's quantum theory of the hydrogen atom (1913) began an understanding of the basis of Balmer's formula; thenceforward spectroscopic evidence underpinned successive developments toward what is now a successful theory of atomic structure.

Tests of fundamental concepts
       Coulomb's law states that the force between two electric charges varies as the inverse square of their separation. Direct tests, such as those performed with a special torsion balance by the French physicist Charles-Augustin de Coulomb, for whom the law is named, can be at best approximate. A very sensitive indirect test, devised by the English scientist and clergyman Joseph Priestley (following an observation by Benjamin Franklin) but first realized by the English physicist and chemist Henry Cavendish (1771), relies on the mathematical demonstration that no electrical changes occurring outside a closed metal shell—as, for example, by connecting it to a high voltage source—produce any effect inside if the inverse square law holds. Since modern amplifiers can detect minute voltage changes, this test can be made very sensitive. It is typical of the class of null measurements in which only the theoretically expected behaviour leads to no response and any hypothetical departure from theory gives rise to a response of calculated magnitude. It has been shown in this way that if the force between charges a distance r apart is proportional not to 1/r² but to 1/r raised to the power 2 + x, then x is less than 2 × 10⁻⁹.
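
      The idea behind such a null test can be imitated numerically. The rough sketch below (not the actual Cavendish arrangement) computes the radial force inside a uniformly charged spherical shell by summing contributions from rings of the shell, assuming the force between charges a distance s apart varies as 1/s raised to the power 2 + x: for x = 0 the result is essentially zero, while even a tiny x leaves a calculable residue.

import math

def field_inside_shell(d, x, n=100000):
    """Radial force on a unit charge at distance d (< 1) from the centre of a
    uniformly charged shell of radius 1, if the force law is 1/s**(2 + x).
    Crude midpoint integration over rings of the shell; arbitrary units."""
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        s = math.sqrt(1.0 + d * d - 2.0 * d * math.cos(theta))   # distance to the ring
        total += math.sin(theta) * (d - math.cos(theta)) / s ** (3 + x) * dtheta
    return 0.5 * total

print(field_inside_shell(0.5, 0.0))      # essentially zero: exact inverse square
print(field_inside_shell(0.5, 0.01))     # small but non-zero departure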

      According to the relativistic theory of the hydrogen atom proposed by the English physicist P.A.M. Dirac (1928), there should be two different excited states exactly coinciding in energy. Measurements of spectral lines resulting from transitions in which these states were involved hinted at minute discrepancies, however. Some years later (c. 1950) Willis E. Lamb, Jr., and Robert C. Retherford of the United States, employing the novel microwave techniques that wartime radar contributed to peacetime research, were able not only to detect the energy difference between the two levels directly but to measure it rather precisely as well. The difference in energy, compared to the energy above the ground state, amounts to only four parts in 10,000,000, but this was one of the crucial pieces of evidence that led to the development of quantum electrodynamics, a central feature of the modern theory of fundamental particles (see subatomic particle: Quantum electrodynamics).

Characteristic theoretical procedures
      Only at rare intervals in the development of a subject, and then only with the involvement of a few, are theoretical physicists engaged in introducing radically new concepts. The normal practice is to apply established principles to new problems so as to extend the range of phenomena that can be understood in some detail in terms of accepted fundamental ideas. Even when, as with the quantum mechanics of Werner Heisenberg (Heisenberg, Werner) (formulated in terms of matrices; 1925) and of Erwin Schrödinger (Schrödinger, Erwin) (developed on the basis of wave functions; 1926), a major revolution is initiated, most of the accompanying theoretical activity involves investigating the consequences of the new hypothesis as if it were fully established in order to discover critical tests against experimental facts. There is little to be gained by attempting to classify the process of revolutionary thought because every case history throws up a different pattern. What follows is a description of typical procedures as normally used in theoretical physics. As in the preceding section, it will be taken for granted that the essential preliminary of coming to grips with the nature of the problem in general descriptive terms has been accomplished, so that the stage is set for systematic, usually mathematical, analysis.

Direct solution of fundamental equations
      Insofar as the Sun (solar system) and planets, with their attendant satellites, can be treated as concentrated masses moving under their mutual gravitational influences, they form a system that has not so overwhelmingly many separate units as to rule out step-by-step calculation of the motion of each. Modern high-speed computers are admirably adapted to this task and are used in this way to plan space missions and to decide on fine adjustments during flight. Most physical systems of interest, however, are either composed of too many units or are governed not by the rules of classical mechanics but rather by quantum mechanics, which is much less suited for direct computation.

   The mechanical behaviour of a body is analyzed in terms of Newton's laws of motion by imagining it dissected into a number of parts, each of which is directly amenable to the application of the laws or has been separately analyzed by further dissection so that the rules governing its overall behaviour are known. A very simple illustration of the method is given by the arrangement in Figure 5A, where two masses are joined by a light string passing over a pulley. The heavier mass, m1, falls with constant acceleration, but what is the magnitude of the acceleration? If the string were cut, each mass would experience the force, m1g or m2g, due to its gravitational attraction and would fall with acceleration g. The fact that the string prevents this is taken into account by assuming that it is in tension and also acts on each mass. When the string is cut just above m2, the state of accelerated motion just before the cut can be restored by applying equal and opposite forces (in accordance with Newton's third law) to the cut ends, as in Figure 5B; the string above the cut pulls the string below upward with a force T, while the string below pulls that above downward to the same extent. As yet, the value of T is not known. Now if the string is light, the tension T is sensibly the same everywhere along it, as may be seen by imagining a second cut, higher up, to leave a length of string acted upon by T at the bottom and possibly a different force T′ at the second cut. The total force T - T′ on the string must be very small if the cut piece is not to accelerate violently, and, if the mass of the string is neglected altogether, T and T′ must be equal. This does not apply to the tension on the two sides of the pulley, for some resultant force will be needed to give it the correct accelerative motion as the masses move. This is a case for separate examination, by further dissection, of the forces needed to cause rotational acceleration. To simplify the problem one can assume the pulley to be so light that the difference in tension on the two sides is negligible. Then the problem has been reduced to two elementary parts—on the right the upward force on m2 is T - m2g, so that its acceleration upward is T/m2 - g; and on the left the downward force on m1 is m1g - T, so that its acceleration downward is g - T/m1. If the string cannot be extended, these two accelerations must be identical, from which it follows that T = 2m1m2g/(m1 + m2) and the acceleration of each mass is g(m1 - m2)/(m1 + m2). Thus if one mass is twice the other (m1 = 2m2), its acceleration downward is g/3.
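
      The result of this dissection is compact enough to check directly. The short function below simply evaluates the two expressions derived above for the tension and the acceleration; with m1 = 2 kg and m2 = 1 kg it reproduces the stated acceleration of g/3.

def atwood(m1, m2, g=9.81):
    """Tension in the string and downward acceleration of m1 for two masses
    joined by a light string over a light, frictionless pulley."""
    T = 2 * m1 * m2 * g / (m1 + m2)
    a = g * (m1 - m2) / (m1 + m2)
    return T, a

T, a = atwood(2.0, 1.0)
print(T, a, 9.81 / 3)        # the acceleration equals g/3 when m1 = 2*m2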

      A liquid may be imagined divided into small volume elements, each of which moves in response to gravity and the forces imposed by its neighbours (pressure and viscous drag). The forces are constrained by the requirement that the elements remain in contact, even though their shapes and relative positions may change with the flow. From such considerations are derived the differential equations that describe fluid motion (see fluid mechanics).

      The dissection of a system into many simple units in order to describe the behaviour of a complex structure in terms of the laws governing the elementary components is sometimes referred to, often with a pejorative implication, as reductionism. Insofar as it may encourage concentration on those properties of the structure that can be explained as the sum of elementary processes to the detriment of properties that arise only from the operation of the complete structure, the criticism must be considered seriously. The physical scientist is, however, well aware of the existence of the problem (see below Simplicity and complexity). If he is usually unrepentant about his reductionist stance, it is because this analytical procedure is the only systematic procedure he knows, and it is one that has yielded virtually the whole harvest of scientific inquiry. What is set up as a contrast to reductionism by its critics is commonly called the holistic approach, whose title confers a semblance of high-mindedness while hiding the poverty of tangible results it has produced.

Simplified models
      The process of dissection was early taken to its limit in the kinetic theory of gases, which in its modern form essentially started with the suggestion of the Swiss mathematician Daniel Bernoulli (in 1738) that the pressure exerted by a gas on the walls of its container is the sum of innumerable collisions by individual molecules, all moving independently of each other. Boyle's law—that the pressure exerted by a given gas is proportional to its density if the temperature is kept constant as the gas is compressed or expanded—follows immediately from Bernoulli's assumption that the mean speed of the molecules is determined by temperature alone. Departures from Boyle's law require for their explanation the assumption of forces between the molecules. It is very difficult to calculate the magnitude of these forces from first principles, but reasonable guesses about their form led Maxwell (1860) and later workers to explain in some detail the variation with temperature of thermal conductivity and viscosity, while the Dutch physicist Johannes Diederik van der Waals (1873) gave the first theoretical account of the condensation to liquid and the critical temperature above which condensation does not occur.
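
      Boyle's law follows from the Bernoulli picture in one line: for non-interacting molecules the pressure P of N molecules, each of mass m and with mean-square speed v², confined in a volume V is P = Nmv²/3V, so that the product PV is fixed by the mean-square speed alone, that is, by the temperature. The sketch below uses rough, illustrative numbers for a gas of nitrogen-like molecules.

N = 2.7e25           # number of molecules in the sample (illustrative)
m = 4.7e-26          # kg, mass of one molecule (roughly a nitrogen molecule)
v2_mean = 2.6e5      # m^2/s^2, mean-square speed, fixed by the temperature

for V in (1.0, 0.5, 0.25):                   # isothermal compression
    P = N * m * v2_mean / (3.0 * V)          # pressure from molecular impacts
    print(f"V = {V:.2f} m^3   P = {P:.3e} Pa   P*V = {P * V:.3e}")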

      The first quantum mechanical treatment of electrical conduction in metals (free-electron model of metals) was provided in 1928 by the German physicist Arnold Sommerfeld (Sommerfeld, Arnold), who used a greatly simplified model in which electrons (electron) were assumed to roam freely (much like non-interacting molecules of a gas) within the metal as if it were a hollow container. The most remarkable simplification, justified at the time by its success rather than by any physical argument, was that the electrical force between electrons could be neglected. Since then, justification—without which the theory would have been impossibly complicated—has been provided in the sense that means have been devised to take account of the interactions whose effect is indeed considerably weaker than might have been supposed. In addition, the influence of the lattice of atoms on electronic motion has been worked out for many different metals. This development involved experimenters and theoreticians working in harness; the results of specially revealing experiments served to check the validity of approximations without which the calculations would have required excessive computing time.

      These examples serve to show how real problems almost always demand the invention of models in which, it is hoped, the most important features are correctly incorporated while less-essential features are initially ignored and allowed for later if experiment shows their influence not to be negligible. In almost all branches of mathematical physics there are systematic procedures—namely, perturbation techniques—for adjusting approximately correct models so that they represent the real situation more closely.

Recasting of basic theory
      Newton's laws of motion and of gravitation and Coulomb's law for the forces between charged particles lead to the idea of energy as a quantity that is conserved in a wide range of phenomena (see below Conservation laws and extremal principles (physical science, principles of)). It is frequently more convenient to use conservation of energy and other quantities than to start an analysis from the primitive laws. Other procedures are based on showing that, of all conceivable outcomes, the one followed is that for which a particular quantity takes a maximum or a minimum value—e.g., entropy change in thermodynamic processes, action in mechanical processes, and optical path length for light rays.

General observations
      The foregoing accounts of characteristic experimental and theoretical procedures are necessarily far from exhaustive. In particular, they say too little about the technical background to the work of the physical scientist. The mathematical techniques used by the modern theoretical physicist are frequently borrowed from the pure mathematics of past eras. The works of Augustin-Louis Cauchy on functions of a complex variable, of Arthur Cayley and James Joseph Sylvester on matrix algebra, and of Bernhard Riemann on non-Euclidean geometry, to name but a few, were investigations undertaken with little or no thought for practical applications.

      The experimental physicist, for his part, has benefited greatly from technological (technology) progress and from instrumental developments that were undertaken in full knowledge of their potential research application but were nevertheless the product of single-minded devotion to the perfecting of an instrument as a worthy thing-in-itself. The developments during World War II provide the first outstanding example of technology harnessed on a national scale to meet a national need. Postwar advances in nuclear physics and in electronic circuitry, applied to almost all branches of research, were founded on the incidental results of this unprecedented scientific enterprise. The semiconductor industry sprang from the successes of microwave radar and, in its turn, through the transistor, made possible the development of reliable computers with power undreamed of by the wartime pioneers of electronic computing. From all these, the research scientist has acquired the means to explore otherwise inaccessible problems. Of course, not all of the important tools of modern-day science were the by-products of wartime research. The electron microscope is a good case in point. Moreover, this instrument may be regarded as a typical example of the sophisticated equipment to be found in all physical laboratories, of a complexity that the research-oriented user frequently does not understand in detail, and whose design depended on skills he rarely possesses.

      It should not be thought that the physicist does not give a just return for the tools he borrows. Engineering and technology are deeply indebted to pure science, while much modern pure mathematics can be traced back to investigations originally undertaken to elucidate a scientific problem.

Concepts fundamental to the attitudes and methods of physical science

Fields
       Newton's law of gravitation and Coulomb's electrostatic law (Coulomb's law) both give the force between two particles as inversely proportional to the square of their separation and directed along the line joining them. The force acting on one particle is a vector. It can be represented by a line with arrowhead; the length of the line is made proportional to the strength of the force, and the direction of the arrow shows the direction of the force. If a number of particles are acting simultaneously on the one considered, the resultant force is found by vector addition; the vectors representing each separate force are joined head to tail, and the resultant is given by the line joining the first tail to the last head.

      In what follows the electrostatic force will be taken as typical, and Coulomb's law is expressed in the form F = q1q2r/4πε0r3. The boldface characters F and r are vectors, F being the force which a point charge q1 exerts on another point charge q2. The combination r/r3 is a vector in the direction of r, the line joining q1 to q2, with magnitude 1/r2 as required by the inverse square law. When r is rendered in lightface, it means simply the magnitude of the vector r, without direction. The combination 4πε0 is a constant whose value is irrelevant to the present discussion. The combination q1r/4πε0r3 is called the electric field strength due to q1 at a distance r from q1 and is designated by E; it is clearly a vector parallel to r. At every point in space E takes a different value, determined by r, and the complete specification of E(r)—that is, the magnitude and direction of E at every point r—defines the electric field. If there are a number of different fixed charges, each produces its own electric field of inverse square character, and the resultant E at any point is the vector sum of the separate contributions. Thus the magnitude and direction of E may change in a complicated fashion from point to point. Any particle carrying charge q that is put in a place where the field is E experiences a force qE (provided the other charges are not displaced when it is inserted; if they are, E(r) must be recalculated for the actual positions of the charges).
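      The vector addition just described is easily mechanized. The short Python sketch below (an illustration, not part of the original text) sums the inverse square contributions of several fixed point charges to obtain the resultant E at a chosen point; the charges, their positions, and the field point are arbitrary illustrative values.

import numpy as np

eps0 = 8.8541878128e-12  # permittivity of free space, F/m

def field(point, charges):
    """Vector sum of the Coulomb fields of (q, position) pairs at 'point'."""
    E = np.zeros(3)
    for q, pos in charges:
        d = point - pos
        E += q * d / (4 * np.pi * eps0 * np.linalg.norm(d) ** 3)
    return E

charges = [( 3e-9, np.array([0.0, 0.0, 0.0])),   # +3 nC at the origin
           (-1e-9, np.array([0.1, 0.0, 0.0]))]   # -1 nC placed 10 cm away
print(field(np.array([0.05, 0.05, 0.0]), charges))  # resultant field in V/m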

      A vector field, varying from point to point, is not always easily represented by a diagram, and it is often helpful for this purpose, as well as in mathematical analysis, to introduce the potential ϕ, from which E may be deduced. To appreciate its significance, the concept of vector gradient must be explained.

 The contours on a standard map are lines along which the height of the ground above sea level is constant. They usually take a complicated form, but if one imagines contours drawn at very close intervals of height and a small portion of the map to be greatly enlarged, the contours of this local region will become very nearly straight, like the two drawn in Figure 6 for heights h and h + δh.

      Walking along any of these contours, one remains on the level. The slope of the ground is steepest along PQ, and, if the distance from P to Q is δl, the gradient is δh/δl or dh/dl in the limit when δh and δl are allowed to go to zero. The vector gradient is a vector of this magnitude drawn parallel to PQ and is written as grad h, or ∇h. Walking along any other line PR at an angle θ to PQ, the slope is less in the ratio PQ/PR, or cos θ. The slope along PR is (grad h) cos θ and is the component of the vector grad h along a line at an angle θ to the vector itself. This is an example of the general rule for finding components of vectors. In particular, the components parallel to the x and y directions have magnitude ∂h/∂x and ∂h/∂y (the partial derivatives, represented by the symbol ∂, mean, for instance, that ∂h/∂x is the rate at which h changes with distance in the x direction, if one moves so as to keep y constant; and ∂h/∂y is the rate of change in the y direction, x being constant). This result is expressed by

grad h = (∂h/∂x, ∂h/∂y),

      the quantities in brackets being the components of the vector along the coordinate axes. Vector quantities that vary in three dimensions can similarly be represented by three Cartesian components, along x, y, and z axes; e.g., V = (Vx, Vy, Vz).
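      For readers who prefer a numerical check, the following sketch (not part of the original text) estimates grad h for an arbitrary smooth function by finite differences and confirms that the slope in a direction making an angle θ with the gradient is |grad h| cos θ. The function h and the chosen point and angle are arbitrary.

import numpy as np

def h(x, y):
    return np.sin(x) * np.cos(2 * y)          # an arbitrary smooth "landscape"

def grad_h(x, y, d=1e-6):
    # central-difference estimate of (dh/dx, dh/dy)
    return np.array([(h(x + d, y) - h(x - d, y)) / (2 * d),
                     (h(x, y + d) - h(x, y - d)) / (2 * d)])

x0, y0, theta = 0.7, 0.3, 0.6
g = grad_h(x0, y0)
ghat = g / np.linalg.norm(g)
perp = np.array([-ghat[1], ghat[0]])           # unit vector normal to grad h
direction = np.cos(theta) * ghat + np.sin(theta) * perp

d = 1e-6
slope = (h(x0 + d * direction[0], y0 + d * direction[1]) - h(x0, y0)) / d
print(slope, np.linalg.norm(g) * np.cos(theta))  # the two agree closely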

 Imagine a line, not necessarily straight, drawn between two points A and B and marked off in innumerable small elements like δl in Figure 7, which is to be thought of as a vector. If a vector field takes a value V at this point, the quantity Vδl cos θ is called the scalar product of the two vectors V and δl and is written as V·δl. The sum of all similar contributions from the different δl gives, in the limit when the elements are made infinitesimally small, the line integral ∫V·dl along the line chosen.

      Reverting to the contour map, it will be seen that ∫(grad h)·dl taken along any line from A to B is just the vertical height of B above A and that the value of the line integral is the same for all choices of line joining the two points. When a scalar quantity ϕ, having magnitude but not direction, is uniquely defined at every point in space, as h is on a two-dimensional map, the vector field grad ϕ is said to be irrotational, and ϕ(r) is the potential function from which the vector field can be derived. Not all vector fields can be derived from a potential function, but the Coulomb and gravitational fields are of this form.

      A potential function ϕ(r) defined by ϕ = A/r, where A is a constant, takes a constant value on every sphere centred at the origin. The set of nesting spheres is the analogue in three dimensions of the contours of height on a map, and grad ϕ at a point r is a vector pointing normal to the sphere that passes through r; it therefore lies along the radius through r, its component along the outward radius being -A/r2. That is to say, grad ϕ = -Ar/r3, which describes a field of inverse square form. If A is set equal to q1/4πε0, the electrostatic field due to a charge q1 at the origin is E = -grad ϕ.

 When the field is produced by a number of point charges, each contributes to the potential ϕ(r) in proportion to the size of the charge and inversely as the distance from the charge to the point r. To find the field strength E at r, the potential contributions can be added as simple numbers and contours of the resultant ϕ plotted; from these E follows by calculating -grad ϕ. By the use of the potential, the necessity of vector addition of individual field contributions is avoided. An example of equipotentials is shown in Figure 8. Each is determined by the equation 3/r1 - 1/r2 = constant, with a different constant value for each contour, as shown. For two charges of opposite sign and unequal magnitude, the equipotential surface ϕ = 0 is a sphere; no other equipotential surface is spherical.

Conservative forces
      The inverse square laws of gravitation and electrostatics are examples of central forces, in which the force exerted by one particle on another lies along the line joining them and has a magnitude that depends only on their separation. Whatever the variation of force with distance, a central force can always be represented by a potential; forces for which a potential can be found are called conservative. The work done by the force F(r) on a particle as it moves along a line from A to B is the line integral ∫F·dl, or ∫grad ϕ·dl if F is derived from a potential ϕ, and this integral is just the difference between the values of ϕ at A and B.

      The ionized hydrogen molecule consists of two protons (proton) bound together by a single electron, which spends a large fraction of its time in the region between the protons. Considering the force acting on one of the protons, one sees that it is attracted by the electron, when it is in the middle, more strongly than it is repelled by the other proton. This argument is not precise enough to prove that the resultant force is attractive, but an exact quantum mechanical calculation shows that it is if the protons are not too close together. At close approach proton repulsion dominates, but as one moves the protons apart the attractive force rises to a peak and then soon falls to a low value. The distance, 1.06 × 10-10 metre, at which the force changes sign, corresponds to the potential ϕ taking its lowest value and is the equilibrium separation of the protons in the ion. This is an example of a central force field that is far from inverse square in character.

      A similar attractive force arising from a particle shared between others is found in the strong nuclear force (strong force) that holds the atomic nucleus together. The simplest example is the deuteron, the nucleus of heavy hydrogen, which consists either of a proton and a neutron or of two neutrons bound by a positive pion (a meson that has a mass 273 times that of an electron when in the free state). There is no repulsive force between the neutrons analogous to the Coulomb repulsion between the protons in the hydrogen ion, and the variation of the attractive force with distance follows the law F = (g2/r2)e-r/r0, in which g is a constant analogous to charge in electrostatics and r0 is a distance of 1.4 × 10-15 metre, which is something like the separation of individual protons and neutrons in a nucleus. At separations closer than r0, the law of force approximates to an inverse square attraction, but the exponential term kills the attractive force when r is only a few times r0 (e.g., when r is 5r0, the exponential reduces the force 150 times).
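      The rapid suppression quoted above is easy to verify numerically (the fragment below is an illustration only). It compares the Yukawa-type force with a pure inverse square force of the same strength at a few separations; r0 is the range parameter given in the text, and the factor of roughly 150 at r = 5r0 is simply e raised to the power 5.

import math

r0 = 1.4e-15                          # range parameter, metres
for r in (r0, 2 * r0, 5 * r0):
    suppression = math.exp(r / r0)    # ratio of inverse-square force to Yukawa force
    print(f"r = {r / r0:.0f} r0 : inverse-square force / Yukawa force = {suppression:.0f}")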

      Since strong nuclear forces at distances less than r0 share an inverse square law with gravitational and Coulomb forces, a direct comparison of their strengths is possible. The gravitational force between two protons at a given distance is only about 5 × 10-39 times as strong as the Coulomb force at the same separation, which itself is 1,400 times weaker than the strong nuclear force. The nuclear force is therefore able to hold together a nucleus consisting of protons and neutrons in spite of the Coulomb repulsion of the protons. On the scale of nuclei and atoms, gravitational forces are quite negligible; they only make themselves felt when extremely large numbers of electrically neutral atoms are involved, as on a terrestrial or a cosmological scale.

Field lines
 The vector field, V = -grad ϕ, associated with a potential ϕ is always directed normal to the equipotential surfaces, and the variations in space of its direction can be represented by continuous lines drawn accordingly, like those in Figure 8. The arrows show the direction of the force that would act on a positive charge; they thus point away from the charge +3 in its vicinity and toward the charge -1. If the field is of inverse square character (gravitational, electrostatic), the field lines may be drawn to represent both direction and strength of field. Thus from an isolated charge q a large number of radial lines may be drawn, filling the solid angle evenly. Since the field strength falls away as 1/r2 and the area of a sphere centred on the charge increases as r2, the number of lines crossing unit area on each sphere varies as 1/r2, in the same way as the field strength. In this case, the density of lines crossing an element of area normal to the lines represents the field strength at that point. The result may be generalized to apply to any distribution of point charges. The field lines are drawn so as to be continuous everywhere except at the charges themselves, which act as sources of lines. From every positive charge q, lines emerge (i.e., with outward-pointing arrows) in number proportional to q, while a similarly proportionate number enter a negative charge -q. The density of lines then gives a measure of the field strength at any point. This elegant construction holds only for inverse square forces.

Gauss's theorem (Gauss's law)
      At any point in space one may define an element of area dS by drawing a small, flat, closed loop. The area contained within the loop gives the magnitude of the vector area dS, and the arrow representing its direction is drawn normal to the loop. Then, if the electric field in the region of the elementary area is E, the flux through the element is defined as the product of the magnitude dS and the component of E normal to the element—i.e., the scalar product E · dS. A charge q at the centre of a sphere of radius r generates a field E = qr/4πε0r3 on the surface of the sphere whose area is 4πr2, and the total flux through the surface is ∫E · dS = q/ε0. This is independent of r, and the German mathematician Carl Friedrich Gauss showed that it does not depend on q being at the centre nor even on the surrounding surface being spherical. The total flux of E through a closed surface is equal to 1/ε0 times the total charge contained within it, irrespective of how that charge is arranged. It is readily seen that this result is consistent with the statement in the preceding paragraph—if every charge q within the surface is the source of q/ε0 field lines, and these lines are continuous except at the charges, the total number leaving through the surface is Q/ε0, where Q is the total charge. Charges outside the surface contribute nothing, since their lines enter and leave again.
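      Gauss's theorem lends itself to a direct numerical check (an illustration, not a derivation). In the sketch below the flux of the inverse square field of a point charge is integrated over a spherical surface on a latitude-longitude grid; moving the charge about inside the sphere leaves the total flux at q/ε0. The grid resolution, sphere radius, and position of the charge are arbitrary choices.

import numpy as np

eps0, q, R = 8.8541878128e-12, 1e-9, 1.0
source = np.array([0.3, 0.0, 0.0])            # the charge sits inside, off-centre

n_t, n_p = 400, 800
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")

# points on the sphere, outward unit normals, and surface-element areas
normal = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
points = R * normal
dS = R**2 * np.sin(T) * (np.pi / n_t) * (2 * np.pi / n_p)

d = points - source                            # vector from the charge to the surface
E = q * d / (4 * np.pi * eps0 * np.linalg.norm(d, axis=-1, keepdims=True) ** 3)
flux = np.sum(np.sum(E * normal, axis=-1) * dS)
print(flux, q / eps0)                          # the two agree closely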

      Gauss's theorem takes the same form in gravitational theory, the flux of gravitational field lines through a closed surface being determined by the total mass within. This enables one to prove immediately a result that caused Newton considerable trouble. He was able to show, by direct summation over all the elements, that a uniform sphere of matter attracts bodies outside it as if the whole mass of the sphere were concentrated at its centre. Now it is obvious by symmetry that the field has the same magnitude everywhere on the surface of the sphere, and this symmetry is unaltered by collapsing the mass to a point at the centre. According to Gauss's theorem, the total flux is unchanged, and the magnitude of the field must therefore be the same. This is an example of the power of a field theory over the earlier point of view by which each interaction between particles was dealt with individually and the result summed.

Images
      A second example illustrating the value of field theories arises when the distribution of charges (electric charge) is not initially known, as when a charge q is brought close to a piece of metal or other electrical conductor and experiences a force. When an electric field is applied to a conductor, charge moves in it; so long as the field is maintained and charge can enter or leave, this movement of charge continues and is perceived as a steady electric current. An isolated piece of conductor, however, cannot carry a steady current indefinitely because there is nowhere for the charge to come from or go to. When q is brought close to the metal, its electric field causes a shift of charge in the metal to a new configuration in which its field exactly cancels the field due to q everywhere on and inside the conductor. The force experienced by q is its interaction with the canceling field. It is clearly a serious problem to calculate E everywhere for an arbitrary distribution of charge, and then to adjust the distribution to make it vanish on the conductor. When, however, it is recognized that after the system has settled down, the surface of the conductor must have the same value of ϕ everywhere, so that E = -grad ϕ vanishes on the surface, a number of specific solutions can easily be found.

 In Figure 8, for instance, the equipotential surface ϕ = 0 is a sphere. If a sphere of uncharged metal is built to coincide with this equipotential, it will not disturb the field in any way. Moreover, once it is constructed, the charge -1 inside may be moved around without altering the field pattern outside, which therefore describes what the field lines look like when a charge +3 is moved to the appropriate distance away from a conducting sphere carrying charge -1. More usefully, if the conducting sphere is momentarily connected to the Earth (which acts as a large body capable of supplying charge to the sphere without suffering a change in its own potential), the required charge -1 flows to set up this field pattern. This result can be generalized as follows: if a positive charge q is placed at a distance r from the centre of a conducting sphere of radius a connected to the Earth, the resulting field outside the sphere is the same as if, instead of the sphere, a negative charge q′ = -(a/r)q had been placed at a distance r′ = r(1 - a2/r2) from q on a line joining it to the centre of the sphere. And q is consequently attracted toward the sphere with a force qq′/4πε0r′2, or q2ar/4πε0(r2 - a2)2. The fictitious charge q′ behaves somewhat, but not exactly, like the image of q in a spherical mirror, and hence this way of constructing solutions, of which there are many examples, is called the method of images.
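      The image construction can be checked numerically (the sketch below is an illustration with arbitrary values for the sphere radius, the charge, and its distance). It places the image charge q′ = -(a/r)q at a distance a2/r from the centre, verifies that the combined potential vanishes, to rounding error, at randomly chosen points on the sphere's surface, and evaluates the attractive force quoted above.

import numpy as np

eps0 = 8.8541878128e-12
a, r, q = 1.0, 3.0, 5e-9                 # sphere radius, charge distance, charge
q_img, r_img = -(a / r) * q, a**2 / r    # image charge and its distance from the centre

def potential(point):
    d1 = np.linalg.norm(point - np.array([r, 0.0, 0.0]))
    d2 = np.linalg.norm(point - np.array([r_img, 0.0, 0.0]))
    return (q / d1 + q_img / d2) / (4 * np.pi * eps0)

rng = np.random.default_rng(0)
for _ in range(3):
    v = rng.normal(size=3)
    surface_point = a * v / np.linalg.norm(v)   # a random point on the sphere
    print(potential(surface_point))             # effectively zero everywhere on it

# force of attraction toward the sphere: q*q' / (4*pi*eps0*separation**2)
sep = r - r_img
print(abs(q * q_img) / (4 * np.pi * eps0 * sep**2))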

Divergence and Laplace's equation
      When charges are not isolated points but form a continuous distribution with a local charge density ρ being the ratio of the charge δq in a small cell to the volume δv of the cell, then the flux of E over the surface of the cell is ρδv/ε0, by Gauss's theorem, and is proportional to δv. The ratio of the flux to δv is called the divergence of E and is written div E. It is related to the charge density by the equation div E = ρ/ε0. If E is expressed by its Cartesian components (Ex, Ey, Ez),

div E = ∂Ex/∂x + ∂Ey/∂y + ∂Ez/∂z = ρ/ε0.

      And since Ex = -∂ϕ/∂x, etc.,

∂2ϕ/∂x2 + ∂2ϕ/∂y2 + ∂2ϕ/∂z2 = -ρ/ε0.

      The expression on the left side is usually written as ∇2ϕ and is called the Laplacian of ϕ. It has the property, as is obvious from its relationship to ρ, of being unchanged if the Cartesian axes of x, y, and z are turned bodily into any new orientation.

      If any region of space is free of charges, ρ = 0 and ∇2ϕ = 0 in this region. The latter is Laplace's equation, for which many methods of solution are available, providing a powerful means of finding electrostatic (or gravitational) field patterns.
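      One of the simplest of those methods is relaxation, in which the region is covered by a grid and each interior value of ϕ is repeatedly replaced by the average of its neighbours while the boundary values are held fixed. The Python sketch below (an illustration only) applies this to a square region with one edge held at a fixed potential; the geometry, boundary values, and iteration count are arbitrary choices rather than carefully tested ones. By symmetry the value at the centre of this particular problem tends to one quarter of the boundary potential.

import numpy as np

n = 60
phi = np.zeros((n, n))
phi[0, :] = 1.0                      # one edge held at potential 1, the others at 0

for _ in range(5000):                # crude fixed number of relaxation sweeps
    # replace each interior value by the average of its four neighbours
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])

print(phi[n // 2, n // 2])           # potential at the centre, close to 0.25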

Non-conservative fields
 The magnetic field B is an example of a vector field that cannot in general be described as the gradient of a scalar potential. There are no isolated poles to provide, as electric charges do, sources for the field lines. Instead, the field is generated by currents and forms vortex patterns around any current-carrying conductor. Figure 9 shows the field lines for a single straight wire. If one forms the line integral ∫B·dl around the closed path formed by any one of these field lines, each increment B·δl has the same sign and, obviously, the integral cannot vanish as it does for an electrostatic field. The value it takes is proportional to the total current enclosed by the path. Thus every path that encloses the conductor yields the same value for ∫B·dl; i.e., μ0I, where I is the current and μ0 is a constant for any particular choice of units in which B, l, and I are to be measured.
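      The path independence of ∫B·dl can be illustrated numerically (the sketch below is not part of the original text). Here B is taken as the familiar field of a long straight wire along the z-axis, of magnitude μ0I/2πs at a distance s and directed azimuthally, and the line integral is evaluated around an off-centre elliptical loop that still encloses the wire; the result is μ0I to within the accuracy of the discretization. The current, loop shape, and number of points are arbitrary.

import numpy as np

mu0, I = 4e-7 * np.pi, 2.0

t = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dt = t[1] - t[0]
x = 0.3 + 1.5 * np.cos(t)            # an ellipse centred at (0.3, -0.2) ...
y = -0.2 + 0.8 * np.sin(t)           # ... which still encloses the wire at the origin
dxdt = -1.5 * np.sin(t)              # analytic derivatives of the parametrization
dydt = 0.8 * np.cos(t)

s2 = x**2 + y**2                     # squared distance from the wire
Bx = -mu0 * I * y / (2 * np.pi * s2) # azimuthal field of the straight wire
By = mu0 * I * x / (2 * np.pi * s2)

integral = np.sum((Bx * dxdt + By * dydt) * dt)
print(integral, mu0 * I)             # the two agree closely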

 If no current is enclosed by the path, the line integral vanishes and a potential ϕB may be defined. Indeed, in the example shown in Figure 9, a potential may be defined even for paths that enclose the conductor, but it is many-valued because it increases by a standard increment μ0I every time the path encircles the current. A contour map of height would represent a spiral staircase (or, better, a spiral ramp) by a similar many-valued contour. The conductor carrying I is in this case the axis of the ramp. Like E in a charge-free region, where div E = 0, so also div B = 0; and where ϕB may be defined, it obeys Laplace's equation, ∇2ϕB = 0.

      Within a conductor carrying a current or any region in which current is distributed rather than closely confined to a thin wire, no potential ϕB can be defined. For now the change in ϕB after traversing a closed path is no longer zero or an integral multiple of a constant μ0I but is rather μ0 times the current enclosed in the path and therefore depends on the path chosen. To relate the magnetic field to the current, a new function is needed, the curl, whose name suggests the connection with circulating field lines.

      The curl of a vector, say, curl B, is itself a vector quantity. To find the component of curl B along any chosen direction, draw a small closed path of area A lying in the plane normal to that direction, and evaluate the line integral ∫B·dl around the path. As the path is shrunk in size, the integral diminishes with the area, and the limit of A-1∫B·dl is the component of curl B in the chosen direction. The direction in which the vector curl B points is the direction in which A-1∫B·dl is largest.

      To apply this to the magnetic field in a conductor carrying current, the current density J is defined as a vector pointing along the direction of current flow, and the magnitude of J is such that JA is the total current flowing across a small area A normal to J. Now the line integral of B around the edge of this area is A curl B if A is very small, and this must equal μ0 times the contained current. It follows that

curl B = μ0J.

      Expressed in Cartesian coordinates,

Jx = (1/μ0)(∂Bz/∂y - ∂By/∂z),

      with similar expressions for Jy and Jz. These are the differential equations relating the magnetic field to the currents that generate it.

      A magnetic field also may be generated by a changing electric field, and an electric field by a changing magnetic field. The description of these physical processes by differential equations relating curl B to ∂E/∂t, and curl E to ∂B/∂t, is the heart of Maxwell's electromagnetic theory and illustrates the power of the mathematical methods characteristic of field theories. Further examples will be found in the mathematical description of fluid motion, in which the local velocity v(r) of fluid particles constitutes a field to which the notions of divergence and curl are naturally applicable.

Examples of differential equations for fields

      An incompressible fluid flows so that the net flux of fluid into or out of a given volume within the fluid is zero. Since the divergence of a vector describes the net flux out of an infinitesimal element, divided by the volume of the element, the velocity vector v in an incompressible fluid must obey the equation div v = 0. If the fluid is compressible, however, and its density ρ(r) varies with position because of pressure or temperature variations, the net outward flux of mass from some small element is determined by div (ρv), and this must be related to the rate at which the density of the fluid within is changing:

div (ρv) = -∂ρ/∂t.
      A dissolved molecule or a small particle suspended in a fluid is constantly struck at random by molecules of the fluid in its neighbourhood, as a result of which it wanders erratically. This is called Brownian motion in the case of suspended particles. It is usually safe to assume that each one in a cloud of similar particles is moved by collisions from the fluid and not by interaction between the particles themselves. When a dense cloud gradually spreads out, much like a drop of ink in a beaker of water, this diffusive motion is the consequence of random, independent wandering by each particle. Two equations can be written to describe the average behaviour. The first is a continuity equation: if there are n(r) particles per unit volume around the point r, and the flux of particles across an element of area is described by a vector F, meaning the number of particles crossing unit area normal to F in unit time,

div F = -∂n/∂t

      describes the conservation of particles. Secondly, Fick's law states that the random wandering causes an average drift of particles from regions where they are denser to regions where they are rarer, and that the mean drift rate is proportional to the gradient of density and in the opposite sense to the gradient:

F = -D grad n,

      where D is a constant—the diffusion constant.

      These two equations can be combined into one differential equation for the changes that n will undergo,

∂n/∂t = D∇2n,

      which defines uniquely how any initial distribution of particles will develop with time. Thus the spreading of a small drop of ink is rather closely described by the particular solution,

n(r, t) = Ct-3/2e-r2/4Dt,

 in which C is a constant determined by the total number of particles in the ink drop. When t is very small at the start of the process, all the particles are clustered near the origin of r, but, as t increases, the radius of the cluster increases in proportion to the square root of the time, while the density at the centre drops as the three-halves power to keep the total number constant. The distribution of particles with distance from the centre at three different times is shown in Figure 10. From this diagram one may calculate what fraction, after any chosen interval, has moved farther than some chosen distance from the origin. Moreover, since each particle wanders independently of the rest, it also gives the probability that a single particle will migrate farther than this in the same time. Thus a problem relating to the behaviour of a single particle, for which only an average answer can usefully be given, has been converted into a field equation and solved rigorously. This is a widely used technique in physics.
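      The equivalence between the field description and the picture of independently wandering particles can be seen in a simple simulation (an illustration only, with arbitrary step distribution and particle numbers). Each of the many walkers below takes independent random steps, and the root-mean-square distance they reach grows as the square root of the time, exactly the behaviour of the spreading ink drop.

import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 5000, 400
steps = rng.normal(size=(n_steps, n_walkers, 3))   # independent random 3-D displacements
positions = np.cumsum(steps, axis=0)               # running position of every walker

for t in (100, 200, 400):
    rms = np.sqrt(np.mean(np.sum(positions[t - 1] ** 2, axis=-1)))
    print(t, rms, np.sqrt(3 * t))    # measured spread compared with the sqrt(3t) prediction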

Further examples of field equations
      The equations describing the propagation of waves (electromagnetic, acoustic, deep water waves, and ripples) are discussed in relevant articles, as is the Schrödinger equation for probability waves that governs particle behaviour in quantum mechanics (see below Fundamental constituents of matter (physical science, principles of)). The field equations that embody the special theory of relativity are more elaborate with space and time coordinates no longer independent of each other, though the geometry involved is still Euclidean. In the general theory of relativity, the geometry of this four-dimensional space-time is non-Euclidean (see relativity).

Conservation laws and extremal principles
      It is a consequence of Newton's laws of motion that the total momentum remains constant in a system completely isolated from external influences. The only forces acting on any part of the system are those exerted by other parts; if these are taken in pairs, according to the third law, A exerts on B a force equal and opposite to that of B on A. Since, according to the second law, the momentum of each changes at a rate equal to the force acting on it, the momentum change of A is exactly equal and opposite to that of B when only mutual forces between these two are considered. Because the effects of separate forces are additive, it follows that for the system as a whole no momentum change occurs. The centre of mass of the whole system obeys the first law in remaining at rest or moving at a constant velocity, so long as no external influences are brought to bear. This is the oldest of the conservation laws and is invoked frequently in solving dynamic problems.

      The total angular momentum (also called moment of momentum) of an isolated system about a fixed point is conserved as well. The angular momentum of a particle of mass m moving with velocity v at the instant when it is at a distance r from the fixed point is mr × v. The quantity written as r × v is a vector (the vector product of r and v) having components with respect to Cartesian axes

r × v = (yvz - zvy, zvx - xvz, xvy - yvx).
      The meaning is more easily appreciated if all the particles lie and move in a plane. The angular momentum of any one particle is the product of its momentum mv and the distance of nearest approach of the particle to the fixed point if it were to continue in a straight line. The vector is drawn normal to the plane. Conservation of total angular momentum does not follow immediately from Newton's laws but demands the additional assumption that any pair of forces, action and reaction, are not only equal and opposite but act along the same line. This is always true for central forces, but it holds also for the frictional force developed along sliding surfaces. If angular momentum were not conserved, one might find an isolated body developing a spontaneous rotation with respect to the distant stars or, if rotating like the Earth, changing its rotational speed without any external cause. Such small changes as the Earth experiences are explicable in terms of disturbances from without—e.g., tidal forces exerted by the Moon. The law of conservation of angular momentum is not called into question.

      Nevertheless, there are noncentral forces in nature, as, for example, when a charged particle moves past a bar magnet. If the line of motion and the axis of the magnet lie in a plane, the magnet exerts a force on the particle perpendicular to the plane while the magnetic field of the moving particle exerts an equal and opposite force on the magnet. At the same time, it exerts a couple tending to twist the magnet out of the plane. Angular momentum is not conserved unless one imagines that the balance of angular momentum is distributed in the space around the magnet and charge and changes as the particle moves past. The required result is neatly expressed by postulating the possible existence of magnetic poles that would generate a magnetic field analogous to the electric field of a charge (a bar magnet behaves roughly like two such poles of opposite sign, one near each end). Then there is associated with each pair, consisting of a charge q and a pole P, angular momentum μ0Pq/4π, as if the electric and magnetic fields together acted like a gyroscope spinning about the line joining P and q. With this contribution included in the sum, angular momentum is always conserved.

      The device of associating mechanical properties with the fields, which up to this point had appeared merely as convenient mathematical constructions, has even greater implications when conservation of energy is considered. This conservation law, which is regarded as basic to physics, seems at first sight, from an atomic point of view, to be almost trivial. If two particles interact by central forces, for which a potential function ϕ may be defined such that grad ϕ gives the magnitude of the force experienced by each, it follows from Newton's laws of motion that the sum of ϕ and of their separate kinetic energies, defined as 1/2mv2, remains constant. This sum is defined to be the total energy of the two particles and, by its definition, is automatically conserved. The argument may be extended to any number of particles interacting by means of central forces; a potential energy function may always be found, depending only on the relative positions of the particles, which may be added to the sum of the kinetic energies (depending only on the velocities) to give a total energy that is conserved.

      The concept of potential energy, thus introduced as a formal device, acquires a more concrete appearance when it is expressed in terms of electric and magnetic field strengths for particles interacting by virtue of their charges. The quantities 1/2ε0Ε2 and B2/2μ0 may be interpreted as the contributions per unit volume of the electric and magnetic fields to the potential energy, and, when these are integrated over all space and added to the kinetic energy, the total energy thus expressed is a conserved quantity. These expressions were discovered during the heyday of ether theories, according to which all space is permeated by a medium capable of transmitting forces between particles (see above). The electric and magnetic fields were interpreted as descriptions of the state of strain of the ether, so that the location of stored energy throughout space was no more remarkable than it would be in a compressed spring. With the abandonment of the ether theories following the rise of relativity theory, this visualizable model ceased to have validity.

Conservation of mass-energy
      The idea of energy as a real constituent of matter has, however, become too deeply rooted to be abandoned lightly, and most physicists find it useful to continue treating electric and magnetic fields as more than mathematical constructions. Far from being empty, free space is viewed as a storehouse for energy, with E and B providing not only an inventory but expressions for its movements as represented by the momentum carried in the fields. Wherever E and B are both present, and not parallel, there is a flux of energy, amounting to E × B/μ0, crossing unit area and moving in a direction normal to the plane defined by E and B. This energy in motion confers momentum on the field, E × B/μ0c2 per unit volume, as if there were mass associated with the field energy. Indeed, the English physicist J.J. Thomson showed in 1881 that the energy stored in the fields around a moving charged particle varies as the square of the velocity as if there were extra mass carried with the electric field around the particle. Herein lie the seeds of the general mass–energy relationship developed by Einstein in his special theory of relativity; E = mc2 expresses the association of mass with every form of energy. Neither of two separate conservation laws, that of energy and that of mass (the latter particularly the outcome of countless experiments involving chemical change), is in this view perfectly true, but together they constitute a single conservation law, which may be expressed in two equivalent ways—conservation of mass, if to the total energy E is ascribed mass E/c2, or conservation of energy, if to each mass m is ascribed energy mc2. The delicate measurements by Eötvös and later workers (see above) show that the gravitational forces acting on a body do not distinguish different types of mass, whether intrinsic to the fundamental particles or resulting from their kinetic and potential energies. For all its apparently artificial origins, then, this conservation law enshrines a very deep truth about the material universe, one that has not yet been fully explored.

      An equally fundamental law, for which no exception is known, is that the total electrical charge (charge conservation) in an isolated system is conserved. In the production of a negatively charged electron by an energetic gamma ray, for example, a positively charged positron is produced simultaneously. An isolated electron cannot disappear, though an electron and a positron, whose total charge is zero and whose mass is 2me (twice the mass of an electron), may simultaneously be annihilated. The energy equivalent of the destroyed mass appears as gamma ray energy 2mec2.

      For macroscopic systems—i.e., those composed of objects massive enough for their atomic structure to be discounted in the analysis of their behaviour—the conservation law for energy assumes a different aspect. In the collision of two perfectly elastic objects, to which billiard balls are a good approximation, momentum and energy are both conserved. Given the paths and velocities before collision, those after collision can be calculated from the conservation laws alone. In reality, however, although momentum is always conserved, the kinetic energy of the separating balls is less than what they had on approach. Soft objects, indeed, may adhere on collision, losing most of their kinetic energy. The lost energy takes the form of heat, raising the temperature (if only imperceptibly) of the colliding objects. From the atomic viewpoint the total energy of a body may be divided into two portions: on the one hand, the external energy consisting of the potential energy associated with its position and the kinetic energy of motion of its centre of mass and its spin; and, on the other, the internal energy due to the arrangement and motion of its constituent atoms. In an inelastic collision the sum of internal and external energies is conserved, but some of the external energy of bodily motion is irretrievably transformed into internal random motions. The conservation of energy is expressed in the macroscopic language of the first law of thermodynamics—namely, energy is conserved provided that heat is taken into account. The irreversible nature of the transfer from external energy of organized motion to random internal energy is a manifestation of the second law of thermodynamics.

      The irreversible degradation of external energy into random internal energy also explains the tendency of all systems to come to rest if left to themselves. If there is a configuration in which the potential energy is less than for any slightly different configuration, the system may find stable equilibrium here because there is no way in which it can lose more external energy, either potential or kinetic. This is an example of an extremal principle—that a state of stable equilibrium is one in which the potential energy is a minimum with respect to any small changes in configuration. It may be regarded as a special case of one of the most fundamental of physical laws, the principle of increase of entropy, which is a statement of the second law of thermodynamics in the form of an extremal principle—the equilibrium state of an isolated physical system is that in which the entropy takes the maximum possible value. This matter is discussed further below and, in particular, in the article thermodynamics.

Manifestations of the extremal principle
 The earliest extremal principle to survive in modern physics was formulated by the French mathematician Pierre de Fermat in about 1660. As originally stated, the path taken by a ray of light between two fixed points in an arrangement of mirrors, lenses, and so forth, is that which takes the least time. The laws of reflection and refraction (Snell's law) may be deduced from this principle if it is assumed, as Fermat correctly did, that in a medium of refractive index μ light travels more slowly than in free space by a factor μ. Strictly, the time taken along a true ray path is either less or greater than for any neighbouring path. If all paths in the neighbourhood take the same time, the two chosen points are such that light leaving one is focused on the other. The perfect example is exhibited by an elliptical mirror, such as the one in Figure 11; all paths from F1 to the ellipse and thence to F2 have the same length. In conventional optical terms, the ellipse has the property that every choice of paths obeys the law of reflection, and every ray from F1 converges after reflection onto F2. Also shown in the figure are two reflecting surfaces tangential to the ellipse that do not have the correct curvature to focus light from F1 onto F2. A ray is reflected from F1 to F2 only at the point of contact. For the flat reflector the path taken is the shortest of all in the vicinity, while for the reflector that is more strongly curved than the ellipse it is the longest. Fermat's principle and its application to focusing by mirrors and lenses finds a natural explanation in the wave theory of light (see light: Basic concepts of wave theory).
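      Fermat's principle is easily put to work numerically (the sketch below is an illustration, not part of the original text). For light passing from one medium into another across a plane boundary, the crossing point that minimizes the travel time is found by a one-dimensional search, and the resulting angles obey Snell's law. The refractive indices, written n1 and n2 here in place of the μ of the text, and the geometry are arbitrary values.

import numpy as np
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.5
A = np.array([0.0, 1.0])      # source, one unit above the boundary y = 0
B = np.array([2.0, -1.0])     # destination, one unit below the boundary

def travel_time(x):
    # time is proportional to (refractive index) * (path length) in each medium
    return n1 * np.hypot(x - A[0], A[1]) + n2 * np.hypot(B[0] - x, B[1])

x_min = minimize_scalar(travel_time, bounds=(A[0], B[0]), method="bounded").x
sin_i1 = (x_min - A[0]) / np.hypot(x_min - A[0], A[1])
sin_i2 = (B[0] - x_min) / np.hypot(B[0] - x_min, B[1])
print(n1 * sin_i1, n2 * sin_i2)   # equal, as Snell's law requires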

      A similar extremal principle in mechanics, the principle of least action, was proposed by the French mathematician and astronomer Pierre-Louis Moreau de Maupertuis but rigorously stated only much later, especially by the Irish mathematician and scientist William Rowan Hamilton in 1835. Though very general, it is well enough illustrated by a simple example, the path taken by a particle between two points A and B in a region where the potential ϕ(r) is everywhere defined. Once the total energy E of the particle has been fixed, its kinetic energy T at any point P is the difference between E and the potential energy ϕ at P. If any path between A and B is assumed to be followed, the velocity at each point may be calculated from T, and hence the time t between the moment of departure from A and passage through P. The action for this path is found by evaluating the integral ∫(T - ϕ)dt taken from A to B, and the actual path taken by the particle is that for which the action is minimal. It may be remarked that both Fermat and Maupertuis were guided by Aristotelian notions of economy in nature that have been found, if not actively misleading, too imprecise to retain a place in modern science.
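      The principle can likewise be tested by brute force (the sketch below is an illustration under assumed, arbitrary values of mass, time interval, and number of subdivisions). The path of a particle moving vertically under uniform gravity is discretized, the sum approximating the action ∫(T - ϕ)dt is minimized numerically over the intermediate positions, and the result is compared with the exact parabolic trajectory y(t) = (g/2)t(T - t) for fixed end points.

import numpy as np
from scipy.optimize import minimize

m, g, T, N = 1.0, 9.8, 2.0, 40
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

def action(y_interior):
    y = np.concatenate(([0.0], y_interior, [0.0]))   # end points fixed: y(0) = y(T) = 0
    v = np.diff(y) / dt
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (y[:-1] + y[1:])       # trapezoidal average on each step
    return np.sum((kinetic - potential) * dt)

result = minimize(action, np.zeros(N - 1), method="BFGS")
y_best = np.concatenate(([0.0], result.x, [0.0]))
y_exact = 0.5 * g * t * (T - t)
print(np.max(np.abs(y_best - y_exact)))              # small; the minimum is the parabola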

      Fermat's and Hamilton's principles are but two examples out of many whereby a procedure is established for finding the correct solution to a problem by discovering under what conditions a certain function takes an extremal value. The advantages of such an approach are that it brings into play the powerful mathematical techniques of the calculus of variations and, perhaps even more important, that in dealing with very complex situations it may allow a systematic approach by computational means to a solution that may not be exact but is near enough the right answer to be useful.

      Fermat's principle, stated as a theorem concerning light rays but later restated in terms of the wave theory, found an almost exact parallel in the development of wave mechanics. The association of a wave with a particle by the physicists Louis-Victor de Broglie and Erwin Schrödinger was made in such a way that the principle of least action followed by an analogous argument.

Fundamental constituents of matter
Development of the atomic theory
      The idea that matter is composed of atoms (atom) goes back to the Greek philosophers, notably Democritus, and has never since been entirely lost sight of, though there have been periods when alternative views were more generally preferred. Newton's contemporaries, Robert Hooke (Hooke, Robert) and Robert Boyle (Boyle, Robert), in particular, were atomists, but their interpretation of the sensation of heat as random motion of atoms was overshadowed for more than a century by the conception of heat as a subtle fluid dubbed caloric. It is a tribute to the strength of caloric theory that it enabled the French scientist Sadi Carnot (Carnot, Sadi) to arrive at his great discoveries in thermodynamics. In the end, however, the numerical rules for the chemical combination of different simple substances, together with the experiments on the conversion of work into heat by Benjamin Thompson (Thompson, Sir Benjamin, Count Von Rumford) (Count Rumford) and James Prescott Joule (Joule, James Prescott), led to the downfall of the theory of caloric. Nevertheless, the rise of ether theories to explain the transmission of light and electromagnetic forces through apparently empty space postponed for many decades the general reacceptance of the concept of atoms. The discovery in 1858 by the German scientist and philosopher Hermann von Helmholtz (Helmholtz, Hermann von) of the permanence of vortex motions in perfectly inviscid fluids encouraged the invention—throughout the latter half of the 19th century and especially in Great Britain—of models in which vortices in a structureless ether played the part otherwise assigned to atoms. In recent years the recognition that certain localized disturbances in a fluid, the so-called solitary waves, might persist for a very long time has led to attempts, so far unsuccessful, to use them as models of fundamental particles.

      These attempts to describe the basic constituents of matter in the familiar language of fluid mechanics were at least atomic theories in contrast to the anti-atomistic movement at the end of the 19th century in Germany under the influence of Ernst Mach (Mach, Ernst) and Wilhelm Ostwald (Ostwald, Wilhelm). For all their scientific eminence, their argument was philosophical rather than scientific, springing as it did from the conviction that the highest aim of science is to describe the relationship between different sensory perceptions without the introduction of unobservable concepts. Nonetheless, an inspection of the success of their contemporaries using atomic models shows why this movement failed. It suffices to mention the systematic construction of a kinetic theory of matter in which the physicists Ludwig Boltzmann of Austria and J. Willard Gibbs of the United States were the two leading figures. To this may be added Hendrik Lorentz's electron theory, which explained in satisfying detail many of the electrical properties of matter; and, as a crushing argument for atomism, the discovery and explanation of X-ray diffraction by Max von Laue of Germany and his collaborators, a discovery that was quickly developed, following the lead of the British physicist William Henry Bragg and his son Lawrence, into a systematic technique for mapping the precise atomic structure of crystals.

      While the concept of atoms was thus being made indispensable, the ancient belief that they were probably structureless and certainly indestructible came under devastating attack. J.J. Thomson's (Thomson, Sir J.J.) discovery of the electron in 1897 soon led to the realization that the mass of an atom largely resides in a positively charged part, electrically neutralized by a cloud of much lighter electrons. A few years later Ernest Rutherford and Frederick Soddy showed how the emission of alpha (alpha particle) and beta particles (beta particle) from radioactive elements causes them to be transformed into elements of different chemical properties. By 1913, with Rutherford as the leading figure, the foundations of the modern theory of atomic structure were laid. It was determined that a small, massive nucleus carries all the positive charge whose magnitude, expressed as a multiple of the fundamental charge of the proton, is the atomic number. An equal number of electrons carrying a negative charge numerically equal to that of the proton form a cloud whose diameter is several thousand times that of the nucleus around which they swarm. The atomic number determines the chemical properties of the atom, and in alpha decay a helium nucleus, whose atomic number is 2, is emitted from the radioactive nucleus, leaving one whose atomic number is reduced by 2. In beta decay the nucleus in effect gains one positive charge by emitting a negative electron and thus has its atomic number increased by unity.

      The nucleus, itself a composite body, was soon being described in various ways, none completely wrong but none uniquely right. Pivotal was James Chadwick's discovery in 1932 of the neutron, a nuclear particle with very nearly the same mass as the proton but no electric charge. After this discovery, investigators came to view the nucleus as consisting of protons and neutrons, bound together by a force of limited range, which at close quarters was strong enough to overcome the electrical repulsion between the protons. A free neutron survives for only a few minutes before disintegrating into a readily observed proton and electron, along with an elusive antineutrino, which has no charge and zero, or at most extremely small, mass. The disintegration of a neutron also may occur inside the nucleus, with the expulsion of the electron and antineutrino; this is the beta-decay process. It is common enough among the heavy radioactive nuclei but does not occur with all nuclei because the energy released would be insufficient for the reorganization of the resulting nucleus. Certain nuclei have a higher-than-ideal ratio of protons to neutrons and may adjust the proportion by the reverse process, a proton being converted into a neutron with the expulsion of a positron and a neutrino. For example, a magnesium nucleus containing 12 protons and 11 neutrons spontaneously changes to a stable sodium nucleus with 11 protons and 12 neutrons. The positron resembles the electron in all respects except for being positively rather than negatively charged. It was the first antiparticle to be discovered. Its existence had been predicted, however, by Dirac after he had formulated the quantum mechanical equations describing the behaviour of an electron (see below). This was one of the most spectacular achievements of a spectacular albeit brief epoch, during which the basic conceptions of physics were revolutionized.

Rise of quantum mechanics
      The idea of the quantum was introduced by the German physicist Max Planck (Planck, Max) in 1900 in response to the problems posed by the spectrum of radiation from a hot body, but the development of quantum theory soon became closely tied to the difficulty of explaining by classical mechanics the stability of Rutherford's nuclear atom. Bohr led the way in 1913 with his model of the hydrogen atom (Bohr atomic model), but it was not until 1925 that the arbitrary postulates of his quantum theory found consistent expression in the new quantum mechanics that was formulated in apparently different but in fact equivalent ways by Heisenberg, Schrödinger, and Dirac (see quantum mechanics). In Bohr's model the motion of the electron around the proton was analyzed as if it were a classical problem, mathematically the same as that of a planet around the Sun, but it was additionally postulated that, of all the orbits available to the classical particle, only a discrete set was to be allowed, and Bohr devised rules for determining which orbits they were. In Schrödinger's (Schrödinger, Erwin) wave mechanics the problem is also written down in the first place as if it were a classical problem, but, instead of proceeding to a solution of the orbital motion, the equation is transformed by an explicitly laid down procedure from an equation of particle motion to an equation of wave motion. The newly introduced mathematical function Ψ, the amplitude of Schrödinger's hypothetical wave, is used to calculate not how the electron moves but rather what the probability is of finding the electron in any specific place if it is looked for there.

      Schrödinger's prescription reproduced in the solutions of the wave equation the postulates of Bohr but went much further. Bohr's theory had come to grief when even two electrons, as in the helium atom, had to be considered together, but the new quantum mechanics encountered no problems in formulating the equations for two or any number of electrons moving around a nucleus. Solving the equations was another matter; yet, numerical procedures were applied with devoted patience to a few of the simpler cases and demonstrated beyond cavil that the only obstacle to solution was calculational and not an error of physical principle. Modern computers have vastly extended the range of application of quantum mechanics not only to heavier atoms but also to molecules and assemblies of atoms in solids, and always with such success as to inspire full confidence in the prescription.

      From time to time many physicists feel uneasy that it is necessary first to write down the problem to be solved as though it were a classical problem and then to subject it to an artificial transformation into a problem in quantum mechanics. It must be realized, however, that the world of experience and observation is not the world of electrons and nuclei. When a bright spot on a television screen is interpreted as the arrival of a stream of electrons, it is still only the bright spot that is perceived and not the electrons. The world of experience is described by the physicist in terms of visible objects, occupying definite positions at definite instants of time—in a word, the world of classical mechanics. When the atom is pictured as a nucleus surrounded by electrons, this picture is a necessary concession to human limitations; there is no sense in which one can say that, if only a good enough microscope were available, this picture would be revealed as genuine reality. It is not that such a microscope has not been made; it is actually impossible to make one that will reveal this detail. The process of transformation from a classical description to an equation of quantum mechanics, and from the solution of this equation to the probability that a specified experiment will yield a specified observation, is not to be thought of as a temporary expedient pending the development of a better theory. It is better to accept this process as a technique for predicting the observations that are likely to follow from an earlier set of observations. Whether electrons and nuclei have an objective existence in reality is a metaphysical question to which no definite answer can be given. There is, however, no doubt that to postulate their existence is, in the present state of physics, an inescapable necessity if a consistent theory is to be constructed to describe economically and exactly the enormous variety of observations on the behaviour of matter. The habitual use of the language of particles by physicists induces and reflects the conviction that, even if the particles elude direct observation, they are as real as any everyday object.

      Following the initial triumphs of quantum mechanics, Dirac (Dirac, P.A.M.) in 1928 extended the theory so that it would be compatible with the special theory of relativity. Among the new and experimentally verified results arising from this work was the seemingly meaningless possibility that an electron of mass m might exist with any negative energy between -mc2 and -∞. Between -mc2 and +mc2, which is in relativistic theory the energy of an electron at rest, no state is possible. It became clear that other predictions of the theory would not agree with experiment if the negative-energy states were brushed aside as an artifact of the theory without physical significance. Eventually Dirac was led to propose that all the states of negative energy, infinite in number, are already occupied with electrons and that these, filling all space evenly, are imperceptible. If, however, one of the negative-energy electrons is given more than 2mc2 of energy, it can be raised into a positive-energy state, and the hole it leaves behind will be perceived as an electron-like particle, though carrying a positive charge. Thus this act of excitation leads to the simultaneous appearance of a pair of particles (pair production)—an ordinary negative electron and a positively charged but otherwise identical positron. This process was observed in cloud-chamber photographs by Carl David Anderson (Anderson, Carl David) of the United States in 1932. The reverse process was recognized at the same time; it can be visualized either as an electron and a positron mutually annihilating (annihilation) one another, with all their energy (two lots of rest energy, each mc2, plus their kinetic energy) being converted into gamma rays (gamma ray) (electromagnetic quanta), or as an electron losing all this energy as it drops into the vacant negative-energy state that simulates a positive charge. When an exceptionally energetic cosmic-ray particle enters the Earth's atmosphere, it initiates a chain of such processes in which gamma rays generate electron–positron pairs; these in turn emit gamma rays which, though of lower energy, are still capable of creating more pairs, so that what reaches the Earth's surface is a shower of many millions of electrons and positrons.

      Not unnaturally, the suggestion that space was filled to infinite density with unobservable particles was not easily accepted in spite of the obvious successes of the theory. It would have seemed even more outrageous had not other developments already forced theoretical physicists to contemplate abandoning the idea of empty space. Quantum mechanics carries the implication that no oscillatory system can lose all its energy; there must always remain at least a “zero-point energy” amounting to hν/2 for an oscillator with natural frequency ν (h is Planck's constant). This also seemed to be required for the electromagnetic oscillations constituting radio waves, light, X rays, and gamma rays. Since there is no known limit to the frequency ν, their total zero-point energy density is also infinite; like the negative-energy electron states, it is uniformly distributed throughout space, both inside and outside matter, and presumed to produce no observable effects.

Developments in particle physics
      It was at about this moment, say 1930, in the history of the physics of fundamental particles that serious attempts to visualize the processes in terms of everyday notions were abandoned in favour of mathematical formalisms. Instead of seeking modified procedures from which the awkward, unobservable infinities had been banished, the thrust was toward devising prescriptions for calculating what observable processes could occur and how frequently and how quickly they would occur. An empty cavity which would be described by a classical physicist as capable of maintaining electromagnetic waves of various frequencies, ν, and arbitrary amplitude now remains empty (zero-point oscillation being set aside as irrelevant) except insofar as photons, of energy hν, are excited within it. Certain mathematical operators have the power to convert the description of the assembly of photons into the description of a new assembly, the same as the first except for the addition or removal of one. These are called creation or annihilation operators, and it need hardly be emphasized that the operations are performed on paper and in no way describe a laboratory operation having the same ultimate effect. They serve, however, to express such physical phenomena as the emission of a photon from an atom when it makes a transition to a state of lower energy. The development of these techniques, especially after their supplementation with the procedure of renormalization (which systematically removes from consideration various infinite energies that naive physical models throw up with embarrassing abundance), has resulted in a rigorously defined procedure that has had dramatic successes in predicting numerical results in close agreement with experiment. It is sufficient to cite the example of the magnetic moment of the electron. According to Dirac's relativistic theory, the electron should possess a magnetic moment whose strength he predicted to be exactly one Bohr magneton (eh/4πm, or 9.27 × 10⁻²⁴ joule per tesla). In practice, this has been found to be not quite right, as, for instance, in the experiment of Lamb and Retherford mentioned earlier; more recent determinations give 1.0011596522 Bohr magnetons. Calculations by means of the theory of quantum electrodynamics give 1.0011596525, in impressive agreement.
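
      Part of this arithmetic can be reproduced in a few lines. The sketch below (Python; the constants are assumed standard values, and only the leading quantum-electrodynamic correction, α/2π, is included) recovers the Bohr magneton quoted above and a first approximation to the anomalous moment; the full theoretical figure of 1.0011596525 requires many further terms of the theory.

import math

# Bohr magneton, mu_B = e h / (4 pi m_e), in joules per tesla
e = 1.602177e-19       # electronic charge, C
h = 6.626070e-34       # Planck's constant, J s
m_e = 9.109384e-31     # electron mass, kg
alpha = 1 / 137.036    # fine-structure constant (dimensionless)

mu_B = e * h / (4 * math.pi * m_e)
print(f"Bohr magneton: {mu_B:.3e} J/T")          # about 9.27e-24 J/T

# Leading correction to the electron's magnetic moment: moment/mu_B is
# approximately 1 + alpha/(2*pi), about 1.00116 -- already close to the
# measured 1.0011596...; higher-order terms supply the remaining digits.
print(f"1 + alpha/(2 pi) = {1 + alpha / (2 * math.pi):.7f}")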

      This account represents the state of the theory in about 1950, when it was still primarily concerned with problems related to the stable fundamental particles, the electron and the proton, and their interaction with electromagnetic fields. Meanwhile, studies of cosmic radiation at high altitudes—those conducted on mountains or involving the use of balloon-borne photographic plates—had revealed the existence of the pi-meson (pion), a particle 273 times as massive as the electron, which disintegrates into the mu-meson (muon), 207 times as massive as the electron, and a neutrino. Each muon in turn disintegrates into an electron and two neutrinos. The pion has been identified with the hypothetical particle postulated in 1935 by the Japanese physicist Yukawa Hideki as the particle that serves to bind protons and neutrons in the nucleus. Many more unstable particles have been discovered in recent years. Some of them, just as in the case of the pion and the muon, are lighter than the proton, but many are more massive. An account of such particles is given in the article subatomic particle.

      The term particle is firmly embedded in the language of physics, yet a precise definition has become harder as more is learned. When examining the tracks in a cloud-chamber or bubble-chamber photograph, one can hardly doubt that they were caused by the passage of a small charged object. However, the combination of particle-like and wavelike properties in quantum mechanics is unlike anything in ordinary experience, and, as soon as one attempts to describe in terms of quantum mechanics the behaviour of a group of identical particles (e.g., the electrons in an atom), the problem of visualizing them in concrete terms becomes still more intractable. And this is before one has even tried to include in the picture the unstable particles or to describe the properties of a stable particle like the proton in relation to quarks. These hypothetical entities, worthy of the name particle to the theoretical physicist, are apparently not to be detected in isolation, nor does the mathematics of their behaviour encourage any picture of the proton as a molecule-like composite body constructed of quarks. Similarly, the theory of the muon is not the theory of an object composed, as the word is normally used, of an electron and two neutrinos. The theory does, however, incorporate such features of particle-like behaviour as will account for the observation of the track of a muon coming to an end and that of an electron starting from the end point. At the heart of all fundamental theories is the concept of countability. If a certain number of particles is known to be present inside a certain space, that number will be found there later, unless some have escaped (in which case they could have been detected and counted) or turned into other particles (in which case the change in composition is precisely defined). It is this property, above all, that allows the idea of particles to be preserved.

      Undoubtedly, however, the term is being strained when it is applied to photons that can disappear with nothing to show but thermal energy or be generated without limit by a hot body so long as there is energy available. They are a convenience for discussing the properties of a quantized electromagnetic field, so much so that the condensed-matter physicist refers to the analogous quantized elastic vibrations of a solid as phonons without persuading himself that a solid really consists of an empty box with particle-like phonons running about inside. If, however, one is encouraged by this example to abandon belief in photons as physical particles, it is far from clear why the fundamental particles should be treated as significantly more real, and, if a question mark hangs over the existence of electrons and protons, where does one stand with atoms or molecules? The physics of fundamental particles does indeed pose basic metaphysical questions to which neither philosophy nor physics has answers. Nevertheless, the physicist has confidence that his constructs and the mathematical processes for manipulating them represent a technique for correlating the outcomes of observation and experiment with such precision and over so wide a range of phenomena that he can afford to postpone deeper inquiry into the ultimate reality of the material world.

Simplicity and complexity
      The search for fundamental particles and the mathematical formalism with which to describe their motions and interactions has in common with the search for the laws governing gravitational, electromagnetic, and other fields of force the aim of finding the most economical basis from which, in principle, theories of all other material processes may be derived. Some of these processes are simple—a single particle moving in a given field of force, for example—if the term refers to the nature of the system studied and not to the mathematical equipment that may sometimes be brought to bear. A complex process, on the other hand, is typically one in which many interacting particles are involved and for which it is hardly ever possible to proceed to a complete mathematical solution. A computer may be able to follow in detail the movement of thousands of atoms interacting in a specified way, but a wholly successful study along these lines does no more than display on a large scale and at an assimilable speed what nature achieves on its own. Much can be learned from these studies, but, if one is primarily concerned with discovering what will happen in given circumstances, it is frequently quicker and cheaper to do the experiment than to model it on a computer. In any case, computer modeling of quantum mechanical, as distinct from Newtonian, behaviour becomes extremely complicated as soon as more than a few particles are involved.

      The art of analyzing complex systems is that of finding the means to extract from theory no more information than one needs. It is normally of no value to discover the speed of a given molecule in a gas at a given moment; it is, however, very valuable to know what fraction of the molecules possess a given speed. The correct answer to this question was found by Maxwell, whose argument was ingenious and plausible. More rigorously, Ludwig Boltzmann showed that it is possible to proceed from the conservation laws governing molecular encounters to general statements, such as the distribution of velocities, which are largely independent of how the molecules interact. In thus laying the foundations of statistical mechanics, Boltzmann provided an object lesson in how to avoid recourse to the fundamental laws, replacing them with a new set of rules appropriate to highly complex systems. This point is discussed further in Entropy and disorder below.

      The example of statistical mechanics is but one of many that together build up a hierarchical structure of simplified models whose function is to make practicable the analysis of systems at various levels of complexity. Ideally, the logical relationship between each successive pair of levels should be established so that the analyst may have confidence that the methods he applies to his special problem are buttressed by the enormous corpus of fact and theory that comprises physical knowledge at all levels. It is not in the nature of the subject for every connection to be proved with mathematical rigour, but, where this is lacking, experiment will frequently indicate what trust may be placed in the intuitive steps of the argument.

      For instance, it is out of the question to solve completely the quantum mechanical problem of finding the stationary states in which an atomic nucleus containing perhaps 50 protons or neutrons can exist. Nevertheless, the energy of these states can be measured and models devised in which details of particle position are replaced by averages, such that when the simplified model is treated by the methods of quantum mechanics the measured energy levels emerge from the calculations. Success is attained when the rules for setting up the model are found to give the right result for every nucleus. Similar models had been devised earlier by the English physicist Douglas R. Hartree to describe the cloud of electrons around the nucleus. The increase in computing power made it feasible to add extra details to the model so that it agreed even better with the measured properties of atoms. It is worth noting that when the extranuclear electrons are under consideration it is frequently unnecessary to refer to details of the nucleus, which might just as well be a point charge; even if this is too simplistic, a small number of extra facts usually suffices. In the same way, when the atoms combine chemically and molecules in a gas or a condensed state interact, most of the details of electronic structure within the atom are irrelevant or can be included in the calculation by introducing a few extra parameters; these are often treated as empirical properties. Thus the degree to which an atom is distorted by an electric field is often a significant factor in its behaviour, and the investigator dealing with the properties of assemblies of atoms may prefer to use the measured value rather than the atomic theorist's calculation of what it should be. However, he knows that enough of these calculations have been successfully carried out for his use of measured values in any specific case to be a time-saver rather than a denial of the validity of his model.

      These examples from atomic physics can be multiplied at all levels so that a connected hierarchy exists, ranging from fundamental particles and fields, through atoms and molecules, to gases, liquids, and solids that were studied in detail and reduced to quantitative order well before the rise of atomic theory. Beyond this level lie the realms of the Earth sciences, the planetary systems, the interior of stars, galaxies, and the Cosmos as a whole. And with the interior of stars and the hypothetical early universe, the entire range of models must be brought to bear if one is to understand how the chemical elements were built up or to determine what sort of motions are possible in the unimaginably dense, condensed state of neutron stars.

      The following sections make no attempt to explore all aspects and interconnections of complex material systems, but they highlight a few ideas which pervade the field and which indicate the existence of principles that find little place in the fundamental laws yet are the outcome of their operation.

 The normal behaviour of a gas on cooling is to condense into a liquid and then into a solid, though the liquid phase may be left out if the gas starts at a low enough pressure. The solid phase of a pure substance is usually crystalline, having the atoms or molecules arranged in a regular pattern so that a suitable small sample may define the whole. The unit cell is the smallest block out of which the pattern can be formed by stacking replicas. The checkerboard in Figure 12 illustrates the idea; here the unit cell has been chosen out of many possibilities to contain one white square and one black, dissected into quarters. For crystals, of course, the unit cell is three-dimensional. A very wide variety of arrangements is exhibited by different substances, and it is the great triumph of X-ray crystallography to have provided the means for determining experimentally what arrangement is involved in each case.

      One may ask whether mathematical techniques exist for deducing the correct result independently of experiment, and the answer is almost always no. An individual sulfur atom, for example, has no features that reflect its preference, in the company of others, for forming rings of eight. This characteristic can only be discovered theoretically by calculating the total energy of different-sized rings and of other patterns and determining after much computation that the ring of eight has the lowest energy of all. Even then the investigator has no assurance that there is no other arrangement which confers still lower energy. In one of the forms taken by solid sulfur, the unit cell contains 128 atoms in a complex of rings. It would be an inspired guess to hit on this fact without the aid of X rays or the expertise of chemists, and mathematics provides no systematic procedure as an alternative to guessing or relying on experiment.

      Nevertheless, it may be possible in simpler cases to show that calculations of the energy are in accord with the observed crystal forms. Thus, when silicon is strongly compressed, it passes through a succession of different crystal modifications for each of which the variation with pressure of the energy can be calculated. The pressure at which a given change of crystal form takes place is that at which the energy takes the same value for both modifications involved. As this pressure is passed, one modification gives way to the other, which now possesses the lower energy. The fact that the calculation correctly describes not only the order in which the different forms occur but also the pressures at which the changeovers take place indicates that the physical theory is in good shape; only the mathematical power to predict behaviour from first principles is lacking.
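
      The criterion just described—that the changeover occurs at the pressure where the two energies become equal—can be illustrated by a toy calculation. The sketch below (Python) uses two invented energy-versus-pressure curves, not real silicon data, and locates their crossing by bisection; the same logic, applied to energies computed quantum mechanically, yields the transition pressures referred to above.

# Hypothetical energies (arbitrary units) of two crystal modifications as
# functions of pressure p; the functional forms are invented for illustration.
def energy_form_1(p):
    return 1.0 + 0.50 * p

def energy_form_2(p):
    return 2.0 + 0.30 * p - 0.02 * p**2

def crossing_pressure(f, g, lo, hi, tol=1e-9):
    """Find p in [lo, hi] at which f(p) = g(p), by bisection."""
    diff = lambda p: f(p) - g(p)
    assert diff(lo) * diff(hi) < 0, "curves must cross within the interval"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p_c = crossing_pressure(energy_form_1, energy_form_2, 0.0, 20.0)
print(f"transition pressure (model units): {p_c:.4f}")
# Below p_c one modification has the lower energy and is stable; above p_c
# the other takes over, as described for the silicon modifications.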

      The changes in symmetry that occur at the critical points where one modification changes to another are complex examples of a widespread phenomenon for which simple analogues exist. A perfectly straight metal strip, firmly fixed to a base so that it stands perfectly upright, remains straight as an increasing load is placed on its upper end until a critical load is reached. Any further load causes the strip to heel over and assume a bent form, and it only takes a minute disturbance to determine whether it will bend to the left or to the right. The fact that either outcome is equally likely reflects the left–right symmetry of the arrangement, but once the choice is made the symmetry is broken. The subsequent response to changing load and the small vibrations executed when the strip is struck lightly are characteristic of the new unsymmetrical shape. If one wishes to calculate the behaviour, it is essential to avoid assuming that an arrangement will always remain symmetrical simply because it was initially so. In general, as with the condensation of sulfur atoms or with the crystalline transitions in silicon, the symmetry implicit in the formulation of the theory will be maintained only in the totality of possible solutions, not necessarily in the particular solution that appears in practice. In the case of the condensation of a crystal from individual atoms, the spherical symmetry of each atom tells one no more than that the crystal may be formed equally well with its axis pointing in any direction; and such information provides no help in finding the crystal structure. In general, there is no substitute for experiment. Even with relatively simple systems such as engineering structures, it is all too easy to overlook the possibility of symmetry breaking leading to calamitous failure.
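
      A minimal model of this kind of symmetry breaking can be written down directly. The sketch below (Python) uses an invented quartic energy for the sideways deflection of the strip—not a real elastic calculation—in which a single parameter stands for the excess of the load over its critical value; below the critical load the upright position is the only minimum of the energy, while above it two equivalent bent positions appear, between which the strip must in effect choose.

import math

# Toy potential energy for the loaded strip: x is the sideways deflection and
# mu measures how far the load exceeds the critical value (both invented units).
#     V(x) = x**4 - mu * x**2
# For mu <= 0 the only minimum is x = 0 (strip upright); for mu > 0 the upright
# position becomes unstable and two symmetric minima appear.
def equilibria(mu):
    if mu <= 0:
        return [0.0]
    x = math.sqrt(mu / 2.0)        # from dV/dx = 4x**3 - 2*mu*x = 0
    return [-x, +x]                # the left- and right-bent positions

for mu in (-1.0, 0.0, 0.5, 2.0):
    print(f"mu = {mu:+.1f}: equilibria = {equilibria(mu)}")
# The left-right symmetry of V survives in the *set* of solutions, but any
# single solution the strip actually adopts breaks it.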

      It should not be assumed that the critical behaviour of a loaded strip depends on its being perfectly straight. If the strip is not, it is likely to prefer one direction of bending to the other. As the load is increased, so will the intrinsic bend be exaggerated, and there will be no critical point at which a sudden change occurs. By tilting the base, however, it is possible to compensate for the initial imperfection and to find once more a position where left and right are equally favoured. Then the critical behaviour is restored, and at a certain load the necessity of choice is present as with a perfect strip. The study of this and numerous more complex examples is the province of the so-called catastrophe theory. A catastrophe, in the special sense used here, is a situation in which a continuously varying input to a system gives rise to a discontinuous change in the response at a critical point. The discontinuities may take many forms, and their character may be sensitive in different ways to small changes in the parameters of the system. Catastrophe theory is the term used to describe the systematic classification, by means of topological mathematics, of these discontinuities. Wide-ranging though the theory may be, it cannot at present include in its scope most of the symmetry-breaking transitions undergone by crystals.
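
      The role of an imperfection can be added to the same toy model with one more term. In the sketch below (Python; again an invented potential, not the formal machinery of catastrophe theory), a small linear term stands for the built-in bend or the tilt of the base: when it is present the deflection grows smoothly with load and there is no critical point, and only when it is tuned back to zero does the sudden necessity of choice reappear.

import numpy as np

# Toy strip energy as before, with an imperfection term -h*x favouring one
# side; h = 0 corresponds to a perfect strip (all quantities in invented units).
def deflection(mu, h):
    """Deflection at the global minimum of V(x) = x**4 - mu*x**2 - h*x."""
    x = np.linspace(-3.0, 3.0, 20001)
    V = x**4 - mu * x**2 - h * x
    return x[np.argmin(V)]

for h in (0.0, 0.1):
    defl = [deflection(mu, h) for mu in np.linspace(-1.0, 1.0, 9)]
    print(f"h = {h}:", np.round(defl, 3))
# For h = 0 the deflection is zero up to the critical point (mu = 0 here) and
# then bends increasingly to one side; which side is an arbitrary numerical
# accident, the analogue of the strip's choice.  For h = 0.1 the deflection is
# nonzero at every load and grows smoothly, with no critical point.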

Entropy and disorder
      As is explained in detail in the article thermodynamics, the laws of thermodynamics make possible the characterization of a given sample of matter—after it has settled down to equilibrium with all parts at the same temperature—by ascribing numerical measures to a small number of properties (pressure, volume, energy, and so forth). One of these is entropy. As the temperature of the body is raised by adding heat, its entropy as well as its energy is increased. On the other hand, when a volume of gas enclosed in an insulated cylinder is compressed by pushing on the piston, the energy in the gas increases while the entropy stays the same or, usually, increases a little. In atomic terms, the total energy is the sum of all the kinetic and potential energies of the atoms, and the entropy, it is commonly asserted, is a measure of the disorderly state of the constituent atoms. The heating of a crystalline solid until it melts and then vaporizes is a progress from a well-ordered, low-entropy state to a disordered, high-entropy state. The principal deduction from the second law of thermodynamics (or, as some prefer, the actual statement of the law) is that, when an isolated system makes a transition from one state to another, its entropy can never decrease. If a beaker of water with a lump of sodium on a shelf above it is sealed in a thermally insulated container and the sodium is then shaken off the shelf, the system, after a period of great agitation, subsides to a new state in which the beaker contains hot sodium hydroxide solution. The entropy of the resulting state is higher than that of the initial state, as can be demonstrated quantitatively by suitable measurements.

      The idea that a system cannot spontaneously become better ordered but can readily become more disordered, even if left to itself, appeals to one's experience of domestic economy and confers plausibility on the law of increase of entropy. As far as it goes, there is much truth in this naive view of things, but it cannot be pursued beyond this point without a much more precise definition of disorder. Thermodynamic entropy is a numerical measure that can be assigned to a given body by experiment; unless disorder can be defined with equal precision, the relation between the two remains too vague to serve as a basis for deduction. A precise definition is to be found by considering the number, labeled W, of different arrangements that can be taken up by a given collection of atoms, subject to their total energy being fixed. In quantum mechanics, W is the number of different quantum states that are available to the atoms with this total energy (strictly, in a very narrow range of energies). It is so vast for objects of everyday size as to be beyond visualization; for the helium atoms contained in one cubic centimetre of gas at atmospheric pressure and at 0° C the number of different quantum states can be written as 1 followed by 170 million million million zeroes (written out, the zeroes would fill nearly one trillion sets of the Encyclopædia Britannica).

      The science of statistical mechanics, as founded by the aforementioned Ludwig Boltzmann and J. Willard Gibbs, relates the behaviour of a multitude of atoms to the thermal properties of the material they constitute. Boltzmann and Gibbs, along with Max Planck, established that the entropy, S, as derived through the second law of thermodynamics, is related to W by the formula S = k ln W, where k is the Boltzmann constant (1.380662 × 10⁻²³ joule per kelvin) and ln W is the natural (Naperian) logarithm of W. By means of this and related formulas it is possible in principle, starting with the quantum mechanics of the constituent atoms, to calculate the measurable thermal properties of the material. Unfortunately, there are rather few systems for which the quantum mechanical problems succumb to mathematical analysis, but among these are gases and many solids, enough to validate the theoretical procedures linking laboratory observations to atomic constitution.
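
      The enormous figure quoted above for a cubic centimetre of helium can be checked against this formula. The sketch below (Python) estimates the entropy of the gas from the Sackur–Tetrode equation of statistical mechanics—a standard result of the theory described here, though not given in the text—and converts it, through S = k ln W, into the number of digits of W.

import math

# Estimate W for 1 cubic centimetre of helium at 0 degrees C and 1 atm.
k = 1.380649e-23          # Boltzmann constant, J/K
h = 6.626070e-34          # Planck's constant, J s
m = 4.0026 * 1.6605e-27   # mass of a helium atom, kg
T = 273.15                # temperature, K
P = 101325.0              # pressure, Pa
V = 1.0e-6                # volume, m**3 (one cubic centimetre)

N = P * V / (k * T)       # number of atoms, roughly 2.7e19
# Sackur-Tetrode equation for the entropy of an ideal monatomic gas:
S = N * k * (math.log((V / N) * (2 * math.pi * m * k * T / h**2) ** 1.5) + 2.5)

digits_of_W = S / (k * math.log(10))      # since S = k ln W
print(f"N = {N:.2e} atoms,  S = {S:.2e} J/K")
print(f"W is about 10 to the power {digits_of_W:.2e}")
# Roughly 10 to the power 1.7e20: a 1 followed by about 170 million million
# million zeroes, as stated above.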

      When a gas is thermally isolated and slowly compressed, the individual quantum states change their character and become mixed together, but the total number W does not alter. In this change, called adiabatic, entropy remains constant. On the other hand, if a vessel is divided by a partition, one side of which is filled with gas while the other side is evacuated, piercing the partition to allow the gas to spread throughout the vessel greatly increases the number of states available so that W and the entropy rise. The act of piercing requires little effort and may even happen spontaneously through corrosion. To reverse the process, waiting for the gas to accumulate accidentally on one side and then stopping the leak, would mean waiting for a time compared to which the age of the universe would be imperceptibly short. The chance of finding an observable decrease in entropy for an isolated system can be ruled out.
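
      The rise in entropy on piercing the partition can be made quantitative in the simplest case. If the gas doubles its volume, each atom has twice as many states available to it, so W is multiplied by 2 raised to the power N; by S = k ln W the entropy therefore rises by Nk ln 2. The sketch below (Python; the value of N is the rough figure for one cubic centimetre of gas used elsewhere in this section) puts a number on this.

import math

# Entropy increase when a gas of N atoms doubles its volume:
# W -> W * 2**N, so delta S = k * ln(2**N) = N * k * ln 2.
k = 1.380649e-23    # Boltzmann constant, J/K
N = 2.7e19          # roughly the number of atoms in 1 cm**3 at 0 deg C, 1 atm

delta_S = N * k * math.log(2)
print(f"delta S = {delta_S:.2e} J/K")     # about 2.6e-4 J/K
# A modest amount of entropy in everyday units, yet it corresponds to W being
# multiplied by the fantastically large factor 2**N.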

      This does not mean that a part of a system may not decrease in entropy at the expense of at least as great an increase in the rest of the system. Such processes are indeed commonplace but only when the system as a whole is not in thermal equilibrium. Whenever the atmosphere becomes supersaturated with water and condenses into a cloud, the entropy per molecule of water in the droplets is less than it was prior to condensation. The remaining atmosphere is slightly warmed and has a higher entropy. The spontaneous appearance of order is especially obvious when the water vapour condenses into snow crystals. A domestic refrigerator lowers the entropy of its contents while increasing that of its surroundings. Most important of all, the state of nonequilibrium of the Earth irradiated by the much hotter Sun provides an environment in which the cells of plants and animals may build order—i.e., lower their local entropy at the expense of their environment. The Sun provides a motive power that is analogous (though much more complex in detailed operation) to the electric cable connected to the refrigerator. There is no evidence pointing to any ability on the part of living matter to run counter to the principle of increasing (overall) disorder as formulated in the second law of thermodynamics.

 The irreversible tendency toward disorder provides a sense of direction for time which is absent from space. One may traverse a path between two points in space without feeling that the reverse journey is forbidden by physical laws. The same is not true for time travel, and yet the equations of motion, whether in Newtonian or quantum mechanics, have no such built-in irreversibility. A motion picture of a large number of particles interacting with one another looks equally plausible whether run forward or backward. To illustrate and resolve this paradox it is convenient to return to the example of a gas enclosed in a vessel divided by a pierced partition. This time, however, only 100 atoms are involved (not 3 × 10¹⁹ as in one cubic centimetre of helium), and the hole is made so small that atoms pass through only rarely and no more than one at a time. This model is easily simulated on a computer, and Figure 13 shows a typical sequence during which there are 500 transfers of atoms across the partition. The number on one side starts at the mean of 50 and fluctuates randomly while not deviating greatly from the mean. Where the fluctuations are larger than usual, as indicated by the arrows, there is no systematic tendency for their growth to the peak to differ in form from the decay from it. This is in accord with the reversibility of the motions when examined in detail.
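
      The computer experiment described here is easily reproduced. In the sketch below (Python), the details—100 atoms, 500 transfers, with the chance of a crossing in each direction proportional to the number of atoms on that side—follow the description in the text, though the program is only an illustrative reconstruction and not the calculation behind Figure 13.

import random

# 100 atoms in a box divided by a partition with a tiny hole.  At each step
# exactly one atom crosses; the chance that it goes from left to right is
# proportional to the number currently on the left.
random.seed(1)                 # fixed seed so the run is repeatable
n_atoms = 100
n_left = 50                    # start at the mean
history = [n_left]

for _ in range(500):           # 500 transfers, as in the text
    if random.random() < n_left / n_atoms:
        n_left -= 1            # an atom moves left to right
    else:
        n_left += 1            # an atom moves right to left
    history.append(n_left)

print("mean over the run :", sum(history) / len(history))
print("smallest, largest :", min(history), max(history))
# The count fluctuates about 50 without straying far from it, and a plot of
# 'history' shows rises and falls of indistinguishable character.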

      If one were to follow the fluctuations for a very long time and single out those rare occasions when a particular number occurred that was considerably greater than 50, say 75, one would find that the next number is more likely to be 74 than 76. Such would be the case because, if there are 75 atoms on one side of the partition, there will be only 25 on the other, and it is three times more likely that one atom will leave the 75 than that one will be gained from the 25. Also, since the detailed motions are reversible, it is three times more likely that the 75 was preceded by a 74 than by a 76. In other words, if one finds the system in a state that is far from the mean, it is highly probable that the system has just managed to get there and is on the point of falling back. If the system has momentarily fluctuated into a state of lower entropy, the entropy will be found to increase again immediately.

      It might be thought that this argument has already conceded the possibility of entropy decreasing. It has indeed, but only for a system on the minute scale of 100 atoms. The same computation carried out for 3 × 10¹⁹ atoms would show that one would have to wait interminably (i.e., enormously longer than the age of the universe) for the number on one side to fluctuate even by as little as one part per million. A physical system as big as the Earth, let alone the entire Galaxy—if set up in thermodynamic equilibrium and given unending time in which to evolve—might eventually have suffered such a huge fluctuation that the condition known today could have come about spontaneously. In that case man would find himself, as he does, in a universe of increasing entropy as the fluctuation recedes. Boltzmann, it seems, was prepared to take this argument seriously on the grounds that sentient creatures could only appear as the aftermath of a large enough fluctuation. What happened during the inconceivably prolonged waiting period is irrelevant. Modern cosmology shows, however, that the universe is ordered on a scale enormously greater than is needed for living creatures to evolve, and Boltzmann's hypothesis is correspondingly rendered improbable in the highest degree. Whatever started the universe in a state from which it could evolve with an increase of entropy, it was not a simple fluctuation from equilibrium. The sensation of time's arrow is thus referred back to the creation of the universe, an act that lies beyond the scrutiny of the physical scientist.

 It is possible, however, that in the course of time the universe will suffer “heat death,” having attained a condition of maximum entropy, after which tiny fluctuations are all that will happen. If so, these will be reversible, like the graph of Figure 13, and will give no indication of a direction of time. Yet, because this undifferentiated cosmic soup will be devoid of structures necessary for consciousness, the sense of time will in any case have vanished long since.

Chaos
      Many systems can be described in terms of a small number of parameters and behave in a highly predictable manner. Were this not the case, the laws of physics might never have been elucidated. If one maintains the swing of a pendulum by tapping it at regular intervals, say once per swing, it will eventually settle down to a regular oscillation. Now let it be jolted out of its regularity; in due course it will revert to its previous oscillation as if nothing had disturbed it. Systems that respond in this well-behaved manner have been studied extensively and have frequently been taken to define the norm, from which departures are somewhat unusual. It is with such departures that this section is concerned.

      An example not unlike the periodically struck pendulum is provided by a ball bouncing repeatedly in a vertical line on a base plate that is caused to vibrate up and down to counteract dissipation and maintain the bounce. With a small but sufficient amplitude of base motion the ball synchronizes with the plate, returning regularly once per cycle of vibration. With larger amplitudes the ball bounces higher but still manages to remain synchronized until eventually this becomes impossible. Two alternatives may then occur: (1) the ball may switch to a new synchronized mode in which it bounces so much higher that it returns only every two, three, or more cycles, or (2) it may become unsynchronized and return at irregular, apparently random, intervals. Yet, the behaviour is not random in the way that raindrops strike a small area of surface at irregular intervals. The arrival of a raindrop allows one to make no prediction of when the next will arrive; the best one can hope for is a statement that there is half a chance that the next will arrive before the lapse of a certain time. By contrast, the bouncing ball is described by a rather simple set of differential equations that can be solved to predict without fail when the next bounce will occur and how fast the ball will be moving on impact, given the time of the last bounce and the speed of that impact. In other words, the system is precisely determinate, yet to the casual observer it is devoid of regularity. Systems that are determinate but irregular in this sense are called chaotic; like so many other scientific terms, this is a technical expression that bears no necessary relation to the word's common usage.

      The coexistence of irregularity with strict determinism can be illustrated by an arithmetic example, one that lay behind some of the more fruitful early work in the study of chaos, particularly by the physicist Mitchell J. Feigenbaum following an inspiring exposition by Robert M. May. Suppose one constructs a sequence of numbers starting with an arbitrarily chosen x₀ (between 0 and 1) and writes the next in the sequence, x₁, as Ax₀(1 − x₀); proceeding in the same way to x₂ = Ax₁(1 − x₁), one can continue indefinitely, and the sequence is completely determined by the initial value x₀ and the value chosen for A. Thus, starting from x₀ = 0.9 with A = 2, the sequence rapidly settles to a constant value: 0.9, 0.18, 0.2952, 0.4161, 0.4859, 0.4996, 0.5000, 0.5000, and so forth.

      When A lies between 2 and 3, the sequence also settles to a constant but takes longer to do so. It is when A is increased above 3 that the sequence shows more unexpected features. At first, until A reaches about 3.45, the final pattern is an alternation of two numbers, but with further small increments of A it changes to a cycle of 4, followed by 8, 16, and so forth at ever-closer intervals of A. By the time A reaches about 3.57, the length of the cycle has grown beyond bounds—it shows no periodicity however long one continues the sequence. This is the most elementary example of chaos, but it is easy to construct other formulas for generating number sequences that can be studied rapidly with the aid of the smallest programmable computer. By such “experimental arithmetic” Feigenbaum found that the transition from regular convergence through cycles of 2, 4, 8, and so forth to chaotic sequences followed strikingly similar courses for all of them, and he gave an explanation that involved great subtlety of argument and was almost rigorous enough for pure mathematicians.
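
      The sequence is simple enough to generate directly. In the sketch below (Python), the rule x → Ax(1 − x) is iterated long enough for transients to die away and the distinct values visited afterwards are counted: one value for A = 2, a cycle of two and then of four as A is raised, and no sign of any cycle at A = 3.7. (The intermediate values of A are chosen here for illustration and are not taken from the text.)

def attractor(A, x0=0.9, settle=1000, observe=64):
    """Iterate x -> A*x*(1-x), discard the first `settle` terms, and return
    the distinct (rounded) values visited during the next `observe` terms."""
    x = x0
    for _ in range(settle):
        x = A * x * (1 - x)
    seen = []
    for _ in range(observe):
        x = A * x * (1 - x)
        seen.append(round(x, 6))
    return sorted(set(seen))

for A in (2.0, 3.2, 3.5, 3.7):
    values = attractor(A)
    print(f"A = {A}: {len(values)} distinct value(s)", values[:4])
# A = 2.0 settles to the single value 0.5; A = 3.2 alternates between two
# values; A = 3.5 cycles among four; A = 3.7 never settles into a cycle --
# the chaotic regime described in the text.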

 The chaotic sequence shares with the chaotic bouncing of the ball in the earlier example the property of limited predictability, as distinct from the strong predictability of the periodically driven pendulum and of the regular sequence found when A is less than 3. Just as the pendulum, having been disturbed, eventually settles back to its original routine, so the regular sequence, for a given choice of A, settles to the same final number whatever initial value x₀ may be chosen. By contrast, when A is large enough to generate chaos, the smallest change in x₀ leads eventually to a completely different sequence, and the smallest disturbance to the bouncing ball switches it to a different but equally chaotic pattern. This is illustrated for the number sequence in Figure 14, where two sequences are plotted (successive points being joined by straight lines) for A = 3.7 and x₀ chosen to be 0.9 and 0.9000009, a difference of one part per million. For the first 35 terms the sequences differ by too little to appear on the graph, but a record of the numbers themselves shows them diverging steadily until by the 40th term the sequences are unrelated. Although the sequence is completely determined by the first term, one cannot predict its behaviour for any considerable number of terms without extremely precise knowledge of the first term. The initial divergence of the two sequences is roughly exponential, the difference between each pair of terms being greater than the preceding difference by a roughly constant factor. Put another way, to predict the sequence in this particular case out to n terms, one must know the value of x₀ to better than n/8 places of decimals. If this were the record of a chaotic physical system (e.g., the bouncing ball), the initial state would be determined by measurement with an accuracy of perhaps 1 percent (i.e., two decimal places), and prediction would be valueless beyond 16 terms. Different systems, of course, have different measures of their “horizon of predictability,” but all chaotic systems share the property that every extra place of decimals in one's knowledge of the starting point only pushes the horizon a small extra distance away. In practical terms, the horizon of predictability is an impassable barrier. Even if it is possible to determine the initial conditions with extremely high precision, every physical system is susceptible to random disturbances from outside that grow exponentially in a chaotic situation until they have swamped any initial prediction. It is highly probable that atmospheric movements, governed by well-defined equations, are in a state of chaos. If so, there can be little hope of extending indefinitely the range of weather forecasting except in the most general terms. There are clearly certain features of climate, such as annual cycles of temperature and rainfall, which are exempt from the ravages of chaos. Other large-scale processes may still allow long-range prediction, but the more detail one asks for in a forecast, the sooner will it lose its validity.
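
      The divergence just described is equally easy to reproduce. The sketch below (Python) iterates the same rule from the two starting values of the text, 0.9 and 0.9000009, with A = 3.7 and prints the gap between the sequences every five terms; the gap grows by a roughly constant factor from term to term until it is of order one, after which the two sequences bear no relation to each other.

A = 3.7
x, y = 0.9, 0.9000009     # starting values differing by one part per million

for n in range(1, 61):
    x = A * x * (1 - x)
    y = A * y * (1 - y)
    if n % 5 == 0:
        print(f"term {n:2d}:  x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.2e}")
# The gap climbs roughly exponentially and saturates after a few dozen terms,
# beyond the 'horizon of predictability' of the text.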

      Linear systems for which the response to a force is strictly proportional to the magnitude of the force do not show chaotic behaviour. The pendulum, if not too far from the vertical, is a linear system, as are electrical circuits containing resistors that obey Ohm's law or capacitors and inductors for which voltage and current also are proportional. The analysis of linear systems is a well-established technique that plays an important part in the education of a physicist. It is relatively easy to teach, since the range of behaviour exhibited is small and can be encapsulated in a few general rules. Nonlinear systems, on the other hand, are bewilderingly versatile in their modes of behaviour and are, moreover, very commonly unamenable to elegant mathematical analysis. Until large computers became readily available, the natural history of nonlinear systems was little explored and the extraordinary prevalence of chaos unappreciated. To a considerable degree physicists have been persuaded, in their innocence, that predictability is a characteristic of a well-established theoretical structure; given the equations defining a system, it is only a matter of computation to determine how it will behave. However, once it becomes clear how many systems are sufficiently nonlinear for chaos to be a possibility, it has to be recognized that prediction may be limited to short stretches set by the horizon of predictability. Full comprehension is not to be achieved by establishing firm fundamentals, important though they are, but must frequently remain a tentative process, a step at a time, with frequent recourse to experiment and observation in the event that prediction and reality have diverged too far.

Additional Reading
Eric M. Rogers, Physics for the Inquiring Mind: The Methods, Nature, and Philosophy of Physical Science (1960), is especially good on the origins of astronomy and mechanics, with minimal mathematics. Of the many general student texts, the Berkeley Physics Course, 5 vol. (1965–71), covering mechanics, electricity and magnetism, waves, quantum physics, and statistical physics; and David Halliday and Robert Resnick, Fundamentals of Physics, 3rd ed. (1988), are recommended. The Feynman Lectures on Physics, 3 vol. (1963–65), by Richard P. Feynman, Robert B. Leighton, and Matthew Sands, instructs students and teachers in the whole range of physical concepts, with characteristically revealing insights. See also Jefferson Hane Weaver (ed.), The World of Physics: A Small Library of the Literature of Physics from Antiquity to the Present, 3 vol. (1987), an anthology covering the history of the major concepts of physics.

Expositions of more limited scope, reflecting on general principles for the benefit of nonspecialists, include H. Bondi, Assumption and Myth in Physical Theory (1967); Richard P. Feynman, The Character of Physical Law (1965); and J.M. Ziman, Public Knowledge: An Essay Concerning the Social Dimension of Science (1968). At a more advanced level, M.S. Longair, Theoretical Concepts in Physics (1984); and Peter Galison, How Experiments End (1987), illustrate typical research procedures by means of case studies. Ernst Mach, The Science of Mechanics, 6th ed. (1974; originally published in German, 9th ed., 1933), is both a detailed history and a classic critique of fundamental assumptions. E.T. Whittaker, A History of the Theories of Aether and Electricity, vol. 1, The Classical Theories, rev. and enlarged ed. (1951, reprinted 1973), is equally detailed but less philosophically oriented.

Special topics in more recent physics are treated by Albert Einstein, Relativity: The Special & the General Theory (1920; originally published in German, 1917), and many later editions; Wolfgang Rindler, Essential Relativity: Special, General, and Cosmological, rev. 2nd ed. (1979); Steven Weinberg, The Discovery of Subatomic Particles (1983), and The First Three Minutes: A Modern View of the Origin of the Universe, updated ed. (1988); Nathan Spielberg and Bryon D. Anderson, Seven Ideas that Shook the Universe (1985); P.C.W. Davies, The Forces of Nature, 2nd ed. (1986); A. Zee, Fearful Symmetry: The Search for Beauty in Modern Physics (1986); and Tony Hey and Patrick Walters, The Quantum Universe (1987).

The principles of catastrophe theory are presented, without mathematical detail, in V.I. Arnold, Catastrophe Theory, 2nd rev. and expanded ed. (1986; originally published in Russian, 2nd ed. enlarged, 1983), which is notably scornful of speculative applications. A full treatment is provided in Tim Poston and Ian Stewart, Catastrophe Theory and Its Applications (1978).

Introductions to chaotic processes are found in A.B. Pippard, Response and Stability: An Introduction to the Physical Theory (1985); and James Gleick, Chaos: Making a New Science (1987). More systematic is J.M.T. Thompson and H.B. Stewart, Nonlinear Dynamics and Chaos: Geometrical Methods for Engineers and Scientists (1986). Anthologies of influential early papers are Bai-Lin Hao (comp.), Chaos (1984); and Predrag Cvitanović (comp.), Universality in Chaos (1984).

Sir A. Brian Pippard
