nature, philosophy of

Introduction

      Philosophy of nature is the discipline that investigates substantive issues regarding the actual features of nature as a reality. The discussion here is divided into two parts: the philosophy of physics and the philosophy of biology.

      In this discipline, the most fundamental, broad, and seminal features of natural reality as such are explored and assessments are made of their implications for metaphysics, or theory of reality; for one's Weltanschauung, or “world view”; for anthropology, or doctrine of humans; and for ethics, or theory and manner of moral action. These implications are explored on the assumption that the understanding of the natural setting in which life is staged strongly conditions beliefs and attitudes in many fields.

      In its German form, Naturphilosophie, the term is chiefly identified with Friedrich Schelling (Schelling, Friedrich Wilhelm Joseph von) and G.W.F. Hegel (Hegel, Georg Wilhelm Friedrich), early 19th-century German Idealists who opposed it to Logik and to the Phänomenologie des Geistes (“of the spirit or mind”). Employment of the term spread, in due time, beyond its narrower historical context in German Idealism and came to be used, particularly in Roman Catholic parlance, in the sense that it bears in this article (e.g., the philosophies of physics and biology). Despite a notable decline in its usage in more recent years, the term is here employed, in the interest of the clear delineation of topics, as a complement to the philosophy of science, the discipline to which its subject matter has been allocated by recent philosophers. Thus in this work, the article on the philosophy of science is largely restricted to man's approach to nature, and thus to epistemological (theory of knowledge) and methodological issues, while that on the philosophy of nature encompasses the more substantive issues about nature as it is in itself.

Philosophy of physics

Physics as a field of inquiry
Essential features
      Physics is concerned with the simplest inorganic objects and processes in nature and with the measurement and mathematical description of them. Inasmuch as the binding forces of chemistry can now, at least in principle, be reduced to the well-known laws of physics, or calculated from quantum mechanics (the theory that all energy is radiated or absorbed in small unitary packets), chemistry can henceforth be considered as a part of physics in theory if not in practice. Moreover, it has become clear, through the general theory of relativity (which formulates nature's laws as viewed from various accelerating perspectives), that there is an aspect of geometry, too, that can be regarded as a part of physics. The fact that, over a wide range of circumstances, Euclidean, or ordinary uncurved, geometry presents a good approximation to reality is considered today not as a fact stipulated by a necessity of thought, nor a derivative from such a necessity, but as a fact to be established empirically; i.e., by observation. In their application, the laws of Euclidean geometry refer to those experiences that arise with measurements of length and angle and optical sightings as well as with surface and volume measurements. The possibility—already extensively elucidated in antiquity—of deriving geometrical propositions by deduction from a few axioms (axiom), assumed without proof to be correct, had given rise in earlier philosophy to the opinion that the truth of these axioms must and could be guaranteed by a kind of knowledge that is independent of experience. The recognition of such a priori knowledge, however, has been superseded by the modern development of physics. While it is granted that a pure geometry is free to posit any axioms that it pleases, a geometry purporting to describe the real world must have true axioms. Today it is considered that, if Euclidean geometry is true of the world, this truth must be established empirically; the axioms would be true because the conclusions drawn from them correspond to experience. Actually, the world appears Euclidean, however, only when this experience is limited to cases in which the distances are not too great (not much greater than 10⁹ light-years) and in which gravitational (gravitation) fields are not too strong (as they are in the vicinity of a neutron star).

      The possibility of deducing all known laws or regularities as logical inferences from a few axioms, which was discovered in Euclidean geometry, became a model also for the construction of another chapter in the history of physics. The classical physics of Newton, the 17th–18th-century father of modern physics, had employed Euclidean geometry as a foundation and had portrayed the solar system as a system of mass points subject to his mechanical axioms. The laws for falling bodies framed by the 16th–17th-century Italian physicist Galileo are the simplest logical consequences of Newton's axioms, and the laws framed by Johannes Kepler, a 16th–17th-century German astronomer, which precisely describe the motions of the planets, follow from them.

      In addition to the laws of mechanics there are those of the broad sphere of electromagnetic phenomena as summarized in the equations (Maxwell's equations) of James Clerk Maxwell, a 19th-century Scottish physicist, which describe both the electric and magnetic fields and the laws of their mutual changes, equations that may thus be considered as the axioms of electrodynamics. Because they assume the mathematical form of partial differential equations (differential equation)—which express the rates at which differentials (small or infinitesimal distances or quantities) in several dimensions change with respect to their neighbours—electrodynamics is a local-action theory rather than an action-at-a-distance theory as in older formulations modelled after Newton's law of gravitation. The principle of local action states that the variations of electromagnetic magnitudes at a point in space can be influenced only by the electromagnetic conditions in the immediate vicinity of this point. The finite velocity of propagation for electromagnetic disturbances, which follows from this principle, leads on the one hand to the existence of electromagnetic wave events and on the other hand to conformity with the requirements of special relativity (a theory that formulates nature's laws as viewed from the perspectives of various velocities), which demand a maximum finite velocity for signals—the velocity of light in a vacuum.

      The most important division of physics today is one that replaces the traditional distinctions between mechanics, acoustics, and other classical branches of physics with the distinction between macroscopic and microscopic physics: the latter investigates the conformity of atoms to law and their reactions in discrete quantum jumps, whereas the former extends from the level of ordinary human experience through astronomy to a total comprehension of the universe, attained through theoretical endeavours in the field of cosmology. Because it is now possible to observe especially bright objects (quasars) that are located perhaps 10¹⁰ light-years from the Earth, the possibility of empirically testing cosmological models is beginning to arise. In particular, the application of non-Euclidean (non-Euclidean geometry), or curved, geometries to the cosmos has suggested the conception of a finite, yet boundless, world space (positively curved), in which the maximum possible distance between two points would no longer be much greater than 10¹⁰ light-years.

Historical sketch
      In the historical development of physics before the 17th century, geometry was the only field in which extensive advances were made; besides geometry, only the rudiments of statics (the laws of levers, the principle of hydrostatics of the 3rd-century BC scientist Archimedes) were clarified. After Galileo had discovered the laws of falling bodies, Kepler's laws describing the motions of the planets and Newton's reduction of them to a set of dynamical axioms established the science of classical mechanics, to which was annexed the investigation of electromagnetism. These developments culminated in the discovery of induction by Michael Faraday, an English physical scientist, the knowledge of local action by Faraday and Maxwell, and the discovery of electromagnetic waves by a German physicist, Heinrich Hertz. It was not until the 19th century that the law of the conservation of energy (energy, conservation of) was first recognized as a general law of nature, through the work of Julius von Mayer in Germany and James Joule (Joule, James Prescott) in England, and that the concept of entropy (see below Problems at the macrophysical level (nature, philosophy of)) was formulated by Rudolf Clausius (Clausius, Rudolf), a mathematical physicist. At the beginning of the 20th century, the German physicist Max Planck (Planck, Max) introduced the so-called quantum of action, h (Planck's constant) = 6.626 × 10⁻²⁷ erg-seconds, which, when multiplied by the vibration frequency, symbolized by the Greek letter nu, ν, demarcates a basic packet of energy. Albert Einstein then extended the quantum theory to light. The real existence of atoms was proved by him and other investigators, and the science of microphysics thus arose. The researches of Niels Bohr on the quantum-theoretical significance of atomic spectra paved the way for a broader search into the fine details of quantum laws, the final comprehension of which was introduced by Werner Heisenberg in 1925 and then systematically developed by Max Born, Heisenberg, and Pascual Jordan, of Germany, and by P.A.M. Dirac, of England. Moreover, Erwin Schrödinger (Schrödinger, Erwin), an Austrian physicist, pursuing a line of thought pointed out by Einstein and Louis de Broglie, arrived at results that were outwardly quite different from those of Heisenberg et al., but were mathematically equivalent. The quantum mechanics, or wave mechanics, created by these men, which formulated quantum phenomena, was later extended to quantum electrodynamics.

      Einstein's theory of relativity, first formulated in 1905, which was eventually extended from a special to a general formulation, brought about a revolutionary transformation in physics similar to that induced by quantum theory. The Newtonian mechanics of mass points turned out to have been merely an approximation to the more exact relativistic mechanics. The most important consequence of the special theory of relativity, the equivalence of mass (m) and energy (E),

      E = mc², in which c is the velocity of light, was formulated by Einstein himself.
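
      The following short computation is an illustrative sketch only, with values assumed for the example: it evaluates the rest energy of one gram of matter from the relation E = mc².

```python
# Illustrative sketch: rest energy of one gram of matter from E = m * c^2.
c = 2.998e8      # speed of light, in metres per second
m = 1.0e-3       # one gram, expressed in kilograms (assumed example mass)
E = m * c * c
print(E)         # roughly 9.0e13 joules
```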

      In the years after 1905, Einstein strove to extend the theory of relativity to the so-called general theory, a formulation that includes gravitation, which was still being expressed in the form imparted to it by Newton; i.e., that of a theory of action at a distance. Einstein did succeed in the case of gravitation in reducing it to a local-action theory, but, in so doing, he increased the mathematical complexity considerably, as Maxwell, too, had done when he transformed electrodynamics from a theory of action at a distance to a local-action theory.

      The great importance of physics for the technology that depends upon it—which has become a leading factor in the rapidly increasing development in the conditions of human existence—is shown historically in the close connection of decisive technical developments with basic advances in physical knowledge. Einstein's equivalence of mass and energy—to cite but one example—pointed to the atomic nucleus as an energy source that could be opened up through the study of nuclear (nuclear energy) physics. Moreover, the intellectual influence proceeding from physics and affecting the development of modern thought has become especially strong through the deepened grasp of the concept of causality that has followed from quantum theory (see below Modalities of the natural order (nature, philosophy of)).

Basic characteristics and parameters of the natural order
Framework of the natural order
      Earlier mathematicians, particularly Richard Dedekind (Dedekind, Richard), a pre-World War I number theorist, precisely defined the concept of real numbers (real number), which include both rational numbers, such as 277/931, expressible as ratios of any two whole numbers (integers), and irrational numbers, such as √27, π, or e, which lie between the rationals. By reference to these numbers, the Newtonian concept of space and time, which presupposes a Euclidean geometry of space, may be made precise: the values of the time t, ordered according to the ideas of earlier and later, can be made to correspond to the single real numbers, ordered according to those of smaller and larger. Also, the points on a straight line can be brought into correspondence with the real numbers in such a manner that the location of a point P between two other points P₁ and P₂ corresponds to a number assigned to P that lies between those assigned to P₁ and P₂.

      Guided by the wish to find a method that allows the systematic proof of all philosophical truths, René Descartes (Descartes, René), often called the founder of modern philosophy, established in the 17th century the analytic geometry of Euclidean planes. In it the points of a plane can be designated by two numbers x, y, their coordinates. One chooses two orthogonal coordinate axes, x = 0 and y = 0, like those of a graph, and, with any point P, associates its two projections, one upon each coordinate axis, which define the location of P. A curve in the xy plane is then expressed by an equation f (x,y) = 0, shorthand for any equation (“function”) containing x's and y's. In the context of analytic geometry, every theorem of plane Euclidean geometry may be expressed by equations and thus be analytically proved.
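
      One classical example, offered here as a sketch rather than as anything drawn from the article itself, shows how a Euclidean theorem becomes an algebraic identity in coordinates: Thales' theorem, that an angle inscribed in a semicircle is a right angle. The radius and the sample points on the circle are assumed for the illustration.

```python
import math

# Sketch: Thales' theorem checked in coordinates. For any point P on a circle of
# radius r about the origin, the chords from P to the ends of a diameter,
# A = (-r, 0) and B = (r, 0), are perpendicular, i.e., their dot product vanishes.
r = 2.0
for theta in (0.4, 1.3, 2.7):                  # assumed sample positions of P
    px, py = r * math.cos(theta), r * math.sin(theta)
    PA = (-r - px, -py)                        # vector from P to A
    PB = (r - px, -py)                         # vector from P to B
    dot = PA[0] * PB[0] + PA[1] * PB[1]        # equals px^2 + py^2 - r^2 = 0
    print(round(dot, 12))
```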

      This procedure can also be extended to three-dimensional Euclidean space by introducing three mutually perpendicular axes x,y,z. In this case, there are two different axis systems—either congruent or mirror reflections—analogous to right-handed and left-handed screws.
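
      As an illustrative sketch (not part of the original text), the two classes of axis systems can be told apart numerically by the sign of the determinant of the matrix whose columns are the three unit axis vectors: a right-handed system gives +1, its mirror image −1.

```python
import numpy as np

# Sketch: orientation of an axis system from the determinant of its axis vectors.
right_handed = np.column_stack([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
mirrored = np.column_stack([[1, 0, 0], [0, 1, 0], [0, 0, -1]])   # z-axis reflected
print(np.linalg.det(right_handed), np.linalg.det(mirrored))      # +1.0 and -1.0
```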

      The simple space-time relationships of Newtonian physics have been changed in many ways by modern developments. The concept of simultaneity has been made relative by the special theory of relativity; every time measurement t is thus tied to a definite inertial system or moving frame of reference. It is accordingly appropriate to speak not primarily of points in time but of events, which are defined in each case by giving both a point in space and a point in time.

      More specifically, an inertial (inertia) system is a coordinate system that, relative to the fixed stars, is in uniform, straight-line motion (or at rest) with no rotation. In all inertial systems, Newton's principle of inertia, which states that all mass points not acted upon by some force persist in uniform motion with a constant velocity, is valid.

      Moreover, cosmological theories make it probable that space in the real astronomical universe corresponds only approximately to the relationships of Euclidean geometry and that the approximation can be improved by replacing Euclidean space with a space of constant positive curvature. Such a space can be mathematically defined as a three-dimensional hyperspherical “surface”

      x² + y² + z² + u² = R² (R being the radius of curvature) in a hypothetical Euclidean space of four dimensions with mutually perpendicular x,y,z, and u coordinate axes.

      The assertion that the foregoing statement has no operationally comprehensible content—i.e., no content provable by performable measurements—is designated conventionalism, a view that is based on a remark by a French mathematician, Henri Poincaré (Poincaré, Henri), who was also a philosopher of science, that a fixed non-Euclidean space can be mapped point by point on a Euclidean space so that both are suitable for the description of the astronomical reality. The range of this remark is limited, however, in that this mapping, though it can indeed carry over points into points, can in no way carry over straight lines into straight lines. Hence, many philosophers of science have held that, as long as astronomical light rays are held to be straight lines, the question of a possible curvature of space (i.e., a deviation from Euclidean conditions) will by no means be solved by some arbitrary convention; that it signifies, instead, a problem to be solved empirically. If the universe in fact has a positive constant curvature, then every straight line has a length that is only finite, and its points no longer correspond, as in the Euclidean case, to the set of all real numbers.

      In a very definite manner, cosmological facts have further indicated that time is by no means unlimited both forward and backward. Rather, it seems that time as such had a beginning about 10¹⁰ to 2 × 10¹⁰ years ago; thus, with an explosive (big-bang model) beginning, the cosmic development began as an expansion.

      The foregoing discussion has considered only the replacement of Euclidean spatial concepts by an elementary non-Euclidean geometry corresponding to a space with a constant curvature. According to Einstein (Einstein, Albert), however, the fundamental idea of a still more generalized Riemannian geometry, so-called after Bernhard Riemann, a geometer and function theorist, must be brought into play in order to produce a local-action theory of gravitation.

      Riemannian geometry is a further development of the theory of surfaces created by the 18th- and 19th-century German mathematician and astronomer Carl Friedrich Gauss (Gauss, Carl Friedrich), often called the founder of modern mathematics, a theory that aimed to investigate curved surfaces lying in three-dimensional (Euclidean) space with exclusive regard to their own intrinsic measurements and without consideration of their being embedded in that space.

      Gauss thought that the points on such a surface could be specified by reference to two arbitrary coordinates u and v defined with the help of two single-parameter families of curves, u = constant and v = constant. The square of the infinitesimal distance between two adjacent points of the surface, ds², is then a quadratic form of the differentials du and dv, belonging to the pair of points, namely,

      ds² = g₁₁du² + 2g₁₂du dv + g₂₂dv², in which the coefficients gₖₗ are functions of position. One can then calculate the curvature corresponding to the location of the pair of points according to a prescription given by Gauss, a curvature that measures the deviation from Euclidean plane behaviour that exists at this point. The curvature is a definite function of the gₖₗ and their first and second derivatives.
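
      A small symbolic sketch may make Gauss's prescription concrete. It is not taken from the article: it assumes the particular example of a sphere of radius R, with u the polar and v the azimuthal angle (so g₁₁ = R², g₁₂ = 0, g₂₂ = R² sin²u on 0 < u < π), and uses Liouville's formula for an orthogonal metric to recover the constant curvature 1/R² from the metric coefficients alone.

```python
import sympy as sp

# Sketch: Gauss curvature from the metric coefficients alone, for a sphere of radius R.
u, v, R = sp.symbols('u v R', positive=True)
g_uu = R**2                       # coefficient of du^2
g_vv = (R * sp.sin(u))**2         # coefficient of dv^2 (orthogonal metric, g_uv = 0)
sqrt_g = R**2 * sp.sin(u)         # sqrt(g_uu * g_vv) on 0 < u < pi

# Liouville's formula for an orthogonal metric ds^2 = g_uu du^2 + g_vv dv^2
K = -(sp.diff(sp.diff(g_vv, u) / sqrt_g, u)
      + sp.diff(sp.diff(g_uu, v) / sqrt_g, v)) / (2 * sqrt_g)
print(sp.simplify(K))             # 1/R**2, the same at every point, as expected
```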

      Riemann extended Gauss's considerations to the case of a three-dimensional space (non-Euclidean geometry) that can have different curvature properties from place to place (expressed by several functions of position that are collectively called the curvature tensor); and Einstein generalized these ideas still further, applying them to the four-dimensional space–time continuum, and thereby attained a reduction of the Newtonian action-at-a-distance theory of gravitation to a local-action theory.

Contents of the natural order
      Among the most basic constituents of the physical world are symmetries, fields, matter, and action.

      Symmetry is one of the chief concepts of modern mathematics, which combines the different symmetries belonging to an object or a concept into groups (group) of relevant symmetries. The a priori investigation of the totality of possible groups, defined with respect to some operation (such as multiplication), comprises a division of modern mathematics called group theory.

      Three-dimensional Euclidean space displays several important symmetry properties. It is homogeneous; i.e., arbitrary shifts in the origin or zero point of the coordinate system produce no change in the analytic expression of the geometrical laws. It is also isotropic; that is, rotations of the coordinate system leave all geometrical laws in effect. Further, it is symmetric with regard to mirror reflections. It is tempting to suppose that these symmetry properties of space are also valid for the physical processes that occur in space, and this is indeed true over a wide range of cases, but not in all cases (for exceptions, see below Problems at the quantum level (nature, philosophy of)).

      That Newtonian mechanics and Maxwellian electrodynamics display in fact all of the symmetries of Euclidean space is revealed by the fact that they can be formulated in the language of vector analysis. Passing over the more familiar Newtonian mechanics, a few points about Maxwell's theory may be mentioned. This theory can be made to satisfy the requirements of operational thinking by ascribing to the electric and magnetic field strengths the significance of measurable physical realities, which makes it unnecessary to interpret them as states of a mysterious, hypothetical substance or ether, for which, in any case, the special theory of relativity (with the equivalence of all inertial systems) has no place.

      Mathematically interpreted, a vector a represents a quantity with both magnitude and direction, which preserves its length or value and its direction when displaced. The vector field—i.e., the association of a vector with every point in space (e.g., electric field strength, or electric current density)—and the line integral (or summation) of a vector field V along a curve K leading from a point P to a point P ′ are basic concepts in vector analysis. To obtain the line integral, the curve K is divided into infinitesimal elements ds, the scalar (numerical or nonvector) product of ds with the value of V at that point is taken, and the results are summed with an integration.

      A small surface area envisioned with a given sense of rotation around its boundary curve can also be described by a vector. In this instance, the vector dF is perpendicular to the surface and forms, with the sense of rotation about the boundary, a right-handed system. Its magnitude is the area of the surface. The flux of the vector field V through the surface dF is given by the scalar product V · dF.

      If V has the property that the line integral along every closed curve K is equal to zero, then V is said to be irrotational. This property is equivalent to the requirement that the vector field be a so-called gradient field; i.e., that there exist a scalar field quantity W with the property that the difference in the value of W at two points P and P ′ is equal to the line integral of the vector field V from P to P ′ (along any arbitrary curve K). If V is, for example, an electrostatic (charged) field, the significance of being irrotational is that one can gain no mechanical work in leading a small test charge around any closed curve; the work involved is equal to zero. For an unclosed curve K, however, the movement of the test charge yields an amount of mechanical work that is proportional to the potential difference between the endpoints of the curve. The components of the gradient of W, expressed in partial derivatives ∂, are ∂W/∂x, ∂W/∂y, and ∂W/∂z.
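
      The property just described can be checked numerically in a small sketch. The potential W(x, y, z) = xy + z², the endpoints, and the curved path are all assumed for the illustration; the line integral of grad W along the path should agree with the potential difference W(P ′) − W(P).

```python
import numpy as np

# Sketch: line integral of a gradient field equals the potential difference.
def W(p):
    x, y, z = p
    return x * y + z**2          # assumed example potential

def gradW(p):
    x, y, z = p
    return np.array([y, x, 2 * z])

P, Pprime = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 1.0])
s = np.linspace(0.0, 1.0, 2001)
# an arbitrary curved path from P to P'
curve = np.outer(s, Pprime - P) + P + 0.3 * np.column_stack(
    [np.sin(np.pi * s), np.zeros_like(s), np.sin(2 * np.pi * s)])

integral = 0.0
for a, b in zip(curve[:-1], curve[1:]):
    mid = 0.5 * (a + b)
    integral += gradW(mid) @ (b - a)   # midpoint-rule approximation of the line integral

print(integral, W(Pprime) - W(P))      # both close to 3.0
```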

      If the vector field is not irrotational, there can then be constructed from it an adjunct rotational field, called curl V, by considering a small (infinitesimal) surface area dF located at a point P and forming the line integral of V along the boundary curve of dF. Then, when this line integral is divided by the magnitude of the surface area, the component of the curl V parallel to the vector dF is obtained.

      On the other hand, the flux of a vector field V out of a closed surface can be formed by integration. If this flux is always zero (for every choice of a closed surface), V is called source-free. Otherwise, there is a so-called divergence of V at a point P, which is defined as follows: one divides the net flux of V out of a small surface that surrounds P by the volume enclosed by the surface. The limit of this quotient for infinitesimally small surfaces is called the divergence of V at P or the source field div V.
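
      The limiting quotient that defines the divergence can likewise be approximated numerically. The following sketch assumes the example field V = (x², y², z²), whose divergence 2x + 2y + 2z is known in closed form, and estimates it at a point from the net flux out of a small surrounding cube.

```python
import numpy as np

# Sketch: div V at P estimated as (net flux out of a small cube) / (cube volume),
# for the assumed field V = (x^2, y^2, z^2) with divergence 2x + 2y + 2z.
def V(x, y, z):
    return np.array([x**2, y**2, z**2])

P = np.array([1.0, 2.0, 3.0])
h = 1e-3                         # half-edge of the small cube around P
flux = 0.0
for axis in range(3):
    for sign in (+1.0, -1.0):
        centre = P.copy()
        centre[axis] += sign * h
        # face area (2h)^2; outward normal along +/- axis; field sampled at the face centre
        flux += sign * V(*centre)[axis] * (2 * h)**2

print(flux / (2 * h)**3, 2 * P.sum())   # both approximately 12.0
```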

      The formulation of the basic laws of electrodynamics given by Maxwell is called the Maxwell equations (Maxwell's equations). These equations contain, for example, the statement that, in a vacuum, the source field of the electric field strength is proportional to the spatial electric charge density, symbolized by the Greek letter rho, ρ, and that the magnetic field strength is source-free (divergence equal to zero). Thus, magnetic monopoles having no correlate of opposite sign do not exist. Remembering that every source-free vector field may be expressed mathematically as a rotation field (and vice versa), it is possible to derive the magnetic field strength H as a rotation field from a vector field A, which is usually called the vector potential of H: H = curl A.

      The fundamental law of the conservation of charge (charge conservation) results from Maxwell's equations in the form of the continuity equation

      ∂ρ/∂t + div i = 0, in which ∂ρ/∂t is the time derivative of the charge density ρ, and the vector field i is the electric current density.

      In the case of a vacuum, the Maxwell equation that expresses Faraday's law of induction takes the form of a proportionality between the rotation field of the electric field strength and the time derivative of the magnetic field strength: curl E = −(1/c) ∂H/∂t (in Gaussian units).

      It is a significant fact that Maxwell's theory leads to a localization of energy, which in electromagnetic fields is propagated somewhat in the manner of a substance, with a density that, for the vacuum case, is (E² + H²)/8π (in Gaussian units).

      There remains also the unsolved problem of clarifying the relation of gravitation to quantum theory, which is much aggravated by the fact that gravitational energy allows of no similar localization.

      In both mechanics and electrodynamics, the fundamental equations have such a form that they can be understood as the conditions for a variational or an extremal principle: that, through the fulfillment of these conditions, a certain integral receives an extreme value. This integral, which has the dimensions of action—i.e., of energy times time—is one of the most fundamental quantities of nature. Although the concept of action is less obvious to man's physical intuition than that of energy, it is of even greater significance, as it appears also in connection with the quantum laws. For the basic constant of all of quantum physics, which always occurs in the laws of this domain, is likewise of this dimension: namely, Planck's (Planck's constant) quantum of action, h = 6.626 × 10⁻²⁷ erg-seconds.

Modalities of the natural order
      In a purely phenomenalist (phenomenalism) theory of matter—i.e., a theory that does not go into the details of atomic physics but considers matter only in a first approximation as a spatially extended continuum—numerous material properties are ascribed to every type of matter, properties such as density, electrical conductivity, magnetizability, dielectric constant, thermal conductivity, and specific heat. To be complete, a theory must provide a means of deriving all of these material properties theoretically from the laws of atomic physics.

      The hiatus-free causality (causation) envisioned throughout the science of physics before the rise of quantum theory cannot be separated conceptually from the far-reaching assumption that all physical processes are continuous. It had been supposed that continuous changes in antecedent causal processes would issue in continuous changes in the sequence of processes that are causally dependent upon them. Quantum physics, however, has expressly breached the old philosophical axiom that natura non facit saltūs (“nature does not make leaps”) and has introduced a granularity not only in the matter filling space but also in the finest processes of nature. It is therefore only logical that, with respect to causality, the quantum theory would arrive at new and modified ideas as well. Renouncing unbroken causality, it speaks only of a statistical probability (probability and statistics) and predetermination for the discrete, saltatory events of which physical processes consist—a view that must now, in spite of Einstein, be regarded as irrevocable.

      The special theory of relativity demands that the fundamental validity of the local-action principle be acknowledged: all actions have only finite velocities of propagation, which cannot exceed the velocity of light. Thus, in relativistic cosmology it is quite possible that two partial regions of the total spatial manifold may exist between which no causal interaction can occur: causal influences could then assert themselves only inside the so-called interdependent regions in the space–time manifold. These remarks also apply to the quantum theory, in which, however, instead of a causal dependence of physical processes upon each other, there is only an induction of statistical probabilities for possible quantum transitions.

Levels of the natural order
      Moving in quite different directions, the theory of relativity on the one hand and the quantum theory on the other have diverged from the earlier ideas of classical physics, which were considered unalterable. There are some physical problems, however, that can be thought through only by appealing to both the relativistic and the quantum-theoretical modifications. A so-called joint relativistic and quantum-mechanical theory suitable for such problems is quantum electrodynamics, the development of which, however, is not yet complete. Its development was greatly hindered at first by certain mathematical difficulties (so-called divergences), which it later became possible to mitigate by renormalization—i.e., by a technique of correcting the calculated results. The more generally conceived quantum theory of wave fields finds a broad area of possible application in the physics of the different kinds of elementary, though short-lived, particles (subatomic particle) produced by the huge high-energy accelerators. In its final form, the theory of elementary particles should not only formulate, in general, the laws valid for all known elementary particles but should also allow a deductive derivation for all possible kinds of elementary particles—analogous to the derivations of elements in the periodic table. Heisenberg endeavoured to set up this far-reaching problem, the sought-for answer to which has been called the world formula, for solution. Imposing mathematical difficulties, however, have arisen in the attempt to clarify its consequences for a quantitative comparison with experience, and considerable further work may still be required.

Special problems in the philosophy of physics
Problems at the formal level
      Euclidean space, in contrast to imaginable spatial structures that deviate from it, is distinguished by the simplicity of the topological properties (those preserved through rubberlike stretching and compressing, but without any tearing) that arise from its unusually simple continuity relationships. One may ask, then, whether the empirical knowledge of modern physics gives any cause to consider deviations from the topological relations of Euclidean space. The American physicist John A. Wheeler (Wheeler, John Archibald), author of a new theory of physics called geometrodynamics, has speculated about this question. In particular, he has pointed to the possibility of so-called worm holes in space, analogous to the way in which the cylindrical surface of a smooth tree trunk is changed topologically if a worm bores a hole into the trunk and emerges from it again elsewhere: the surface of the trunk has thus obtained a “handle.” Similarly, one can envision certain handles being added to three-dimensional Euclidean space. Whether this hypothesis can be fruitful for the theory of elementary particles is yet to be determined. From the methodological and epistemological standpoints, it is obvious that a geometrical structure is here being assumed, the measurement of which is fundamentally hindered by the lack of rulers with calibrations smaller than the structure itself. Presumably, the practical possibility of appealing to such topological modifications of the ordinary notion of space is to be found in astrophysics rather than in elementary particle physics. Viktor A. Ambartsumian (Ambartsumian, Viktor Amazaspovich), an Armenian-born astrophysicist, is convinced that the processes involved in the origins of galaxies are connected with explosions in which the matter of new stellar systems arises from prestellar material; it has been found tempting to suppose that this prestellar material exists in regions with unusual topological properties.

      The basic idea of the special theory of relativity can also be understood as a statement about the symmetry properties of the four-dimensional space–time (space-time) manifold. The special principle of relativity states, in fact, that the same physical laws are valid in all of the various inertial coordinate systems—in particular the law that the velocity of light in a vacuum always has the value c. This equivalence of the space–time coordinates x,y,z,t with other coordinates x′, y′, z′, t′ that are linear, homogeneous functions of the unprimed coordinates can be expressed by the equation x′² + y′² + z′² − c²t′² = x² + y² + z² − c²t².

      In this formulation, the isotropy of space—its sameness in all directions—appears as a special case of a more comprehensive symmetry property of the space–time manifold. When t = t′, the special case of a purely spatial rotation of coordinates is obtained; and in the general case, in which the primed coordinates are moving with velocity u with respect to the unprimed, the famous Lorentz transformations are obtained, which, to adjust to the finiteness of c, add a factor, symbolized by the Greek letter gamma, γ = 1/√(1 − u²/c²), to the ordinary Galilean transformation, x′ = x − ut, y′ = y, z′ = z, t′ = t, thus yielding x′ = γ(x − ut), y′ = y, z′ = z, t′ = γ(t − ux/c²).

      The group of symmetries of the four-dimensional space–time manifold thus produced is called the Poincaré group.
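
      A brief numerical sketch can confirm the invariance claimed above. Units with c = 1 and a relative velocity u = 0.6 are assumed for the example; the quantity x² + y² + z² − c²t² comes out the same before and after the Lorentz transformation along the x-axis.

```python
import numpy as np

# Sketch: the interval x^2 + y^2 + z^2 - c^2 t^2 is unchanged by a Lorentz boost along x.
c = 1.0
u = 0.6                                   # assumed relative velocity (in units of c)
gamma = 1.0 / np.sqrt(1.0 - u**2 / c**2)

x, y, z, t = 2.0, 1.0, -0.5, 3.0          # an arbitrary event
xp = gamma * (x - u * t)
tp = gamma * (t - u * x / c**2)
yp, zp = y, z

print(x**2 + y**2 + z**2 - c**2 * t**2)       # -3.75
print(xp**2 + yp**2 + zp**2 - c**2 * tp**2)   # -3.75 again
```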

Problems at the quantum level
      Problems of particle theory, complementarity, and symmetry have arisen in studies at the quantum level.

      Whereas the atomic nuclei beyond hydrogen-1 (the proton) are compounded structures, consisting of neutrons and protons, modern physics also deals with numerous elementary particles (subatomic particle)—neutrinos; π (pi), μ (mu), and K mesons; hyperons; etc.—that are thought of as uncompounded. The elementary particles of each particular kind show no individual differences. Each elementary particle has a corresponding antiparticle, which, for charged particles, always carries a charge of opposite sign. (The γ [photon], π⁰, and Z⁰ particles are understood to be their own antiparticles.) Whether Heisenberg's (Heisenberg, Werner) world formula can provide a complete framework for all possible kinds of elementary particles is undecided.

      Every type of elementary particle has a definite value for its spin, either integral (e.g., photons) or half-integral (e.g., electrons, protons, neutrinos). Particles with half-integral spin obey Fermi–Dirac statistics (Fermi-Dirac statistics); those with integral spin obey Bose–Einstein statistics (Bose-Einstein statistics), which differ in form as u/(1 + u) differs from u/(1 - u)—u being a certain function of the particle energy and the temperature. The conformity to law that underlies the Fermi–Dirac statistics for electrons was first recognized by Wolfgang Pauli (Pauli, Wolfgang) and formulated as the Pauli exclusion principle, which played a decisive role in accounting for the shell structure in the periodic system of the elements.
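
      As a sketch of the contrast just stated, the standard identification u = exp(−(E − μ)/kT) turns u/(1 + u) and u/(1 − u) into the familiar Fermi–Dirac and Bose–Einstein mean occupation numbers. Units with kT = 1 and μ = 0, and the sample energies, are assumed for the illustration.

```python
import numpy as np

# Sketch: mean occupation numbers in the article's form u/(1+u) (Fermi-Dirac)
# and u/(1-u) (Bose-Einstein), with u = exp(-(E - mu)/kT) and kT = 1 assumed.
mu = 0.0
E = np.linspace(0.5, 3.0, 4)          # assumed sample energy levels (in units of kT)
u = np.exp(-(E - mu))

print("Fermi-Dirac:   ", u / (1 + u))
print("Bose-Einstein: ", u / (1 - u))
```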

      The basic duality (wave-particle duality) of waves and corpuscles is of universal significance for all kinds of elementary particles, even for composite particles in those experiments that cannot lead to a breakup of the particles into their component parts.

      An electron (and analogously any other elementary particle or even, for example, an alpha particle) can appear (uncertainty principle) just as well in the form of a wave as in that of a localized corpuscle. In an idealized thought experiment, one can imagine that the position of an electron can be ascertained with a gamma-ray microscope. If the electron is described in terms of wave processes in the sense of Schrödinger's wave (Schrödinger equation) mechanics, a very sharply concentrated wave packet appears at the stated position. In an investigation of this packet by Fourier analysis—a technique that analyzes a function into its sinusoidal components—wave components of quite different wavelengths occur; thus an electron in this condition has no definite value for its de Broglie wavelength (de Broglie wave) and consequently none also for its translational momentum. Then, as stated in the so-called de Broglie relation,

      λ = h/mv, there is for an electron moving free from impinging forces a corresponding wavelength, symbolized by the Greek letter lambda, λ, that is inversely proportional to its momentum mv. And conversely, an electron moving inertially with a definite momentum (which in the limiting case of small velocities is equal to the product of the mass and the velocity vector) has no definite position. If an electron that is moving inertially (especially an electron at rest) is constrained by the use of a gamma-ray microscope to “make up its mind,” as it were, on a location, then the probability of its appearance at a point in space is the same for all locations. More precisely stated, the probability of the appearance of the electron in a definite volume is proportional to the magnitude of this volume.
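
      A small numerical sketch of the de Broglie relation: with the standard values of h and the electron mass, and an assumed illustrative speed of 10⁶ metres per second, the wavelength λ = h/mv comes out at a few angstroms, comparable to atomic dimensions.

```python
# Sketch: de Broglie wavelength lambda = h / (m * v) for an electron.
h = 6.626e-34        # Planck's constant, in joule-seconds
m_e = 9.109e-31      # electron mass, in kilograms
v = 1.0e6            # assumed illustrative speed, in metres per second
lam = h / (m_e * v)
print(lam)           # about 7.3e-10 m, i.e., a few angstroms
```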

      In particular, it will be helpful to consider an electron moving in the x-direction and to suppose that it has a wave amplitude that depends only upon x, one for which the most representative wavelengths are confined to a narrow interval while the amplitudes that are discernibly different from zero are likewise confined to a certain interval Δx. If, on the other hand, Δp is the range of discernible momentum values—computed from the discernible wavelengths that represent them according to the de Broglie relation given above—then the product of the uncertainties Δx and Δp cannot be smaller than Planck's fundamental quantum of action h (Planck's constant). This statement comprises the famous Heisenberg uncertainty relation, which expresses the “complementarity” of position and momentum—as Niels Bohr characterized it.
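
      The trade-off can be illustrated numerically. In the sketch below, which is not part of the original text, the uncertainties are measured by standard deviations; for a Gaussian wave packet their product comes out at ħ/2, the exact lower bound, which is consistent with the order-of-magnitude statement above. Units with ħ = 1, and the grid size and packet width, are assumed.

```python
import numpy as np

# Sketch: position and momentum spreads of a Gaussian wave packet (units with hbar = 1).
hbar = 1.0
N, L, sigma = 4096, 200.0, 2.0                 # assumed grid and packet width
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))           # Gaussian packet centred at x = 0
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize
dx_spread = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # wave numbers of the Fourier components
dk = 2 * np.pi / (N * dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= np.sum(prob_k) * dk                  # normalized momentum-space distribution
dp_spread = hbar * np.sqrt(np.sum(k**2 * prob_k) * dk)

print(dx_spread * dp_spread)                   # about 0.5, i.e., hbar / 2
```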

      If one assumes, as above, that all physically possible states of an electron can be represented by Schrödinger's wave mechanics, then the complementarity (complementarity principle) of the position coordinate x and its corresponding conjugate momentum px is a simple mathematical fact. When one thinks primarily of physical-measurement experiments, it should be emphasized that stringent limitations are imposed on the simultaneous measurement of the position and momentum of a particle: according to the uncertainty principle, it is impossible to measure both of these complementary quantities simultaneously with unlimited precision. In an experiment that measures its position, the electron is forced into a sharper localization; and its particle nature is evident. By contrast, in an experiment that measures its momentum, an interference experiment is involved; the electron must be able to display a certain wavelength, which requires an adequately extended region of space in which to react. These two complementary and opposing demands can be brought into harmony only in the sense of a compromise; and the Heisenberg uncertainty relation formulates the best possible compromise.

      Thus, it becomes at the same time clear that the state of the electron given in the wave-mechanical description before carrying out a new measurement experiment can establish only a statistical prediction for the result. The probability density for the appearance of an electron at a point in space is given by the square of the absolute value of the (complex) Schrödinger wave amplitude; for a definite result in measuring a wavelength or a momentum, the square of the absolute value of the (complex) Fourier coefficient belonging to it provides the standard. The general statistical transformation theory of quantum mechanics (as developed by Dirac and Jordan) gives a complete review of the measurable physical quantities for a microscopic mass point (or a system of such points). According to this theory, two different measurable quantities A and B can be simultaneously determined with unlimited precision only if the operators or matrices that describe A and B commute—i.e., if AB = BA.
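
      The commutation criterion can be seen in a small matrix sketch. The spin components of a spin-1/2 particle, represented by the Pauli matrices, are an assumed example not drawn from the article: two of them fail to commute, so the corresponding observables cannot both be sharply determined, whereas a pair of commuting matrices represents quantities that can.

```python
import numpy as np

# Sketch: AB - BA for commuting and non-commuting observables.
sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli matrix sigma_x
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli matrix sigma_y
sz = np.array([[1, 0], [0, -1]], dtype=complex)    # Pauli matrix sigma_z

print(sx @ sy - sy @ sx)                  # nonzero matrix: sigma_x and sigma_y do not commute
print(sz @ np.eye(2) - np.eye(2) @ sz)    # zero matrix: a commuting pair
```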

      A transformation in which the nucleus emits an electron and a neutrino is called beta decay, an example of nuclear radioactivity. The forces that thus come to light are those of the so-called weak interactions (weak force). It has been experimentally determined that for these forces the symmetry associated with reflections in a mirror does not hold. At least in certain circumstances, however, a remnant of this symmetry survives in the form of the so-called CPT (for the initials of charge/parity/time) theorem. This theorem states that basic physical laws remain of unaltered validity when a reflection of the space coordinates as in a mirror is combined with an interchange of positive and negative charge (which is largely synonymous with the interchange of particles and antiparticles) and with a reversal of the direction of time. Whether or not this symmetry law is valid without exception is by no means fully clarified at present.

Problems at the macrophysical level
      Proceeding from the properties of atoms and molecules that are described in terms of quantum theory, a theory of macrophysical substance aggregates has been built using statistical mechanics. The theories of heat, of gases, and of solid-state aggregates (crystal lattices) have been extensively clarified. Only the liquid state still poses certain unsolved problems for the statistical theory of heat.

      In any case, Newtonian mechanics may be derived as a macroscopic consequence of the laws of the mechanics of atoms, and its validity for the motions of astronomical bodies presents no problem. It is not so simple to prove, however, that the statements of Newtonian mechanics for rotating bodies (i.e., the mechanical laws of centrifugal force and the Coriolis force) may be established from Einstein's general theory of relativity. Ernst Mach (Mach, Ernst), a physicist and philosopher of science whose train of thought has substantially fertilized the modern development of physics from the point of view of the theory of knowledge, raised objections against Newton's idea that centrifugal force is a consequence of the absolute rotation of a body; he asserted instead that the rotation of a body relative to the very distant giant mass of the universe was the true cause of centrifugal force. This idea, often referred to as Mach's principle, has been corroborated, though in a different form, within the conceptual framework of Einstein's general theory. An irrotational coordinate system—specifically, a system not rotating with respect to the fixed stars (or, better, to the spiral nebulae)—is distinguished from a rotating system by the difference in the metric field for the two cases (i.e., in the properties of their respective space–times as expressed by the equation given above for the interval between two events). It is true that the local metric field (in the vicinity of the solar system) is influenced by the distant masses of the universe, but of course only in the sense of a local-action principle and therefore in no way such that the metric in the solar system is directly given as a function of these distant masses and of their motions.

      The question of the precise circumstances in which Mach's principle can still be defended on the basis of Einstein's theory is somewhat complicated and thus remains obscure. In any case, it is certain that a deduction of this principle from Einstein's theory can only be given in conjunction with a complete solution of the cosmological problem; i.e., of the problem of what are the overall geometric and dynamic properties of the universe considered in its totality. The remaining problems involved in justifying the application of classical Newtonian mechanics in astronomy by means of Einstein's theory contain, however, no additional fundamental difficulties.

      There is one more influence of cosmological relationships upon macroscopic physics, which arises in connection with thermodynamics. The existence of irreversible processes in thermodynamics indicates a distinction between the positive and negative directions in time. As Clausius recognized in the 19th century, this irreversibility reflects a quantity, first defined by him, called entropy, which measures the degree of randomness evolving from all physical processes by which their energies tend to degrade into heat. Entropy can only increase in the positive direction of time. In fact, the increase in entropy during a process is a measure of the irreversibility of that process. In contrast, it is true of the quantum theory of the atom that the positive and negative directions in time are equally justifiable (in the sense of the principle of CPT symmetry). Consequently, it is difficult to understand how statistical mechanics can make possible a thermodynamics in which the entropy grows with time.

      It is true that there are fluctuating thermodynamic phenomena, even in a system in overall thermodynamic equilibrium—and here theory and experiment agree. Thus, the states that arise within any small partial volume of the system may be not only those that are thermodynamically most probable but also transitory deviations from the most probable state. The mention of these fluctuations, however, does not help to remove the above paradox.

      Most physicists now hold that, until recently, this problem was treated erroneously in the usual textbook presentations. In the statistical theory of heat, entropy was regarded as proportional to the logarithm of the thermodynamic probability (probability and statistics), and students came to regard it as a necessity of thought that nature progresses from states of lower probability to states of higher probability. In truth, however, the increase of entropy is a real physical property of the positive direction in time. Nonetheless, it was supposed that shaking a vessel containing red and white balls (or even grains of sand) in an originally ordered condition with the two colours neatly separated must result in a condition of extreme intermixing of balls. This result does not correspond, however, to some necessity of thought but to an empirical property of the real universe in which men live and experiment.
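
      The shaken-vessel picture can be made concrete in a toy simulation, offered only as an illustration with assumed numbers of balls and swaps: 50 red and 50 white balls start perfectly separated, random swaps are applied, and the logarithm of the number of arrangements compatible with the observed degree of mixing rises from zero toward its maximum.

```python
import math
import random

# Sketch: random swaps drive an ordered arrangement toward the most probable (mixed) one.
random.seed(1)
balls = ['R'] * 50 + ['W'] * 50          # left half all red, right half all white

def log_multiplicity(balls):
    k = balls[:50].count('R')            # red balls currently in the left half
    return math.log(math.comb(50, k) * math.comb(50, 50 - k))

for step in range(3001):
    if step % 600 == 0:
        print(step, round(log_multiplicity(balls), 2))   # grows from 0 toward ~65
    i, j = random.randrange(100), random.randrange(100)
    balls[i], balls[j] = balls[j], balls[i]
```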

      This interpretation, which was held for a long time and has only quite recently been recognized as erroneous, was allegedly supported by a famous mathematical theorem of Boltzmann, which seemed to show that, in an ideal gas whose entropy—for a given number of particles and a given total energy—was not yet at its maximum value, the entropy must increase. If collisions of gas molecules are characterized by velocity vectors that are mechanically allowable (both before and after the collision), and if these vectors must satisfy both energy and momentum conservation, then what Boltzmann actually proved is that the entropy increase follows only when a correct count of the collision rate is made, according to which every kind of collision of gas molecules has a frequency of occurrence proportional to the product of the numbers of molecules possessing the velocities that existed before the impact.

      As one can subsequently see, the positive direction in time is already marked out by the collision rate count in a manner that no longer corresponds to the CPT principle. Although it is in fact possible to reason out the continuous increase in entropy on this basis, the paradox is not overcome. The question then remains of how it is physically justifiable—i.e., how it can correspond to reality—to regard this principle of collision rates as valid even though it fundamentally contradicts the CPT principle.

      An answer to this paradox can now be given, thanks to the insight of modern theoreticians—among them Hermann Bondi (Bondi, Sir Hermann), a mathematician and cosmologist—who have shown that the entropy principle must be understood in the sense that in the universe (Cosmos) as a whole, one definite time direction is singled out, namely, the one for which the universe expands. The thermodynamic distinction of a positive direction in time—with increasing entropy on the macroscopic level and with collision rate counts on the microscopic level—results from an expansion of the universe. Surprisingly, the Hubble expansion of the system of all the galaxies—so named after Edwin Hubble, an extragalactic astronomer—thus displays physical effects right down to the level of everyday physics; specifically, when two bodies at different temperatures are brought into thermal contact, the temperature equalization that results is an irreversible process corresponding to an asymmetry of the positive and negative directions in time that depends upon the expansion of the universe (cosmology).

Problems at the cosmological level
      A mathematical discovery by Alexander Friedmann has become of great significance for the mathematical derivation of cosmological models from Einstein's general theory of relativity. According to Friedmann, if the average mass density is constant throughout space, the gravitational field equations can be satisfied by a metric that embraces a three-dimensional space of constant curvature together with a time coordinate t such that the radius of curvature R(t) is a definite function of time; and these cosmologies turn out differently depending upon whether the curvature of space is positive, negative, or zero. Among the models of the universe that are mathematically allowable are models in which the time coordinate may run through all values from zero to infinity, models in which the time is limited to a finite interval, and models in which it may run from minus infinity to plus infinity.
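
      The dependence on the sign of the curvature can be illustrated by a toy numerical integration. The sketch below is not drawn from the article: it assumes a matter-dominated Friedmann equation of the schematic form (dR/dt)² = C/R − k, with the constants set to 1 and arbitrary units, and simply steps it forward; the closed case (k = +1) halts its expansion, while the flat and open cases keep growing.

```python
import math

# Sketch: toy Euler integration of (dR/dt)^2 = C/R - k for curvature k = +1, 0, -1.
def expand(k, C=1.0, R0=1e-3, dt=1e-3, steps=4000):
    R, t = R0, 0.0
    while steps > 0 and C / R - k > 0:   # closed model stops expanding when C/R = k
        R += math.sqrt(C / R - k) * dt
        t += dt
        steps -= 1
    return t, R

for k, name in ((+1, "closed"), (0, "flat"), (-1, "open")):
    t, R = expand(k)
    print(name, "stopped integrating at t =", round(t, 2), "with R =", round(R, 3))
```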

      For a time, many specialists working in the field of cosmology found the so-called steady-state theory, first projected by an astronomer, Sir Fred Hoyle (Hoyle, Sir Fred), especially convincing. In a modified version, this theory was adapted to the Friedmann model by Bondi. By adopting the so-called perfect cosmological principle, which holds that the broadest features of the universe are the same at all times as well as at all places, the theory then satisfied the unusually high symmetry or homogeneity requirements not only of a three-dimensional space at constant time but also of the entire space-time manifold. This high degree of homogeneity was so convincing to many authors that, in deference to it, a fundamental deviation from Einstein's field equations was tolerated: Bondi and Hoyle supposed that a small but constant creation of hydrogen occurs in the intergalactic vacuum. This hypothesis was introduced in order to achieve, in spite of the Hubble expansion of space, a mass density that remained constant in the universe.

      This theory, which in spite of its deviation from Einstein's field equations certainly advocates an allowable hypothesis worthy of consideration, no longer seems tenable, however, because of the discovery of background radiation with a present temperature of about 3 kelvins, which is interpreted as a remnant of an original “big-bang (big-bang model)” beginning of the universe. It thus appears that it is no longer possible to uphold the steady-state theory or the perfect cosmological principle upon which it is based. Instead, one must favour either a Friedmann model, which has a beginning, from which it expands monotonically and without limit; or a Lemaître model, in which a quantity lambda, λ, called the cosmical constant, arises that is, mathematically, a constant of integration, and physically, a force of cosmic repulsion that partially neutralizes that of gravitational attraction, and which lends a curvature to space even in its empty regions. For both of these models the time coordinate increases without limit from some initial value, which would naturally be called zero. For the beginning of time, one thinks, moreover, of a singularity R(0) = 0 and thus of a space that at the null point of time is contracted to a mass point. Cyclical models that alternately expand and contract in an endless sequence have also been discussed.

      The empirical cosmological data, some of which, indeed, are more estimated than ascertained, seem to suggest that, in the present-day universe, the positive energy corresponding to the total rest mass of all the material existing in the universe may be exactly equal to the negative gravitational (gravitation) energy existing in the universe; thus, the total energy would then be equal to zero. This interesting singularity, however, needs further support. At one time, Dirac advocated the speculation that the total mass of the universe is not constant in time but is increasing—at a rate somewhat slower, however, than that in the steady-state theory. Ambartsumian (Ambartsumian, Viktor Amazaspovich)'s notion concerning prestellar material, which was mentioned above (see Problems at the formal level (nature, philosophy of)), could perhaps be considered support for this idea. Many further discussions have followed another conjecture by Dirac, according to which the gravitational constant G should be liable to change in the course of cosmic development. This constant would thus have to be considered a scalar field quantity, which in a Friedmann universe is approximately independent of the three space variables but dependent on the time variable. In spite of extensive theoretical deliberations on this theme, no decision has yet been reached.

      The way has been opened for some fundamental conjectures on certain emerging themes by the fact that the product of the mean mass density in the universe and the gravitational constant has the same order of magnitude as the square of the reciprocal of the radius of curvature of the universe. The aforementioned relation between the mass and gravitational energy in the universe presents a different expression for this ratio. The total mass of the universe divided by the proton mass probably has approximately the order of magnitude 10⁸⁰, according to present cosmological notions. The order of magnitude of the radius of curvature of the universe is approximately 10⁴⁰, when expressed as a multiple of an elementary length of which the order of magnitude is approximately that of the nuclear radius. Whether it is justifiable to presume that there is here a functional dependence—i.e., a proportionality of M to R²—is a question that for the present remains undecidable. The speculative attempt of Dirac to find an answer, however, is still—at least provisionally—judged with skepticism by the majority of physicists.

Pascual Jordan

Philosophy of biology
      The sharp increase in understanding of biological (biology) processes that has occurred since the mid-20th century has stimulated philosophical interest in biology to an extent unprecedented since the first formulation of evolutionary theory in the 1850s. Most of the problems of contemporary philosophy of biology are traditional questions now being investigated afresh in the light of scientific advances, particularly in molecular genetics, and new standards of philosophical rigour.

      This section discusses the chief topics in the philosophy of biology as well as recent developments in ancillary and related fields. For detailed treatment of ethical issues relating to the biological sciences, the natural environment, and health care, see bioethics. For discussion of philosophical criticisms of evolutionary theory inspired by religion, see evolution.

History
Teleology from Aristotle to Kant
 The philosophy of biology, like all of Western philosophy, began with the ancient Greeks. Although Plato (c. 428–c. 348 BC) was little interested in the subject, his student Aristotle (384–322), who for a time was a practicing biologist, had much to say about it. From a historical perspective, his most important contributions were his observations that biological organisms can be arranged in a hierarchy based on their structural complexity—an idea that later became the basis of the Great Chain of Being—and that organisms of different species nevertheless display certain systematic similarities, now understood to be indicative of a common evolutionary ancestry (see homology). More significant philosophically was Aristotle's view of causation, and particularly his identification of the notion of final causality, or causality with reference to some purpose, function, or goal (see teleology). Although it is not clear whether Aristotle thought of final causality as pertaining only to the domain of the living, it is certainly true that he considered it essential for understanding or explaining the nature of biological organisms. One cannot fully understand why the human eye or heart has the structure it does without taking into account the function the organ performs.

      The notion of final causality was taken for granted by most philosophers from the Hellenistic Age through the end of the Middle Ages. Indeed, philosophers and theologians in the medieval and early modern periods adopted it as the basis of an argument for the existence of God—the teleological argument (Christianity), also known as the argument from design, which was developed in sophisticated ways in the 19th and 20th centuries (see intelligent design). During the scientific revolution of the 17th century, however, final causes came to be regarded as unnecessary and useless in scientific explanation; the new mechanistic (mechanism) philosophy had no need for them. The English philosopher and scientist Francis Bacon (Bacon, Francis, Viscount Saint Alban (or Albans), Baron of Verulam) (1561–1626) likened them to the Vestal Virgins—decorative but sterile.

      Despite these criticisms, the notion of final causality persisted in biology, leading many philosophers to think that, in this respect at least, the biological sciences would never be the same as the physical sciences. Some, like the German Enlightenment philosopher Immanuel Kant (Kant, Immanuel) (1724–1804), regarded biology's reliance on final causality as an indication of its inherent inferiority to sciences like physics. Others, like the British historian and philosopher of science William Whewell (Whewell, William) (1794–1864), took it as demonstrating simply that different sciences are different and thus that a form of explanation that is appropriate in one field might not be appropriate in another.

Vitalism and positivism
 In the late 19th century, the question of the supposed inherent differences between the biological and the physical sciences took on new importance. Reaching back to the ideas of Aristotle, but also relying on more-recent theories promoted by the Count de Buffon (Buffon, Georges-Louis Leclerc, count de) (1707–88) and others, several philosophers and biologists began to argue that living organisms are distinguished from inert matter by their possession of a “life force” that animates them and propels their evolution into higher forms. The notion of an entelechy—a term used by Aristotle and adopted by the German biologist Hans Driesch (Driesch, Hans Adolf Eduard) (1867–1941)—or élan vital—introduced by the French philosopher Henri Bergson (Bergson, Henri) (1859–1941)—was widely accepted and became popular even outside academic circles. Ultimately, however, it fell out of favour, because it proved to have little direct scientific application. The difficulty was not that life force was not observable in the world (at least indirectly) but that it did not lead to new predictions or facilitate unified explanations of phenomena formerly thought to be unrelated, as all truly important scientific concepts do.

      The decline of vitalism, as the resort to such forces came to be known, had two important results. Some philosophers tried to find a way of preserving the autonomy of the biological sciences without resort to special forces or entities. Such theories, referred to as “holism” or “organicism,” attracted the attention of the British philosophers Alfred North Whitehead (Whitehead, Alfred North) (1861–1947) and Samuel Alexander (Alexander, Samuel) (1859–1938), who thought that the very order or structure of organisms distinguished them from nonliving things. Others turned to early 20th-century advances in logic and mathematics in an attempt to transform biology into something parallel to, if not actually a part of, the physical sciences. The most enthusiastic proponent of this approach, the British biologist and logician Joseph Woodger (1894–1981), attempted to formalize the principles of biology—to derive them by deduction from a limited number of basic axioms and primitive terms—using the logical apparatus of the Principia Mathematica (1910–13) by Whitehead and Bertrand Russell (Russell, Bertrand) (1872–1970).

      In the first half of the 20th century, Anglo-American philosophy (analytic philosophy) was dominated by a school of scientific empiricism known as Logical Positivism. Its leading figures—Rudolf Carnap (Carnap, Rudolf) (1891–1970), Carl Hempel (Hempel, Carl Gustav) (1905–97), Ernest Nagel (Nagel, Ernest) (1901–85), and R.B. Braithwaite (Braithwaite, R.B.) (1900–90)—together with the closely associated Karl Popper (Popper, Sir Karl) (1902–94), argued that genuine scientific theories, such as Newtonian astronomy, are hypothetico-deductive (hypothetico-deductive method), with theoretical entities occupying the initial hypotheses and natural laws the ultimate deductions or theorems. For the most part these philosophers were not particularly interested in the biological sciences. Their general assumption was that, insofar as biology is like physics, it is good science, and insofar as it is not like physics, it ought to be. The best one can say of modern biology, in their view, is that it is immature; the worst one can say is that it is simply second-rate.

Twentieth-century resurgence
 This uncharitable perspective was soon undermined, however, by at least three important developments. First, in the 1960s the biological sciences became philosophically much more complex and interesting, as the stunning breakthroughs in molecular biology of the previous decade—particularly the discovery in 1953 of the nature of the DNA molecule—were starting to bear fruit. For example, one could now study variation between or within populations quantitatively, rather than simply by estimation or guesswork. At the same time, there were major new developments and discoveries in the theory of evolution, especially as it applied to the study of social behaviour. It was therefore no longer possible for philosophers to dismiss biology as an inferior science merely because it did not resemble physics.

      Second, the conception of science advocated by logical positivists came under attack. Drawing on the work of the philosopher and historian of science Thomas Kuhn (Kuhn, Thomas S.) (1922–96), critics argued that the picture of scientific theories as structurally uniform and logically self-contained was ahistorical and unrealistic. Accordingly, as philosophers broadened their appreciation of scientific-theory construction in the real world, they became increasingly interested in biology as an example of a science that did not fit the old logical-positivist paradigm.

      Third, in the early 1960s the history of science (science, history of) began to emerge as a distinct academic discipline. Its rapid growth attracted the attention of philosophers of science and helped to strengthen the new consensus among them that an appreciation of the history of science is necessary for a proper philosophical understanding of the nature of science and scientific theorizing. Significant new work by historians of science on the development of evolutionary theory was taken up by philosophers for use in the explication of the nature of science as it exists through time.

      In this newly receptive intellectual climate, research in the philosophy of biology proceeded rapidly, and the influence and prestige of the discipline grew apace. New professional organizations and journals were established, and the area soon became one of the most vital and thriving disciplines within philosophy. Although the philosophy of biology is still marked by a concentration on evolutionary theory as opposed to other subjects in the life sciences, this may simply reflect the fact that evolution is an especially interesting and fertile topic for philosophical analysis.

Topics in the philosophy of biology
 Without doubt, the chief event in the history of evolutionary theory was the publication in 1859 of On the Origin of Species, by Charles Darwin (Darwin, Charles) (1809–82). Arguing for the truth of evolutionary theory may be conceived as involving three tasks: namely, establishing the fact of evolution—showing that it is reasonable to accept a naturalistic, or law-bound, developmental account of life's origins; identifying, for various different species, the particular path, or phylogeny, through which each evolved; and ascertaining a cause or mechanism of evolutionary change. In On the Origin of Species, Darwin accomplished the first and the third of these tasks (he seemed, in this and subsequent works, not to be much interested in the second). His proposal for the mechanism of evolutionary change was natural selection, popularly known as “survival of the fittest.” Selection comes about through random and naturally occurring variation in the physical features of organisms and through the ongoing competition within and between species for limited supplies of food and space. Variations that tend to benefit an individual (or a species) in the struggle for existence are preserved and passed on (“selected”), because the individuals (or species) that have them tend to survive.

 The notion of natural selection was controversial in Darwin's time, and it remains so today. The major early objection was that the term is inappropriate: if Darwin's basic point is that evolutionary change takes place naturally, without divine intervention, why should he use a term that implies a conscious choice or decision on the part of an intelligent being? Darwin's response was that the term natural selection is simply a metaphor, no different in kind from the metaphors used in every other branch of science. Some contemporary critics, however, have objected that, even treated as a metaphor, “natural selection” is misleading. One form of this objection comes from philosophers who dislike the use of any metaphor in science—because, they allege, metaphorical description in some sense conceals what is objectively there—while another comes from philosophers who merely dislike the use of this particular metaphor.

      The Darwinian response to the first form of the objection is that metaphors in science are useful and appropriate because of their heuristic role. In the case of “natural selection,” the metaphor points toward, and leads one to ask questions about, features that have adaptive value—that increase the chances that the individual (or species) will survive; in particular it draws attention to how the adaptive value of the feature lies in the particular function it performs.

      The second form of the objection is that the metaphor inclines one to see function and purpose where none in fact exist. The Darwinian response in this case is to acknowledge that there are indeed examples in nature of features that have no function or of features that are not optimally adapted to serve the function they apparently have. Nevertheless, it is not a necessary assumption of evolutionary theory that every feature of every organism is adapted to some purpose, much less optimally adapted. As an investigative strategy, however, the assumption of function and purpose is useful, because it can help one to discover adaptive features that are subtle or complex or for some other reason easy to overlook. As Kuhn insisted, the benefit of good intellectual paradigms is that they encourage one to keep working to solve puzzles even when no solution is in sight. The best strategy, therefore, is to assume the existence of function and purpose until one is finally forced to conclude that none exists. It is a bigger intellectual sin to give up looking too early than to continue looking too long.

      Although the theory of evolution by natural selection was first published by Darwin, it was also proposed, independently, by Darwin's colleague, the British naturalist Alfred Russel Wallace (Wallace, Alfred Russel) (1823–1913). At Wallace's urging, later editions of On the Origin of Species used a term coined by Herbert Spencer (Spencer, Herbert) (1820–1903), survival of the fittest, as an equivalent of natural selection. This usage, unfortunately, led to countless (and continuing) debates about whether the thesis of natural selection is a substantive claim about the real world or simply a tautology (a statement, such as “All bachelors are unmarried,” that is true by virtue of its form or the meaning of its terms). If the thesis of natural selection is equivalent to the claim that those that survive are the fittest, and if the fittest are identified as those that survive, then the thesis of natural selection is equivalent to the claim that those that survive are those that survive—true indeed, but hardly an observation worthy of science.

      Defenders of Darwin have issued two main responses to this charge. The first, which is more technically philosophical, is that, if one favours the semantic view of theories, then all theories are made of models that are in themselves a priori—that is, not, as such, claims about the real world but rather idealized pictures of it. To fault selection claims on these grounds is therefore unfair, because, in a sense, all scientific claims start in this way. It is only when one begins to apply models, seeing if they are true of reality, that empirical claims come into play. There is no reason why this should be less true of selection claims than of any other scientific claims. One could claim that camouflage is an important adaptation, but it is another matter actually to claim (and then to show) that dark animals against a dark background do better than animals of another colour.

      The second response to the tautology objection, which is more robustly scientific, is that no Darwinian has ever claimed that the fittest always survive; there are far too many random events in the world for such a claim to be true. However fit an organism may be, it can always be struck down by lightning or disease or any kind of accident. Or it may simply fail to find a mate, ensuring that whatever adaptive feature it possesses will not be passed on to its progeny. Indeed, work by the American population geneticist Sewall Wright (Wright, Sewall) (1889–1989) has shown that, in small populations, the less fit might be more successful than the more fit, even to the extent of replacing the more fit entirely, owing to random but relatively significant changes in the gene pool, a phenomenon known as genetic drift.
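
      Wright's point lends itself to a simple numerical illustration. The following sketch (a schematic simulation with hypothetical fitness values and population size, not drawn from Wright's own work) follows a small population in which a slightly fitter variant competes with a less fit one; over many runs, the fitter variant is lost a noticeable fraction of the time purely by chance.

import random

def drift_until_fixation(pop_size=20, p_start=0.5, w_fit=1.05, w_unfit=1.0):
    """Simulate a two-variant Wright-Fisher population until one variant fixes."""
    p = p_start  # current frequency of the fitter variant
    while 0.0 < p < 1.0:
        # Selection: weight the fitter variant by its (hypothetical) advantage...
        mean_w = p * w_fit + (1.0 - p) * w_unfit
        p_sel = p * w_fit / mean_w
        # ...then drift: the next, small generation is a random sample.
        offspring = sum(random.random() < p_sel for _ in range(pop_size))
        p = offspring / pop_size
    return "fit" if p == 1.0 else "unfit"

random.seed(1)
trials = 1000
lost = sum(drift_until_fixation() == "unfit" for _ in range(trials))
print(f"fitter variant lost by drift in {lost} of {trials} small populations")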

      What the thesis of natural selection, or survival of the fittest, really claims, according to Darwinians, is not that the fittest always survive but that, on average, the more fit (or the fittest) are more successful in survival and reproduction than the less fit (or unfit). Another way of putting this is to say that the fit have a greater propensity toward successful survival and reproduction than the less fit.

      Undoubtedly part of the problem with the thesis of natural selection is that it seems to rely on an inductive generalization regarding the regularity of nature (see induction). Natural selection can serve as a mechanism of evolutionary change, in other words, only on the assumption that a feature that has adaptive value to an individual in a given environment—and is consequently passed on—also will have value to other individuals in similar environments. This assumption is apparently one of the reasons why philosophers who are skeptical of inductive reasoning—as was Popper—tend not to feel truly comfortable with the thesis of natural selection. Setting aside the general problem of induction (induction, problem of), however, one may ask whether the particular assumption on which the thesis of natural selection relies is rationally justified. Some philosophers and scientists, such as the evolutionary biologist Richard Dawkins, think not only that it is justified but that a much stronger claim also is warranted: namely, that wherever life occurs—on this planet or any other—natural selection will occur and will act as the main force of evolutionary change. In Dawkins's view, natural selection is a natural law.

      Other philosophers and scientists, however, are doubtful that there can be any laws in biology, even assuming there are laws in other areas of science. Although they do not reject inductive inference per se, they believe that generalizations in biology must be hedged with so many qualifications that they cannot have the necessary force one thinks of as characteristic of genuine natural laws. (For example, the initially plausible generalization that all mammals give birth to live young must be qualified to take into account the platypus.) An intermediate position is taken by those who recognize the existence of laws in biology but deny that natural selection itself is such a law. Darwin certainly thought of natural selection as a law, very much like Newton's law of gravitational attraction; indeed, he believed that selection is a force that applies to all organisms, just as gravity is a force that applies to all physical objects. Critics, however, point out that there does not seem to be any single phenomenon that could be identified as a “force” of selection. If one were to look for such a force, all one would actually see are individual organisms living and reproducing and dying. At best, therefore, selection is a kind of shorthand for a host of other processes, which themselves may or may not be governed by natural laws.

      In response, defenders of selection charge that these critics are unduly reductionistic. In many other areas of science, they argue, it is permissible to talk of certain phenomena as if they were discrete entities, even though the terms involved are really nothing more than convenient ways of referring, at a certain level of generality, to complex patterns of objects or events. If one were to look for the pressure of a gas, for example, all one would actually see are individual molecules colliding with each other and with the walls of their container. But no one would conclude from this that there is no such thing as pressure. Likewise, the fact that there is nothing to see beyond individual organisms living and reproducing and dying does not show that there is no such thing as selection.

Levels of selection
      Darwin held that natural selection operates at the level of the individual. Adaptive features are acquired by and passed on to individual organisms, not groups or species, and they benefit individual organisms directly and groups or species only incidentally. One type of case, however, did cause him worry: in nests of social insects, there are always some members (the sterile workers) who devote their lives entirely to the well-being of others. How could a feature for self-sacrifice be explained, if adaptive features are by definition beneficial to the individual rather than to the group? Eventually Darwin decided that the nest as a whole could be treated as a kind of superorganism, with the individual members as parts; hence the individual benefiting from adaptation is the nest rather than any particular insect.

      Wallace differed from Darwin on this question, arguing that selection sometimes operates at the level of groups and hence that there can be adaptive features that benefit the group at the expense of the individual. When two groups come into conflict, members of each group will develop features that help them to benefit other group members at their own expense (i.e., they become altruists). When one group succeeds and the other fails, the features for altruism developed in that group are selected and passed on. For the most part Darwin resisted this kind of thinking, though he made a limited exception for one kind of human behaviour, allowing that morality, or ethics, could be the result of group selection rather than individual selection. But even in this case he was inclined to think that benefits at the level of individuals might actually be more important, since some kinds of altruistic behaviour (such as grooming) tend to be reciprocated.

      Several evolutionary theorists after Darwin took for granted that group selection is real and indeed quite important, especially in the evolution of social behaviour. Konrad Lorenz (Lorenz, Konrad) (1903–89), the founder of modern ethology, and his followers made this assumption the basis of their theorizing. A minority of more-conservative Darwinians, meanwhile—notably Ronald Aylmer Fisher (Fisher, Sir Ronald Aylmer) (1890–1962) and J.B.S. Haldane (Haldane, J.B.S.) (1892–1964)—resisted such arguments. In the 1960s, the issue came to the fore, and for a while group selection was dismissed entirely. Some theorists, notably the American evolutionary biologist George C. Williams, argued that individual interests would always outweigh group interests, since genes associated with selfish behaviour would inevitably spread at the expense of genes associated with altruism. Other researchers showed how apparent examples of group selection could be explained in individualistic terms. Most notably, the British evolutionary biologist W.D. Hamilton (1936–2000) showed how social behaviour in insects can be explained as a form of “kin selection” beneficial to individual interests. In related work, Hamilton's colleague John Maynard Smith (1920–2004) employed the insights of game theory to explain much social interaction from the perspective of individual selection.
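
      Hamilton's reasoning is usually compressed into what has come to be called Hamilton's rule, the standard textbook formalization of kin selection (the rule itself is not stated in the account above): a gene disposing its bearer to help relatives can spread whenever

          rb > c,

      where r is the genetic relatedness between helper and helped, b the reproductive benefit conferred on the recipient, and c the reproductive cost paid by the helper. On this accounting, the self-sacrifice of a sterile insect worker can still serve the interests of its own genes, provided the relatives it aids are close enough and benefit enough.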

      Throughout these debates, however, no one denied the possibility or even the actuality of group selection—the issue was rather its extent and importance. Fisher, for example, always supposed that reproduction through sex must be explained in such a fashion. (Sexual reproduction benefits the group because it enables valuable features to spread rapidly, but it generally benefits the individual mother little or not at all.) In the 1970s the group-selection perspective enjoyed a resurgence, as new models were devised to show that many situations formerly understood solely in terms of individual interests could be explained in terms of group interests as well. The American entomologist Edward O. Wilson (Wilson, Edward O.), later recognized as one of the founders of sociobiology, argued that ants of the genus Pheidole are so dependent upon one another for survival that Darwin's original suggestion about them was correct: the nest is a superorganism, an individual in its own right. Others argued that only a group-selection perspective is capable of explaining certain kinds of behaviour, especially human moral behaviour. This was the position of the American biologist David S. Wilson (no relation to Edward O. Wilson) and the American philosopher Elliott Sober.

      In some respects the participants in these debates have been talking past each other. Should a pair of organisms competing for food or space be regarded as two individuals struggling against each other or as a group exhibiting internal conflict? Depending on the perspective one takes, such situations can be seen as examples of either individual or group selection. A somewhat more significant issue arose when some evolutionary theorists in the early 1970s began to argue that the level at which selection truly takes place is that of the gene. The “genic selection” approach was initially rejected by many as excessively reductionistic. This hostility was partly based on misunderstanding, which is now largely removed thanks to the efforts of some scholars to clarify what genic selection can mean. What it cannot mean—or, at least, what it can only rarely mean—is that genes compete against each other directly. Only organisms engage in direct competition. Genes can play only the indirect role of encoding and transmitting the adaptive features that organisms need to compete successfully. Genic selection therefore amounts to a kind of counting, or ledger keeping, insofar as it results in a record of the relative successes and failures of different kinds of genes. In contrast, “organismic selection,” as it may be called, refers to the successes and failures at the level of the organism. Both genic and organismic selection are instances of individual selection, but the former refers to the “replicators”—the carriers of heredity—and the latter to the “vehicles”—the entities in which the replicators are packaged.

      Could there be levels of selection even higher than the group? Could there be “species selection”? This was the view of the American paleontologist Stephen Jay Gould (Gould, Stephen Jay) (1941–2002), who argued that selection at the level of species is very important in macroevolution—i.e., the evolution of organisms over very long periods of time (millions of years). It is important to understand that Gould's thesis was not simply that there are cases in which the members of a successful species possess a feature that the members of a failed species do not and that possession of the feature makes the difference between success and failure. Rather, he claimed that species can produce emergent features—features that belong to the species as a whole rather than to individual members—and that these features themselves can be selected for.

      One example of such a feature is reproductive isolation, a relation between two or more groups of organisms that obtains when they cannot interbreed (e.g., human beings and all other primates). Gould argued that reproductive isolation could have important evolutionary consequences, insofar as it delimits the range of features (adaptive or otherwise) that members of a given species may acquire. Suppose the members of one species are more likely to wander around the area in which they live than are members of another species. The first species could be more prone to break up and speciate than the second species. This in turn might lead to greater variation overall in the descendants of the first species than in the descendants of the second, and so forth. Critics responded that, even if this is possibly so, the ultimate variation seems not to have come about because it was useful to anyone but rather as an accidental by-product of the speciation process—a by-product of wandering. To this Gould replied that perhaps species selection does not in itself promote adaptation at any level, even the highest. Naturally, to conventional Darwinians this was so unsatisfactory a response that they were inclined to withhold the term “selection” from the whole process, whether or not it could be said to exist and to be significant.

Testing
      One of the oldest objections to the thesis of natural selection is that it is untestable. Even some of Darwin's early supporters, such as the English biologist T.H. Huxley (Huxley, T.H.) (1825–95), expressed doubts on this score. A modern form of the objection was raised in the early 1960s by the British historian of science Martin Rudwick, who claimed that the thesis is uncomfortably asymmetrical. Although it can be tested positively, since features found to have adaptive value count in its favour, it cannot really be tested negatively, since features found not to have adaptive value tend to be dismissed as not fully understood or as indicative of the need for further work. Too often and too easily, according to the objection, supporters of natural selection simply claim that, in the fullness of time, apparent counterexamples will actually prove to support their thesis, or at least not to undermine it.

      Naturally enough, this objection attracted the sympathetic attention of Popper (Popper, Sir Karl), who had proposed a principle of “falsifiability” as a test of whether a given hypothesis is genuinely empirical (and therefore scientific). According to Popper, it is the mark of a pseudoscience that its hypotheses are not open to falsification by any conceivable test. He concluded on this basis that evolutionary theory is not a genuine science but merely a “metaphysical research programme.”

      Supporters of natural selection responded, with some justification, that it is simply not true that no counterevidence is possible. They acknowledged that some features are obviously not adaptive in some respects: in human beings, for example, walking upright causes chronic pain in the lower back, and the size of the infant's head relative to that of the birth canal causes great pain for females giving birth.

      Nevertheless, the fact is that evolutionary theorists must often be content with less than fully convincing evidence when attempting to establish what the adaptive value—if any—of a particular feature may be. Ideally, investigations of this sort would trace phylogenies and check genetic data to establish certain preliminary adaptive hypotheses, then test the hypotheses in nature and in laboratory experiments. In many cases, however, only a few avenues of testing will be available to researchers. Studies of dinosaurs, for example, cannot rely to any significant extent upon genetic evidence, and the scope for experiment is likewise very limited and necessarily indirect. A defect that is liable to appear in any investigation in which the physical evidence available is limited to the structure of the feature in question—perhaps in the form of fossilized bones—is the circular use of structural evidence to establish a particular adaptive hypothesis that one has already decided is plausible; other possible adaptations, just as consistent with the limited evidence available, are ignored. Although in these cases a certain amount of inference in reverse—in which one begins with a hypothesis that seems plausible and sees whether the evidence supports it—is legitimate and even necessary, some critics, including the American morphologist George Lauder, have contended that the pitfalls of such reasoning have been insufficiently appreciated by evolutionary theorists.

      Various methods have been employed to improve the soundness of tests used to evaluate adaptive hypotheses. The “comparative method,” which involves considering evidence drawn from a wide range of similar organisms, was used in a study of the relatively large size of the testicles of chimpanzees as compared to those of gorillas. The adaptive hypothesis was that, given that the average female chimpanzee has several male sexual partners, greater sperm production, and therefore larger testicles, would be an adaptive advantage for an individual male competing with other males to reproduce. The hypothesis was tested by comparing the sexual habits of chimpanzees with those of gorillas and other primates: if testicle size were not correlated with the average number of male sexual partners in the right way, the hypothesis would be disproved. In fact, however, the study found that the hypothesis was well supported by the evidence.
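
      The logic of such a comparative test can be sketched in a few lines of code. The figures below are invented for illustration only (they are not the study's data), and a simple rank correlation stands in for the more careful statistics an actual comparative analysis would require; the hypothesis predicts that relative testicle size should rise with the typical number of mating partners across species.

def rank(values):
    """Rank each value, 1 = smallest (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation for equal-length lists without ties."""
    n = len(xs)
    rx, ry = rank(xs), rank(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical species data: (relative testicle size, mean number of partners).
data = {
    "species A (chimpanzee-like)": (2.7, 8),
    "species B (gorilla-like)":    (0.2, 1),
    "species C":                   (1.1, 3),
    "species D":                   (0.6, 2),
    "species E":                   (1.8, 5),
}
sizes, partners = zip(*data.values())
rho = spearman(list(sizes), list(partners))
print(f"rank correlation = {rho:.2f}")  # a value near +1 supports the hypothesis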

      A much more controversial method is the use of so-called “optimality models.” The researcher begins by assuming that natural selection works optimally, in the sense that the feature (or set of features) eventually selected represents the best adaptation for performing the function in question. For any given function, then, the researcher checks to see whether the feature (or set of features) is indeed the best adaptation possible. If it is, then “optimal adaptation” is partially confirmed; if it is not, then either optimal adaptation is partially disconfirmed, or the function being performed has been misunderstood, or the background assumptions are faulty.
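
      As a bare illustration of the procedure (the payoff function, the numbers, and the trait are all hypothetical rather than taken from any published model), a model can be made to predict the trait value that would be optimal under its assumptions, and the prediction then compared with what is actually observed:

def net_benefit(trait, cost_per_unit=0.4):
    """Hypothetical payoff: a saturating benefit minus a linear cost."""
    return trait / (1.0 + trait) - cost_per_unit * trait

def predicted_optimum(lo=0.0, hi=5.0, steps=10000):
    """Brute-force search for the trait value the model deems best."""
    candidates = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return max(candidates, key=net_benefit)

observed = 0.9   # hypothetical measured trait value
optimum = predicted_optimum()
print(f"model optimum = {optimum:.2f}, observed = {observed:.2f}")
# A large mismatch sends the researcher back to the model: is optimal
# adaptation wrong here, has the trait's function been misidentified, or is
# some background assumption faulty?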

      Not surprisingly, some critics have objected that optimality models are just another example of the near-circular reasoning that has characterized evolutionary theorizing from the beginning. Whether this is true or not, of course, depends on what one takes the studies involving optimality models to prove. John Maynard Smith, for one, denies that they constitute proof of optimal adaptation per se. Rather, optimal adaptation is assumed as something like a heuristic, and the researcher then goes on to try to uncover particular adaptations at work in particular situations. This way of proceeding does not preclude the possibility that particular adaptive hypotheses will turn out to be false. Other researchers, however, argue that the use of optimality models does constitute a test of optimal adaptation; hence, the presence of disconfirming evidence must be taken as proof that optimal adaptation is incorrect.

      As most researchers use them, however, optimality models seem to be neither purely heuristic nor purely empirical. They are used as something like a background assumption, but their details are open to revision if they prove inconsistent with empirical evidence. Thus their careful use does not constitute circular reasoning but a kind of feedback, in which one makes adjustments in the premises of the argument as new evidence warrants, the revised premises then indicating the kind of additional evidence one needs to look for. This kind of reasoning is complicated and difficult, but it is not fallacious.

Reductionism
      A major topic in many fields of philosophy, but especially in the philosophy of science, is reductionism. There are at least three distinct kinds of reductionism: ontological, methodological, and theoretical. Ontological reductionism is the metaphysical doctrine that entities of a certain kind are in reality collections or combinations of entities of a simpler or more basic kind. The pre-Socratic doctrine that the physical world is ultimately composed of different combinations of a few basic elements—e.g., earth, air, fire, and water—is an example of ontological reductionism. Methodological reductionism is the closely related view that the behaviour of entities of a certain kind can be explained in terms of the behaviour or properties of entities of another (usually physically smaller) kind. Finally, theoretical reductionism is the view in the philosophy of science that the entities and laws posited in older scientific theories can be logically derived from newer scientific theories, which are therefore in some sense more basic.

      Since the decline of vitalism, which posited a special nonmaterial life force, ontological reductionism has been nearly universally accepted by philosophers and scientists, though a small number have advocated some form of mind-body dualism (mind–body dualism), among them Karl Popper and the Australian physiologist and Nobel laureate John Eccles (Eccles, Sir John Carew) (1903–97). Methodological reductionism also has been universally accepted since the scientific revolution of the 17th century, and in the 20th century its triumphs were outstanding, particularly in molecular biology.

      The logical positivists of the 20th century advocated a thorough-going form of theoretical reductionism according to which entire fields of physical science are reducible, in principle, to other fields, in particular to physics. The classic example of theoretical reduction was understood to be the derivation of Newtonian mechanics from Einstein's theories of special and general relativity. The relationship between the classic theory of genetics proposed by Gregor Mendel (Mendel, Gregor) (1822–84) and modern molecular genetics also seemed to be a paradigmatic case of theoretical reduction. In the older theory, laws of segregation and independent assortment, among others, were used to explain macroscopic physical characteristics like size, shape, and colour. These laws were derived from the laws of the newer theory, which governed the formation of genes and chromosomes (chromosome) from molecules of DNA and RNA, by means of “bridge principles” that identified entities in the older theory with entities (or combinations of entities) in the newer one, in particular the Mendelian unit of heredity with certain kinds of DNA molecule. By being reduced in this way, Mendelian genetics was not replaced by molecular genetics but rather absorbed by it.

      In the 1960s the reductionist program of the logical positivists came under attack by Thomas Kuhn (Kuhn, Thomas S.) and his followers, who argued that, in the history of science, the adoption of new “paradigms,” or scientific worldviews, generally results in the complete replacement rather than the reduction of older theories. Kuhn specifically denied that Newtonian mechanics had been reduced by relativity. Philosophers of biology, meanwhile, advanced similar criticisms of the purported reduction of Mendelian genetics by molecular genetics. It was pointed out, for example, that in many respects the newer theory simply contradicted the older one and that, for various reasons, the Mendelian gene could not be identified with the DNA molecule. (One reason was that Mendel's gene was supposed to be indivisible, whereas the DNA molecule can be broken at any point along its length, and in fact molecular genetics assumes that such breaking takes place.) Some defenders of reductionism responded to this criticism by claiming that the actual object of reduction is not the older theory of historical fact but a hypothetical theory that takes into account the newer theory's strengths—something the Hungarian-born British philosopher Imre Lakatos (1922–74) called a “rational reconstruction.”

      Philosophical criticism of genetic reductionism persisted, however, culminating in the 1980s in a devastating critique by the British-born American philosopher Philip Kitcher, who denied the possibility, in practice and in principle, of any theoretical reduction of the sort envisioned by the logical positivists. In particular, no scientific theory is formalized as a hypothetico-deductive system as the positivists had contended, and there are no genuine “bridge principles” linking entities of older theories to entities of newer ones. The reality is that bits and pieces of newer theories are used to explain, extend, correct, or supplement bits and pieces of older ones. Modern genetics, he pointed out, uses molecular concepts but also original Mendelian ones; for example, molecular concepts are used to explain, not to replace, the Mendelian notion of mutation. The straightforward logical derivation of older theories from newer ones is simply a misconception.

      Reductionism continues to be defended by some philosophers, however. Kitcher's former student C. Kenneth Waters, for example, argues that the notion of reduction can be a source of valuable insight into the relationships between successive scientific theories. Moreover, critics of reductionism, he contends, have focused on the wrong theories. Although strict Mendelian genetics is not easily reduced by the early molecular genetics of the 1950s, the much richer classical theory of the gene, as developed by the American Nobel laureate Thomas Hunt Morgan (Morgan, Thomas Hunt) (1866–1945) and others in the 1910s, comes close to being reducible by the sophisticated molecular genetics of recent decades; the connections between the latter two theories are smoothly derivative in a way that would have pleased the logical positivists. The ideal of a complete reduction of one science by another is out of reach, but reduction on a smaller scale is possible in many instances.

      At least part of this controversy arises from the contrasting visions of descriptivists and prescriptivists in the philosophy of science. No one on either side of the debate would deny that theoretical reduction in a pure form has never occurred and never will. On the other hand, the ideal of theoretical reduction can be a useful perspective from which to view the development of scientific theories over time, yielding insights into their origins and relationships that might otherwise not be apparent. Many philosophers and scientists find this perspective attractive and satisfying, even as they acknowledge that it fails to describe scientific theories as they really are.

Form and function
      Evolutionary biology is faced with two major explanatory problems: form and function. How is it possible to account for the forms of organisms and their parts and in particular for the structural similarities between organisms? How is it possible to account for the ways in which the forms of organisms and their parts seem to be adapted to certain functions? These topics are much older than evolutionary theory itself, having preoccupied Aristotle and all subsequent biologists. The French zoologist Georges Cuvier (Cuvier, Georges, Baron) (1769–1832), regarded as the father of modern comparative anatomy, believed that function is more basic than form; form emerges as a consequence of function. His great rival, Étienne Geoffroy Saint-Hilaire (Geoffroy Saint-Hilaire, Étienne) (1772–1844), championed form and downplayed function. Darwin, of course, was always more interested in function, and his thesis of natural selection was explicitly directed at the problem of explaining functional adaptation. Although he was certainly not unaware of the problem of form—what he called the “unity of type”—like Cuvier he thought that form was a consequence of function and not something requiring explanation in its own right.

      One of the traditional tools for studying form is embryology, since early stages of embryonic development can reveal aspects of form, as well as structural relationships with other organisms, that later growth conceals. As a scientist Darwin was in fact interested in embryology, though it did not figure prominently in the argument for evolution presented in On the Origin of Species. Subsequent researchers were much more concerned with form and particularly with embryology as a means of identifying phylogenetic histories and relationships. But with the incorporation of Mendelian and then molecular genetics into the theory of evolution starting in the early 20th century, resulting in what has come to be known as the “synthetic theory,” function again became preeminent, and interest in form and embryology declined.

      In recent years the pendulum has begun to swing once again in the other direction. There is now a vital and flourishing school of evolutionary developmental biology, often referred to as “evo-devo,” and along with it a resurgence of interest in form over function. Many researchers in evo-devo argue that nature imposes certain general constraints on the ways in which organisms may develop, and therefore natural selection, the means by which function determines form, does not have a free hand. The history of evolutionary development reflects these limitations.

      There are various levels at which constraints might operate, of course, and at certain levels the existence of constraints of one kind or another is not disputed. No one would deny, for example, that natural selection must be constrained by the laws of physics and chemistry. Since the volume (and hence weight) of an animal increases as the cube of its length, it is physically impossible for an elephant to be as agile as a cat, no matter how great an adaptive advantage such agility might provide. It is also universally agreed that selection is necessarily constrained by the laws of genetics.
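
      The physical reasoning behind the elephant example is the familiar square-cube scaling, a supplementary step not spelled out above: for an animal of characteristic length L,

          \text{weight} \propto L^{3}, \qquad \text{muscular and skeletal strength} \propto \text{cross-sectional area} \propto L^{2}, \qquad \text{hence} \quad \frac{\text{strength}}{\text{weight}} \propto \frac{1}{L},

      so the relative strength that makes a cat agile necessarily falls away as a body of the same design is scaled up to elephant size.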

      The more contentious cases arise in connection with apparent constraints on more specific kinds of functional adaptation. In a celebrated article with Richard Lewontin, Gould argued that structural constraints on the adaptation of certain features inevitably result in functionally insignificant by-products, which he compared to the spandrels (spandrel) in medieval churches—the roughly triangular areas above and on either side of an arch. Biological spandrels, such as the pseudo-penis of the female hyena, are the necessary result of certain adaptations but serve no useful purpose themselves. Once in the population, however, they persist and are passed on, often becoming nearly universal patterns or archetypes, what Gould referred to as Baupläne (German: “body plans”).

      According to Gould, other constraints operating at the molecular level represent deeply rooted similarities between animals that themselves may be as distant from each other as human beings and fruit flies. Humans have in common with fruit flies certain sequences of DNA, known as “homeoboxes,” that control the development and growth of bodily parts—determining, for example, where limbs will grow in the embryo. The fact that homeoboxes apparently operate independently of selection (since they have persisted unchanged for hundreds of millions of years) indicates that, to an important extent, form is independent of function.

      These arguments have been rejected by more-traditional Darwinists, such as John Maynard Smith and George C. Williams. It is not surprising, they insist, that many features of organisms have no obvious function, and in any case one must not assume too quickly that any apparent Bauplan is completely nonfunctional. Even if it has no function now, it may have had one in the past. A classic example of a supposedly nonadaptive Bauplan is the four-limbedness of vertebrates. Why do humans have four limbs rather than six, like insects? Maynard Smith and Williams agree that four-limbedness serves no purpose now. But when vertebrates were aquatic creatures, having two limbs fore and two hind was of great value for moving upward and downward in water. The same point applies at the molecular level. If homeoboxes did not work as well as they do, selection would soon have begun tampering with them. The fact that something does not change does not mean that it is not functional or that it is immune to selective pressure. Indeed, there is evidence that, in some cases and as the need arises, even the most basic and most long-lived of molecular strands can change quite rapidly, in evolutionary terms.

      The Scottish morphologist D'Arcy Wentworth Thompson (Thompson, Sir D'Arcy Wentworth) (1860–1948) advocated a form of antifunctionalism even more radical than Gould's, arguing that adaptation was often incorrectly attributed to certain features of organisms only because evolutionary theorists were ignorant of the relevant physics or mathematics. The dangling form of the jellyfish, for example, is not adaptive in itself but is simply the result of placing a relatively dense but amorphous substance in water. Likewise, the spiral pattern, or phyllotaxis, exhibited by pine cones or by the petals of a sunflower is simply the result of the mathematical properties of lattices. More-recent thinkers in this tradition, notably Stuart Kauffman in the United States and Brian Goodwin in Britain, argue that a very great deal of organic nature is simply the expression of form and is only incidentally functional.
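
      The appeal to lattices can be made concrete with a short computation. The sketch below uses Vogel's golden-angle model of a sunflower head (one standard mathematical model, chosen here for illustration; the text appeals only to the general properties of lattices): each successive element is rotated by a fixed angle of roughly 137.5° and pushed outward as the square root of its index, and the familiar interlocking spirals emerge with no selection, indeed no biology, involved.

import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))   # about 137.5 degrees

def phyllotaxis_points(n=200, scale=1.0):
    """Positions of n elements of a sunflower-like head under Vogel's model."""
    points = []
    for k in range(1, n + 1):
        r = scale * math.sqrt(k)      # radial growth with index
        theta = k * GOLDEN_ANGLE      # constant divergence angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

for x, y in phyllotaxis_points(10):
    print(f"{x:6.2f} {y:6.2f}")       # plotting all the points reveals the spirals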

      Defenders of function have responded to this criticism by claiming that it raises a false opposition. They naturally agree that physics and mathematics are important but insist that they are only part of the picture, since they cannot account for everything the evolutionary theorist is interested in. The fact that the form of the jellyfish is the result of the physics of fluids does not show that the form itself is not an adaptation—the dense and amorphous properties of the jellyfish could have been selected precisely because, in water, they result in a form that has adaptive value. Likewise for the shapes of pinecones and sunflower petals. The issue, therefore, is whether natural selection can take advantage of those physical properties of features that are specially determined by physics and mathematics. Even if there are some cases in which “order is for free,” as the antifunctionalists like to claim, there is no reason why selection cannot make use of it in one way or another. Jellyfish and sunflowers, after all, are both very well adapted to their environments.

Teleology
 A distinctive characteristic of the biological sciences, especially evolutionary theory, is their reliance on teleological language, or language expressive of a plan, purpose, function, goal, or end, as in: “The purpose of the plates on the spine of the Stegosaurus was to control body temperature.” In contrast, one does not find such language in the physical sciences. Astronomers do not ask, for example, what purpose or function the Moon serves (though many a wag has suggested that it was designed to light the way home for drunken philosophers). Why does biology have such language? Is it undesirable, a mark of the weakness of the life sciences? Can it be eliminated?

      As noted above, Aristotle provided a metaphysical justification of teleological language in biology by introducing the notion of final causality, in which reference to what will exist in the future is used to explain what exists or is occurring now. The great Christian philosophers of late antiquity and the Middle Ages, especially Augustine (Augustine, Saint) (354–430) and Thomas Aquinas (Aquinas, Thomas, Saint) (c. 1224–74), took the existence of final causality in the natural world to be indicative of its design by God. The eye serves the end of sight because God, in his infinite wisdom, understood that animals, human beings especially, would be better off with sight than without it. This perspective was commonplace among all educated people—not only philosophers, theologians, and scientists—until the middle of the 19th century and the publication of Darwin's On the Origin of Species. Although Darwin himself was not an atheist (he was probably sympathetic to Deism, believing in an impersonal god who created the world but did not intervene in it), he did wish to remove religion and theology from biology. One might expect, therefore, that the dissemination and acceptance of the theory of evolution would have had the effect of removing teleological language from the biological sciences. But in fact the opposite occurred: one can ask just as sensibly of a Darwinian as of a Thomist what end the eye serves.

      In the first half of the 20th century many philosophers and scientists, convinced that teleological explanations were inherently unscientific, made attempts to eliminate the notion of teleology from the biological sciences, or at least to interpret references to it in scientifically more acceptable terms. After World War II, intrigued by the example of weapons, such as torpedoes, that could be programmed to track their targets, some logical positivists suggested that teleology as it applies to biological systems is simply a matter of being “directively organized,” or “goal-directed,” in roughly the same way as a torpedo. (It is important to note that this sense of goal-directedness means not just being directed toward a goal but also having the capacity to respond appropriately to potentially disruptive change.) Biological organisms, according to this view, are natural goal-directed objects. But this fact is not really very remarkable or mysterious, since all it means is that organisms are natural examples of a system of a certain well-known kind.
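
      What “directively organized” amounts to can be shown with a toy example (the proportional-correction scheme and the numbers are illustrative assumptions, not the positivists' own formulation): a system that keeps closing the gap between its current state and its goal, even while random disturbances repeatedly knock it off course.

import random

def track_goal(goal=10.0, position=0.0, gain=0.3, steps=40, noise=1.0):
    """Proportional correction toward a goal despite random disturbance."""
    for _ in range(steps):
        disturbance = random.uniform(-noise, noise)   # disruptive change
        error = goal - position                       # how far off course?
        position += gain * error + disturbance        # corrective response
    return position

random.seed(0)
print(f"final position: {track_goal():.2f} (goal was 10.0)")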

      However, as pointed out by the embryologist C.H. Waddington (Waddington, C.H.) (1905–75), the biological notion of teleology seems not to be fully captured by this comparison, since the “adaptability” implied by goal-directedness is not the same as the “adaptation” or “adaptedness” evident in nature. The eye is not able to respond to change in the same way, or to the same extent, as a target-seeking torpedo; still, the structure of the eye is adapted to the end of sight. Adaptedness in this sense seems to be possible only as a result of natural selection, and the goal-directedness of the torpedo has nothing to do with that. Despite such difficulties, philosophers in the 1960s and '70s continued to pursue interpretations of biological teleology that were essentially unrelated to selection. Two of the most important such efforts were the “capacity” approach and the “etiological” approach, developed by the American philosophers Robert Cummins and Larry Wright, respectively.

      According to Cummins, a teleological system can be understood as one that has the capacity to do certain things, such as generate electricity or maintain body temperature (or ultimately life). The parts of the system can be thought of as being functional or purposeful in the sense that they contribute toward, or enable, the achievement of the system's capacity or capacities. Although many scientists have agreed that Cummins has correctly described the main task of morphology—to identify the individual functions or purposes of the parts of biological systems—his view does not seem to explicate teleology in the biological sense, since it does not treat purposefulness as adaptedness, as something that results from a process of selection. (It should be noted that Cummins probably would not regard this point as a criticism, since he considers his analysis to be aimed at a somewhat more general notion of teleology.)

      The etiological approach, though developed in the 1970s, was in fact precisely the same as the view propounded by Kant in his Critique of Judgment (1790). In this case, teleology amounts to the existence of causal relations in which the effect explains or is responsible for the cause. The serrated edge of a knife causes the bread to be cut, and at the same time the cutting of the bread is the reason for the fact that the edge of the knife is serrated. The eye produces vision, and at the same time vision is the reason for the existence of the eye. In the latter case, vision explains the existence of the eye because organisms with vision—through eyes or proto-eyes—do better in the struggle for survival than organisms without it; hence vision enables the creation of newer generations of organisms with eyes or proto-eyes.

      There is one other important component of the etiological approach. In a causal relation that is truly purposeful, the effect must be in some sense good or desired. A storm may cause a lake to fill, and in some sense the filling of the lake may be responsible for the storm (through the evaporation of the water it contains), but one would not want to say that the purpose of the storm is to fill the lake. As Plato noted in his dialogue the Phaedo, purpose is appropriate only in cases in which the end is good.

      The etiological approach interprets the teleological language of biology in much the same way Kant did—i.e., as essentially metaphorical. The existence of a kind of purposefulness in the eye does not license one to talk of the eye's designer, as the purposefulness of a serrated edge allows one to talk of the designer of a knife. (Kant rejected the teleological argument for the existence of God, also known as the argument from design.) But it does allow one to talk of the eye as if it were, like the knife, the result of design. Teleological language, understood metaphorically, is therefore appropriate to describe parts of biological organisms that characteristically seem as if they were designed with the good of the organism in mind, though they were not actually designed at all.

      Although it is possible to make sense of teleological language in biology, some philosophers still think that the science would be better off without it. Most, however, believe that attempting to eliminate it altogether would be going too far. In part their caution is influenced by recent philosophy of science, which has emphasized the important role that language, and particularly metaphor, has played in the construction and interpretation of scientific theories. In addition, there is a widespread view in the philosophy of language (language, philosophy of) and the philosophy of mind (mind, philosophy of) that human thinking is essentially and inevitably metaphorical. Most importantly, however, many philosophers and scientists continue to emphasize the important heuristic role that the notion of teleology plays in biological theorizing. By treating biological organisms teleologically, one can discover a great deal about them that otherwise would be hidden from view. If no one had asked what purpose the plates of the Stegosaurus serve, no one would have discovered that they do indeed regulate the animal's body temperature. And here lies the fundamental difference between the biological and the physical sciences: the former, but not the latter, studies things in nature that appear to be designed. This is not a sign of the inferiority of biology, however, but only a consequence of the way the world is. Biology and physics are different, and so are men and women. The French have a phrase to celebrate this fact.

The species problem
      One of the oldest problems in philosophy is that of universals (universal). The world seems to be broken up into different kinds of things. But what are these kinds, assuming they are distinct from the things that belong to them? Historically, some philosophers, known as realists (realism), have held that kinds are real, whether they inhere in the individuals to which they belong (as Aristotle argued) or are independent of physical reality altogether (as Plato argued; see form). Other philosophers, known as nonrealists but often referred to as nominalists, after the medieval school (nominalism), held that there is nothing in reality over and above particular things. Terms for universals, therefore, are just names. Neither position, in its pure form, seems entirely satisfactory: if universals are real, where are they, and how does one know they exist? If they are just names, without any connection to reality, how do people know how to apply them, and why, nevertheless, do people apply them in the same way?

      In the 18th century the philosophical debate regarding universals began to be informed by advances in the biological sciences, particularly the European discovery of huge numbers of new plant and animal species in voyages of exploration and colonization to other parts of the world. At first, from a purely scientific perspective, the new natural kinds indicated the need for a system of classification capable of making sense of the great diversity of living things, a system duly supplied by the great Swedish taxonomist Carolus Linnaeus (Linnaeus, Carolus) (1707–78). In the early 19th century Jean-Baptiste Lamarck (Lamarck, Jean-Baptiste) (1744–1829) proposed a system that featured the separate classification of vertebrates (vertebrate) and invertebrates (invertebrate). Cuvier went further, arguing for four divisions, or embranchements, in the animal world: vertebrates, mollusks (mollusk), articulates (arthropods (arthropod)), and radiates (animals with radial symmetry). All agreed, however, that there is one unit of classification that seems more fundamental or real than any other: the species. If species are real features of nature and not merely artefacts of human classifiers, then the question arises how they came into being. The only possible naturalistic answer—that they evolved over millions of years from more-primitive forms—leads immediately to a severe difficulty: how is it possible to define the species to which a given animal belongs in such a way that it does not include every evolutionary ancestor the animal had but at the same time is not arbitrary? At what point in the animal's evolutionary history does the species begin? This is the “species problem,” and it is clearly as much philosophical as it is scientific.

      The problem in fact involves two closely related issues: (1) how the notion of a species is to be defined, and (2) in what sense species are more fundamental or real than other taxonomic categories. The most straightforward definition of species relies on morphology and related features: a species is a group of organisms with certain common features, such as hairlessness, bipedalism, and rationality (reason). Whatever features the definition of a particular species may include, however, there will always be animals that seem to belong to the species but that lack one or more of the features in question. Young children and people with severe cognitive impairments, for example, lack rationality, but they are undeniably human. One possible solution, which has roots in the work of the French botanist Michel Adanson (Adanson, Michel) (1727–1806) and was advocated by William Whewell in the 19th century, is to define species in terms of a group of features, a certain number of which is sufficient for membership but no one of which is necessary.

      Another definition, advocated in the 18th century by Buffon, emphasizes reproduction. A species is a group of organisms whose members interbreed and are reproductively isolated from all other organisms. This view was widely accepted in the first half of the 20th century, owing to the work of the founders of the synthetic theory of evolution (see above Form and function (nature, philosophy of)), especially the Ukrainian-born American geneticist Theodosius Dobzhansky (Dobzhansky, Theodosius) (1900–75) and the German-born American biologist Ernst Mayr (Mayr, Ernst) (1904–2005). However, it encounters difficulties with asexual organisms and with individual animals that happen to be celibate. Although it is possible to expand the definition to take into account the breeding partners an animal might have in certain circumstances, the philosophical complications entailed by this departure are formidable. The definition also has trouble with certain real-world examples, such as spatial distributions of related populations known as “rings of races.” In these cases, any two populations that abut each other in the ring are able to interbreed, but the populations that constitute the endpoints of the ring cannot—even though they, too, abut each other. Does the ring constitute one species or two? The same problem arises with respect to time: since each generation of a given population is capable of interbreeding with members of the generation that immediately preceded it, the two generations belong to the same species. If one were to trace the historical chain of generations backward, however, at some point one would arrive at what seems to be a different species. Even if one were reluctant to count very distant generations as different species, there would still be the obvious problem that such generations, in all likelihood, would not be able to interbreed.
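
      The awkwardness of the interbreeding criterion in such cases can be made concrete with a small illustration. The sketch below (with invented population names, offered purely as an illustration rather than drawn from the literature) treats a species as any chain of interbreeding populations; applied to a ring, that definition lumps the two reproductively isolated end populations into a single species.

# Illustrative sketch (toy data, not from the article): treating species as
# chains of interbreeding populations lumps together the ends of a "ring of
# races" even though those end populations cannot interbreed with each other.
populations = ["A", "B", "C", "D", "E"]                 # arranged around a ring
interbreeds = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")}
# "A" and "E" also abut each other in the ring, but they are reproductively
# isolated, so the pair ("A", "E") is deliberately missing from the relation.

def neighbours(p):
    return {b if a == p else a for (a, b) in interbreeds if p in (a, b)}

def same_species(x, y):
    """Follow chains of interbreeding (the transitive closure of the relation)."""
    seen, frontier = {x}, [x]
    while frontier:
        nxt = neighbours(frontier.pop())
        frontier.extend(nxt - seen)
        seen |= nxt
    return y in seen

print(same_species("A", "E"))   # True -- the definition makes the ring one species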

      The second issue, what makes the notion of a species fundamental, has elicited several proposals. One popular view is that species are not groups but individuals, rather like super-organisms. The particular organisms identified as their “members” should really be thought of as their “parts.” Another suggestion relies on what William Whewell called a “consilience of inductions.” It makes a virtue of the plurality of definitions of species, arguing that the fact that they all coincide indicates that they are not arbitrary; what they pick out must be real.

      Neither of these proposals, however, has been universally accepted. Regarding the view of species as super-organisms, it is not clear that species have the kind of internal organization necessary to count as individuals. The idea also seems to have some paradoxical consequences. When an individual organism dies, for example, it is gone forever. Although one could imagine reconstructing it in some way, at best the result would be a duplicate, not the original organism itself. But can the same be said of a species? The Stegosaurus is extinct, but if a clone of a stegosaur were made from a fossilized sample of DNA, the species itself, and not merely a duplicate of it, would be brought back into existence. Moreover, it is not clear how the notion of a scientific law applies to species conceived as individuals. On a more conventional understanding of species, one can talk of various scientific laws that apply to them, such as the law that species that frequently break apart into geographically isolated groups are more likely to speciate, or evolve into new species. But no scientific law applies only to a single individual. If the species Homo sapiens is an individual, therefore, no law applies to it. It follows that social science, which is concerned only with human beings, is impossible.

      Regarding the pluralist view, critics have pointed out that in fact the various definitions of species do not coincide very well. Consider, for example, the well-known phenomenon of sibling species, in which two or more morphologically very similar groups of organisms are nevertheless completely reproductively isolated (i.e., incapable of interbreeding). Is one to say that such species are not real?

      The fact that no current proposal is without serious difficulties has prompted some researchers to wonder whether the species problem is even solvable. This, in turn, raises the question of whether it is worth solving. Not a few critics have pointed out that it concerns only a very small subsection of the world's living organisms—the animals. Many plants have much looser reproductive barriers than animals do. And scientists who study microorganisms have pointed out that regularities regarding reproduction of macroorganisms often have little or no applicability in the world of the very small. Perhaps, therefore, philosophers of biology might occupy their thoughts and labours more profitably elsewhere.

      The modern method of classifying organisms was devised by the Swedish biologist Carl von Linné, better known by his Latin name, Carolus Linnaeus (Linnaeus, Carolus). He proposed a system of nested sets, with all organisms belonging to ever more general sets, or “taxa,” at ever-higher levels, or “categories,” the higher-level sets including the members of several lower-level sets. There are seven basic categories, and each organism therefore belongs to at least seven taxa. At the highest category, kingdom, the wolf belongs to the taxon Animalia. At progressively lower and more specific categories, it belongs to the phylum Chordata, the class Mammalia, the order Carnivora, the family Canidae, the genus Canis, and the species Canis lupus (or C. lupus).
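
      The nested structure of the system is easy to make explicit. The sketch below (an illustrative rendering only, using the wolf's classification given above) represents a classification as an ordered mapping from categories to taxa and reads off the categories at which two organisms fall into the same taxon.

# A minimal sketch of the Linnaean idea of nested categories, using the wolf's
# classification given above. The data structure is only illustrative; it is
# ordered from the most general category (kingdom) to the most specific (species).
LINNAEAN_CATEGORIES = [
    "kingdom", "phylum", "class", "order", "family", "genus", "species",
]

wolf = {
    "kingdom": "Animalia",
    "phylum":  "Chordata",
    "class":   "Mammalia",
    "order":   "Carnivora",
    "family":  "Canidae",
    "genus":   "Canis",
    "species": "Canis lupus",
}

def shared_taxa(a, b):
    """Return the categories at which two classifications name the same taxon."""
    return [c for c in LINNAEAN_CATEGORIES if a.get(c) == b.get(c)]

dog = dict(wolf, species="Canis familiaris")   # dogs differ only at the species level
print(shared_taxa(wolf, dog))                  # kingdom through genus, but not species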

      The advantage of a system like this is that a great deal of information can be packed into it. The classification of the wolf, for example, indicates that it has a backbone (Chordata), that it suckles its young (Mammalia), and that it is a meat eater (Carnivora). What it seems to omit is any explanation of why the various organisms are similar to or different from each other. Although the classification of dogs (C. familiaris) and wolves (C. lupus) shows that they are very much alike—they belong to the same genus and all higher categories—it is not obvious why this should be so. Although many researchers, starting with Linnaeus himself, speculated on this question, it was the triumph of Darwin to give the full answer: namely, dogs and wolves are similar because they have similar ancestral histories. Their histories are more similar to each other than either is to the history of any other mammalian species, such as Homo sapiens (human beings); and the histories of mammals are in turn closer to one another than to those of other chordate species, such as Passer domesticus (house sparrows). Thus, generally speaking, the taxa of the Linnaean system group together organisms whose evolutionary histories are similar; and the more specific the taxon, the more similar the histories.

      During the years immediately following the publication of On the Origin of Species, there was intense speculation about ancestral histories, though with little reference to natural selection. Indeed, the mechanism of selection was considered to be in some respects an obstacle to understanding ancestry, since relatively recent adaptations could conceal commonalities of long standing. In contrast, there was much discussion of the alleged connections between paleontology and embryology, including the notorious and often very inaccurate biogenetic law proposed by the German zoologist Ernst Haeckel (Haeckel, Ernst) (1834–1919): ontogeny (the embryonic development of an individual) recapitulates phylogeny (the evolutionary history of a taxonomic group). With the development of the synthetic theory of evolution in the early 20th century, classification and phylogeny tracing ceased to be pursued for their own sake, but the theoretical and philosophical underpinnings of classification, known as systematics, became a topic of great interest.

      The second half of the 20th century was marked by a debate among three main schools. In the first, traditional evolutionary taxonomy, classification was intended to represent a maximum of evolutionary information. Generally this required that groupings be “monophyletic,” or based solely on shared evolutionary history, though exceptions could occur and were allowed. Crocodiles, for example, are evolutionarily closer to birds than to lizards, but they were classified with lizards rather than birds on the basis of physical and ecological similarity. (A group that includes an ancestor but not all of its descendants, such as the reptiles so conceived, is called “paraphyletic.”) Obviously, the determination of exceptions could be quite subjective, and the practitioners of this school were open in calling taxonomy as much an art as a science.

      The second school was numerical, or phenetic, taxonomy. Here, in the name of objectivity, one simply counted common characters without respect to ancestry, and divisions were made on the basis of totals: the more characters in common, the closer the classification. The shared history of crocodiles and birds was simply irrelevant. Unfortunately, it soon appeared that objectivity is not quite so easily obtained. Apart from the fact that information that biologists might find important—such as ecological overlap—was ignored, the very notion of similarity required subjective decisions, and the same was even more true of the idea of a “character.” Is the fact that humans share four limbs with horses to be taken as one character or four? Since shared ancestry was irrelevant to this approach, it was not clear why it should classify the extinct genus Eohippus (dawn horse), which had several toes on each foot, with the living genus Equus, which has only one. Why not with human beings, who also have several digits? The use of computers in the tabulation of common characters was and remains very important, but the need for a systematic theory behind the taxonomy was apparent.
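
      The phenetic procedure itself is easily sketched. In the toy example below the character table is invented; each pair of taxa is scored simply by the number of characters on which they agree, with no reference to ancestry.

# A toy sketch of the phenetic procedure described above: score every pair of
# taxa by the raw number of characters they share, ignoring ancestry entirely.
# The character table is invented for illustration.
from itertools import combinations

characters = {
    "crocodile": {"four_limbs": 1, "scales": 1, "feathers": 0, "warm_blooded": 0},
    "lizard":    {"four_limbs": 1, "scales": 1, "feathers": 0, "warm_blooded": 0},
    "sparrow":   {"four_limbs": 1, "scales": 0, "feathers": 1, "warm_blooded": 1},
}

def similarity(a, b):
    """Number of characters on which two taxa agree."""
    return sum(characters[a][c] == characters[b][c] for c in characters[a])

for a, b in combinations(characters, 2):
    print(a, b, similarity(a, b))
# Crocodiles come out closest to lizards, exactly the grouping the pheneticist
# accepts and the cladist rejects; nothing in the tally reflects shared history.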

      The third school, which has come to dominate contemporary systematics, is based on work by the German zoologist Willi Hennig (Hennig, Willi) (1913–76). Known as phylogenetic taxonomy, or cladism, this approach infers shared ancestry on the basis of uniquely shared historical (or derived) characteristics, called “synapomorphies.” Suppose, for example, that there is an original species marked by character A, and from this three species eventually evolve. The original species first breaks into two successor groups, in one of which A evolves into the character a; this successor group then breaks into two daughter groups, both of which have a. The other original successor group retains A throughout, with no further division. In this case, a is a synapomorphy, since the two species with a evolved from an ancestral species that had a uniquely. Therefore, the possessors of a must be classified more closely to each other than to the third species. Crocodiles and birds are classified together before they can be jointly linked to lizards.

      Both the theory and the practice of cladism raise a number of important philosophical issues (indeed, scientists explicitly turn to philosophy more frequently in this field than in any other in biology). At the practical level, how does one identify synapomorphies? Who is to say what is an original ancestral character and what a derived character? Traditional methods require one to turn to paleontology and embryology, and, although these approaches have difficulties of their own (given the incompleteness of the fossil record, can one ever be sure that a character really is derived?), they are both still used. Why does one classify Australopithecus africanus with Homo sapiens rather than with Gorilla gorilla—even though, in brain size, the first and third are closer to each other than either is to the second? Because the first and second share characters that evolved uniquely in their lineage and not in gorillas (gorilla). The fossil known as Lucy, Australopithecus afarensis, shows that walking upright is a derived trait, a synapomorphy, shared uniquely by Australopithecus and Homo sapiens.

      A more general method of identifying synapomorphies is the comparative method, in which one compares organisms against an out-group, which is known to be related to the organisms—but not as closely to them as they are to each other. If the out-group has character A, and, among three related species, two have character B and only one A, then B is a synapomorphy for the two species, and the species with A is less closely related.
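
      The logic of out-group comparison can likewise be put in schematic form. In the sketch below the character matrix is invented; the out-group's state is assumed to be ancestral, and a different state shared by two of the three species is flagged as a candidate synapomorphy uniting them.

# A small sketch of the out-group comparison just described. The character
# matrix is invented; the out-group's state is taken to be ancestral, so a state
# not found in the out-group but shared by two of the three species is treated
# as a candidate synapomorphy for those two.
matrix = {
    "outgroup": "A",
    "sp1":      "B",
    "sp2":      "B",
    "sp3":      "A",
}

def candidate_synapomorphy(matrix, outgroup="outgroup"):
    ancestral = matrix[outgroup]
    derived_taxa = {t for t, state in matrix.items()
                    if t != outgroup and state != ancestral}
    # At least two taxa must share the derived state for it to unite a group.
    return derived_taxa if len(derived_taxa) >= 2 else None

print(candidate_synapomorphy(matrix))   # {'sp1', 'sp2'}: B marks them as closest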

      Clearly, however, a number of assumptions are being relied upon here, and critics have made much of them. How can one know that the out-group is in fact closely, but not too closely, related? Is there not an element of circularity at play here? The response to this charge is generally that there is indeed circularity, but it is not vicious. One assumes something is a suitable out-group and works with it, over many characters. If consistency obtains, then one continues. If contradictions start to appear (e.g., the supposed synapomorphies do not clearly delimit the species one is trying to classify), then one revises the assumptions about the out-group.

      Another criticism is that it is not clear how one knows that a shared character, in this case B, is indeed a synapomorphy. It could be that the feature independently evolved after the two species split—in traditional terminology, it is a “homoplasy” rather than a “homology”—in which case the assumption that B is indicative of ancestry would clearly be false. Cladists usually respond to this charge by appealing to simplicity. It is simpler to assume that shared characters tell of shared ancestry rather than that there was independent evolution to the same ends. They also have turned in force to the views of Karl Popper, who explained the theoretical virtue of simplicity in terms of falsifiability: all genuine scientific theories are falsifiable, and the simpler a theory is (other things equal), the more readily it can be falsified.
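
      The appeal to simplicity is usually made precise through parsimony: prefer the grouping that requires fewer independent origins of the shared character. The sketch below applies Fitch's standard small-parsimony count to invented data; the grouping that treats the shared character as a synapomorphy needs only one change, while the rival grouping forces the character to arise twice.

# A minimal sketch of how the appeal to simplicity is usually made precise:
# parsimony. Fitch's small-parsimony count gives the minimum number of state
# changes a tree requires for one character; the tree needing fewer changes is
# preferred. Trees are nested tuples, leaves are species names; the character
# states below are invented for illustration ("B" shared by sp1 and sp2).
states = {"sp1": "B", "sp2": "B", "sp3": "A", "outgroup": "A"}

def fitch(tree):
    """Return (state set, minimum number of changes) for a binary tree."""
    if isinstance(tree, str):                      # a leaf
        return {states[tree]}, 0
    (left_set, left_cost), (right_set, right_cost) = fitch(tree[0]), fitch(tree[1])
    common = left_set & right_set
    if common:
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

ancestry_tree  = ((("sp1", "sp2"), "sp3"), "outgroup")    # B arose once
homoplasy_tree = ((("sp1", "sp3"), "sp2"), "outgroup")    # B must arise twice
print(fitch(ancestry_tree)[1], fitch(homoplasy_tree)[1])  # 1 2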

      Another apparent problem with cladism is that it seems incapable of capturing certain kinds of evolutionary relationships. First, if there is change within a group without speciation—a direct evolution of Homo habilis to Homo erectus, for example—then it would not be recorded in a cladistic analysis. Second, if a group splits into three daughter groups at the same time, this too would not be recorded, because the system works in a binary fashion, assuming that all change produces two and only two daughter groups.

      Some cladists have gone so far as to turn Hennig's theory on its head, arguing that cladistic analysis as such is not evolutionary at all. It simply reveals patterns, which in themselves do not represent trees of life. Although this “transformed” (or “pattern”) cladism has been much criticized (not least because it seems to support creationism, inasmuch as it makes no claims about the causes of the nature and distribution of organisms), in fact it is very much in the tradition of the phylogeny tracers of the early 20th century. Although those researchers were in fact all evolutionists (as are all transformed cladists), their techniques, as historians have pointed out, were developed in the first part of the 19th century by German taxonomists, most of whom entirely rejected evolutionary principles. The point is that a system of classification may be quite nonevolutionary in its methodology, yet its understanding and interpretation may still be evolutionary through and through.

The structure of evolutionary theory
      Modern discussion of the structure of evolutionary theory was started by the American philosopher Morton O. Beckner (1928–2001), who argued that there are many more or less independent branches—including population genetics, paleontology, biogeography, systematics, anatomy, and embryology—which nevertheless are loosely bound together in a “net,” the conclusions of one branch serving as premises or insights in another. Assuming a hypothetico-deductive conception of theories and appealing to Darwin's intentions, the British philosopher Michael Ruse in the early 1970s claimed that evolutionary theory is in fact like a “fan,” with population genetics—the study of genetic variation and selection at the population level—at the top and the other branches spreading out below. The other branches are joined to each other primarily through their connection to population genetics, though they borrow and adapt conclusions, premises, and insights from each other. Population genetics, in other words, is part of the ultimate causal theory of all branches of evolutionary inquiry, which are thus brought together in a united whole.

      The kind of picture offered by Ruse has been challenged in two ways. The first questions the primacy of population genetics. Ruse himself allowed that the formulators of the synthetic theory of evolution in fact used population genetics in a very casual and nonformal way to achieve their ends. As an ornithologist and systematist, Ernst Mayr, in his Systematics and the Origin of Species (1942), hardly thought of his work as deducible from the principles of genetics.

      The second challenge has been advanced by paleontologists, notably Stephen Jay Gould, who argue that population genetics is useful—indeed, all-important—for understanding relatively small-scale or short-term evolutionary changes but that it is incapable of yielding insight into large-scale or long-term ones, such as the Cambrian explosion (community ecology). One must turn to paleontology in its own right to explain these changes, which might well involve extinctions brought about by extraterrestrial forces (e.g., comets) or new kinds of selection operating only at levels higher than the individual organism (see above Levels of selection (nature, philosophy of)). Gould, together with fellow paleontologist Niles Eldredge, developed the theory of “punctuated equilibrium,” according to which evolution occurs in relatively brief periods of significant and rapid change followed by long periods of relative stability, or “stasis.” Such a view could never have been inferred from studies of small-scale or short-term evolutionary changes; the long-term perspective taken by paleontology is necessary. For Gould, therefore, Beckner's net metaphor would be closer to the truth.

      A separate challenge to the fan metaphor was directed at the hypothetico-deductive conception of scientific theories. Supporters of the “semantic” conception argue that scientific theories are rarely, if ever, hypothetico-deductive throughout and that in any case the universal laws presupposed by the hypothetico-deductive model are usually lacking. Especially in biology, any attempt to formulate generalities with anything like the necessity required of natural laws seems doomed to failure; there are always exceptions. Hence, rather than thinking of evolutionary theory as one unified structure grounded in major inductive generalizations, one should think of it (as one should think of all scientific theories) as being a cluster of models, formulated independently of experience and then applied to particular situations. The models are linked because they frequently use the same premises, but there is no formal requirement that this be so. Science—evolutionary theory in particular—is less like grand system building and more like motor mechanics. There are certain general ideas usually applicable in any situation, but, in the details and in getting things to work, one finds particular solutions to particular problems. Perhaps then the net metaphor, if not quite as Beckner conceived it, is a better picture of evolutionary theorizing than the fan metaphor. Perhaps an even better metaphor would be a mechanic's handbook, which would lay out basic strategies but demand unique solutions to unique problems.

Related fields
sociobiology and evolutionary psychology
      Darwin always understood that an animal's behaviour is as much a part of its repertoire in the struggle for existence as any of its physical adaptations. Indeed, he was particularly interested in social behaviour, because in certain respects it seemed to contradict his conception of the struggle as taking place between, and for the sole benefit of, individuals. As noted above, he was inclined to think that nests of social insects should be regarded as superorganisms rather than as groups of individuals engaged in cooperative or (at times) self-sacrificing, or altruistic, behaviour.

      In the century after the publication of On the Origin of Species the biological study of behaviour was slow to develop. In part this was because behaviour in itself is much more difficult to record and measure than physical characteristics. Experiment is also particularly difficult, for animals notoriously change their behaviour under artificial conditions. Another factor that hampered the study of behaviour was the rise of the social sciences in the early 20th century. Because these disciplines were overwhelmingly oriented toward behaviourism, which by and large restricted itself to the overt and observable, the biological and particularly evolutionary influences on behaviour tended to be discounted even before investigation was begun.

      An important dissenting tradition was represented by the European practitioners of ethology, who insisted from the 1920s that behaviour must be studied in a biological context. The development in the 1960s of evolutionary explanations of social behaviour in individualistic terms (see above Levels of selection (nature, philosophy of)) led to increased interest in social behaviour among evolutionary theorists and eventually to the emergence of a separate field devoted to its study, sociobiology, as well as to the growth of allied subdisciplines within psychology and philosophy. The basic ideas of the movement were formulated in Sociobiology: The New Synthesis (1975), by Edward O. Wilson (Wilson, Edward O.), and popularized in The Selfish Gene (1976), by Richard Dawkins.

      These works, Wilson's in particular, were highly controversial, mainly (though not exclusively) because the theories they propounded applied to humans. Having surveyed social behaviour in the animal world from the most primitive forms up to primates, Wilson argued that Homo sapiens is part of the evolutionary world in its behaviour and culture. Although he did allow that experience can have effects, the legacy of the genes, he argued, is much more important. In male-female relationships, in parent-child interactions, in morality, in religion, in warfare, in language, and in much else, biology matters crucially.

      Many philosophers and scientists, notably the philosopher Philip Kitcher and the biologists Richard Lewontin and Stephen Jay Gould, rejected the new sociobiology with scorn. The claims of the sociobiologists, they charged, were either false or unfalsifiable. Many of their conjectures had no more scientific substance than Rudyard Kipling's Just So Stories for children, such as How the Camel Got His Hump and How the Leopard Got His Spots. Indeed, their presumed genetic and evolutionary explanations of a wide variety of human behaviour and culture served in the end as justifications of the social status quo, with all its ills, including racism, sexism, homophobia, materialism, violence, and war. The title of Kitcher's critique of sociobiology, Vaulting Ambition, is an indication of the attitude that he and others took to the new science.

      Although there was some truth to these criticisms, sociobiologists since the 1970s have made concerted efforts to address them. In cases where the complaint had to do with falsifiability or testability, newly developed techniques of genetic testing have proved immensely helpful. Many sociobiological claims, for example, concern the behaviour of parents. One would expect that, in populations in which males compete for females and (as in the case of birds) also contribute toward the care of the young, the efforts of males in that regard would be tied to reproductive access and success. (In other words, a male who fathered four offspring would be expected to work twice as hard in caring for them as a male who fathered only two offspring.) Unfortunately, it was difficult, if not impossible, to verify paternity in studies of animal populations until the advent of genetic testing in the 1990s. Since then, sociobiological hypotheses regarding parenthood have been able to meet the standard of falsifiability insisted on by Popper and others, and in many cases they have turned out to be well-founded.
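
      The structure of such a test is simple enough to sketch. In the toy example below (all numbers invented), paternal effort is predicted to be proportional to the number of offspring a male actually sired, a figure that genetic testing now makes checkable; a systematic mismatch between prediction and observation is what would count against the hypothesis.

# A toy sketch of the kind of prediction described above, with made-up numbers:
# if paternal effort is expected to scale with reproductive success, a male who
# sired four offspring should invest roughly twice the care of one who sired two.
# Genetic paternity testing is what makes the "sired" column checkable at all.
broods = {
    "male_1": {"offspring_sired": 4, "feeding_trips_observed": 41},
    "male_2": {"offspring_sired": 2, "feeding_trips_observed": 19},
}

def predicted_effort(male, trips_per_offspring=10):
    """Predicted care under the proportionality hypothesis (assumed rate)."""
    return broods[male]["offspring_sired"] * trips_per_offspring

for male, data in broods.items():
    print(male, "predicted:", predicted_effort(male),
          "observed:", data["feeding_trips_observed"])
# A systematic mismatch between the two columns is what would count against
# the hypothesis -- the falsifiability that critics demanded.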

      Regarding social and ethical criticisms, sociobiologists by and large have had no significant social agendas, and most have been horrified at the misuse that has sometimes been made of their work. They stress, as do their critics, that differences between the human races, for example, are far less significant than similarities, and in any case whatever differences there may be do not in themselves demonstrate that any particular race is superior or inferior to any other. Similarly, in response to criticism by feminists, sociobiologists have argued that merely pointing out genetically based differences between males and females is not in itself sexist. Indeed, one might argue that not to recognize such differences can be morally wrong. If boys and girls mature at different rates, then insisting that they all be taught in the same ways could be wrong for both sexes. Likewise, the hypothesis that something like sexual orientation is under the control of the genes (and that there is a pertinent evolutionary history underlying its various forms) could help to undermine the view among some social conservatives that homosexuals deserve blame for “choosing” an immoral lifestyle.

      Moreover, it can be argued with some justice that “just so stories” in their own right are not necessarily a bad thing. In fact, as Popper himself emphasized, one might say that they are exactly the sort of thing that science needs in abundance—bold conjectures. It is when they are simply assumed as true without verification that they become problematic.

      In recent years, the sociobiological study of human beings has placed less emphasis on behaviour and more on the supposed mental faculties or properties on which behaviour is based. Such investigations, now generally referred to as “evolutionary psychology,” are still philosophically controversial, in part because it is notoriously difficult to specify the sense in which a mental property is innate and to determine which properties are innate and which are not. As discussed below, however, some philosophers have welcomed this development as providing a new conceptual resource with which to address basic issues in epistemology and ethics.

Evolutionary epistemology
      Because the evolutionary origins and development of the human brain must influence the nature, scope, and limits of the knowledge that human beings can acquire, it is natural to think that evolutionary theory should be relevant to epistemology, the philosophical study of knowledge. There are two major enterprises in the field known as “evolutionary epistemology”: one attempts to understand the growth of collective human knowledge by analogy with evolutionary processes in the natural world; the other attempts to identify aspects of human cognitive faculties with deep evolutionary roots and to explain their adaptive significance.

      The first project is not essentially connected with evolutionary theory, though as a matter of historical fact those who have adopted it have claimed to be Darwinians. It was first promoted by Darwin's self-styled “bulldog,” T.H. Huxley (Huxley, T.H.). He argued that, just as the natural world is governed by the struggle for existence, resulting in the survival of the fittest, so the world of knowledge and culture is directed by a similar process. Taking science as a paradigm of knowledge (now a nearly universal assumption among evolutionary epistemologists), he suggested that ideas and theories struggle against each other for adoption and are critically evaluated; the fittest among them, those judged best, survive and are eventually adopted.

      In the 20th century the evolutionary model of knowledge production was bolstered by Popper's work in the philosophy of science. Popper argued that science—the best science, that is—confronts practical and conceptual problems by proposing daring and imaginative hypotheses, which are formulated in a “context of discovery” that is not wholly rational, involving social, psychological, and historical influences. These hypotheses are then pitted against each other in a process in which scientists attempt to show them false. This is the “context of justification,” which is purely rational. The hypotheses that remain are adopted, and they are accepted for as long as no falsifying evidence is uncovered.

      Critics of this project have argued that it overlooks a major disanalogy between the natural world and the world of knowledge and culture: whereas the mutations that result in adaptation are random—not in the sense of being uncaused but in the sense of being produced without regard to need—there is nothing similarly random about the processes through which new theories and ideas are produced, notwithstanding Popper's belittling of the “context of discovery.” Moreover, once a new idea is in circulation, it can be acquired without the need of anything analogous to biological reproduction. In the theory of Dawkins, such ideas, which he calls “memes,” are the cultural equivalent of genes.

      The second major project in evolutionary epistemology assumes that the human mind, no less than human physical characteristics, has been formed by natural selection and therefore reflects adaptation to general features of the physical environment. Of course, no one would argue that every aspect of human thinking must serve an evolutionary purpose. But the basic ingredients of cognition—including fundamental principles of deductive and inductive logic and of mathematics, the conception of the physical world in terms of cause and effect, and much else—have great adaptive value, and consequently they have become innate features of the mind. As the American philosopher Willard Van Orman Quine (Quine, Willard Van Orman) (1908–2000) observed, those proto-humans who mastered inductive inference (induction), enabling them to generalize appropriately from experience, survived and reproduced, and those who did not, did not. The innate human capacity for language use may also be viewed in these terms.

Evolutionary ethics
      In evolutionary ethics, as in evolutionary epistemology, there are two major undertakings. The first concerns normative ethics, which investigates what actions are morally right or morally wrong; the second concerns metaethics, or theoretical ethics, which considers the nature, scope, and origins of moral concepts and theories.

      The best known traditional form of evolutionary ethics is social Darwinism, though this view owes far more to Herbert Spencer (Spencer, Herbert) than it does to Darwin himself. It begins with the assumption that in the natural world the struggle for existence is good, because it leads to the evolution of animals that are better adapted to their environments. From this premise it concludes that in the social world a similar struggle for existence should take place, for similar reasons. Some social Darwinists have thought that the social struggle also should be physical—taking the form of warfare, for example. More commonly, however, they assumed that the struggle should be economic, involving competition between individuals and private businesses in a legal environment of laissez faire (laissez-faire). This was Spencer's own position.

      As might be expected, not all evolutionary theorists have agreed that natural selection implies the justice of laissez-faire capitalism. Alfred Russel Wallace, who advocated a group-selection analysis, believed in the justice of actions that promote the welfare of the state, even at the expense of the individual, especially in cases in which the individual is already well-favoured. The Russian theorist of anarchism Peter Kropotkin (Kropotkin, Peter Alekseyevich) (1842–1921) argued that selection proceeds through cooperation within groups (“mutual aid”) rather than through struggle between individuals. In the 20th century the English biologist Julian Huxley (Huxley, Sir Julian) (1887–1975), the grandson of T.H. Huxley, thought that the future survival of humankind, especially as the number of humans increases dramatically, would require the application of science and the undertaking of large-scale public works, such as those of the Tennessee Valley Authority. More recently, Edward O. Wilson has argued that, because human beings have evolved in symbiotic relationship with the rest of the living world, the supreme moral imperative is the preservation of biodiversity.

      From a metaethical perspective, social Darwinism was famously criticized by the English philosopher G.E. Moore (Moore, G E) (1873–1958). Invoking a line of argument first mooted by the Scottish philosopher David Hume (Hume, David) (1711–76), who pointed out the fallaciousness of reasoning from statements of fact to statements of moral obligation (from an “is” to an “ought”), Moore accused the social Darwinists of committing what he called the “naturalistic fallacy,” the mistake of attempting to infer nonnatural properties (being morally good or right) from natural ones (the fact and processes of evolution). Evolutionary ethicists, however, were generally unmoved by this criticism, for they simply disagreed that deriving moral from nonmoral properties is always fallacious. Their confidence lay in their commitment to progress, to the belief that the products of evolution increase in moral value as the evolutionary process proceeds—from the simple to the complex, from the monad to the man, to use the traditional phrase. Another avenue of criticism of social Darwinism, therefore, was to deny that evolution is progressive in this way. T.H. Huxley pursued this line of attack, arguing that humans are imperfect in many of their biological properties and that what is morally right often contradicts humans' animal nature. In the late 20th century, Stephen Jay Gould made similar criticisms of attempts to derive moral precepts from the course of evolution.

      The chief metaethical project in evolutionary ethics is that of understanding morality, or the moral impulse in human beings, as an evolutionary adaptation. For all the intraspecific violence that human beings commit, they are a remarkably social species, and sociality, or the capacity for cooperation, is surely adaptively valuable, even on the assumption that selection takes place solely on the level of the individual. Unlike the social insects, human beings have too variable an environment and too few offspring (requiring too much parental care) to be hard-wired for specific cooperative tasks. On the other hand, the kind of cooperative behaviour that has contributed to the survival of the species would be difficult and time-consuming to achieve through self-interested calculation by each individual. Hence, something like morality is necessary to provide a natural impulse among all individuals to cooperation and respect for the interests of others.

      Although this perspective does not predict specific moral rules or values, it does suggest that some general concept of distributive justice (i.e., justice as fairness and equity) could have resulted from natural selection; this view, in fact, was endorsed by the American social and political philosopher John Rawls (Rawls, John) (1921–2002). It is important to note, however, that demonstrating the evolutionary origins of any aspect of human morality does not by itself establish that the aspect is rational or correct.

      An important issue in metaethics—perhaps the most important issue of all—is expressed in the question, “Why should I be moral?” What, if anything, makes it rational for an individual to behave morally (by cooperating with others) rather than purely selfishly? The present perspective suggests that moral behaviour did have an adaptive value for individuals or groups (or both) at some stages of human evolutionary history. Again, however, this fact does not imply a satisfactory answer to the moral skeptic, who claims that morality has no rational foundation whatsoever; from the premise that morality is natural or even adaptive, it does not follow that it is rational. Nevertheless, evolutionary ethics can help to explain the persistence and near-universality of the belief that there is more to morality than mere opinion, emotion, or habit. Hume pointed out that morality would not work unless people thought of it as “real” in some sense. In the same vein, many evolutionary ethicists have argued that the belief that morality is real, though rationally unjustified, serves to make morality work; therefore, it is adaptive. In this sense, morality may be an illusion that human beings are biologically compelled to embrace.

Social and ethical (ethics) issues
      One of the major developments in Anglo-American philosophy in the last three decades of the 20th century was a turn toward social issues in areas outside ethics and political philosophy, including the philosophy of biology. The logical positivists, and most philosophers of science of their generation (Karl Popper was a notable exception), did not think it appropriate for philosophers of science to engage in debate on social issues; this was the domain of preachers and politicians and the otherwise publicly committed. Today, in contrast, it is thought important—if not mandatory—for philosophers of science in general, and philosophers of biology in particular, to think beyond the strict limits of their discipline and to see what contributions they can make to issues of importance in the public domain.

      One of the first attempts at this kind of public philosophizing by philosophers of biology occurred in response to the development in the 1970s of techniques of recombinant DNA (recombinant DNA technology) (rDNA), which enabled, among other things, the insertion of genes from one or more species into host organisms of very different species. There was much concern that such experiments would lead to the fabrication of monsters. Others worried about the threats that could be posed to humankind and the environment by genetically mixed or modified organisms. Even worse was the possibility that the techniques could be used by despots to manufacture biological weapons (biological weapon) cheaply and quickly.

      It soon became evident, however, that much of this concern was the result of ignorance, even on the part of biologists. Epidemiologists, for example, demonstrated that the dangers that rDNA research could pose to human populations were much overblown. But there were still (and remain) issues of considerable interest. Echoing a traditional position in evolutionary ethics (nature, philosophy of), opponents claimed that rDNA techniques must be unethical because they contravene the “wisdom of the genes.” Something that nature has wrought must be good and should not be lightly discarded or altered by human technology. But although there are obviously important thoughts included in this line of critique—if one does alter nature, then too often unexpected and unwanted results obtain—the simple appeal to nature or to evolution shows very little (as critics of social Darwinism have long maintained). To revert to the position of T.H. Huxley, often what should be done is exactly the opposite of what evolution has done. Sickle-cell anemia, for example, comes about as the by-product of a genetic, evolutionarily promoted defense against malaria. Is the attempt to cure sickle-cell anemia therefore morally wrong?

      The 1990s were marked by increasing development and application of the techniques of molecular biology. The major scientific-technological undertaking of the decade was of course the Human Genome Project (HGP), which aimed to map the entire human genetic makeup; the initial sequencing of the genome was completed in 2000. The success of the HGP raised important social and ethical issues, particularly regarding genetic discrimination. Suppose that the genes associated with an inherited disease—such as Huntington disease, which leads to progressive mental and physical deterioration and early death—are identified. Should a healthy person who carries these genes be denied medical insurance? If not, should private, for-profit insurance companies be required to insure such people, or should the state assume the obligation?

      Other issues have arisen in connection with cloning (clone) and stem cell research. Various religious and conservative groups take extreme objection to the manipulation of reproductive cells, whether for the end of producing new human beings (or other animals) or for the end of aiding already existing ones. The American bioethicist Leon Kass, for example, argues that any attempt to change or direct the natural reproductive processes is morally wrong, because it is an essential part of the human condition to accept whatever nature produces, however inconvenient or unpleasant it may be.

      There are epistemological as well as ethical issues at stake here. How exactly should cloning be defined? Is it wrong (or not wrong) in itself, or only by virtue of its consequences? What about identical twins, who seem to be the result of natural cloning, without human aid? Should one think that, because they are not unique, they are in some sense less worthy as human beings? Or do environment and training make them, and any other clone, unique anyway?

      It is often thought that differences in moral intuitions regarding these questions stem from the rivalry between the utilitarian and Kantian ethical traditions—the former judging actions in terms of their consequences, in particular the amount of happiness they tend to promote, the latter stressing good intentions and the importance of treating people as ends rather than as means. Conventionally, then, utilitarians are thought to favour cloning and stem cell research, and Kantians are assumed to oppose it. The divisions are not quite this neat, however, since some utilitarians think that modern applications of molecular biology may do more harm than good, and some Kantians think that such applications are well motivated and treat the individuals they are designed to help as ends and not as means.

      The introduction of genetically modified (GM) foods, chiefly plants, in the 1990s provoked a violent and complex debate involving agricultural and pharmaceutical corporations; scientists; environmental, consumer, and public health organizations; and representatives of indigenous and farming communities in the developing world. Proponents, largely in the United States (where GM foods are widely used), argued that the use of crops that have been genetically modified to resist various pests or diseases can significantly increase harvests and decrease dependence on pesticides that are poisonous to human beings. Opponents contended that genetically modified plant species may create catastrophic changes in the ecosystems in which they are introduced or to which they may travel and that the long-term health effects of consuming GM foods are unknown. Also, if major firms in the West succeed in patenting such genetic modifications, the independence or self-sufficiency of farming communities in the developing world could be undermined. Consideration of many of these issues can be usefully informed by philosophical analysis. Indeed, some of the theoretical discussions covered in earlier sections of this article are directly relevant. How does one define an organism or a species? When is something that has been changed artificially no longer truly what it was? Is function the most significant criterion? Is changing an adaptation of a species more important than changing simply a by-product—a “spandrel”?

      There are also interesting and as-yet-little-discussed questions about balance and equilibrium in analyses of organisms in their native habitats. The ancient idea of a balance of nature has deep roots in Christian theology. But it has been transported—some would say with little change—into modern thinking about equilibrium in nature. Are these modern claims—for example, the well-known theorizing of Robert MacArthur and Edward O. Wilson regarding the balancing effects of immigration, emigration, and extinction on islands—genuinely empirical assertions, or are they, as some critics claim, so vacuous as to be little more than tautologies?
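
      The MacArthur-Wilson model, at least in its simplest textbook form, does yield a definite quantitative claim. The sketch below uses the standard linearized version with invented parameter values: the immigration rate of new species falls as an island fills, the extinction rate rises, and the two balance at a predictable equilibrium number of species.

# A sketch of the simplest textbook form of the MacArthur-Wilson island model
# (a linearized version; the parameter values are invented). Immigration of new
# species falls as the island fills up, extinction rises, and the two rates
# balance at an equilibrium species number -- which is at least a definite,
# checkable quantity rather than an empty appeal to "balance."
P  = 100.0   # species available in the mainland pool
I0 = 5.0     # immigration rate of new species onto an empty island (per year)
E0 = 2.0     # extinction rate when the island holds the full pool (per year)

def immigration(S):
    return I0 * (1.0 - S / P)       # fewer "new" arrivals as S approaches P

def extinction(S):
    return E0 * S / P               # more residents, more extinctions

S_equilibrium = P * I0 / (I0 + E0)  # where immigration(S) == extinction(S)
print(round(S_equilibrium, 1))                      # about 71.4 species
print(round(immigration(S_equilibrium), 2),
      round(extinction(S_equilibrium), 2))          # equal turnover rates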

      Obviously, answers to questions such as these have important implications in areas far removed from purely theoretical aspects of the philosophy of biology. This application of the discipline to social and ethical issues of public concern should be acknowledged and welcomed. On the other hand, no one would attempt to justify the philosophy of biology solely on the basis of its practical application and relevance. In its own right, it is one of the most vibrant, innovative, and exciting fields of contemporary philosophy.

Michael Ruse

Additional Reading

Philosophy of physics
R. Harré, “Philosophical Aspects of Cosmology,” and W. Davidson, “Philosophical Aspects of Cosmology,” Br. J. Phil. Sci., 13:104–119, 120–129 (1962); Othmar Spann, Naturphilosophie, 2nd ed. (1963); Ivor Leclerc, The Nature of Physical Existence (1972); Pascual Jordan, Albert Einstein: Sein Lebenswerk und die Zukunft der Physik (1969), and Atom und Weltall: Einführung in den Gedankeninhalt der modernen Physik, 2nd ed. (1960); E.T. Whittaker, From Euclid to Eddington: A Study of Conceptions of the External World (1949); Jacques Merleau-Ponty and Bruno Morando, Les Trois Étapes de la cosmologie (1971; Eng. trans., The Rebirth of Cosmology, 1975); Louis de Broglie, La Physique nouvelle et les quanta (1937; Eng. trans., The Revolution in Physics, 1953, reprinted 1969); R.G. Collingwood, The Idea of Nature (1945, reprinted 1960); P.K. Feyerabend, “Philosophie de la nature,” in M.F. Sciacca (ed.), Les Grands Courants de la pensée mondiale contemporaine, part 2, vol. 2, pp. 901–927 (1961); Errol E. Harris, The Foundations of Metaphysics in Science (1965); Jagjit Singh, Great Ideas and Theories of Modern Cosmology, rev. and enlarged ed. (1970); P.A. Schilpp (ed.), Albert Einstein: Philosopher–Scientist, 2 vol. (1959); Philipp Frank, Philosophy of Science (1957); Alfred North Whitehead, Process and Reality (1929, corrected ed. 1978), and The Concept of Nature (1920, reprinted 1964); Mary B. Hesse, Forces and Fields (1961); Carl Friedrich von Weizsäcker, Zum Weltbild der Physik, 12th ed. (1976); Adolf Grünbaum, Philosophical Problems of Space and Time, 2nd enlarged ed. (1973); Werner Heisenberg, Physics and Philosophy (1958); A.S. Eddington, The Nature of the Physical World (1928, reprinted 1958); Henri Poincaré, La Science et l'hypothèse (1903; Eng. trans., Science and Hypothesis, 1905); Henry Margenau, The Nature of Physical Reality (1950, reprinted 1977), and Physics and Philosophy (1978). Karl R. Popper, Realism and the Aim of Science (1983), The Open Universe: An Argument for Indeterminism (1982), and Quantum Theory and the Schism in Physics (1982), comprise the work of one of the greatest philosophers of the 20th century, who challenges most of the traditional assumptions in modern physics; Jeremy Bernstein, Science Observed: Essays of My Mind (1982), is a collection of essays on the process of science; Roger S. Jones, Physics as Metaphor (1982), is an original personal commentary on the nature of physical science; Heinz R. Pagels, The Cosmic Code: Quantum Physics as the Language of Nature (1982), is a review of modern physics; see also Fritjof Capra, The Tao of Physics: An Exploration of the Parallels Between Modern Physics and Eastern Mysticism, 2nd rev. ed. (1983); Benjamin Gal-Or, Cosmology, Physics, and Philosophy (1983); Edward M. MacKinnon, Scientific Explanation and Atomic Physics (1982); and P.C.W. Davies, The Accidental Universe (1982).

Philosophy of biology
General studies
Good basic surveys include David L. Hull and Michael Ruse (eds.), The Philosophy of Biology (1998); Elliott Sober (ed.), Conceptual Issues in Evolutionary Biology, 3rd ed. (2006); and Michael Ruse, The Oxford Handbook of Philosophy of Biology (2008). Kim Sterelny and Paul E. Griffiths, Sex and Death: An Introduction to Philosophy of Biology (1999); and Elliott Sober, Philosophy of Biology, 2nd ed. (2000), are good introductory texts.

Classic texts in the philosophy of biology are Henri Bergson, Creative Evolution, trans. by Arthur Mitchell (1911, reissued 2006; originally published in French, 1907); E.S. Russell, Form and Function: A Contribution to the History of Animal Morphology (1916, reprinted 1982); and J.H. Woodger, Biology and Language: An Introduction to the Methodology of the Biological Sciences, Including Medicine (1952). Valuable secondary sources include Allan Gotthelf and James G. Lennox (eds.), Philosophical Issues in Aristotle's Biology (1987); John Losee, A Historical Introduction to the Philosophy of Science, 4th ed. (2001); and Michael Ruse, Darwin and Design: Does Evolution Have a Purpose? (2003).

Outstanding works on this subject are Elliott Sober, The Nature of Selection: Evolutionary Theory in Philosophical Focus (1984, reissued 1993), and From a Biological Point of View: Essays in Evolutionary Philosophy (1994), both considered modern classics; Robert N. Brandon, Adaptation and Environment (1990); Lorenz Krüger, Lorraine J. Daston, and Michael Heidelberger (eds.), Ideas in History, vol. 1 of The Probabilistic Revolution (1987); and Lorenz Krüger, Gerd Gigerenzer, and Mary S. Morgan (eds.), Ideas in the Sciences, vol. 2 of The Probabilistic Revolution (1987).

Levels of selection
A good historical introduction to this topic is Robert N. Brandon and Richard M. Burian (eds.), Genes, Organisms, Populations: Controversies over the Units of Selection (1984). The classic work is Richard Dawkins, The Selfish Gene (1976, reissued 2006). Further discussion can be found in George C. Williams, Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought (1966, reissued 1996); Elliott Sober and David Sloan Wilson, “A Critical Review of Philosophical Work on the Units of Selection Problem,” Philosophy of Science, 61(4):534–555 (December 1994); and Stephen Jay Gould, The Structure of Evolutionary Theory (2002).

Testing
General works in the philosophy of science deal extensively with the issue of testing. More specialized works looking at biology include Karl Popper, Unended Quest: An Intellectual Autobiography, rev. ed. (1976, reissued 2002); George Oster and Edward O. Wilson, Caste and Ecology in the Social Insects (1978); J. Maynard Smith, “Optimization Theory in Evolution,” Annual Review of Ecology and Systematics, 9:31–56 (1978); John Dupré (ed.), The Latest on the Best: Essays on Evolution and Optimality (1987); Steven Hecht Orzack and Elliott Sober (eds.), Adaptationism and Optimality (2001); and Michael R. Rose and George V. Lauder (eds.), Adaptation (1996).

Good overall introductions include David L. Hull, Philosophy of Biological Science (1974); Sahotra Sarkar, Genetics and Reductionism (1988); and Alexander Rosenberg, Darwinism in Philosophy, Social Science, and Policy (2000).

Form and function
Brian C. Goodwin, How the Leopard Changed Its Spots: The Evolution of Complexity (1994, reissued 2001); and Sean B. Carroll, Jennifer K. Grenier, and Scott D. Weatherbee, From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design, 2nd ed. (2005), provide fine surveys of the field. A classic article is Stephen Jay Gould and R.C. Lewontin, “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme,” Proceedings of the Royal Society of London, Series B, Biological Sciences, 205(1161):581–598 (Sept. 21, 1979). Also important are J. Maynard Smith et al., “Developmental Constraints and Evolution,” The Quarterly Review of Biology, 60(3):265–287 (September 1985); and Rudolf A. Raff, The Shape of Life: Genes, Development, and the Evolution of Animal Form (1996). Historical perspective is provided in D'Arcy Wentworth Thompson, On Growth and Form, rev. ed., 2 vol. (1942, reissued 1992), also available in an abridged 1 vol. edition, ed. by John Tyler Bonner (1961, reissued 1992).

A well-rounded collection of essays on this topic is Colin Allen, Marc Bekoff, and George Lauder, Nature's Purposes: Analyses of Function and Design in Biology (1988). Background discussion is provided in J. Beatty, “Teleology and the Relationship Between Biology and the Physical Sciences in the Nineteenth and Twentieth Centuries,” in Frank Durham and Robert D. Purrington (eds.), Some Truer Method: Reflections on the Heritage of Newton (1990), pp. 113–144. Also useful is David J. Buller (ed.), Function, Selection, and Design: Philosophical Essays (1999).

The species problem
The classic discussions are Ernst Mayr, Toward a New Philosophy of Biology: Observations of an Evolutionist (1988); David Hull, The Metaphysics of Evolution (1989); and Michael Ghiselin, “A Radical Solution to the Species Problem,” in Marc Ereshefsky (ed.), The Units of Evolution: Essays on the Nature of Species (1992), pp. 279–292.

Sociobiology and evolutionary psychology
Works critical of sociobiology include Philip Kitcher, Vaulting Ambition: Sociobiology and the Quest for Human Nature (1985); R.C. Lewontin, Biology as Ideology: The Doctrine of DNA (1991; also published as The Doctrine of DNA, 1993); and John Dupré, Humans and Other Animals (2002). Positive treatments are Michael Ruse, Sociobiology: Sense or Nonsense?, 2nd ed. (1985); Steven Pinker, How the Mind Works (1997); Sarah Blaffer Hrdy, Mother Nature: A History of Mothers, Infants, and Natural Selection (1999); and the work that established the field and the controversy, Edward O. Wilson, Sociobiology: The New Synthesis (1975, reissued 2000).

Evolutionary epistemology
Good works in this field include Stephen Toulmin, Human Understanding, vol. 1, The Collective Use and Evolution of Concepts (1972); Konrad Lorenz, “Kant's Doctrine of the A Priori in the Light of Contemporary Biology,” in H.C. Plotkin (ed.), Learning, Development, and Culture: Essays in Evolutionary Epistemology (1982), pp. 121–143, an essay originally published in German in 1941; Michael Ruse, Taking Darwin Seriously: A Naturalistic Approach to Philosophy (1986, reissued 1998); Robert J. Richards, Darwin and the Emergence of Evolutionary Theories of Mind and Behavior (1987); David L. Hull, Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science (1988); and Richard Creath and Jane Maienschein (eds.), Biology and Epistemology (2000).

Evolutionary ethics
Representative treatments of this field include Edward O. Wilson, On Human Nature (1978, reissued with a new preface, 2004); Michael Bradie, The Secret Chain: Evolution and Ethics (1994); Paul Thompson (ed.), Issues in Evolutionary Ethics (1995); Brian Skyrms, Evolution of the Social Contract (1998); and Jane Maienschein and Michael Ruse (eds.), Biology and the Foundation of Ethics (1999).

Michael Ruse
