cosmos

cosmos
/koz"meuhs, -mohs/, n., pl. cosmos, cosmoses for 2, 4.
1. the world or universe regarded as an orderly, harmonious system.
2. a complete, orderly, harmonious system.
3. order; harmony.
4. any composite plant of the genus Cosmos, of tropical America, some species of which, as C. bipinnatus and C. sulphureus, are cultivated for their showy ray flowers.
5. Also, Kosmos. (cap.) Aerospace. one of a long series of Soviet satellites that have been launched into orbit around the earth.
[1150-1200; ME < Gk kósmos order, form, arrangement, the world or universe]

* * *

Any of the garden plants that make up the genus Cosmos (composite family), containing about 20 species native to the tropical New World.

Heads of flowers are borne singly on long flower stalks or together in an open cluster. The disk flowers are red or yellow; the ray flowers, sometimes notched, may be white, pink, red, purple, or other colors. Most annual ornamental varieties have been developed from the common garden cosmos (C. bipinnatus).

* * *

▪ Soviet satellite
      any of a series of unmanned Soviet satellites launched from the early 1960s onward. Cosmos satellites were used for a wide variety of purposes, including scientific research, navigation, and military reconnaissance. Cosmos 26 and 49 (both launched in 1964), for example, were equipped to measure the Earth's magnetic field. Others were employed to study certain technical aspects of spaceflight as well as physical phenomena in the Earth's upper atmosphere and in deep space. A number of them, such as Cosmos 597, 600, and 602, were apparently used to collect intelligence information on the Yom Kippur War between the Arab states and Israel in October 1973. Some Cosmos spacecraft may have had the ability to intercept satellites launched by other nations.

Introduction

      in astronomy, the entire physical universe consisting of all objects and phenomena observed or postulated.

      If one looks up on a clear night, one sees that the sky is full of stars. During the summer months in the Northern Hemisphere, a faint band of light stretches from horizon to horizon, a swath of pale white cutting across a background of deepest black. For the early Egyptians, this was the heavenly Nile, flowing through the land of the dead ruled by Osiris. The ancient Greeks likened it to a river of milk. Astronomers now know that the band is actually composed of countless stars in a flattened disk seen edge on. The stars are so close to one another along the line of sight that the unaided eye has difficulty discerning the individual members. Through a large telescope, astronomers find myriads of like systems sprinkled throughout the depths of space. They call such vast collections of stars galaxies, after the Greek word for milk, and call the local galaxy to which the Sun belongs the Milky Way Galaxy or simply the Galaxy.

      Every visible star is a sun in its own right. Ever since this realization first dawned in the collective mind of humanity, it has been speculated that many stars other than the Sun also have planetary systems encircling them. The related issue of the origin of the solar system, too, has always had special fascination for speculative thinkers, and the quest to understand it on a firm scientific basis has continued into the present day.

      Some stars are intrinsically brighter than the Sun; others, fainter. Much less light is received from the stars than from the Sun because the stars are all much farther away. Indeed, they appear densely packed in the Milky Way only because there are so many of them. The actual separations of the stars are enormous, so large that it is conventional to measure their distances in units of how far light can travel in a given amount of time. The speed of light (in a vacuum) equals 3 × 10^10 cm/sec (centimetres per second); at such a speed, it is possible to circle the Earth seven times in a single second. Thus in terrestrial terms the Sun, which lies 500 light-seconds from the Earth, is very far away; however, even the next closest star, Proxima Centauri, at a distance of 4.3 light-years (4.1 × 10^18 cm), is 270,000 times farther yet. The stars that lie on the opposite side of the Milky Way from the Sun have distances that are on the order of 100,000 light-years, which is the typical diameter of a large spiral galaxy.
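
      These unit conversions are simple arithmetic. The following minimal Python check of the figures quoted above uses only values given in the text, with a year taken as 3.156 × 10^7 seconds:

# Quick check of the light-travel distances quoted above (cgs units).
c = 3e10                 # speed of light, cm/sec
year = 3.156e7           # seconds in one year
light_year = c * year    # ~9.5 x 10^17 cm

proxima = 4.3 * light_year    # distance to Proxima Centauri
print(proxima)                # ~4.1 x 10^18 cm, as quoted
print(proxima / (500 * c))    # Sun lies 500 light-seconds away: ~270,000 times nearer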

      If the kingdom of the stars seems vast, the realm of the galaxies is larger still. The nearest galaxies to the Milky Way system are the Large and Small Magellanic Clouds, two irregular satellites of the Galaxy visible to the naked eye in the Southern Hemisphere. The Magellanic Clouds are relatively small (containing roughly 10^9 stars) compared to the Galaxy (with some 10^11 stars), and they lie at a distance of about 200,000 light-years. The nearest large galaxy comparable to the Galaxy is the Andromeda galaxy (also called M31 because it was the 31st entry in a catalog of astronomical objects compiled by the French astronomer Charles Messier in 1781), and it lies at a distance of about 2,000,000 light-years. The Magellanic Clouds, the Andromeda galaxy, and the Milky Way system all are part of an aggregation of two dozen or so neighbouring galaxies known as the Local Group. The Galaxy and M31 are the largest members of this group.

      The Galaxy and M31 are both spiral galaxies, and they are among the brighter and more massive of all spiral galaxies. The most luminous galaxies, however, are not spirals but rather supergiant ellipticals (also called cD galaxies by astronomers for historical reasons that are not particularly illuminating). Elliptical galaxies have roundish shapes rather than the flattened distributions that characterize spiral galaxies, and they tend to occur in rich clusters (those containing thousands of members) rather than in the loose groups favoured by spirals.

      The brightest member galaxies of rich clusters have been detected at distances exceeding several thousand million light-years from the Earth. The branch of learning that deals with phenomena at the scale of many millions of light-years is called cosmology—a term derived from combining two Greek words, kosmos, meaning “order,” “harmony,” and “the world,” and logos, signifying “word” or “discourse.” Cosmology is, in effect, the study of the universe at large. A dramatic new feature, not present on small scales, emerges when the universe is viewed in the large—namely, the cosmological expansion. On cosmological scales, galaxies (or, at least, clusters of galaxies) appear to be racing away from one another with the apparent velocity of recession being linearly proportional to the distance of the object. This relation is known as the Hubble law (after its discoverer, the American astronomer Edwin Powell Hubble). Interpreted in the simplest fashion, the Hubble law implies that roughly 10^10 years ago, all of the matter in the universe was closely packed together in an incredibly dense state and that everything then exploded in a “big bang,” the signature of the explosion being written eventually in the galaxies of stars that formed out of the expanding debris of matter. Strong scientific support for this interpretation of a big bang origin of the universe comes from the detection by radio telescopes of a steady and uniform background of microwave radiation. The cosmic microwave background is believed to be a ghostly remnant of the fierce light of the primeval fireball reduced by cosmic expansion to a shadow of its former splendour but still pervading every corner of the known universe.
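
      The 10^10-year figure follows directly from the form of the law. A sketch of the estimate, assuming a pure, undecelerated expansion, with H0 denoting the constant of proportionality in the Hubble law:

\[
v = H_0 d \quad\Longrightarrow\quad t \approx \frac{d}{v} = \frac{1}{H_0}.
\]

      The ratio is the same for every pair of galaxies, and measured values of H0 make it roughly 10^10 years; allowing for deceleration by gravity would shorten the estimate somewhat.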

      The simple (and most common) interpretation of the Hubble law as a recession of the galaxies over time through space, however, contains a misleading notion. In a sense, as will be made more precise later in the article, the expansion of the universe represents not so much a fundamental motion of galaxies within a framework of absolute time and absolute space as an expansion of time and space themselves. On cosmological scales, the use of light-travel times to measure distances assumes a special significance because the lengths become so vast that even light, traveling at the fastest speed attainable by any physical entity, takes a significant fraction of the age of the universe, roughly 10^10 years, to travel from an object to an observer. Thus, when astronomers measure objects at cosmological distances from the Local Group, they are seeing the objects as they existed during a time when the universe was much younger than it is today. Under these circumstances, Albert Einstein taught in his theory of general relativity that the gravitational field of everything in the universe so warps space and time as to require a very careful reevaluation of quantities whose seemingly elementary natures are normally taken for granted.

      The observed expansion of the universe immediately raises the spectre that the universe is evolving, that it had a beginning and will have an end. The steady state alternative, postulated by a British school of cosmologists in 1948, is no longer considered viable by most astronomers. Yet, the notion that the Cosmos had a beginning, while common in many theologies, raises deep and puzzling questions for science, for it implies a creation event—a creation not only of all the mass-energy that now exists in the universe but also perhaps of space-time itself.

      The issue of how the universe will end seems, at first sight, more amenable to conventional analysis. Because the universe is currently expanding, one may ask whether this expansion will continue into the indefinite future or whether after the passage of some finite time, the expansion will be reversed by the gravitational attraction of all of the matter for itself. The procedure for answering this question seems straightforward: either measure directly the rate of deceleration in the expansion of the galaxies to extrapolate whether they will eventually come to a halt, or measure the total amount of matter in the universe to see if there is enough to supply the gravitation needed to make the universe bound. Unfortunately, astronomers' assaults on both fronts have been stymied by two unforeseen circumstances. First, it is now conceded that earlier attempts to measure the deceleration rate have been affected by evolutionary effects of unknown magnitude in the observed galaxies that invalidate the simple interpretations. Second, it is recognized that within the Cosmos there may be an unknown amount of “hidden mass,” which cannot be seen by conventional astronomical techniques but which contributes substantially to the gravitation of the universe.

      The hope is that, somehow, quantum physics will ultimately supply theoretical answers (which can then be tested observationally and experimentally) to each of these difficulties. The ongoing effort in particle physics to find a unified basis for all the elementary forces of nature has yielded promising new ways to think about the most fundamental of all questions regarding astronomical origins; it has offered a tentative prediction concerning the deceleration rate of the universe; and it has offered a plethora of candidates for the hidden mass of the universe.

      This article traces the development of modern conceptions of the Cosmos and summarizes the prevailing theories of its origin and evolution. Humanity has traveled a long road since self-centred societies imagined the creation of the Earth, the Sun, and the Moon as the main act, with the formation of the rest of the universe as almost an afterthought. Today it is known that the Earth is only a small ball of rock in a Cosmos of unimaginable vastness and that the birth of the solar system was probably only one event among many that occurred against the backdrop of an already mature universe. Yet, as humbling as the lesson has been, it has also unveiled a remarkable fact, one that endows the minutest particle in this universe with a rich and noble heritage. Events hypothesized to have occurred in the first few minutes of the creation of the universe turn out to have had profound influence on the birth, life, and death of galaxies, stars, and planets. Indeed, there is a direct, though tortuous, lineage from the forging of the matter of the universe in a primal furnace of incredible heat and light to the gathering on Earth of atoms versatile enough to serve as a chemical basis of life. The intrinsic harmony of the resultant worldview has great philosophical and aesthetic appeal and perhaps explains the resurgence of public interest in this subject.

      For detailed information on the structure and evolution of the major components of the Cosmos, see galaxy; star; star cluster; astronomical map; nebula; and solar system. The present article considers only aspects of these topics that satisfy one of three criteria: (1) they bear on the general issue of astronomical origins; (2) they are important to an integrated picture of how the universe evolved; or (3) they play a major role in forming humanity's growing vision of the miraculous unity that is the Cosmos.

History of humanity's perception of the universe

Earliest conceptions (astronomy)
      All scientific thinking on the nature of the Cosmos can be traced to the distinctive geometric patterns formed by the stars in the night sky. Even prehistoric people must have noticed that, apart from a daily rotation (which is now understood to arise from the spin of the Earth), the stars did not seem to move with respect to one another: the stars appear “fixed.” Early nomads found that knowledge of the constellations could guide their travels, and they developed stories to help them remember the relative positions of the stars in the night sky. These stories became the mythical tales that are part of most cultures.

      When nomads turned to farming, an intimate knowledge of the constellations served a new function—an aid in timekeeping, in particular for keeping track of the seasons. People had noticed very early that certain celestial objects did not remain stationary relative to the “fixed” stars; instead, during the course of a year, they moved forward and backward in a narrow strip of the sky that contained 12 constellations constituting the signs of the zodiac. Seven such wanderers were known to the ancients: the Sun, Moon, Mercury, Venus, Mars, Jupiter, and Saturn. Foremost among the wanderers was the Sun: day and night came with its rising and setting, and its motion through the zodiac signaled the season to plant and the season to reap. Next in importance was the Moon: its position correlated with the tides and its shape changed intriguingly over the course of a month. The Sun and Moon had the power of gods; why not then the other wanderers? Thus probably arose the astrological belief that the positions of the planets (from the Greek word planetes, “wanderers”) in the zodiac could influence worldly events and even cause the rise and fall of kings. In homage to this belief, Babylonian priests devised the week of seven days, whose names even in various modern languages (for example, English, French, or Norwegian) can still easily be traced to their origins in the seven planet-gods.

Astronomical theories of the ancient Greeks
 The apex in the description of planetary motions during classical antiquity was reached with the Greeks, who were of course superb geometers. Like their predecessors, Greek astronomers adopted the natural picture, from the point of view of an observer on Earth, that the Earth lay motionless at the centre of a rigidly rotating celestial sphere (to which the stars were “fixed”), and that the complex to-and-fro wanderings of the planets in the zodiac were to be described against this unchanging backdrop. They developed an epicyclic model that would reproduce the observed planetary motions with quite astonishing accuracy. The model invoked small circles on top of large circles, all rotating at individual uniform speeds, and it culminated about AD 140 with the work of Ptolemy, who introduced the ingenious artifact of displaced centres for the circles to improve the empirical fit. (See Ptolemy's theory of the solar system.) Although the model (Ptolemaic system) was purely kinematic and did not attempt to address the dynamical reasons for why the motions were as they were, it laid the groundwork for the paradigm that nature is not capricious but possesses a regularity and precision that can be discovered from experience and used to predict future events.

      The application of the methods of Euclidean geometry to planetary astronomy by the Greeks led to other schools of thought as well. Pythagoras (c. 570–? BC), for example, argued that the world could be understood on rational principles (“all things are numbers”); that it was made of four elements—earth, water, air, and fire; that the Earth was a sphere; and that the Moon shone by reflected light. In the 4th century BC Heracleides (Heracleides Ponticus), a follower of Pythagoras, taught that the spherical Earth rotated freely in space and that Mercury and Venus revolved about the Sun. From the different lengths of shadows cast in Syene and Alexandria at noon on the first day of summer, Eratosthenes (c. 276–194 BC) computed the radius of the Earth to an accuracy within 20 percent of the modern value. Starting with the size of the Earth's shadow cast on the Moon during a lunar eclipse, Aristarchus of Samos (c. 310–230 BC) calculated the linear size of the Moon relative to the Earth. From its measured angular size, he then obtained the distance to the Moon. He also proposed a clever scheme to measure the size and distance of the Sun. Although flawed, the method did enable him to deduce that the Sun is much larger than the Earth. This deduction led Aristarchus to speculate that the Earth revolves about the Sun rather than the other way around.

      Unfortunately, except for the conception that the Earth is a sphere (inferred from the Earth's shadow on the Moon always being circular during a lunar eclipse), these ideas failed to gain general acceptance. The precise reasons remain unclear, but the growing separation between the empirical and aesthetic branches of learning must have played a major role. The unparalleled numerical accuracy achieved by the epicyclic theory of planetary motions lent great empirical validity to the Ptolemaic system. Henceforth, such computational matters could be left to practical astronomers without the necessity of ascertaining the physical reality of the model. Instead, absolute truth was to be sought through the Platonic ideal of pure thought. Even the Pythagoreans fell into this trap; the depths to which they eventually sank may be judged from the story that they discovered and then tried to conceal the fact that the square root of 2 is an irrational number (i.e., cannot be expressed as a ratio of two integers).

The system of Aristotle and its impact on medieval thought
 The systematic application of pure reason to the explanation of natural phenomena reached its extreme development with Aristotle (384–322 BC), whose great system of the world later came to be regarded as the synthesis of all worthwhile knowledge. (See Aristotle's theory of the solar system.) Aristotle argued that humans could not inhabit a moving and rotating Earth without violating commonsense perceptions. Moreover, in his theory of motion, all terrestrial motion, presumably including that of the Earth itself, would grind to a halt without the continued application of force. He took for granted the action of friction because he would not allow the seminal idealization of a body moving through a void (“nature abhors a vacuum”). Thus, Aristotle was misled into equating force with velocity rather than, as Sir Isaac Newton was to show much later, with (mass times) acceleration. Celestial objects were exempt from dynamical decay because they moved in a higher stratum in which a perfect sphere was the natural shape of heavenly bodies and uniform rotation in circles was the natural state of their motion. Indeed, primary motion was derived from the outermost sphere, the seat of the unchangeable stars and of divine power. No further explanation was needed beyond the aesthetic one. In this scheme, the imperfect motion of comets had to be postulated as meteorological phenomena that took place within the imperfect atmosphere of the Earth.

      The great merit of Aristotle's system was its internal logic, a grand attempt to unify all branches of human knowledge within the scope of a single self-consistent and comprehensive theory. Its great weakness was that its rigid arguments rested almost entirely on aesthetic grounds; it lacked a mechanism by which empirical knowledge gained from experimentation or observation could be used to test, modify, or reject the fundamental principles underlying the theory. Aristotle's system had the underlying philosophical drive of modern science without its flexible procedure of self-correction that allows the truth to be approached in a series of successive approximations.

      With the fall of the Roman Empire in AD 476, much of what was known to the Greeks was lost or forgotten—at least to Western civilizations. (Hindu astronomers still taught that the Earth was a sphere and that it rotated once daily.) The Aristotelian system, however, resonated with the teachings of the Roman Catholic Church during the Middle Ages, especially in the writings of St. Thomas Aquinas in the 13th century, and later, during the period of the Counter-Reformation in the 16th and early 17th centuries, it ascended to the status of religious dogma. Thus did the notion of an Earth-centred universe become gradually enmeshed in the politics of religion. Also welcome in an age that insisted on a literal interpretation of the Scriptures was Aristotle's view that the living species of the Earth were fixed for all time. What was not accepted was Aristotle's argument on logical grounds that the world was eternal, extending infinitely into the past and the future even though it had finite spatial extent. For the church, there was definitely a creation event, and infinity was reserved for God, not space or time.

The Copernican revolution
 The Renaissance brought a fresh spirit of inquiry to the arts and sciences. Explorers and travelers brought home the vestiges of classical knowledge that had been preserved in the Muslim world and the East, and in the 15th century Aristarchus' heliocentric hypothesis again came to be debated in certain educated circles. The boldest step was taken by the Polish astronomer Nicolaus Copernicus, who hesitated so long to publish that he did not see a printed copy of his own work until he lay on his deathbed in 1543. Copernicus recognized more profoundly than anyone else the advantages of a Sun-centred planetary system (Copernican system). (See Copernicus' theory of the solar system.) By adopting the view that the Earth circled the Sun, he could qualitatively explain the to-and-fro wanderings of the planets much more simply than Ptolemy. For example, at certain times in the motions of the Earth and Mars about the Sun, the Earth would catch up with Mars's projected motion, and then that planet would appear to go backward through the zodiac. Unfortunately, in his Sun-centred system, Copernicus continued to adhere to the established tradition of using uniform circular motion, and if he adopted only one large circle for the orbit of each planet, his calculated planetary positions would in fact have matched the observed positions of the planets less well than tables based on the Ptolemaic system. This defect could be partially corrected by providing additional smaller circles, but then much of the beauty and simplicity of Copernicus' original system would be lost. Moreover, though the Sun was now removed from the list of planets and the Earth added, the Moon still needed to move around the Earth.

      It was Galileo who exploited the power of newly invented lenses to build a telescope that would accumulate indirect support for the Copernican viewpoint. Critics had no rational response to Galileo's discovery of the correlation of Venus' phases of illumination with its orbital position relative to the Sun, which required it to circle that body rather than the Earth. Nor could they refute his discovery of the four brightest satellites of Jupiter (the so-called Galilean satellites), which demonstrated that planets could indeed possess moons. They could only refuse to look through the telescope or refuse to see what their own eyes told them.

      Galileo also mounted a systematic attack on other accepted teachings of Aristotle by showing, for example, that the Sun was not perfect but had spots. Besieged on all sides by what it perceived as heretical stirrings, the church forced Galileo to recant his support of the heliocentric system in 1633. Confined to house arrest during his last years, Galileo would perform actual experiments and thought experiments (summarized in a treatise) that would refute the core of Aristotelian dynamics. Most notably, he formulated the concept that would eventually lead (in the hands of René Descartes) to the so-called first law of mechanics—namely, that a body in motion, freed from friction and from all other forces, would move, not in a circle, but in a straight line at uniform speed. The frame of reference for making such measurements was ultimately the “fixed stars.” Galileo also argued that, in the gravitational field of the Earth and in the absence of air drag, bodies of different weights would fall at the same rate. This finding would eventually lead (in the hands of Einstein) to the principle of equivalence, a cornerstone of the theory of general relativity.

 It was the German astronomer Johannes Kepler, a contemporary of Galileo, who would provide the crucial blow that assured the success of the Copernican revolution. (See Kepler's theory of the solar system.) Of all the planets whose orbits Copernicus had tried to explain with a single circle, Mars had the largest departure (the largest eccentricity, in astronomical nomenclature); consequently, Kepler arranged to work with the foremost observational astronomer of his day, Tycho Brahe of Denmark, who had accumulated over many years the most precise positional measurements of this planet. When Kepler finally gained access to the data upon Tycho's death, he painstakingly tried to fit the observations to one curve after another. The work was especially difficult because he had to assume an orbit for the Earth before he could self-consistently subtract the effects of its motion. Finally, after many close calls and rejections, he hit upon a simple, elegant solution—an ellipse with the Sun at one focus. The other planets also fell into place. This triumph was followed by others, notable among which was Kepler's discovery of his so-called three laws of planetary motion. The empirical victory secure, the stage was set for Newton's matchless theoretical campaigns.

      Two towering achievements paved the way for Newton's conquest of the dynamical problem of planetary motions: his discoveries of the second law of mechanics and of the law of universal gravitation. The second law of mechanics generalized the work of Galileo and Descartes on terrestrial dynamics, asserting how bodies generally move when they are subjected to external forces. The law of universal gravitation generalized the work of Galileo and the English physicist Robert Hooke on terrestrial gravity, asserting that two massive bodies attract one another with a force directly proportional to the product of their masses and inversely proportional to the square of their separation distance. By pure mathematical deduction, Newton showed that these two general laws (whose empirical basis rested in the laboratory) implied, when applied to the celestial realm, Kepler's three laws of planetary motion. This brilliant coup completed the Copernican program to replace the old worldview with an alternative that was far superior, both in conceptual principle and in practical application. In the same stroke of genius, Newton unified the mechanics of heaven and Earth and initiated the era of modern science.

      In formulating his laws, Newton asserted as postulates the notions of absolute space (in the sense of Euclidean geometry) and absolute time (a mathematical quantity that flows in the universe without reference to anything else). A kind of relativity principle did exist (“Galilean relativity”) in the freedom to choose different inertial frames of reference—i.e., the form of Newton's laws was unaffected by motion at constant velocity with respect to the “fixed stars.” However, Newton's scheme unambiguously sundered space and time as fundamentally separate entities. This step was necessary for progress to be made, and it was such a wonderfully accurate approximation to the truth for describing motions that are slow compared to the speed of light that it withstood all tests for more than two centuries.

      In 1705 the English astronomer Edmond Halley used Newton's laws to predict that a certain comet last seen in 1682 would reappear 76 years later. When Halley's comet returned on Christmas night 1758, many years after the deaths of both Newton and Halley, no educated person could ever again seriously doubt the power of mechanistic explanations for natural phenomena. Nor would anyone worry again that the unruly excursions of comets through the solar system would smash the crystalline spheres that earlier thinkers had mentally constructed to carry planets and the other celestial bodies through the heavens. The attention of professional astronomers now turned increasingly toward an understanding of the stars.

      In the latter effort, the British astronomer William Herschel and his son John led the assault. The construction of ever more powerful reflecting telescopes allowed them during the late 1700s and early 1800s to measure the angular positions and apparent brightnesses of many faint stars. In an earlier epoch, Galileo had turned his telescope to the Milky Way and saw that it was composed of countless individual stars. Now the Herschels began an ambitious program to gauge quantitatively the distribution of the stars in the sky. On the assumption (first adopted by the Dutch mathematician and scientist Christiaan Huygens) that faintness is a statistical measure of distance, they inferred the enormous average separations of stars. This view received direct confirmation for the nearest stars through parallax measurements of their distances from the Earth. Later, photographs taken over a period of many years also showed that some stars changed locations across the line of sight relative to the background; thus, astronomers learned that stars are not truly fixed, but rather have motions with respect to one another. These real motions—as well as the apparent ones due to parallax, first measured by the German astronomer Friedrich Bessel in 1838—were not detected by the ancients because of the enormous distance scale of the stellar universe.

Perceptions of the 20th century
Kapteyn's statistical studies
      The statistical studies based on these new perceptions continued into the early 20th century. They culminated with the analysis by the Dutch astronomer Jacobus Cornelius Kapteyn, who, like William Herschel before him, used number counts of stars to study their distribution in space. It can be shown for stars with an arbitrary but fixed mixture of intrinsic brightnesses that—in the absence of absorption of starlight—the number N of stars with apparent brightness (energy flux) f larger than a specified level f0 is given by N = Af0^(-3/2), where A is a constant, if the stars are distributed uniformly in Euclidean space (space satisfying the principles of Euclidean geometry). The number N would increase with decreasing limiting apparent brightness f0, because one is sampling, on average, larger volumes of space when one counts fainter sources. Kapteyn found that the number N increased less rapidly with decreasing f0 than the hypothetical value Af0^(-3/2); this indicated to him that the solar system lay near the centre of a distribution of stars, which thinned in number with increasing distance from the centre. Moreover, Kapteyn determined that the rate of thinning was more rapid in certain directions than in others. This observation, in conjunction with other arguments that set the scale, led him in the first two decades of the 20th century to depict the Milky Way Galaxy (then confused with the entire universe) as a rather small, flattened stratum of stars and gaseous nebulas in which the number of stars decreased to 10 percent of their central value at a distance in the plane of about 8,500 light-years from the galactic centre.
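
      The origin of the -3/2 power can be sketched in a few lines for stars of a single intrinsic luminosity L; the general case of a fixed mixture follows by summing such terms. A star of flux f lies at distance r with f = L/4πr^2, so counting stars brighter than f0 amounts to counting a uniform number density n of stars out to a limiting radius:

\[
f = \frac{L}{4\pi r^2} > f_0 \iff r < r_0 = \sqrt{\frac{L}{4\pi f_0}},
\qquad
N = \frac{4}{3}\pi r_0^3\, n \propto f_0^{-3/2}.
\]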

Shapley's contributions
      In 1917 the American astronomer Harlow Shapley mounted a serious challenge to the Kapteyn universe. Shapley's study of the distances of globular clusters led him to conclude that their distribution centred on a point that lay in the direction of the constellation Sagittarius and at a distance that he estimated to be about 45,000 light-years (50 percent larger than the modern value). Shapley was able to determine the distance to the globulars through the calibration of the intrinsic brightnesses of some variable stars found in them. (Knowing the period of the light variations allowed Shapley to infer the average intrinsic brightness. A measurement of the average apparent brightness then allowed, from the 1/r^2 law of brightness, a deduction of the distance r.) According to Shapley, the galactic system was much larger than Kapteyn's estimate. Moreover, the Sun was located not at its centre but rather at its radial outskirts (though close to the midplane of a flattened disk). Shapley's dethronement of the Sun from the centre of the stellar system has often been compared with Copernicus' dethronement of the Earth from the centre of the planetary system, but its largest astronomical impact rested with the enormous physical dimensions ascribed to the Galaxy. In 1920 a debate was arranged between Shapley and Heber D. Curtis to discuss this issue before the National Academy of Sciences in Washington, D.C.
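
      In symbols, the parenthetical procedure above amounts to inverting the inverse-square law: with the intrinsic luminosity L inferred from the period of variation and the apparent brightness f measured,

\[
f = \frac{L}{4\pi r^2} \quad\Longrightarrow\quad r = \sqrt{\frac{L}{4\pi f}}.
\]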

      The debate also addressed a second controversy—the nature of the so-called spiral nebulas. Shapley and his adherents held that these objects were made up of diffuse gas and were therefore similar to the other gas clouds known within the confines of the Milky Way Galaxy. Curtis and others, by contrast, maintained that the spirals consisted of stars and were thus equivalent to independent galaxies coequal to the Galaxy. A parallel line of thought had been proposed earlier by the philosophers Immanuel Kant and Thomas Wright and by William Herschel. The renewed argument over the status of the spirals grew in part out of an important development that occurred around the turn of the 20th century: the astronomical incorporation of the methods of spectroscopy both to study the physical nature of celestial bodies and to obtain the component of their velocities along the line of sight. By analyzing the properties of spectral lines in the received light (e.g., seeing if the lines were produced by absorption or emission and if the lines were broad or narrow), or by analyzing the gross colours of the observed object, astronomers learned to distinguish between ordinary stars and gaseous nebulas existing in the regions between stars. By measuring the displacement in wavelength of the spectral lines with respect to their laboratory counterparts and assuming the displacement to arise from the Doppler effect, they could deduce the velocity of recession (or approach). The spirals posed interpretative difficulties on all counts: they had spectral properties that were unlike either local collections of stars or gaseous nebulas (because of the unforeseen roles of dust and different populations of stars in the arms, disk, and central bulge of a spiral galaxy); and, as had been shown by the American astronomer Vesto Slipher, they generally possessed recession velocities that were enormous compared to those then known for any other astronomical object.
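
      For velocities small compared with that of light, the Doppler relation used in such work reduces to a simple first-order form, with Δλ the measured displacement of a line of laboratory wavelength λ:

\[
\frac{\Delta\lambda}{\lambda} \approx \frac{v}{c},
\]

      with a positive shift (a redshift) signifying recession; a fractional shift of 1 percent, for example, corresponds to a velocity of about 3,000 km/sec.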

      The formal debate between Shapley and Curtis ended inconclusively, but history has proved Shapley to be mostly right on the issue of the off-centre position of the solar system and the large scale of the Galaxy, and Curtis to be mostly right on the issue of the nature of the spirals as independent galaxies. As demonstrated in the work of the Swiss-born U.S. astronomer Robert J. Trumpler in 1930, Kapteyn (and Herschel) had been misled by the effects of the undiscovered but pervasive interstellar dust into thinking that the stars in the Milky Way thinned out with distance much more quickly than they actually do. The effect of interstellar dust was much less important for Shapley's studies because the globular clusters mostly lie well away from the plane of the Milky Way system.

Hubble's research on extragalactic systems
      The decisive piece of evidence concerning the extragalactic nature of the spirals was provided in 1923–24 by Hubble, who succeeded in resolving one field in the Andromeda galaxy (M31) into a collection of distinct stars. Some of the stars proved to be variables of a type similar to those found by Shapley in globular clusters. Measurements of the properties of these variables yielded estimates of their distances. As it turned out, the distance to M31 put it well outside the confines of even Shapley's huge model of the Galaxy, and M31 therefore had to be an independent system of stars (and gas clouds).

      Hubble's findings inaugurated the era of extragalactic astronomy. He himself went on to classify the morphological types of the different galaxies he found: spirals, ellipticals, and irregulars. In 1926 he showed that, apart from a “zone of avoidance” (a region characterized by an apparent absence of galaxies near the plane of the Milky Way caused by the obscuration of interstellar dust), the distribution of galaxies in space is close to uniform when averaged over sufficiently large scales, with no observable boundary or edge. The procedure was identical to that used by Kapteyn and Herschel, with galaxies replacing stars as the luminous sources. The difference was that this time the number count N was proportional to f0^(-3/2), to the limits of the original survey. Hubble's finding provided the empirical justification for the so-called cosmological principle, a term coined by the English mathematician and astrophysicist Edward A. Milne to describe the assumption that at any instant in time the universe is, in the large, homogeneous and isotropic—i.e., statistically the same in every place and in every direction. This represented the ultimate triumph for the Copernican revolution.

      It was also Hubble who interpreted and quantified Slipher's results on the large recessional velocities of galaxies—they correspond to a general overall expansion of the universe. The Hubble law, enunciated in 1929, marked a major turning point in modern thinking about the origin and evolution of the Cosmos. The announcement of cosmological expansion came at a time when scientists were beginning to grapple with the theoretical implications of the revolutions taking place in physics. In his theory of special relativity, formulated in 1905, Einstein had effected a union of space and time, one that fundamentally modified Newtonian perceptions of dynamics, allowing, for example, transformations between mass and energy. In his theory of general relativity, proposed in 1916, Einstein effected an even more remarkable union, one that fundamentally altered Newtonian perceptions of gravitation, allowing gravitation to be seen, not as a force, but as the dynamics of space-time. Taken together, the discoveries of Hubble and Einstein gave rise to a new worldview. The new cosmology gave empirical validation to the notion of a creation event; it assigned a numerical estimate for when the arrow of time first took flight; and it eventually led to the breathtaking idea that everything in the universe could have arisen from literally nothing (see below).

Components of the universe

Planetary systems
      Although it is commonly believed that planetary systems are plentiful in the universe, the only example known with certainty is the solar system. The solar system is conventionally taken to contain the Sun, the major planets and their satellites, dwarf planets, asteroids, comets, interplanetary dust, and interplanetary particles and fields largely associated with the solar wind. Humanity's knowledge of these objects has expanded greatly owing to space exploration. Combined with centuries of intense astronomical observation and theoretical calculation, data transmitted by spacecraft have shed considerable light on the relation between the solar system and the rest of the universe, the problem of the origin of the Earth and the other planets, and the question of the likelihood of comparable planetary systems around other stars.

The Sun
      At the centre of the solar system lies the Sun. Energetically and dynamically, it is the dominant influence in the solar system. The mass of the Sun can be measured from its gravitational pull on the planets and equals 2 × 10^33 g (grams), 1,000 times more massive than Jupiter and 330,000 times more massive than the Earth. As a fraction of its mass, the atmospheric composition of the Sun is probably 72 percent hydrogen, 26 percent helium, and 2 percent elements heavier than hydrogen and helium. Because there is little mixing between the atmosphere and the deep interior (where nuclear reactions occur), this composition is believed to be the one that the Sun was born with. A gas with approximately the solar mix of elements is said to have cosmic abundances because a similar composition is found for most other stars as well as for the medium between the stars.
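
      The remark that the Sun's mass is measured from its pull on the planets can be made concrete with Newton's form of Kepler's third law, M = 4π^2a^3/GP^2, for a planet of negligible mass on a circular orbit of radius a and period P. A minimal check in Python using the Earth's orbit (cgs units):

import math

G = 6.674e-8    # gravitational constant, cm^3 g^-1 sec^-2
a = 1.496e13    # mean Earth-Sun distance, cm
P = 3.156e7     # orbital period of the Earth, sec

# Kepler's third law solved for the central mass
M_sun = 4 * math.pi**2 * a**3 / (G * P**2)
print(M_sun)    # ~2 x 10^33 g, as quoted above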

      The observed rate of release of radiant energy by the Sun equals 3.86 × 10^33 erg/sec (ergs per second). The particles of radiation (photons) stream more or less freely from a layer called the photosphere, which in the Sun is at a temperature of about 5,800 K (kelvins; 5,500° C or 10,000° F). The distribution of wavelengths is characteristic of a thermal body radiating at such a temperature; therefore, in accordance with Planck's law, it peaks in the yellow part of the visible spectrum. The solar luminosity is enormous, but it is much less than it would be if the photons in the hot interior of the Sun could also stream freely. However, the high opacity of the material regulates the actual outward progress of the photons to a slow, stately diffusion. Indeed, the blockage of diffusive heat is so severe in the envelope of the Sun that its layers are unstable to the development of convection currents, which gives the atmosphere of the Sun a granular appearance.
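
      The position of the peak follows from Wien's displacement law, itself a consequence of Planck's law:

\[
\lambda_{\max} \approx \frac{0.29\ \mathrm{cm\ K}}{T} \approx 5 \times 10^{-5}\ \mathrm{cm} = 500\ \mathrm{nm} \quad (T \approx 5,800\ \mathrm{K}),
\]

      near the middle of the visible band.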

      The observed radius of the Sun equals 6.95 × 10^10 cm and is understood to be the result of a balance of forces between the Sun's self-gravity and the pressure of its hot gases, which exist in a nearly fully ionized state (a plasma of positive ions and free electrons) in the deep interior. The plasma in the core of the Sun is compressed to temperatures (about 1.5 × 10^7 K) that are sufficient to provide a rate of thermonuclear reactions that just offsets the slow diffusive loss of radiative heat. Thus, the Sun constitutes a controlled fusion reactor capable of sustaining its present steady loss of radiant energy for a full 9 × 10^9 years before all of its initial supply of hydrogen fuel in the core has been converted into helium. From the radioactive dating of meteorites, it has been estimated that the solar system is 4.6 × 10^9 years old. If this is the age of the Sun, then it is roughly midway through the phase of stable core hydrogen fusion—i.e., the “main-sequence” phase of stellar evolution.
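
      The 9 × 10^9-year lifetime can be roughly reproduced from the numbers already given. Hydrogen fusion releases about 0.7 percent of the rest energy of the matter it consumes, and only the core, roughly a tenth of the Sun's mass, gets hot enough to burn; these two round fractions are standard assumptions supplied here for illustration, not figures from the text:

# Order-of-magnitude estimate of the Sun's core-hydrogen-burning lifetime.
M = 2e33      # solar mass, g
L = 3.86e33   # solar luminosity, erg/sec
c = 3e10      # speed of light, cm/sec

eff = 0.007   # fraction of rest energy released by fusing H into He (assumed)
core = 0.1    # rough fraction of the Sun's mass hot enough to burn (assumed)

t = eff * core * M * c**2 / L    # lifetime in seconds
print(t / 3.156e7)               # ~10^10 years, comparable to the quoted 9 x 10^9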

      The Sun is too opaque to electromagnetic radiation to allow a direct look at the nuclear reactions inferred to take place in its interior. Weakly interacting particles called neutrinos offer a better probe of such reactions because they fly relatively freely from the centre of the Sun. Attempts to measure solar neutrinos by means of radioactive chlorine techniques have found levels that are only about one-third the best theoretical predictions. One possible explanation supposes that neutrinos possess mass and can be converted to (oscillating) forms undetectable by conventional schemes during their passage through the dense solar plasma. Unfortunately, experiments using purified water or large amounts of gallium as the detecting medium have contributed conflicting data with respect to this interpretation.

      An indirect line of evidence suggests that the source of the discrepancy may lie more with unknown neutrino physics than with uncertain solar models. Precise measurements of the small oscillations of the solar surface, presumably induced by motions in the convection zone, allow astronomers to study the properties of waves propagating through the Sun's interior in a fashion analogous to how earthquakes allow geologists to study the properties of the Earth's interior. These investigations reveal that the Sun behaves much as the best theoretical solar models predict, though not exactly. They also show the Sun's radiative core to rotate at about the same angular speed as the mid-latitudes of the solar surface, too slow to have any of the anomalous mechanical or thermal effects that have sometimes been hypothesized for it.

      The outermost layer of the Sun turns once every 25 days at the equator, once every 35 days at the poles. This differential rotation may couple with the Sun's convection zone to produce a dynamo action that amplifies magnetic fields. The basic idea is that magnetic fields carried upward (or downward) by convection currents are twisted and amplified by the differential rotation. “Ropes” of high field strength rise buoyantly to the surface, where they pop out as loops into the corona of the Sun. The corona is an extended region containing very rarefied gas that lies above the photosphere and a transition region called the chromosphere; the temperature of the corona is about 2 × 10^6 K. The anchor points of the ropes of high magnetic flux in the photosphere correspond to sunspots, regions where the gas is cooler than the average photospheric temperature of 5,800 K. Thus, these spots appear relatively dark against the bright yellow background of the general photosphere.

      Sunspots appear, migrate about the solar surface, and disappear as the plasma to which they are anchored moves under the influence of rotation and convection. The average number of sunspots increases and decreases more or less regularly in an 11-year cycle; however, there have been prolonged minima in history. It has been proposed that these prolonged minima correlate with changing climate conditions on the Earth, although the precise mechanisms for effecting such changes remain unclear.

      Other manifestations of magnetic activity arise because of the motion of the flux ropes. It is believed that flares occur on those occasions when two flux ropes of opposite polarity are pressed against each other, and the opposing magnetic fields annihilate in a catastrophic event of magnetic reconnection. The energy stored in the field is thought to go into accelerating fast particles (solar cosmic rays) and into heating the ambient gas, which, being rarefied, has very little heat capacity. Magnetic activity of this type may be what maintains the corona at much higher temperatures than the photosphere.

      Pictures of the solar corona taken during the U.S.-manned Skylab missions (1973) showed that hot coronal gas trapped in closed loops of field lines becomes dense enough to emit appreciable amounts of X rays. In contrast, coronal holes lacking X-ray emission correspond to regions where the magnetic field is too weak to keep the gas trapped and the hot gas has burst open the magnetic-field configuration, expanding away from the surface of the Sun as part of a general solar wind.

      The presence of a solar wind blowing through interplanetary space was first deduced from observations made during the 1950s of the ion tails of comets. With the advent of Earth-orbiting satellites, the particles and fields carried by the solar wind could be measured directly. When the wind blows past the Earth, it contains on average about five particles per cubic centimetre (mostly protons, the nuclei of hydrogen atoms) moving at about 500 km/sec (kilometres per second), but these numbers fluctuate greatly depending on the phase of the solar magnetic cycle and the presence or absence of recent flare activity.
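
      The quoted density and speed imply a definite rate at which the Sun sheds mass in the wind. A back-of-the-envelope Python estimate, assuming the values measured at the Earth's orbit are typical of a full sphere at that distance:

import math

n = 5           # wind protons per cm^3 at the Earth's orbit
m_p = 1.67e-24  # proton mass, g
v = 5e7         # wind speed: 500 km/sec expressed in cm/sec
r = 1.5e13      # Earth-Sun distance, cm

# mass flux through a sphere of radius r
mdot = 4 * math.pi * r**2 * n * m_p * v
print(mdot)                      # ~10^12 g/sec
print(mdot * 3.156e7 / 2e33)     # ~2 x 10^-14 solar masses per year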

Planets and their satellites
      Clues as to how the planets were formed lie in the regularities of their orbital motions, their satellite systems, and their chemical compositions. Compared to their sizes, the separations of planets from each other are enormous; and, apart from a diffuse solar wind and minor debris, interplanetary space is remarkably empty. Thus, as a general rule, the planets have been well isolated dynamically and chemically since their birth, and the present configuration of the solar system provides hints of the initial conditions, in spite of the more than 4 × 10^9 years of subsequent evolution.

      With the exception of Mercury, the orbits of the major planets are all nearly circular; they lie within a few degrees of the same plane; and they have the same direct sense of revolution as the rotation of the Sun. Since these facts were first noted, they have suggested to philosophers and scientists such as Kant and Pierre-Simon Laplace of France that the planets of the solar system must have originally formed from a flat nebular disk that revolved about the primitive Sun. The exception, Mercury, is not troublesome; it suffers strong resonant interactions with other bodies that may have considerably modified its original orbital characteristics.

      In the inner planetary system where the terrestrial planets—Mercury, Venus, Earth, and Mars—reside, the distance between successive planets is relatively small in comparison with the outer planetary system where the Jovian planets—Jupiter, Saturn, Uranus, and Neptune—reside. Moreover, the terrestrial planets are small and rocky or ironlike, while the Jovian planets (also called the giant planets) are large and gaseous or icy. Neither the terrestrial nor the Jovian planets exhibit the chemical elements in their cosmic proportions, but the latter, particularly Jupiter and Saturn, approach these proportions to a much closer degree. This implies that the process of planet building, unlike the mechanism of star formation, probably involves forces other than just gravity, for gravitation is universal and does not distinguish between different elements if they are in a gaseous form. Condensation (i.e., the separation of solid phases of matter from gaseous phases if the temperature drops to sufficiently low values) suggests itself as an important process.

      From this point of view, the terrestrial planets have managed only to gather into their bodies mostly materials containing elements heavier than hydrogen and helium—materials such as silicate rocks and metallic iron or nickel, which can condense as solids from a gaseous phase even at relatively high temperatures (between 1,200 and 2,000 K). In contrast, Uranus and Neptune have not only accumulated rocky and metallic compounds but also ices of water, ammonia, and methane, which can condense from nebular gas only at much lower temperatures (between 100 and 200 K). Jupiter and Saturn succeeded additionally in capturing substantial amounts of hydrogen and helium (in their envelopes). Since hydrogen and helium at plausible nebular pressures do not solidify unless the temperature is lower than even in the coldest regions of interstellar space, this suggests that in the two largest planets of the solar system gravitation did play a role in the direct acquisition of massive amounts of these gases.

      The terrestrial and Jovian planets possess other systematic differences: the former generally have no rings or satellites, while the latter each have a set of rings and many satellites. Here, Earth and Mars are exceptions to the rule. Earth has of course one satellite, the Moon; Mars has two, Phobos and Deimos. Of these exceptions, the more difficult case to explain has long remained the Moon because it is an unusually large object for a satellite. Indeed, the Moon is only somewhat smaller than the largest and most massive satellites in the solar system: Jupiter's Ganymede and Callisto and Saturn's Titan. In comparison, Phobos and Deimos are tiny objects that may well have been captured after Mars had already formed.

      The satellite and ring systems of the giant planets, particularly those of Jupiter and Saturn, resemble miniature planetary systems. As an analogy, one may say that moons and rings are to the giant planets what the planets and the asteroid belt are to the Sun. The moons of the giant planets can be classified as either regular or irregular. The regular satellites have nearly circular orbits lying in the same plane as the equator of the parent planet and revolve in the same direction as its rotation. The irregular satellites violate one or more of the above rules. In addition, they generally tend to be small bodies and to lie at large distances from the central planet. The regular satellites may have formed from protoplanetary disks that encircled the planet in the same manner as a protostellar disk encircled the Sun in the nebular hypothesis. The most likely explanation for the irregular satellites is that they are captured bodies.

      The thin flat rings that encircle Jupiter, Saturn, Uranus, and Neptune are composed of innumerable small solid bodies. Each piece of the ring is in a nearly perfect circular orbit about the central planet. Theory suggests that noncircular motions are damped by mutual inelastic collisions of the particulate matter to very small values. These collisions would have led to gradual agglomeration into larger bodies had the rings not lain in such close proximity to the planet (i.e., within the Roche limit). The strong tidal forces that exist inside the Roche limit of a planet are believed to be capable of tearing apart loosely bound aggregates of particulate matter and thereby preventing their agglomeration into moons. It is unclear, however, whether planetary rings are the natural debris left over from an earlier period of satellite formation in a protoplanetary disk that extended almost to the planet's surface or whether they arose from the more recent breakup and erosion (by continual collisions and by micrometeoroid bombardment) of some larger parent body. There does exist some evidence from dynamic studies of the gravitational interactions of the rings and satellites of Saturn that the rings may be appreciably younger than the solar system in general.
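
      For reference, the classical Roche limit for a fluid satellite of density ρs circling a planet of radius Rp and density ρp is approximately

\[
d \approx 2.44\, R_p \left(\frac{\rho_p}{\rho_s}\right)^{1/3},
\]

      about 2.4 planetary radii when the two densities are equal; Saturn's main rings, for example, lie within roughly this distance of the planet.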

Asteroids, meteoroids, comets, and interplanetary dust
      In addition to the Sun and its wind and the planets and their satellites, the solar system contains a large number of minor bodies. The most conspicuous of these are the asteroids and comets. Smaller bodies also exist—meteoroids, micrometeoroids, and interplanetary dust—but these probably are fragments of the larger asteroids and comets. Indeed, there is a continuous distribution of minor bodies in the solar system, from dust particles with radii of only a fraction of a micrometre to asteroids (or minor planets) with radii of several hundred kilometres.

      Asteroids are rocky or iron-bearing bodies found orbiting the Sun in great numbers in a belt between Mars and Jupiter. Nearly all of the total mass of the asteroids, about 10^-3 that of the Earth, is contained in the largest examples such as Ceres, Pallas, and Vesta, but the largest numbers have radii of 1 to 10 kilometres (the lower limit being more a matter of nomenclature than of measurement). A few bodies, such as Chiron, lie outside the belt between Mars and Jupiter. The exceptions, however, are relatively rare. The theoretical understanding of this observational result lies in computer simulations that show that an asteroid placed almost anywhere else in the solar system besides the known asteroid belt would be unstable owing to gravitational perturbations by the planets. If the early solar system were littered with asteroid-sized bodies, then the emergence of the planets would have swept interplanetary space relatively clean except for the debris that happened to have orbits fit for survival.

      Meteoroids are chunks of asteroids or comets that have Earth-crossing orbits. One theory for the production of meteoroids has them originating from the shattering of two asteroids that collide violently in space. Some of the pieces may subsequently suffer resonant interactions with Jupiter, which, over 10,000 to 100,000 years, throw them into elongated Earth-crossing orbits. A meteoroid entering the Earth's atmosphere will heat up during the passage and become a meteor, a fiery “shooting star.” If the mass of the meteor exceeds one kilogram, it can survive the flight and land on the ground as a meteorite. Meteorites come in three basic compositions: stones, stony irons, and irons. Radioactive dating of meteorites establishes that they have a narrow range of ages. The time since their parent bodies first solidified equals about 4.6 × 10⁹ years, which yields the conventional estimate for the age of the entire solar system.

      The cratering records on the airless (and therefore erosion-free) Moon and Mercury are consistent with a very heavy period of meteoritic impacts during the first several hundred million years of the history of the solar system, with the bombardment tailing off dramatically about 4 × 10⁹ years ago. This picture suggests that primitive asteroids and meteoroids may have been the building blocks (“planetesimals”) of the terrestrial planets (and perhaps also the cores of the giant planets) and that the present-day asteroids failed to be gathered into another full-fledged planet because their noncircular velocities are so high (probably owing to the past near-resonant action of Jupiter's gravitational perturbations) as to cause them generally to shatter rather than to agglomerate when they collide.

      Comets also are cosmic debris, probably planetesimals that originally resided in the vicinity of the orbits of Uranus and Neptune rather than in the warmer regions of the asteroid belt. Thus, the nuclei of comets are icy balls of frozen water, methane, and ammonia, mixed with small pieces of rock and dust, rather than the largely volatile-free stones and irons that typify asteroids. In the most popular theory, icy planetesimals in the primitive solar nebula that wandered close to Uranus or Neptune but not close enough to be captured by them were flung to great distances from the Sun, some to be lost from the solar system while others populated what was to become a great cloud of cometary bodies, perhaps 10 trillion in number. Such a cloud was first hypothesized by the Dutch astronomer Jan Hendrik Oort.

      In the original version of the theory, the Oort cloud extended tens of thousands of times farther from the Sun than the Earth, a significant fraction of the way to the nearest stars. Random encounters with passing stars would periodically throw some of the comets into new orbits, plunging them back toward the heart of the solar system. As a comet nears the Sun, the ices begin to evaporate, loosening the trapped dust and forming a large coma that completely surrounds the small nucleus, which is the ultimate source of all the material. The solar wind blows back the evaporating gas into an ion tail, and radiation pressure pushes back the small particulate solids into a dust tail. Each solid particle is now an independently orbiting satellite of the Sun, and the accumulation of countless such passages by many comets contributes to the total quantity of dust particles and micrometeoroids found in interplanetary space.

      The total mass contained in all the comets is highly uncertain. Modern estimates range from 1 to 100 Earth masses. Part of the uncertainty concerns the reality of a hypothesized massive “inner Oort cloud”—or “Kuiper belt” (if the distribution is flattened)—of comets that would exist at distances from the Sun 40 to 10,000 times that of the orbit of the Earth. At such locations, the comets would not be much perturbed either by typical passing stars or by the gravity of the planets of the solar system, and they could reside in the inner cloud or belt for long periods of time without detection. It has been speculated, however, that a rare close passage by another star (possibly an undetected companion of the Sun) may send a shower of such comets streaming toward the inner solar system. If enough large cometary nuclei in such showers happen to strike the Earth, the clouds of dust and ash that they would raise might be sufficient to trigger mass biological extinctions. An event of this kind appears especially promising as an explanation for the relatively sudden disappearance of the dinosaurs from the Earth.

Origin of the solar system
      Modern versions of the nebular hypothesis all begin with the collapse of a rotating interstellar cloud that is destined to form the solar system. The tendency to conserve angular momentum causes the falling gas to spin faster and flatten, eventually forming a central concentration (protosun) surrounded by a rotating disk of matter. Detailed calculations show that there may be a prolonged phase of infall that continues to build up a disk of increasing mass and size. There also may be some accretion of the material in the disk onto the star, the process transferring mass inward and angular momentum outward, which helps to explain why the Sun presently contains 99.9 percent of the total mass of the solar system but only 2 percent of the total angular momentum.
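
      The spin-up that accompanies the collapse is a direct consequence of angular momentum conservation: for a cloud of fixed mass, L ~ MR²ω, so ω grows as 1/R². A minimal sketch, with arbitrary round numbers for the starting radius and rotation period:

        # Angular momentum L ~ M * R**2 * omega is conserved, so omega ~ 1/R**2.
        PC, AU = 3.086e16, 1.496e11        # metres per parsec and per astronomical unit
        r0, r1 = 0.1 * PC, 100 * AU        # assumed initial and final radii
        period0 = 1e7                      # assumed initial rotation period in years

        spinup = (r0 / r1) ** 2            # factor by which omega increases
        print(f"spin-up factor: {spinup:.0f}")                     # ~4 x 10^4
        print(f"final rotation period: {period0 / spinup:.0f} yr")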

      Because the chemical compositions of the planets as a function of increasing radial distance from the Sun follow a pattern that corresponds to sequential condensation from a gaseous state, cosmochemists originally postulated, for simplicity, that the solar nebula began in a hot and purely gaseous state. Small pieces of solids were then imagined to have condensed from the gas in the disk as the latter slowly cooled from high temperatures, with the coolest final temperatures being reached at the greatest distance from the centre. The process is akin to soot forming out of a smoking candle flame. Astronomical observations, however, show that dust grains of approximately the correct composition already exist in the interstellar medium, and theoretical calculations indicate that the refractory cores of the grains would survive introduction into most of the primitive solar nebula. The icy mantles that coat the grain cores would, however, be evaporated away in the inner solar system. It is probable, therefore, that the systematics of the observed planetary compositions reflect not a condensation sequence but rather an evaporation sequence.

      In any case, whether the dust particles form by chemical condensation from the nebular gas or exist from the start, there seems little doubt that they would grow rapidly by various agglomeration processes and dissipatively settle into a thin layer of particulate matter in the midplane of the disk. Planetesimals of the sizes of asteroids and the nuclei of comets accumulate in this thin layer and further grow by gravitational processes into full-sized planets. The formation of the planets under these dissipative circumstances would explain why their orbits are nearly coplanar and circular.

      Insofar as the planets first grow by the accumulation of solids, it is interesting to note that observations indicate all four Jovian planets to have rocky and icy cores containing 15–25 Earth masses. In addition to such cores, Jupiter and Saturn have hydrogen and helium envelopes amounting to about 300 and 70 Earth masses, respectively. This suggests, as theoretical calculations bear out, that 15–25 Earth masses represents a critical mass above which a growing planet in the solar nebula will begin to gather nebular gas gravitationally faster than it accumulates solids. Indeed, once a protoplanet becomes massive enough, it can efficiently eject solid bodies as well as capture them. (The ones catapulted out by Jupiter and Saturn are likely to escape the system altogether.) In this way did Jupiter and Saturn grow to be so large.

      Why Uranus and Neptune did not also gather massive gaseous envelopes is somewhat of a mystery. One possible explanation is that, at the distances of Uranus and Neptune in the solar nebula, energetic radiation from the young Sun could dissociate hydrogen molecules and ionize the resultant atoms, heating the surface layers of the nebula strongly enough (to about 10,000 K) to disperse the nebular gas over a period of about 10⁷ years. The full accumulation of the planetary cores of Uranus and Neptune probably took longer, and therefore their formation occurred in a relatively gas-free environment.

      The growth of the dwarf planet Pluto through the aggregation of many millions of cometlike bodies may have been limited by having to occur at the outermost fringes of the primitive solar nebula. Its moon, Charon, may have resulted either through fission of a rapidly rotating common parent body or through a late encounter and capture. Icy planetesimals that had close but noncolliding encounters with Uranus and Neptune either were thrown into the Sun (or into other planets) or now populate the Oort cloud of comets.

      Interior to Jupiter the planets are all small. A plausible explanation follows from the observation that the solar nebula inside Jupiter's orbit may have been too hot to allow methane, ammonia, and water to exist in solid form. Computer simulations by the American geophysicist George Wetherill show that, restricted to the accumulation of only the rarer rocks and irons, the rapid runaway growth of planetesimals to embryos in the inner solar system stalls at masses comparable to the Moon's. Once a few hundred embryos of Moon-like masses have accumulated most of the solid matter in their immediate “feeding” zones, it takes them more than 10⁸ years to pump up one another's eccentricities gravitationally and to aggregate, through orbit crossings, into four terrestrial planets.

      A long duration for the formation of the terrestrial planets (supported by crater counts that indicate a prolonged period of bombardment extending over some 5 × 10⁸ years) suggests that Jupiter may have finished forming before the terrestrial planets did. A massive body at Jupiter's orbit may have then so stirred up the orbits of the planetesimals in the asteroid belt as to have prevented them from accumulating into a large body (see above). A fully formed Jupiter also may have stunted the growth of nearby Mars, explaining why Mars is so much smaller a terrestrial planet than either Venus or Earth.

      The giant planets may also have sent fairly large bodies careening through the early solar system. In one such scenario, proposed by the American astrophysicist Alastair G.W. Cameron and coworkers, a Mars-sized body crashed obliquely into the primitive Earth. The molten core of the intruder sank to the centre of the molten proto-Earth, but mantle material from both bodies went into orbit and eventually reaccreted into the Moon. The formation of the Moon from such rocky substances would then explain why the lunar landings found the Moon to be much poorer in iron than the Earth.

      A similar scenario purports to explain a compositional peculiarity in Mercury. A massive body from the asteroid belt sent close to the Sun would acquire such large velocities that on collision with Mercury it would splash off not only its own rocky mantle but much of Mercury's as well. An event of this kind might explain why Mercury has such a small rocky envelope in relation to its iron-nickel core when compared with the same features in Venus, Earth, and Mars.

      Giant impacts would also add a chaotic element to the acquisition of planetary spins. Perhaps this accounts for the fact that, while most of the equators of the planets lie in roughly the same plane as their orbits about the Sun, Venus spins in a retrograde sense, whereas Uranus' spin axis is tilted over on its side. In reconstructing the details of the formation of the solar system, astronomers work under the handicap of not knowing whether certain special features arise as a general rule or as an exceptional circumstance.

Extrasolar planetary systems
      The astronomical detection of planetary systems around other stars would help enormously to loosen the restrictions imposed by being able to study only one example. Although claims have been made for the discovery of planets around pulsars (spinning magnetized neutron stars), relevant comparisons can be made for the solar system only if the central object is a normal star. For such cases, the task of detection is made difficult by the glare of the star. At least two independent lines of evidence exist, however, that relate indirectly to the existence of extrasolar planetary systems.

      First, it is known from studies of gas clouds where stars are currently forming in the Galaxy that such regions generally rotate too quickly to collapse to a single normal star without any companions. Investigators know of many examples where the excess angular momentum has apparently been absorbed in the birth of a nearby orbiting star; indeed, binary stars are known to be the most common outcome of the star-formation process. It is, nevertheless, encouraging that infrared searches for faint companions around apparently single stars have found a few candidates for objects that lie intermediate to the least massive normal star and a giant planet such as Jupiter.

      Second, infrared images taken from Earth-orbiting and ground-based telescopes have found flattened distributions of particulate solids encircling young stars that resemble the type of dusty nebular disk long hypothesized for the origin of the solar system. In a few cases, there have also been detections, from spectroscopic observations at millimetre and near-infrared wavelengths, of gaseous molecular material coextant with the solid particulates. These observations lend strong support to the view that the creation of planetary systems is likely to be a common by-product of the process of star formation.

Stars and the chemical elements
      Stars are the great factories of the universe. They gradually transform the raw material that emerged from the big bang into an array of versatile chemical elements that makes possible the birth of planets and their inhabitants. The empirical evidence for the vital role that stars play in nucleosynthesis lies in the spectroscopic analysis of the atmospheric compositions of different generations of stars. The oldest stars, which belong to globular clusters, possess very little in the way of elements heavier than hydrogen and helium—in some cases, less than 1 percent of the value possessed by the Sun. On the other hand, the youngest stars, which have ages on the order of 10⁶ years, have heavy elements in even slightly greater abundance than the Sun. Astronomers give these results explicit recognition by designating stars with high heavy-element abundance as Population I stars; those with low heavy-element abundance are said to be Population II stars.

      The accepted interpretation of the abundance differences of Populations I and II is that stars synthesize heavy elements in their interiors. In the process of dying, some stars spew great quantities of this processed material into the gas clouds occupying the regions between the stars. The enriched matter then becomes incorporated into a new generation of forming stars, each successive generation having on average a greater proportion of heavy elements (and helium) than the last. During the 20th century astronomers have obtained considerable insight into why these processes should be the natural outcome of the structure and evolution of stars.

Main-sequence structure of the stars
      The same general principles that determine the structure of the Sun apply more broadly to all normal stars: (1) Hydrostatic equilibrium—for a star to be mechanically in equilibrium, the internal pressure must balance the weight of the material on top. (2) Energy transfer—photons diffusively carry energy outward from a hot interior; if the luminosity to be carried exceeds the capacity of photon diffusion, convection ensues. (3) Energy balance—for a star to be thermally in equilibrium, the energy carried outward by radiative diffusion or convection must be balanced by an equal release of nuclear energy; if the rate of thermonuclear fusion is inadequate, gravitational contraction of the central regions will result, usually accompanied by an expansion of the outer layers.
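
      In the standard notation of textbooks on stellar structure, these three principles take the following familiar forms (quoted here as a sketch; m(r) is the mass interior to radius r, κ the opacity, and ε the nuclear energy generation rate per unit mass):

        \frac{dP}{dr} = -\frac{G m(r)\,\rho}{r^{2}}                        % hydrostatic equilibrium
        L(r) = -4\pi r^{2}\,\frac{4ac\,T^{3}}{3\kappa\rho}\,\frac{dT}{dr}  % radiative diffusion
        \frac{dL}{dr} = 4\pi r^{2}\rho\,\varepsilon                        % energy balance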

      Most of the time of the luminous stages of a star's life is spent on the main sequence, when it stably fuses hydrogen into helium in its core. The fusion process in a star with mass slightly greater than one solar mass is somewhat different from that in a star of one solar mass or less. In high-mass stars, hydrogen fusion occurs at high temperatures using preexisting nuclei of carbon and nitrogen as catalysts and, in the process, converting much of the carbon into nitrogen. In low-mass stars, hydrogen fusion occurs by direct combination of the hydrogen nuclei or their reaction products. The end product, however, is the same: the conversion of four hydrogen nuclei into one helium nucleus, with the release of the nuclear binding energy as a source of heat for the star.
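
      The energy yield can be checked directly from the mass defect. A quick calculation in Python with standard atomic masses (the figures below are well-known constants, not values taken from this article):

        # Four hydrogen atoms fuse, in effect, into one helium-4 atom; about
        # 0.7 percent of the rest mass is released as energy (E = mc^2).
        m_H, m_He = 1.007825, 4.002603     # atomic masses in unified mass units
        u_to_MeV = 931.494                 # energy equivalent of one mass unit

        dm = 4 * m_H - m_He
        print(f"fraction of rest mass released: {dm / (4 * m_H):.4f}")       # ~0.0071
        print(f"energy per helium nucleus formed: {dm * u_to_MeV:.1f} MeV")  # ~26.7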

      The time that a low-mass star spends on the main sequence differs drastically from that of a high-mass star. On the main sequence, a low-mass star spends its nuclear resource thriftily; a high-mass star, prodigiously. Hence, core hydrogen exhaustion for low-mass stars is delayed in comparison to high-mass stars. The main-sequence lifetime of a star half as massive as the Sun is about 3 × 10¹⁰ years, whereas that of a star of 50 solar masses would be roughly 3 × 10⁶ years.
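
      These lifetimes follow from a fuel-divided-by-consumption argument: the main-sequence lifetime scales as M/L. A crude sketch, assuming for illustration a luminosity law L ∝ M³ normalized to 10¹⁰ years for one solar mass:

        # t_ms ~ fuel / burn rate ~ M / L ~ M / M**3 = M**-2 (with L ~ M**3 assumed)
        def lifetime_years(mass_in_suns, exponent=3.0):
            return 1e10 * mass_in_suns / mass_in_suns ** exponent

        for m in (0.5, 1.0, 50.0):
            print(f"{m:5.1f} solar masses -> {lifetime_years(m):.1e} yr")
        # 0.5 -> ~4e10 yr and 50 -> ~4e6 yr, matching the figures quoted
        # above to within factors of order unity.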

      Since the lifetime of a high-mass star is much less than the age of the Galaxy (roughly 10¹⁰ years) and since such stars exist during the present epoch in the Galaxy, the formation of high-mass stars must be an ongoing process. This is borne out by observations of the Galaxy and external galaxies, where bright blue stars are always found near giant clouds of gas and dust—the sites of both high-mass and low-mass star formation.

      One of the most important tests for the theory of stellar structure and evolution comes from the examination of star clusters. Star clusters are gravitationally bound stellar groups that occur in two basic types: globular clusters, which typically are rich systems containing perhaps one million members distributed in a compact spherical volume with a strong concentration toward the centre, and open clusters, which typically are poor systems containing 1,000 members or fewer distributed loosely throughout an irregular volume. Globular cluster stars belong to Population II, while open cluster stars belong to Population I.

      All astronomical observations of a star cluster indicate that its members formed from the same parent cloud. Thus, the stars in a cluster have the same age and the same initial compositions; the only notable difference among them is their masses. Since stars of different masses evolve at different rates, it should be possible to see a progression of evolutionary states as stars of increasing mass are considered. The effect is indeed seen, and the comparison of theoretical predictions with astronomical observations of star clusters yields one of the most satisfactory success stories of modern astrophysics. Such studies allow estimates of the ages of star clusters. The oldest turn out to be the globular clusters; they have ages estimated by various investigators between 1 × 10¹⁰ and 1.8 × 10¹⁰ years. Within the errors of the determinations, the ages of globulars are consistent with the expansion age of the universe—approximately 1.5 × 10¹⁰ years—obtained from Hubble's law. Thus, the globular cluster stars in the Galaxy must constitute some of the oldest stars in the Cosmos.

      On the main sequence, a high-mass star is not only much more luminous than a low-mass star, but it also appears much bluer because its surface temperature is a few tens of thousands of degrees instead of a few thousand degrees. The difference in surface temperature manifests itself not only in broadband colours but also in the pattern of atomic absorption lines that appear in spectroscopic diagnostics of the star. The Latin letters OBAFGKM are used to classify stars of different spectral types, with O stars having the hottest surface temperatures and M stars the coolest. The Sun is a G star. This classification scheme applies to all stars, not merely to those on the main sequence. To distinguish stars on the main sequence from those in different evolutionary states, astronomers introduced the concept of luminosity class. These categories are designated by Roman numerals from I to V, with I corresponding to supergiants and V to dwarfs. Main-sequence stars are dwarfs because stars have their smallest sizes as luminous objects when they shine by hydrogen fusion in the core, and a small star (dwarf or subgiant) of a given spectral type—i.e., surface temperature—radiates less than a large star (giant or bright giant or supergiant) of the same spectral type. Stars smaller than main-sequence stars are known (white dwarfs, neutron stars, or black holes), but they are very faint and are not normal stars and so are not assigned classifications in the normal scheme.

      About 90 percent of the luminous stars in a galaxy at any given time are on the main sequence. Most of the mass of a galaxy is contained in low-mass stars, but the small number of high-mass stars contributes a disproportionate fraction of the total light, especially at blue wavelengths. Most of the light at red wavelengths comes from evolved stars because all stars tend to become redder as they evolve from the main sequence (i.e., as their surfaces expand and cool). In addition, low-mass stars tend to brighten as they age.

The end states of stars
      The attempt of stars to achieve mechanical and thermal balance during their luminous lifetime leads inexorably to their demise. The fundamental reason is simple, at least in outline. Because a normal star is composed of ordinary compressible gases, it has to be hot inside to sustain the thermal pressure that resists the inward pull of its self-gravity. On the other hand, interstellar space is dark and cold; radiant heat flows continuously from the star to the universe. The nuclear reserves that offset this steady drain are finite and can only offer temporary respites. When they have run out, the star must die.

      Astronomers believe that there are four possible end states for a star: (1) There may occur a violent explosion that completely overcomes self-gravity and disperses all constituent matter to interstellar space; this would leave nothing behind as the stellar remnant. (2) The free electrons in the core of the star may finally become so densely packed that quantum effects allow them to exert enough pressure (termed electron-degeneracy pressure; see below) to support the star even at zero temperature; this would leave behind a white dwarf as the stellar remnant. (3) If the mass of the core exceeds the maximum value—the Chandrasekhar limit of 1.4 solar masses—allowed for a white dwarf, the compression of the stellar matter may finally be stopped at nuclear densities; this would leave behind a neutron star. (4) If the mass of the core is so large that even nuclear forces are incapable of supporting the star against its self-gravity, the gravitational collapse of the star may continue to a highly singular state at the centre; this would leave behind a black hole.

      Observations of star clusters and highly evolved objects suggest that stars initially less massive than about eight solar masses are able to lose enough of their envelopes in the final stages of normal stellar evolution that their burnt-out cores fall below the Chandrasekhar limit, resulting in a white dwarf remnant. Theoretical calculations are able to reproduce this result if empirical envelope-mass loss rates are adopted for the later stages of the evolution. In the range of 8 to 25 solar masses, the star is believed to suffer an iron-core collapse, in which the central regions implode to form a neutron star and the envelope is expelled in a supernova explosion. Above 25 solar masses or so, the situation remains somewhat confused. Some stars may lose so much mass in powerful winds that their hydrogen envelopes are stripped clean. When they finally explode, they do so as supernovas of what astronomers term type Ib or Ic. In other stars, the energy deposited by neutrino emission (see below) may not suffice to blow off the outer layers, and the entire star collapses inward to form a black hole.

      Observations of stellar remnants are reasonably in accord with the above picture. White dwarfs slowly cooling to the same temperature as the universe (3 K) seem to account for most of the dying stars, which is consistent with the fact that most stars are born with relatively low masses. At the sites of some historical supernova explosions, astronomers have found objects called pulsars, which are thought to be rotating magnetized neutron stars. And in some close binary systems, where a normal star is transferring matter to a compact companion, the companion can be inferred in different situations to be a white dwarf, a neutron star, or a black hole.

The evolution of stars
      Whenever nuclear fuel runs out in the central regions of a star (e.g., when hydrogen becomes exhausted at the end of the main-sequence stage of stellar evolution), the core must contract and heat up. This increases the flow of energy to the outside, which accelerates evolution. A shell of material outside the contracting core may become hot enough to trigger thermonuclear fusion, and eventually the central temperature also may rise enough to ignite what was previously nuclear ash into new fuel. The entire process will then repeat. Thus, core fusion of hydrogen into helium can give way to shell hydrogen fusion. This can be followed by helium ignition in the core, with the star now possessing a shell of hydrogen fusing into helium and a core of helium fusing (with itself twice) into carbon. If the temperature rises sufficiently, the carbon can also capture a helium nucleus to become oxygen. Helium exhaustion in the core is followed by helium fusing in a shell and hydrogen fusing in another shell above that. Then, core ignition involving carbon or oxygen fusing with themselves can yield a variety of still heavier elements. The layered shell structure and the chain of possible reactions become more and more complicated, generating along the way such common elements as silicon, sulfur, and calcium, but the process cannot proceed forever. Eventually, if nothing else intervenes, iron will be created. The nucleus of the iron atom is the most bound of all atomic nuclei; it is not possible to release nuclear energy by adding nucleons (i.e., protons and/or neutrons) to iron (or subtracting them). Hence, if iron is created, as in the cores of the more massive stars, the star must come to a catastrophic end because it will continue to lose heat to its surroundings.

      What happens in computer simulations of this event is that the core of the star implodes, forming a large mass of hot neutrons at temperatures and densities considerably in excess of 10⁹ K and 10¹⁴ g/cm³ (grams per cubic centimetre). Under such conditions, huge numbers of neutrinos are released, and these elementary particles appear capable of depositing enough energy into the extremely dense infalling envelope of the star to drive an outwardly propagating shock wave that expels the envelope in a supernova explosion. In this way a wide variety of the nuclear products of stellar evolution can be introduced into the interstellar medium to enrich the general elemental mix. From this point of view, it is encouraging that, apart from hydrogen and helium, elements that are bountiful in the natural environment (and in living species)—carbon, nitrogen, oxygen, silicon, sulfur, calcium, iron, etc.—also lie on the main line of stellar nucleosynthesis.
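
      The special place of iron can be read off the curve of binding energy per nucleon, which rises toward iron and falls beyond it; fusion releases energy only while the products climb the curve. The rounded values below are standard figures:

        # Approximate binding energy per nucleon in MeV (standard rounded values).
        be_per_nucleon = {
            "H-1": 0.00, "He-4": 7.07, "C-12": 7.68, "O-16": 7.98,
            "Si-28": 8.45, "Fe-56": 8.79, "U-238": 7.57,
        }
        for nucleus, be in be_per_nucleon.items():
            print(f"{nucleus:>6}: {be:.2f} MeV per nucleon")
        # The maximum near Fe-56 is why fusion in a stellar core cannot
        # extract energy by building elements beyond the iron group.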

      The prediction that supernova explosions should liberate huge quantities of neutrinos found confirmation in the sudden brightening in 1987 of a previously known star in the Large Magellanic Cloud. The appearance of Supernova 1987A (SN 1987A), as this object was called, coincided with a burst of neutrino emission recorded by high-energy physics experiments originally designed to detect proton decay (see below). The magnitude and timing of the neutrino burst fit well with the model of the iron-core collapse of a star whose mass on the main sequence amounted to about 20 solar masses. Subsequent measurements of the light curve demonstrated that, in general agreement with nucleosynthetic expectations, SN 1987A ejected about 0.07 solar mass of the radioactive isotope nickel-56, with a half-life of 6 days, which decays into cobalt-56, with a 77-day half-life, and then into stable iron-56.
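
      The late-time behaviour of such a light curve is two-step radioactive-decay arithmetic. A minimal sketch of the nickel-cobalt chain (the Bateman solution for the intermediate member), using the half-lives quoted above and treating the two isotopic masses as equal:

        import math

        T_NI, T_CO = 6.1, 77.0                    # half-lives in days
        l_ni, l_co = math.log(2) / T_NI, math.log(2) / T_CO
        M0 = 0.07                                 # initial Ni-56 mass, solar masses

        def co56_mass(t_days):
            """Mass of cobalt-56 present at time t (Bateman solution)."""
            return M0 * l_ni / (l_co - l_ni) * (math.exp(-l_ni * t_days)
                                                - math.exp(-l_co * t_days))

        for t in (10, 100, 300):
            print(f"day {t:3d}: Co-56 mass = {co56_mass(t):.3f} solar masses")
        # After a few hundred days the declining luminosity tracks the
        # 77-day cobalt decay, as observed for SN 1987A.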

      Another interesting by-product of the supernova mechanism described above is that large numbers of free neutrons can be liberated in the envelope. Seed nuclei can capture these free neutrons to become heavier and eventually create many of the elements beyond iron in the periodic table, including radioactive species like uranium. Different isotopes of uranium decay at different rates, and knowing the primitive ratios in which supernovas create these isotopes enables radiochemists to compute, from the corresponding measured values in uranium ore, the elapsed time since these isotopes were produced and introduced into the solar system. Depending on the rates of supernova explosions in the history of the Galaxy, these calculations indicate that uranium synthesis began between 6 × 10⁹ and 1.5 × 10¹⁰ years ago. This, then, is another method for independently estimating the age of the Galaxy. Again, within the uncertainties of the determination, the value is consistent with the Hubble expansion age (see The extragalactic distance scale and Hubble's constant (Cosmos)).
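
      The arithmetic behind such an estimate can be sketched as follows. Both isotopes decay exponentially, so their abundance ratio evolves in a known way; the production ratio assumed below is an illustrative figure of the kind used in these calculations, not a value quoted in this article:

        import math

        t_u235, t_u238 = 7.04e8, 4.468e9          # half-lives in years
        l235, l238 = math.log(2) / t_u235, math.log(2) / t_u238

        ratio_now = 0.0072      # present-day U-235/U-238 ratio in natural uranium
        ratio_produced = 1.35   # assumed r-process production ratio (illustrative)

        # ratio(t) = ratio_produced * exp(-(l235 - l238) * t); solve for t.
        t = math.log(ratio_produced / ratio_now) / (l235 - l238)
        print(f"elapsed time: {t:.1e} yr")   # ~6 x 10^9 yr for these inputs

      Treating all the uranium as made in a single early event gives the low end of the quoted range; spreading the supernova production over the history of the Galaxy pushes the inferred beginning of synthesis further back.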

      The fundamental difference in evolutionary outcomes between high-mass and low-mass stars can be traced to the theory of white dwarfs. Basically, every star eventually tries to generate a white dwarf at its core as it evolves and undergoes core contraction. During the 1920s, with the dawn of modern quantum mechanics, the British physicist Ralph H. Fowler showed that a white dwarf has the peculiar property that the more massive it is, the smaller its radius. The reason is relatively simple: a more massive white dwarf has more self-gravity, and so more pressure is required to counter the stronger gravity. Pressure increases when the degenerate electron gas constituting a white dwarf is compressed; it becomes strong enough to balance gravitational force only at very great densities. Consequently, equilibrium between the internal degeneracy pressure and the force of gravity is reached at a smaller size for a more massive white dwarf.
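
      The inverse relation follows from a simple scaling argument (a sketch, with constants of order unity dropped). Hydrostatic equilibrium requires a central pressure of order P ~ GM²/R⁴, while a nonrelativistic degenerate electron gas supplies P ∝ ρ^(5/3) ∝ (M/R³)^(5/3). Equating the two,

        \frac{GM^{2}}{R^{4}} \sim K\,\frac{M^{5/3}}{R^{5}}
        \quad\Longrightarrow\quad
        R \propto M^{-1/3},

so doubling the mass of a white dwarf shrinks its radius by about 20 percent.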

      The Indian-born American astrophysicist Subrahmanyan Chandrasekhar made a crucial modification to this picture in order to accommodate Einstein's special theory of relativity. Chandrasekhar showed that relativistic effects impose an upper limit on the mass of possible white dwarfs. This limit arises because electrons cannot move faster than the speed of light; there comes a point where the increase in internal degeneracy pressure is no longer able to keep self-gravity from crushing the star toward zero size. For likely white-dwarf compositions, this limit corresponds to 1.4 solar masses, as noted above.

      Consider a star that attempts to exceed the Chandrasekhar limit, assuming that it has enough material—even after envelope-mass loss—to try to build a massive white dwarf by depositing layer after layer of nuclear ash into its core. As the limit is approached, the core's outer boundary shrinks to almost arbitrarily small dimensions, generating above it enormous gravitational fields. To counteract the gravity, the pressures in the shell above the core must rise correspondingly, yielding densities and temperatures that are as high as needed to drive all thermonuclear reactions to completion. If nothing else intervenes, this situation must end in the iron catastrophe described above.

      In contrast, in a low-mass star the final mass of the core may end up well below the Chandrasekhar limit. The shells outside the core may still become dense and hot enough to yield copious amounts of hydrogen and helium fusion, and this heat input into the envelope will greatly distend the envelope of the star, bringing the star to the red giant and red supergiant evolutionary phases that characterize the later stages. The outer atmospheres of such stars are often cool enough to allow the condensation of some of the heavy elements into solid particles. Dust grains composed of a rocky silicate are probably the most common outcome, but graphite or silicon carbide grains are possibilities in carbon-rich stars. In any case, because the envelope of the star is so extended, the surface gravity is too weak to hold the atmospheric mix of gas and dust, and this mixture blows out of the star as a prodigious stellar wind. Objects in this state are called planetary nebulas. The observed loss of matter occurs at a rate rapid enough to strip off the entire envelope, revealing eventually a white-hot core that is now a bare white dwarf. Since the mass loss reduced the stellar mass below the Chandrasekhar limit, the core never progressed to very advanced stages of nuclear fusion, giving the most common white dwarfs in the Galaxy a likely composition of carbon and oxygen.

Interstellar clouds
      Observations conducted at radio, infrared, and optical wavelengths show that the majority of stars are formed from giant clouds of gas and dust that exist in interstellar space. There are three basic varieties of clouds that astronomers distinguish on the basis of the dominant physical state in which the hydrogen gas is found: atomic, molecular, or ionized. Hydrogen is singled out in the classification scheme because of its preeminent abundance in the Cosmos.

      Atomic hydrogen clouds are the most widely distributed in interstellar space and, together with molecular hydrogen clouds, contain most of the gaseous and particulate matter of interstellar space. Molecular hydrogen clouds contain a wide range of molecules besides the hydrogen molecule H₂ and for that reason are simply called molecular clouds. Ionized hydrogen clouds, called H II regions by astronomers, are fluorescent masses of gas, such as the famous Orion Nebula, which have been lit up by hot blue stars recently born from the neutral gas, the hydrogen becoming dissociated and ionized because of the copious outpouring of ultraviolet photons from such massive stars.

      Dust particles are suspended in all three types of clouds, and their effects can be seen in the absorption and scattering of optical light or in the thermal emission of infrared radiation. The refractory cores of the dust grains were probably expelled from the atmospheres of countless red giant stars, although icy mantles may be acquired in molecular clouds through the adhesion of gas molecules that collide with the cold grain surfaces. It has been estimated that dust grains typically account for 1 percent of the mass of an interstellar cloud. Because the internal constitution of dust is primarily elements heavier than hydrogen and helium and because the cosmic mass fraction of all such elements is only a few percent of the total, dust grains must contain a significant fraction of the total cosmic abundance of heavy elements. This deduction is in accord with the observational finding that many heavy elements are severely underrepresented in the gas phase of interstellar clouds. They presumably have condensed out as solid particles.

      Of greatest interest to the present discussion are the molecular clouds, because it is from giant complexes of such clouds that most stars are formed. Radiative cooling by the molecules and dust in them keeps the matter at very low average temperatures, about 10 K, and at relatively high densities as compared with atomic hydrogen clouds. These two circumstances, combined with the large mass (10⁵ or 10⁶ solar masses) of a typical giant molecular cloud complex, make molecular clouds ideal sites for star formation because, even with dimensions spanning hundreds of light-years, they are held together by their self-gravitation. Once a gaseous astronomical body becomes self-gravitating, the formation of still more condensed states—in this case, stars—is almost inevitable.
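
      The claim that self-gravity wins in such clouds can be made quantitative with the Jeans criterion: clumps more massive than the Jeans mass cannot be supported by thermal pressure alone. A minimal sketch, with an assumed mean molecular weight and round numbers for temperature and density:

        import math

        k_B, G, m_H = 1.381e-23, 6.674e-11, 1.673e-27   # SI units
        M_SUN = 1.989e30

        T = 10.0        # temperature in kelvins, as quoted above
        n = 1e9         # assumed 10^3 molecules per cm^3, expressed per m^3
        mu = 2.3        # assumed mean molecular weight of the gas
        rho = mu * m_H * n

        M_J = (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5
        print(f"Jeans mass: {M_J / M_SUN:.0f} solar masses")   # ~ tens of solar masses

      A cloud complex of 10⁵ or 10⁶ solar masses exceeds this threshold by orders of magnitude, which is the quantitative sense in which it is held together by its self-gravitation.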

Star formation
      Detailed radio maps of nearby molecular clouds reveal that they are clumpy, with regions containing a wide range of densities—from a few tens of molecules (mostly hydrogen) per cubic centimetre to more than one million. Stars form only from the densest regions, termed cloud cores, though they need not lie at the geometric centre of the cloud. Large cores (which probably contain subcondensations) up to a few light-years in size seem to give rise to unbound associations of very massive stars (called OB associations after the spectral type of their most prominent members, O and B stars) or to bound clusters of less massive stars. Whether a stellar group materializes as an association or a cluster seems to depend on the efficiency of star formation. If only a small fraction of the matter goes into making stars, the rest being blown away in winds or expanding H II regions, then the remaining stars end up in a gravitationally unbound association, dispersed in a single crossing time (diameter divided by velocity) by the random motions of the formed stars. On the other hand, if 30 percent or more of the mass of the cloud core goes into making stars, then the formed stars will remain bound to one another, and the ejection of stars by random gravitational encounters between cluster members will take many crossing times.

      Low-mass stars also are formed in associations called T associations after the prototypical stars found in such groups, T Tauri stars. The stars of a T association form from loose aggregates of small molecular cloud cores a few tenths of a light-year in size that are randomly distributed through a larger region of lower average density. The formation of stars in associations is the most common outcome; bound clusters account for only about 1 to 10 percent of all star births. The overall efficiency of star formation in associations is quite small. Typically less than 1 percent of the mass of a molecular cloud becomes stars in one crossing time of the molecular cloud (about 5 × 10⁶ years). Low efficiency of star formation presumably explains why any interstellar gas remains in the Galaxy after 10¹⁰ years of evolution. Star formation at the present time must be a mere trickle of the torrent that occurred when the Galaxy was young.
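
      The quoted crossing time is simple arithmetic, as the sketch below shows; the cloud diameter and internal velocity are assumed round numbers consistent with the figures above:

        LY, YEAR = 9.461e15, 3.156e7       # metres per light-year; seconds per year
        diameter = 50 * LY                 # assumed cloud diameter
        velocity = 3e3                     # assumed internal random velocity, m/s

        t_cross = diameter / velocity / YEAR
        print(f"crossing time: {t_cross:.1e} yr")   # ~5 x 10^6 yr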

      A typical cloud core rotates fairly slowly, and its distribution of mass is strongly concentrated toward the centre. The slow rotation rate is probably attributable to the braking action of magnetic fields that thread through the core and its envelope. This magnetic braking forces the core to rotate at nearly the same angular speed as the envelope as long as the core does not go into dynamic collapse. Such braking is an important process because it assures a source of matter of relatively low angular momentum (by the standards of the interstellar medium) for the formation of stars and planetary systems. It also has been proposed that magnetic fields play an important role in the very separation of the cores from their envelopes. In this proposal, the neutral component of a lightly ionized gas, pulled by the self-gravity of the matter, slips slowly past the charged particles that remain suspended in the background magnetic field (a process known as ambipolar diffusion). This slow slippage would provide the theoretical explanation for the observed low overall efficiency of star formation in molecular clouds.

      At some point in the course of the evolution of a molecular cloud, one or more of its cores become unstable and subject to gravitational collapse. Good arguments exist that the central regions should collapse first, producing a condensed protostar whose contraction is halted by the large buildup of thermal pressure when radiation can no longer escape from the interior to keep the (now opaque) body relatively cool. The protostar, which initially has a mass not much larger than Jupiter, continues to grow by accretion as more and more overlying material falls on top of it. The infall shock, at the surfaces of the protostar and the swirling nebular disk surrounding it, arrests the inflow, creating an intense radiation field that tries to work its way out of the infalling envelope of gas and dust. The photons, having optical wavelengths, are degraded into longer wavelengths by dust absorption and reemission, so that the protostar is apparent to a distant observer only as an infrared object. Provided that proper account is taken of the effects of rotation and magnetic field, this theoretical picture correlates with the radiative spectra emitted by many candidate protostars discovered near the centres of molecular cloud cores.

      The mechanism that ends the infall phase remains a matter of speculation, but the inflow process clearly cannot run to completion. Since molecular clouds as a whole contain much more mass than what goes into each generation of stars, the depletion of the available raw material is not what stops the accretion flow. A rather different picture is revealed by observations at radio, optical, and X-ray wavelengths. All newly born stars are highly active, blowing powerful winds that clear the surrounding regions of the infalling gas and dust. It is apparently this wind that reverses the accretion flow.

      The geometric form taken by the outflow is intriguing. Jets of matter seem to squirt in opposite directions along the rotational poles of the star (or disk) that sweep up the ambient matter in two lobes of outwardly moving molecular gas—the so-called bipolar flows. Such jets and bipolar flows are doubly interesting because their counterparts were discovered some time earlier on a fantastically larger scale in the double-lobed forms of extragalactic radio sources (see below Quasars and related objects (Cosmos)).

      The underlying energy source that drives the outflow is unknown. Promising mechanisms invoke tapping the rotational energy stored in either the newly formed star or the inner parts of its nebular disk. There exist theories suggesting that strong magnetic fields coupled with rapid rotation act as whirling rotary blades to fling out the nearby gas. Eventual collimation of the outflow toward the rotation axes appears to be a generic feature of many proposed models.

      Pre-main-sequence stars of low mass first appear as visible objects, T Tauri stars, with sizes that are several times their ultimate main-sequence sizes. They subsequently contract on a time scale of tens of millions of years, the main source of radiant energy in this phase being the release of gravitational energy. When their central temperatures reach values comparable to 10⁷ K, hydrogen fusion ignites in their cores, and they settle down to long stable lives on the main sequence. The early evolution of high-mass stars is similar; the only difference is that their faster overall evolution may allow them to reach the main sequence while they are still enshrouded in the cocoon of gas and dust from which they formed.

Galaxies
      Astronomers have found that most of the matter in the universe is concentrated in galaxies. Paradoxically, they also have discovered from studying galaxies that the universe may contain large quantities of mass that does not emit any light. There are some hints that this hidden mass, or dark matter, may not even be in the form of ordinary material. The discrepancy between the mass that can be seen in galaxies and the mass needed to account for their gravitational binding has become one of the foremost unsolved problems in modern astrophysics.

The Milky Way Galaxy
      Any discussion of galaxies should begin with the local system, where the wealth of information is greatest. The Galaxy contains three main structural components: (1) a thin flat disk of stars, gas, and dust, (2) a spheroidal central bulge containing only stars, and (3) a quasi-spherical halo of old stars. The Sun is found in the first component, while globular clusters are found in the third. The nucleus of the Galaxy lies at the centre of all three components, but it cannot be seen optically from the solar system because of the thick tracts of dust that lie in the disk between it and the galactic centre, obscuring the view. The nucleus can be probed at radio, infrared, X-ray, and gamma-ray wavelengths; a description of these findings is provided below in a more general discussion of the activity witnessed in galactic nuclei.

      A hint of the processes of the formation and evolution of the Galaxy is contained in the general correlation between the spatial location of a star in the galactic system and its heavy-element abundance. The stars found in the disk of the Galaxy are mostly Population I stars; those in the halo are of the Population II type; and those in the bulge are a mixture of the two. This correlation was first noticed in the 1940s by the American astronomer Walter Baade from his investigation of the Andromeda galaxy. Since the theory of nucleosynthesis states that the abundance of heavy elements in successive generations of stars should increase with age, it can be deduced that star formation in the halo terminated long ago, while it has continued in the disk to the present day.

      The shapes acquired by the different stellar components can be understood in terms of the orbital characteristics of the different stellar populations. For Population I stars, the motion corresponds nearly to circular orbits in a single plane; the random velocities superposed on the circular component are small, accounting for the flattened shape of the galactic disk. For Population II stars, the noncircular velocities are much larger; the stars orbit randomly about the Galaxy like a swarm of bees around a hive, accounting for the spheroidal shapes of the galactic bulge and halo.

      In 1962 Olin Eggen of Australia, Donald Lynden-Bell of England, and Allan Sandage of the United States pieced together the chemical and kinematic lines of evidence to argue that the Galaxy must have originated through the coherent dynamic collapse of a single large gas cloud, in which the stars of the halo condensed quickly (within about 2 × 10⁸ years) from the gas, to be followed by the formation of the bulge and disk. Subsequent discoveries that the globular clusters of the halo have a spread of heavy-element abundances and probable ages and that some stars in the bulge are as old as or older than the oldest stars in the halo have cast doubt on this simple view. An alternative scenario pictures the Galaxy to have built up relatively slowly over a period of a few times 10⁹ years through the agglomeration of smaller galactic fragments. Some astronomers believe that the “thick-disk” component reported for the Milky Way system and other galaxies arises by this process, but too great a thickening of the layer of stars in the disk may result if the captured companions have more than about 10 percent of the Galaxy's mass.

      Although the velocities of the stars within a few thousand light-years of the Sun in the direction perpendicular to the galactic plane are generally small, they are not zero. By investigating the statistics of these motions and the vertical structure of the disk, it is possible to deduce the vertical component of the gravitational field of the Galaxy and thereby the total mass of material required locally to supply the observed gravity. The quantity of required material is called Oort's limit (after the aforementioned J.H. Oort), and it exceeds by a factor of about two the quantity of available material, as observed in the form of known stars and gas clouds. This result constitutes the closest example of a general discrepancy arising on galactic scales whenever dynamically derived masses are compared with direct counts of observationally accessible objects. The missing matter in Oort's limit refers, however, to a flattened population and may differ in ultimate resolution from the more general dark-matter problem (see below), which is associated with the halos of galaxies and beyond.

      From star counts, one can derive another quantity of astronomical interest, the mean brightness (per unit area) in the solar neighbourhood. If one divides this quantity into the mass (per unit area) corresponding to Oort's limit, one obtains the local mass-to-light ratio, which astronomers have measured to be about five in solar units. In other words, the gravitating mass in the Galaxy has a mean efficiency for producing light that is five times less than the Sun's. This implies, first, that the average star must be less massive than the Sun and, second, that the amount of helium presently inside stars—in contrast with the heavier elements—cannot have been produced by stellar processes. The reason is simple. The Sun, with a mass-to-light ratio of unity, will manage to convert about 10 percent of its mass (in the core) into helium in 10¹⁰ years (after which it leaves the main sequence); matter with a mean mass-to-light ratio of five, therefore, would convert only 2 percent of its mass to helium in 10¹⁰ years, roughly the age of both the Galaxy and the universe. The cosmic abundance of helium is approximately 26 or 27 percent of the total mass; thus, unless the Galaxy was much brighter in the past than it is today (for which there is no observational evidence), the bulk of the helium in the universe must have been created by nonstellar processes. Astronomers now believe that a primordial abundance of helium of about 24 percent by mass emerged from the big bang. Among other arguments, this is the value derived from the analyses of the chemical compositions of H II regions in external galaxies where the heavy-element abundance is very low and where, therefore, nuclear processing by stars has presumably been small.
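
      The step from the mass-to-light ratio to the 2 percent figure is a one-line calculation, normalized to the Sun's conversion of about 10 percent of its mass in 10¹⁰ years:

        # Matter with mass-to-light ratio M/L (in solar units) shines 1/(M/L)
        # as brightly per unit mass as the Sun, and so fuses proportionately
        # less hydrogen into helium over the same interval.
        sun_fraction = 0.10      # fraction of the Sun's mass fused in 10^10 yr
        mass_to_light = 5.0      # measured local value, in solar units

        helium_made = sun_fraction / mass_to_light
        print(f"helium produced in 10^10 yr: {helium_made:.0%} of the mass")  # 2%
        # Far short of the observed cosmic helium fraction of 26-27 percent.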

      It is possible, of course, to examine the statistics of the random velocities of stars in the two directions parallel to the galactic plane as well as in the vertical direction. The Swedish astronomer Bertil Lindblad was the first to carry out such an analysis. His work, combined with Oort's study in 1927 of the constants of the differential rotation of the Galaxy, gave the period of revolution of stars such as the Sun about the galactic centre. The modern value for this period equals about 2.5 × 10⁸ years. With Shapley's measurement of the distance to the galactic centre and with the assumption that stars like the Sun circle the Galaxy because they are gravitationally bound to it, it is possible to estimate the total mass interior to the solar distance from the galactic centre. Modern estimates yield roughly 2 × 10¹¹ solar masses. Since the Sun is somewhat more massive than the typical star, the Galaxy must contain more than 10¹¹ stars.
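
      The mass estimate is Kepler's third law applied to the Sun's galactic orbit. A sketch, with an assumed galactocentric distance of 8.5 kiloparsecs (a representative modern value) and the period quoted above:

        import math

        G, M_SUN = 6.674e-11, 1.989e30
        KPC, YEAR = 3.086e19, 3.156e7

        r = 8.5 * KPC          # assumed distance of the Sun from the galactic centre
        T = 2.5e8 * YEAR       # orbital period from the text

        # For a circular orbit, the enclosed mass is 4 pi^2 r^3 / (G T^2).
        M = 4 * math.pi ** 2 * r ** 3 / (G * T ** 2)
        print(f"enclosed mass: {M / M_SUN:.1e} solar masses")  # ~1e11

      This lands within a factor of about two of the figure quoted above, as expected for so rough a calculation.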

      Detailed information can be gleaned about the distribution of mass in the Galaxy if one possesses a knowledge of the rotational speeds of disk matter at other radial locations in the Galaxy. The most common measurements are of atomic hydrogen in its spin-flip transition at 21-centimetre wavelength and of the carbon monoxide molecule in one or another of its rotational transitions at millimetre wavelengths. These observations also provide data concerning the total amount of atomic and molecular hydrogen gas contained in the Galaxy. To convert the carbon monoxide abundance to a molecular hydrogen abundance (which cannot be measured directly except at ultraviolet wavelengths that suffer tremendous dust extinction) requires a complicated series of calibrations of nearby sources. The mass of gas in the Galaxy is a few times 10⁹ solar masses, about evenly divided between atomic and molecular hydrogen clouds. Most of the observed mass of the Galaxy is in the form of stars; gas and dust make up only a few percent of the total.

      By a combination of such measurements, astronomers can obtain the rotation curve of the Galaxy from its innermost regions to a radial distance of almost 60,000 light-years from the galactic centre. This rotation curve implies that the mass of the Galaxy measured out to a certain distance r does not converge to a fixed value as r increases but continues to rise roughly in linear proportion to r. The mass contained interior to the most distant radius measured amounts to about 5 × 10¹¹ solar masses. Observations indicate, however, that the integrated light from a galaxy like the Milky Way system does not increase similarly with increasing r but approaches asymptotically a finite value. Thus, the local mass-to-light ratio of the Galaxy, like those of other spiral galaxies, must increase dramatically toward its outer parts where the halo dominates. Another way to state the problem is that the observed rotational velocities of gas clouds in the outer parts of spiral galaxies are so large that they would not be bound to the galaxies unless the galaxies were more massive than inferred from direct measurements of their stellar and gas contents. Most astronomers now accept the likelihood of dark halos that contain as much mass as is present in the visible disks and bulges; more controversial are the claims that these halos may increase known galactic masses by factors of 10 or 100.

Classification of galaxies
      Astronomers classify galaxies in accordance with three criteria: morphological appearance, stellar content, and overall luminosity (see galaxy: Types of galaxies (galaxy)). Although the number of galaxies found in the universe is enormous, Edwin P. Hubble discovered that a few basic categories specify their observed shapes. Galaxies that have irregular shapes are called irregulars, denoted Irr. Irregulars are subdivided into two categories: Irr I and Irr II. Irr I galaxies have OB stars and H II regions; examples of such systems are the Large and Small Magellanic Clouds. Irr II galaxies are amorphous in texture and show no resolution into bright stars or associations, but they do contain much neutral gas and are probably forming massive numbers of stars, as attested to by their blue colours. Galaxies that have regular forms are divided into two broad groups: ellipticals and disks. Elliptical galaxies, denoted E, have roundish shapes. Disk galaxies, on the other hand, have flattened shapes. They can be further divided into two subcategories: ordinary spirals, denoted S, and barred spirals, denoted SB. In addition, there exists a transition type between ellipticals and spirals, often called lenticulars. The lenticular galaxies are designated either S0 or SB0, depending on the absence or presence of a bar of stars, gas, and dust through the nucleus.

      Ellipticals and spirals constitute the two largest reservoirs of the stars in the universe, and the placement of individual galaxies into these two major categories is refined by adding a numeral 0 through 7 or a letter “a” through “c” to their designation. The sequence E0 to E7 denotes increasing flattening (as seen in projection in the sky). The sequence Sa to Sc, or SBa to SBc, represents decreasing tightness of winding of the spiral arms and decreasing size of the central bulge relative to the disk.

      In a useful analogy with stars, Sidney van den Bergh of Canada introduced the concept of luminosity class for galaxies. The scheme appends to the Hubble type a luminosity-class label, from Roman numeral I for the intrinsically brightest (and most massive) spiral galaxies to Roman numeral V for the intrinsically faintest (and least massive) spirals. The utility of this scheme, as applied to spirals, rests with the fact that it is possible to assign them a luminosity class without actually measuring their distance (to obtain an absolute brightness from an observed apparent brightness). The luminosity class of a spiral galaxy correlates well with the regularity (or “prettiness”) of the spiral structure: in class I galaxies the arms are long and well developed and have a high surface brightness; in class III they are patchy and fuzzy; and in class V there may be barely a hint of a spiral structure. Elliptical galaxies, lacking spiral arms, cannot have their absolute brightnesses estimated by the same morphological considerations; hence, the concept of a luminosity class for them is less empirically useful. When the masses of elliptical galaxies at known distances are deduced from measured velocities or apparent luminosities, they range from a few million solar masses (dwarf ellipticals) to more than 10¹² solar masses (giant ellipticals). Thus, giant ellipticals and giant spirals have comparable masses. Yet, it should be noted that the very largest elliptical galaxies in the universe, the supergiant cD systems, are in a class by themselves and perhaps have masses approaching 10¹⁴ solar masses in some extreme cases.

Dynamics of ellipticals and spirals
      The motions of stars in an external galaxy can be studied in a statistical sense by examining the Doppler shifts of the optical absorption lines in the integrated light along the line of sight through different parts of the object. Radio-spectroscopic observations can give similar information concerning the gaseous components of the system. Some important results from these studies are as follows.

      The dominant motion in the disks of normal spiral galaxies is differential galactic rotation, with the random motions of stars being relatively small and those of the atomic and molecular gas smaller still. A surprising result is that the rotation curves of almost all well-studied spiral galaxies become flat at large radial distances. As one goes out from the centre, the rotational velocity rises to a constant value V and then maintains it for as far as the measurements can be made. This implies, as already noted for the Milky Way Galaxy, that the mass contained within a radius r increases linearly with increasing r, and it provides the firmest piece of evidence in support of the hypothesis that large amounts of dark matter may be present in the halos of spiral galaxies.
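
      The arithmetic behind this inference is elementary: setting the centripetal acceleration V²/r equal to the gravitational acceleration GM(r)/r² gives M(r) = V²r/G, which grows linearly with r if V is constant. A short sketch in Python (the rotation speed and radii are illustrative round numbers, not measurements):

    G = 6.67e-8              # gravitational constant, cm^3 g^-1 s^-2
    M_SUN = 2.0e33           # solar mass, g
    LIGHT_YEAR = 9.46e17     # cm

    def enclosed_mass(v_cm_s, r_cm):
        # Flat rotation curve: V**2 = G * M(r) / r, so M(r) = V**2 * r / G.
        return v_cm_s**2 * r_cm / G

    v = 220e5                # a typical flat-curve speed of 220 km/s, in cm/s
    for r_ly in (25_000, 50_000, 100_000):
        m = enclosed_mass(v, r_ly * LIGHT_YEAR)
        print(r_ly, "ly:", m / M_SUN, "solar masses")   # doubles as r doubles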

      The qualitative fact of disk galaxies rotating differentially, with the inner parts having shorter rotational periods than the outer parts, has been known since Lindblad's and Oort's investigations of the problem for the Milky Way system in the 1920s. This fact, combined with age estimates for all galaxies of about 10¹⁰ years, presents a dilemma for the origin of spiral structure. If spiral arms are viewed as consisting always of the same material (e.g., the same gas clouds that give birth to the brilliant OB stars and H II regions that best define the optical spiral structure), then the arms should wind up. In particular, with a flat rotation curve, material at half the solar distance from the galactic centre should go around twice for each revolution of the material at the solar distance, and an extra turn should then be added to each spiral arm between these two radii every 2.5 × 10⁸ years. This would give the spiral arms of the Galaxy (and other spiral galaxies like it) several dozen turns over the lifetime of the Galaxy, whereas spiral galaxies have in fact never been observed with more than one or two turns.
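
      The numbers make the dilemma stark; a back-of-the-envelope check using only the figures quoted above:

    # With a flat rotation curve, one extra turn is wrapped onto each arm
    # every ~2.5e8 years, so a material arm should have wound up about forty
    # times over the Galaxy's lifetime -- yet real spirals show one or two turns.
    galaxy_age_years = 1.0e10
    years_per_extra_turn = 2.5e8
    print(galaxy_age_years / years_per_extra_turn)   # 40.0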

      A way out of the winding dilemma is the proposal that spiral structure is a wave phenomenon, the spiral arms being a local “piling-up” of stars and gas clouds that individually flow through the spiral pattern, much as a traffic jam is a local piling-up of cars and trucks that individually flow through the jam. The piling-up arises because the self-gravity of the excess matter in the arms causes deflections of what would otherwise be circular orbits (on average), the deflections self-consistently producing the original pileup. Most astronomers agree that density waves underlie the phenomenon of spiral structure in the so-called grand-design galaxies. More controversial is whether some other mechanism (e.g., “stochastic star formation”) might play a role in galaxies where the spiral structure is “flocculent.”

      In modern density-wave theory, as developed by the American mathematician Chia-chiao Lin and his associates, spiral structure represents an unstable mode of collective oscillation. The instability provides a way by which a differentially rotating disk galaxy may release the free energy of differential rotation and spontaneously generate spiral waves. The balance of the growth of these waves against their dissipation (through the response of the interstellar gas clouds) may yield a quasi-stationary state whereby gaseous matter slowly drifts to the interior and angular momentum is steadily transported to the exterior. Although the details of the entire picture remain incomplete, many of the basic predictions—as, for example, that the perturbations in density and velocity should be strongest in the component with the smallest random velocities (i.e., gas and dust clouds)—have already been confirmed both qualitatively and quantitatively in several well-observed spiral galaxies. Furthermore, both the Hubble correlation between the tightness of spiral windings and the size of the central bulge relative to the disk and the van den Bergh correlation between luminosity class and the degree of organization of the spiral structure are simple, direct consequences of density-wave theory.

      A similar explanation probably underlies the barred spiral galaxies, with the basic underlying disturbance being an oval distortion. The predicted departures from circular motions are larger in barred spirals than in ordinary spirals, and this seems consistent with the available observational evidence. The enhanced rates at which matter is brought to the centres of such galaxies may have implications for various energetic events that take place in some galactic nuclei. It has even been proposed on the basis of observed peculiarities of gas motions and various infrared images that the central regions of the Milky Way Galaxy may contain a small bar.

      In elliptical galaxies, the constituent stars have random velocities that are generally much larger than the rotational motions. This explains why ellipticals possess neither thin disks nor spiral arms. Moreover, giant ellipticals are flatter than would be inferred from the amount of rotation that they do possess, and increasing rotation does not necessarily lead to increasing flattening, as it appears to do, for example, in ellipticals of lower luminosity and in the bulges of spiral galaxies. Also, most ellipticals do not appear to have young stars, probably because the small measurable amounts of gas and dust that exist in them cannot support an active rate of star formation.

      Mathematical analysis and computer simulations since the early 1970s suggest a possible stellar-dynamic basis for understanding the basic shapes of giant elliptical galaxies. Unlike the bulges and disks of S0 galaxies, the bodies of giant ellipticals may not be figures of revolution (e.g., oblate spheroids) but may possess three axes of unequal lengths. In the models, the triaxial shape arises because the random velocities of the stars are anisotropic (not equal in all directions). Such a state of affairs seems consistent with the existing observational data, in particular the finding in several ellipticals that significant rotation exists around the longest apparent axis. A healthy fraction of nearby ellipticals, moreover, show rapidly rotating cores, which may represent the remains of captured dwarf galaxies that have spiraled to the centres of their larger hosts.

      An interesting empirical property shared by both ellipticals and spirals is that their luminosities L seem to be proportional to the fourth power of their random or circular velocities V. The proportionality constant can be calibrated with the help of nearby (giant) galaxies, and the resulting relation may then be used for cosmological investigations. In particular, the determination of distances is a recurring astronomical problem, and the relation L ∝ V⁴ provides a method for obtaining them. In brief, a measurement of V allows the determination of L, which, combined with the observed apparent brightness, gives the distance of the object.
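
      A minimal sketch of the procedure, assuming consistent units throughout (the numbers are illustrative placeholders; for spirals this relation corresponds to what is now called the Tully-Fisher relation, for ellipticals the Faber-Jackson relation):

    import math

    def calibrate(l_known, v_known):
        # Fix the constant k in L = k * V**4 from one nearby galaxy whose
        # luminosity is known from an independent distance determination.
        return l_known / v_known**4

    def distance(k, v_measured, apparent_flux):
        # Infer L from the measured velocity, then invert the inverse-square
        # law f = L / (4 * pi * d**2) to obtain the distance d.
        luminosity = k * v_measured**4
        return math.sqrt(luminosity / (4 * math.pi * apparent_flux))

    # Calibrate on a nearby galaxy (L in erg/s, V in km/s), then apply to a
    # distant one whose flux is measured in erg/s/cm^2; d comes out in cm.
    k = calibrate(4.0e43, 250.0)
    print(distance(k, 200.0, 1.0e-10))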

Interacting galaxies
      Strongly interacting pairs of galaxies make up less than 1 percent of all galaxies, but the more spectacular examples produce intriguing structures (bridges, tails, rings, and shells) and involve processes (stripping, merging, and sinking) that are not present in individual isolated galaxies. Computer simulations of the gravitational encounter between a large disk galaxy and a small one show that the latter can pull material from the near side of the former into a bridge that temporarily spans the gulf between the two. Encounters between two more nearly equal participants can yield one long tail from each disk galaxy, extending away from the main bodies. Rings emerge in the disk of a galaxy if another massive galaxy passes through its body; the brief inward pull and subsequent rebound cause stellar orbits to crowd together in rings, like ripples on the surface of a pond into which a stone is dropped. Shells form across the face of a large elliptical galaxy if it devours a small companion; the stars of the small galaxy are strewn like wine out of a rolling barrel with the stopper removed.

      Stripping (of matter), merging (of the main bodies of the galaxies), and sinking (of the satellite galaxy toward the centre of the host) are all represented in the above example, and these processes, individually and collectively, have been invoked by theorists in a wide variety of contexts and by a wide variety of names to explain different observed galactic phenomena. The most interesting application is perhaps to the origin of elliptical galaxies.

      It has been proposed by the American astronomer-mathematician Alar Toomre that elliptical galaxies result from the merger of spiral galaxies, the jumbled piles of stars being the wreckage of collisions of bound pairs of galaxies with arbitrarily oriented spins and orbits. A potential difficulty with the original theory was the fate of the interstellar gas and dust. Considerable evidence has since accumulated (notably with the launch in 1983 of the Infrared Astronomical Satellite [IRAS]) to show that tidal interactions and galactic mergers can induce strong bursts of star formation that use up the interstellar material at rates up to 100 times faster than in normal galaxies (see below). An extension of similar ideas suggests that the supergiant ellipticals, the cD galaxies that tend to lie at or near the centres (or density maxima) of rich clusters of galaxies, grew bloated by “cannibalizing” their smaller neighbours.

Galaxy formation
      Some years ago, astronomers thought that galaxies formed when the universe was a few times 10⁸ years old, since this is roughly the time it takes matter to cross a typical galaxy, and hence the time scale for the coherent dynamic collapse of a large gas cloud at free-fall speeds. In the process of so contracting, neighbouring protogalaxies would exert gravitational torques on each other, imparting amounts of angular momentum comparable to those possessed by galaxies today. The bodies would therefore flatten in the subsequent collapse.

      Material that reached a completely flattened configuration while still gaseous would have its vertical component of motion arrested in a strong shock wave and would form the disk of a galaxy. Material that formed dense stars or protostars on the way down would be able to pass through the disk virtually unimpeded and, after several bounces, would settle to form the bulge and halo of a disk galaxy like the Milky Way system. It was also thought that a slight modification could produce elliptical galaxies—namely, if the efficiency of star formation were so high during the collapse phase that virtually all the matter turned into stars before flattening into a disk, then a single quasi-spherical stellar component might result. Given the developments since the 1970s described above, however, serious doubts have been raised against this scenario. The spread in ages and heavy-element abundances of halo and bulge stars in the Milky Way Galaxy, the anisotropic distribution of stellar velocities in elliptical galaxies, and the statistics of starburst galaxies and interacting galaxies all argue for the importance of galactic mergers (perhaps involving predominantly dwarf systems) in the buildup of giant galaxies.

      There also exists observational evidence that galaxies that existed at a time corresponding to a redshift of three or four have properties quite different from those that exist today at redshifts near zero (see below Cosmological models). High-redshift galaxies can be found in association with strong extragalactic radio sources (see below Quasars and related objects), and, when such galaxies are imaged optically, they often show complex lumpy structures suggestive of recent mergers and interactions. A similar result emerged when distant galaxies were imaged in a random fashion by the Hubble Space Telescope, the Earth-orbiting observational system launched in 1990.

      It remains uncertain, however, when the first stars in any galactic-sized lump formed. Infrared studies demonstrate that well-developed stellar populations already exist in galaxies with redshifts of a few and perhaps even 5 or 10. The observational discovery of a genuine primeval galaxy would remove many uncertainties. In a collapse environment involving only hydrogen and helium gas, the primary diagnostic would be the copious emission of Lyman-alpha radiation (corresponding to the transition between the first excited state and the ground state of atomic hydrogen). The rest wavelength of this transition lies in the ultraviolet, but in primeval galaxies the cosmological redshift would make the observed wavelength longer (i.e., toward the red end of the spectrum). From this point of view, it is interesting that searches near known quasars have uncovered Lyman-alpha-emitting galaxies with redshifts exceeding three.

      There exists a body of opinion that the stars of a primeval galaxy will generate dust at such a rapid rate that all intrinsic Lyman-alpha production by the galaxy will be degraded to thermal infrared-continuum radiation. In this case, primeval galaxies may resemble the “starburst galaxies” that were discovered by IRAS. In contrast to normal galaxies like the Milky Way system where the ratio of infrared to visible luminosities is about unity, these sources can emit up to 100 times more infrared radiation than visible light. The only viable explanation for the infrared excess is that these galaxies are somehow undergoing enormous bursts of star formation. Ground-based observations that followed up the IRAS discovery showed that the activity in starburst galaxies is often confined to the central portions of the systems and that many of the candidate sources correspond either to interacting galaxies or to barred spirals. This suggests that starbursts may be triggered by the gravitational perturbations that have brought large amounts of molecular gas to the central regions of the galaxy. A similar burst of star formation might be expected to occur in an era when the matter of a galaxy was nearly all gas rather than all stars. Since astronomers have not found any general evidence for such large-scale energetic events, it becomes plausible to contemplate the formation of giant galaxies as a more protracted process (through the mergers of many dwarf systems), extending possibly even to the present epoch.

Quasars and related objects
      Galaxies are where astronomers find stars, the major transformers of matter into energy in the universe. Paradoxically, it is also from the study of galaxies that astronomers first learned that there exist in the universe sources of energy individually much more powerful than stars. These sources are radio galaxies and quasars, and their discovery in the 1950s and '60s led to the establishment of a new branch of astronomy, high-energy astrophysics.

Extragalactic radio sources
      Sources that emit a continuum of radio wavelengths and that lie beyond the confines of the Galaxy were divided in the 1950s into two classes depending on whether they present spatially extended or essentially “starlike” images. Radio galaxies belong to the former class, and quasars (short for “quasi-stellar radio sources”) to the latter. The distinction is somewhat arbitrary, because the ability to distinguish spatial features in cosmic radio sources has improved steadily and dramatically over the years, owing to Sir Martin Ryle's introduction of arrays of telescopes, which use aperture-synthesis techniques to enhance the angular resolution attainable with a single telescope. Apart from the smaller angular extent that arises from being at a greater distance, many objects originally classified as quasars are now known to have radio structures that make them indistinguishable from radio galaxies. Not every starlike object of this kind, however, is a strong radio source. For every radio-loud quasar, there exist 20 objects having the same optical appearance but not the radio emission. These radio-quiet objects are called QSOs, for quasi-stellar objects. Henceforth, the term quasars will be used to refer to both quasars and QSOs when the matter of radio emission is not under discussion.

      The most powerful extragalactic sources of radio waves are double-lobed sources (or “dumbbells”) in which two large regions of radio emission are situated in a line on diametrically opposite sides of an optical galaxy. The parent galaxy is usually a giant elliptical, sometimes with evidence of recent interaction. The classic example is Cygnus A, the strongest radio source in the direction of the constellation Cygnus. Cygnus A was once thought to be two galaxies of comparable size in collision, but more recent ideas suggest that it is a giant elliptical whose body is bifurcated by a dust lane from a spiral galaxy that it recently swallowed. The collisional hypothesis in its original form was abandoned because of the enormous energies found to be needed to explain the radio emission.

      The radio waves coming from double-lobed sources are undoubtedly synchrotron radiation, produced when relativistic electrons (those traveling at nearly the speed of light) emit a quasi-continuous spectrum as they gyrate wildly in magnetic fields. The typical spectrum of the observed radio waves decreases as a power of increasing frequency; by analogy with the situation known to hold for the Galaxy, this is conventionally interpreted as radiation from cosmic-ray electrons with a decreasing power-law distribution of energies. The radio waves typically also show high degrees of linear polarization, another characteristic of synchrotron radiation in well-ordered magnetic fields.

      A given amount of received synchrotron radiation can be explained in principle by a variety of assumed conditions. For example, a high energy content in particles (relativistic electrons) combined with a low content in magnetic fields will give the same radio luminosity as a low energy content in particles combined with a high content in magnetic fields. The American astrophysicist Geoffrey R. Burbidge showed that a minimum value for the sum results if one assumes that the energy contents of particles and fields are comparable. The minimum total energy computed in this way for Cygnus A (whose distance could be estimated from the optical properties of the parent galaxy) proved to be between 10⁶⁰ and 10⁶¹ ergs.
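
      The logic of the minimum-energy argument can be captured in a few lines: for a fixed synchrotron luminosity the particle energy required scales as B^(−3/2), the field energy as B², and their sum is smallest where the two are comparable. The constants below are arbitrary, since only the scalings matter:

    import numpy as np

    B = np.logspace(-6, -3, 400)     # trial magnetic field strengths, gauss
    particles = 1.0 * B**-1.5        # relativistic-particle energy required
    field = 7.5e13 * B**2            # magnetic-field energy in the source
    total = particles + field
    i = total.argmin()
    # At the minimum the two contributions are comparable (ratio 4/3):
    print(B[i], particles[i] / field[i])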

 A clue to the nature of the underlying source of power came from aperture-synthesis studies of the fine structure of double-lobed radio galaxies. It was found that many such sources possess radio jets that point from the nuclei of the parent galaxies to the radio lobes. It is now believed, largely because of the work of Sir Martin Rees and Roger Blandford, that the nucleus of an active galaxy supplies the basic energy that powers the radio emission, the energy being transported to the two lobes by twin beams of relativistic particles. Support for this theoretical picture exists, for example, in VLA maps (those made by the Very Large Array of radio telescopes near Socorro, N.M., U.S.) of Cygnus A that show two jets emerging from the nucleus of the central galaxy and impacting the lobes at “hot spots” of enhanced emission. Other examples of this type are known, as are “head-tail” sources such as NGC 1265 where the motion of an active galaxy through the hot gas that exists in a cluster of galaxies has apparently swept back the jets and lobes in a characteristic U shape.

      Many jets are one-sided; i.e., only one of the postulated twin jets is actually observed. This is usually interpreted to mean that the material in some jets moves relativistically (at speeds approaching that of light). Relativistic effects—e.g., the Doppler shift of the emitted photons—then boost the intrinsic luminosity of the jet pointing toward the observer and lower that of the counterjet, allowing measurements of limited dynamic range to detect only the former.

      Support for the interpretation of relativistic jets exists in the phenomenon of “superluminal expansion.” In very long baseline interferometry (VLBI) experiments performed by combining the simultaneous observations of several telescopes spaced by thousands of kilometres, radio astronomers have discovered that some of the compact radio sources located in the nuclei of active galaxies break into several components at high angular resolution. Moreover, in the course of a few years, the components move with respect to each other along a line projected against the sky that points toward more extended structures known from other observations (e.g., large jets or lobes). If the source is placed at a (cosmological) distance appropriate for the redshift of the optical object, the projected motion across the line of sight has an apparent velocity that exceeds the speed of light. For example, in 3C 273, which possesses an optical jet in addition to the radio features discussed here, the apparent velocity measured over a time span from mid-1977 to mid-1980 amounted to about 10 times the speed of light.

      Clearly, if Einstein's theory of special relativity is correct and if the assumed distance of the object is justified, then the computed “velocity” cannot represent the actual velocity of ejected collections of particles. The explanation now accepted by most astronomers is the model of a relativistic beam directed at a small angle to the observer along the line of sight. In this model a particle moving close to the speed of light would, according to a distant observer, almost catch up with the photons it emits, so that the duration of time that elapses between an earlier emission event and a later one is systematically underestimated by the observer (compared with one moving with the beam). Thus, under the appropriate circumstances, the apparent velocity (distance across the line of sight divided by apparent elapsed time) can exceed the actual velocity by a large factor. A beam moving at an actual velocity 99.5 percent the speed of light along an angle that lies 6° from the line of sight, for example, will seem to move across the line of sight at an apparent velocity of 10 times the speed of light.
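
      The quoted example follows from the standard formula for apparent transverse motion, β_app = β sin θ/(1 − β cos θ), where β is the true speed in units of the speed of light and θ the angle to the line of sight:

    import math

    def beta_apparent(beta, theta_deg):
        # Apparent transverse speed (in units of c) of a blob moving at true
        # speed beta*c at an angle theta to the line of sight.
        th = math.radians(theta_deg)
        return beta * math.sin(th) / (1.0 - beta * math.cos(th))

    print(beta_apparent(0.995, 6.0))   # ~10: apparent motion at ten times c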

Quasars
      The source 3C 273 mentioned above is officially classified by astronomers as a quasar. Quasars were first detected as unresolved sources in surveys conducted during the 1950s by radio astronomers in Cambridge, Eng. Optical spectra subsequently obtained for them showed emission lines at wavelengths that were at odds with those of all celestial sources then familiar to astronomers. The puzzle was solved by the American astronomer Maarten Schmidt, who announced in 1963 that the pattern of emission lines in 3C 273 could be understood as coming from hydrogen atoms that had a redshift of 0.158. In other words, the wavelength of each line was 1.158 times longer than the wavelength measured in the laboratory, where the source is at rest with respect to the observer. (The general formula is that, if the factor is 1 + z, astronomers say the astronomical source has a redshift of z. If z turns out to be negative [i.e., if 1 + z is less than 1], the source is said to be “blueshifted.”)
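
      In code the bookkeeping is a one-liner; the hydrogen line below is one of the Balmer lines Schmidt could identify, with its standard laboratory rest wavelength:

    def redshift(observed_wavelength, rest_wavelength):
        # 1 + z = (observed wavelength) / (rest wavelength)
        return observed_wavelength / rest_wavelength - 1.0

    h_beta = 4861.3                    # rest wavelength of H-beta, angstroms
    observed = h_beta * 1.158          # every line in 3C 273 shifted by 1.158
    print(redshift(observed, h_beta))  # 0.158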

      Schmidt's discovery raised immediate excitement, since 3C 273 had a redshift whose magnitude had been seen theretofore only among the most distant galaxies. Yet it had a starlike appearance, with an apparent brightness (but not a spectrum) in visible light not very different from that of a galactic star at a distance of a few thousand light-years. If the quasar lay at a distance appropriate to distant galaxies a few times 10⁹ light-years away, then the quasar must be 10¹² times brighter than an ordinary star. Similar conclusions were reached for other examples. Quasars seemed to be intrinsically brighter than even the most luminous galaxies known, yet they presented the pointlike image of a star.

      A hint of the actual physical dimensions of quasars came when sizable variations of total light output were seen from some quasars over a year or two. These variations implied that the dimensions of the regions emitting optical light in quasars must not exceed a light-year or two, since coherent fluctuations cannot be established in any physical object in less time than it takes photons, which move at the fastest possible speed, to travel across the object. These conclusions were reinforced by later satellite measurements that showed that many quasars had even more X-ray emission than optical emission, and the total X-ray intensity could vary in a period of hours. In other words, quasars released energy at a rate exceeding that of 10¹² suns, yet the central machine occupied a region only the size of the solar system.
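
      The causality bound is simple to evaluate: a source that varies coherently on a time scale Δt can be no larger than roughly cΔt. A sketch with round numbers:

    C = 3.0e10                # speed of light, cm/s
    LIGHT_YEAR = 9.46e17      # cm
    AU = 1.5e13               # astronomical unit, cm

    def max_size(dt_seconds):
        # Coherent variability on a time scale dt limits the source to R <~ c*dt.
        return C * dt_seconds

    print(max_size(3.15e7) / LIGHT_YEAR)   # one year of variability: ~1 light-year
    print(max_size(5 * 3600) / AU)         # five hours: ~36 AU, solar-system size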

      Understandably, the implications were too fantastic for many people to accept, and a number of alternative interpretations were attempted. An idea common to several of the alternatives involved the proposal that the redshift of quasars arose from a different (i.e., noncosmological) origin than that accepted for galaxies. In that case, the distance to the quasars could be much less than assumed to estimate the energy outputs, and the requirements might be drastically relaxed. None of the alternative proposals, however, withstood close examination.

 In any case, there now exists ample evidence for the validity of attributing cosmological distances to quasars. The strongest arguments are the following. When the strong nonstellar light from the central quasar is eliminated by mechanical or electronic means, a fuzzy haze can sometimes be detected still surrounding the quasar. When this light is examined carefully, it turns out to have the colour and spectral characteristics appropriate to a normal giant galaxy. This suggests that the quasar phenomenon is related to nuclear activity in an otherwise normal galaxy. In support of this view is the observation that quasars do not really form a unique class of objects. For example, not only are there elliptical galaxies that have radio-emission characteristics similar to those of quasars, but there are weaker radio sources among spiral galaxies (called Seyferts after their discoverer, the American astronomer Carl K. Seyfert), which have bright nuclei that exhibit qualitatively the same kinds of optical emission lines and nonstellar continuum light seen in quasars. There also are elliptical galaxies, N galaxies, and the so-called BL Lac objects, which have nuclei that are exceptionally bright in optical light. Plausible “unification schemes” have been proposed to explain many of these objects as the same intrinsic structure but viewed at different orientations with respect to relativistically beamed jets or with obscuring dust tori surrounding the nuclear regions or both. Finally, a number of quasars—including the closest example, the famous source 3C 273—have been found to lie among clusters of galaxies. When the redshifts of the cluster galaxies are measured, they have redshifts that bracket the quasar's, suggesting that the quasar is located in a galaxy that is itself a cluster member.

Black-hole model for active galactic nuclei
      The fact that the total output from the nucleus of an active galaxy can vary by substantial factors supports the argument that the central machine is a single coherent body. A competing theory, however, holds that the less powerful sources may be understood in terms of multiple supernova explosions in a confined space near the centres of starburst galaxies. Nevertheless, for the most powerful cases, the theoretical candidate of choice is a supermassive black hole that releases energy by the accretion of matter through a viscous disk. The idea is that the rubbing of gas in the shearing layers of a differentially rotating disk would frictionally generate heat, liberating photons as the mass moves inward and the angular momentum is transported outward. Scaled-down versions of the process have been invoked to model the primitive solar nebula and the disks that develop in interacting binary stars.

      The black hole has to be supermassive for its gravitational attraction to overwhelm the strong radiation forces that attempt to push the accreting matter back out. For a luminosity of 10⁴⁶ erg/sec, which is a typical inferred X-ray value for quasars, the black hole must exceed 10⁸ solar masses. The event horizon of a 10⁸-solar-mass black hole, from inside which even photons would not be able to escape, has a circumference of about two light-hours. Matter orbiting in a circle somewhat outside of the event horizon would be hot enough to emit X rays and have an orbital period of several hours; if this material is lumpy or has a nonaxisymmetric distribution as it disappears into the event horizon, variations of the X-ray output on a time scale of a few hours might naturally be expected.
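
      The quoted circumference follows directly from the Schwarzschild radius, R = 2GM/c². A quick check:

    import math

    G = 6.67e-8           # gravitational constant, cm^3 g^-1 s^-2
    C = 3.0e10            # speed of light, cm/s
    M_SUN = 2.0e33        # solar mass, g

    M = 1.0e8 * M_SUN
    R = 2 * G * M / C**2                 # Schwarzschild radius, cm
    circumference = 2 * math.pi * R
    print(circumference / (C * 3600))    # ~1.7 light-hours: "about two"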

      To produce 10⁴⁶ erg/sec, the black hole has to swallow about two solar masses per year if the process is assumed to have an efficiency of about 10 percent for producing energy from accreted mass. The rough estimate that 10 percent of the rest energy of the matter in an accretion disk would be eventually liberated as photons, in accordance with Einstein's formula E = mc², should be contrasted with a total efficiency of about 1 percent in nuclear reactions if a mass of hydrogen were to be converted entirely into iron. If the large-scale annihilation of matter and antimatter is excluded from consideration, the release of gravitational binding energy when matter settles onto compact objects is the most powerful mechanism for generating energy in the known universe. (Even supernovas use this mechanism, for most of the energy released in the explosion comes from the gravitational binding energy or mass deficit of the remnant neutron star.)
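
      The feeding rate quoted here, and the buildup time quoted in the next paragraph, follow from L = ε(dM/dt)c². A minimal check:

    C = 3.0e10            # speed of light, cm/s
    M_SUN = 2.0e33        # solar mass, g
    YEAR = 3.15e7         # seconds

    L = 1.0e46            # quasar luminosity, erg/s
    efficiency = 0.10
    mdot = L / (efficiency * C**2)     # required accretion rate, g/s
    print(mdot * YEAR / M_SUN)         # ~1.8 solar masses per year

    # If 90 percent of the accreted mass ends up in the hole, reaching
    # 1e8 solar masses takes several tens of millions of years:
    print(1.0e8 / (0.9 * mdot * YEAR / M_SUN))   # ~6e7 years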

      Interacting and merging galaxies provide the currently preferred routes to supply the matter swirling into the black hole. The direct ingestion of a gas-rich galaxy yields an obvious external source of matter, but the enhanced accretion of the parent galaxy's internal gas through tidal interactions (or bar formation) may suffice in most cases. At lower luminosities, other contributing factors may come from the tidal breakup of stars passing too close to the central black hole or from the mass loss from stars in the central regions of the galaxy. Gathering matter at a rate of two solar masses per year (90 percent of which ends up as the gravitating mass of the black hole) will build up a black hole of 10⁸ solar masses in several tens of millions of years. This estimate for the lifetime of an active galactic nucleus is in approximate accord with the statistics of such objects. This does not imply that supermassive black holes at the centres of galaxies necessarily accumulate from a seed of very small mass by steady accretion. There remain many viable routes for their formation, and the study of such processes is still in its infancy.

Observational tests
      If there are supermassive objects at the centres of elliptical galaxies, gravitational perturbations of the spatial distribution or velocity field of nearby stars may be discernible. For a spherical distribution of stars surrounding a black hole, theoretical calculations indicate that the number of stars per unit volume and the dispersion of random velocities should rise, respectively, as the negative 7/4 power and the negative 1/2 power of the radial distance from the black hole. In other words, rather than gently rounded or flat profiles as the centre is approached, cusps of stellar light and random velocities should be seen, the upturn beginning at a radial distance where the escape velocity from the black hole is comparable to the natural dispersion of random velocities in the central regions of an elliptical galaxy.

      Except for the largest black holes or the nearest galaxies, the region interior to the turnup point is not resolvable by ground-based optical telescopes, because of the blurring effects produced by turbulence in the Earth's atmosphere. Excess central starlight and velocity dispersions have been seen in M87—a giant elliptical with a well-known optical jet emerging from its nucleus, which is located in the Virgo cluster, the nearest large cluster of galaxies. The excesses are consistent with a central black hole of several times 10⁹ solar masses. Atmospheric blurring, however, prevents astronomers from determining whether the upturns represent true cusps or merely shoulders that taper to constant values. Mere shoulders could be explained, without invoking a black hole, by the stars in the central regions of this galaxy having a nonstandard distribution of random velocities.
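
      The radius of the turnup point, often called the black hole's sphere of influence, is roughly r = GM/σ², where σ is the dispersion of stellar random velocities; whether it can be resolved then depends on the galaxy's distance. A sketch with illustrative, M87-like numbers:

    G = 6.67e-8            # cm^3 g^-1 s^-2
    M_SUN = 2.0e33         # g
    LIGHT_YEAR = 9.46e17   # cm

    M = 3.0e9 * M_SUN      # a several-billion-solar-mass central object
    sigma = 3.0e7          # a 300 km/s velocity dispersion, in cm/s
    r_infl = G * M / sigma**2
    print(r_infl / LIGHT_YEAR)   # a few hundred light-years

    # Angular size at the ~5e7-light-year distance of the Virgo cluster:
    print(r_infl / LIGHT_YEAR / 5.0e7 * 206265)   # ~2 arcsec, near the seeing limit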

      A better situation exists for the detection of supermassive black holes in the nuclei of spiral galaxies, since the interpretation of organized rotational motions is simpler than that for disorganized random motions. The Andromeda galaxy has an excess component of light within a few light-years of its centre. High-resolution spectroscopy of this region shows a large velocity width indicative of the presence of a black hole in the nucleus with a mass in excess of 10⁷ solar masses. Similar observations carried out for more distant spiral galaxies have yielded good candidates for supermassive black holes with masses ranging up to 10⁹ solar masses.

      The closest galactic nucleus of all is of course located at the centre of the Milky Way Galaxy. Unfortunately, the nucleus of the system is not observable at the wavelengths of visible light, ultraviolet light, or soft X rays (those of lower energy than hard X rays), because of the heavy absorption by intervening dust. It can be probed by radio, infrared, hard X-ray, and gamma-ray techniques; such studies have revealed many intriguing features.

      The most likely candidate for the nucleus of the Galaxy has long been regarded to be a compact radio-continuum source denoted Sagittarius A*. This synchrotron-radiation source is unique in the Galaxy: it is variable on a time scale of one day, implying that the radio emission arises from a region with dimensions smaller than the solar system; it shows evidence for synchrotron self-absorption, a condition consistent with a region being compactly filled with relativistic particles and fields; and measurements obtained with VLBI indicate that its motion with respect to the centre of the Galaxy is less than 40 km/s (kilometres per second), consistent with a heavy object brought to rest by “dynamic friction” in the deepest part of the Galaxy's potential well. Hard X-ray observations of the galactic central region, however, reveal only low-level emission from a diffuse component and several discrete sources with characteristics similar to coronal emission from luminous young stars. Broadband, near-infrared measurements at a wavelength centred near 2 μm (0.002 mm) show the presence of a dense star cluster. Surprisingly, the maximum concentration of light of the star cluster does not seem to centre on Sagittarius A*, nor does it show the r^(−7/4) light cusp expected for the distribution of stars surrounding a massive pointlike object. Perhaps the cluster appears only by chance projection against the radio source.

      Spectroscopic investigations of the molecular and ionized gas yield a more promising interpretation. Molecular gas in a tilted ring within several light-years of the galactic centre exhibits rotational velocities consistent with motion under a central force field of an object having a mass of several million solar masses. Unfortunately, the molecular gas disappears before the centre can be approached very closely; fortunately, its disappearance is compensated by the appearance of ionized gas forming a “mini-spiral” within the central few light-years. One of the three arms of the mini-spiral streams within one light-year of Sagittarius A*. If this streamer is modeled as an infalling parabolic trajectory, a value of 4 × 10⁶ solar masses is obtained for a compact object at the nucleus of the Galaxy. If the Galaxy has a central black hole, this is probably the best estimate of its mass.

      Radio-continuum studies on a scale of hundreds of light-years from the Galaxy's centre show the nucleus to be embedded in an extraordinary set of filamentary arcs that pass perpendicularly through the galactic plane. Magnetic fields 1,000 times stronger than the general galactic field may play a role in defining the filaments, perhaps in a fashion analogous to the eruption of solar prominences. These magnetic fields may also have restrained the unusual massive molecular clouds Sagittarius A and Sagittarius B2 from forming OB stars with the same vigour as their counterparts farther out in the disk. Details such as these can be seen only because the nucleus of the Galaxy is so close (a “mere” 30,000 light-years away). This complexity should serve as a sobering reminder that most theoretical models of the active nuclei of external galaxies must vastly oversimplify the actual state of affairs.

Other components
      Every second of every day, the Earth is bombarded by high-speed particles, electromagnetic radiation, and perhaps gravitational waves of cosmic origin. As has already been discussed, a part of this steady rain is, directly or indirectly, of planetary, stellar, or galactic origin, but another part may be a relict from a time in the universe before there were any planets, stars, or galaxies.

Cosmic rays and magnetic fields
      In the years following the discovery of natural radioactivity by the French physicist Henri Becquerel in 1896, investigators used ionization chambers to detect the presence of the fast charged particles that are produced in the phenomenon. These workers found that low-level ionization events still occurred even when the source of radioactivity was removed. The events persisted with heavy shielding, and in 1912 the Austrian physicist Victor F. Hess found that they increased drastically in intensity if the detecting instruments were carried to high altitudes by balloons. Little difference existed between day and night; thus, the Sun could not be the primary source. The penetrating radiation had to have a cosmic component, and the earliest suggestion was that it was composed of high-energy photons, gamma rays—hence, the name cosmic rays. In 1927 it was shown that the cosmic-ray intensity was higher at the magnetic poles than at the magnetic equator. For the incoming trajectories to be affected by the geometry of the Earth's magnetic field, cosmic rays had to be charged particles.

      It is now known that cosmic rays come with both signs of electric charge and with a wide distribution of energies. About 83 percent of the positively charged component of cosmic rays consists of protons, the nuclei of hydrogen atoms, and about 16 percent of alpha particles, the nuclei of helium atoms. The nuclei of heavier atoms occur roughly in their cosmic abundances, except that the light elements lithium, beryllium, and boron—which are quite rare elsewhere in the universe—are vastly overrepresented in the cosmic rays. The negatively charged component consists mostly of electrons, at a level of about 1 percent of the protons. Positrons also can be found, approximately 10 percent as frequently as electrons. A very small contribution from antiprotons is also known. Cosmic-ray positrons and antiprotons are believed to be by-products of collisions between cosmic-ray nuclei and the ambient atomic nuclei that exist in interstellar gas clouds. Cosmic gamma rays, which have been detected emanating from the Milky Way and show a strong correlation with the distribution of interstellar gas, are another manifestation of such collisions.

      The cosmic-ray protons that freely enter the solar system, despite the outward sweep of the solar wind and the magnetic fields it carries, have energies that vary from a few times their rest energies to 10⁶ times and more. Thus, these particles must move at speeds approaching the speed of light. In this range the number of particles at energy E varies with E to the negative 2.7 power. A similar decreasing power law seems to hold for cosmic-ray electrons with energies from a few thousand to tens of thousands of times their rest energies. Within uncertainties this energy distribution is consistent with the synchrotron-radiation interpretation of the nonthermal radio emission from the Galaxy. At higher energies, there are fewer cosmic-ray electrons than predicted by extrapolation of the power law found at lower energies, and this depletion can be understood on the basis of the large synchrotron-radiation losses suffered by the most energetic electrons.

      Above 10⁷ times the rest energy of the proton, there also are fewer positively charged particles than predicted by extrapolation of the E^(−2.7) power law; however, synchrotron losses cannot account for this deficiency. A more likely interpretation is that the cosmic-ray nuclei of lower energies are commonly produced and confined to the Galaxy, whereas those with very high energies may have an origin in very exotic or even extragalactic objects. This is consistent with the fact that protons with energies less than 10⁷ times their rest energies would be bent by the interstellar magnetic field to follow spiraling trajectories that would be confined to the thickness of the galactic disk. Nevertheless, these particles can eventually escape from the disk if the magnetic fields buckle out of the galactic plane (as they do because of certain instabilities).

      An estimate of the total residence time of cosmic-ray nuclei within the disk of the Galaxy can be obtained by examining the anomalous abundances of lithium, beryllium, and boron. These elements are only somewhat less abundant in cosmic rays than carbon, nitrogen, and oxygen, and this has been conventionally interpreted to mean that the former group was mostly produced by spallation reactions (breakup of heavier nuclei) of the latter group as the cosmic-ray particles traversed interstellar space and interacted with the matter there. From the amount of spallation that has occurred, it can be estimated that the cosmic rays reside, on average, roughly 10⁷ years among the gas clouds in the galactic disk before escaping.

      The origin of cosmic rays is an incompletely resolved problem. At one time astronomers believed that all cosmic rays, except those at the highest energies, originated in supernova explosions. The total energetics is about right, and the presence in cosmic rays of nuclei as heavy as iron finds a natural explanation under the supernova hypothesis. Unfortunately, doubt was cast on the hypothesis by later work that questioned, first, whether particles could really be accelerated to cosmic-ray energies in a single supernova shock and, second, whether these particles, even if accelerated, could propagate through the interstellar medium very far from the site of the original explosion. The second objection also applies to other possible point sources, such as pulsars.

      A more promising possibility seems to be the proposal that cosmic rays are accelerated to their high energies by repeated reflections in magnetic shock waves in the interstellar medium (whose ultimate energy may be derived from the ensemble of all supernova explosions). The idea is that gas and the magnetic field threading it move at very different speeds on the two sides of the front of a shock wave. Cosmic-ray particles rattling through magnetic inhomogeneities may be shuttled back and forth between these two regions, gaining statistically an extra boost in energy every time they “bounce” off the moving set of magnetic field lines. The process is akin to the increasing energy that would be gained by a tennis ball in the absence of air drag if it were banged back and forth between a vigorously swinging player and a stationary wall. The great attractiveness of the strong shock-wave picture for accelerating cosmic rays is that it automatically gives, in the simplest models, a decreasing power-law distribution of particle energies. The exponent is 2 instead of the measured 2.7, and the discrepancy is believed to be related to an energy-dependent escape rate from the region of acceleration. The enhancement of the escape rate with increasing energy is not completely understood, but no fundamental obstacle seems likely in this respect to rule out the shock-acceleration model.
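
      The statistical origin of the power law can be illustrated with a toy simulation (the gain and escape probability below are arbitrary illustrative values, not derived from shock physics): each cycle multiplies a particle's energy by a fixed factor, and after each cycle the particle has a fixed chance of escaping downstream; the survivors are distributed as a decreasing power law in energy.

    import random

    random.seed(1)
    gain = 0.1          # fractional energy boost per shock-crossing cycle
    p_escape = 0.1      # chance of being swept away after each cycle

    energies = []
    for _ in range(200_000):
        e = 1.0
        while random.random() > p_escape:
            e *= 1.0 + gain
        energies.append(e)

    # Integral spectrum: counts above E fall roughly as E**-1.1 here,
    # i.e. a decreasing power law of the kind described in the text.
    for threshold in (1.0, 10.0, 100.0):
        print(threshold, sum(1 for e in energies if e >= threshold))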

      More serious failings of the shock-acceleration model are that it does not address the acceleration of cosmic-ray electrons, nor does it easily explain the origin of ultra-high-energy cosmic rays, nuclei with energies that lie between 10⁸ and 10¹¹ times the rest energy of the proton. There is some indication from measurements of ultra-high-energy gamma rays from some binary X-ray sources that these objects may copiously produce ultra-high-energy cosmic rays, but the exact acceleration mechanism remains obscure. At the highest observed cosmic-ray energies, the particles arrive preferentially from northern galactic latitudes, a fact interpreted by some to indicate a large contribution from the Virgo supercluster. In this picture even higher-energy cosmic rays from more distant parts of the universe (greater than about 10⁸ light-years) do not reach the Earth, because such particles would suffer serious losses en route as they interact with the photons of the cosmic microwave background.

Microwave background radiation
      Beginning in 1948, the American cosmologist George Gamow and his coworkers, Ralph Alpher and Robert Herman, investigated the idea that the chemical elements might have been synthesized by thermonuclear reactions that took place in a primeval fireball. The high temperature associated with the early universe would give rise to a thermal radiation field, whose unique distribution of intensity with wavelength (known as Planck's radiation law) is a function only of the temperature. As the universe expanded, the temperature would have dropped, each photon being redshifted by the cosmological expansion to longer wavelength, as the American physicist Richard C. Tolman had already shown in 1934. By the present epoch the radiation temperature would have dropped to very low values, about 5° above absolute zero (0 K, or −273° C) according to the estimates of Alpher and Herman.

      Interest in these calculations waned among most astronomers when it became apparent that the lion's share of the synthesis of elements heavier than helium must have occurred inside stars rather than in a hot big bang. In the early 1960s physicists at Princeton University, N.J., as well as in the Soviet Union, took up the problem again and began to build a microwave receiver that might detect, in the words of the Belgian cleric and cosmologist Georges Lemaître, “the vanished brilliance of the origin of the worlds.”

      The actual discovery of the relict radiation from the primeval fireball, however, occurred by accident. In experiments conducted in connection with the first Telstar communication satellite, two scientists, Arno Penzias and Robert Wilson, of the Bell Telephone Laboratories, Holmdel, N.J., measured excess radio noise that seemed to come from the sky in a completely isotropic fashion. When they consulted Bernard Burke of the Massachusetts Institute of Technology, Cambridge, Mass., about the problem, Burke realized that Penzias and Wilson had most likely found the cosmic background radiation that Robert H. Dicke, P.J.E. Peebles, and their colleagues at Princeton were planning to search for. Put in touch with one another, the two groups published simultaneously in 1965 papers detailing the prediction and discovery of a universal thermal radiation field with a temperature of about 3 K.

 Precise measurements made by the Cosmic Background Explorer (COBE) satellite launched in late 1989 determined the spectrum to be exactly characteristic of a blackbody at 2.735 K. The velocity of the satellite about the Earth, the Earth about the Sun, the Sun about the Galaxy, and the Galaxy through the universe actually makes the temperature seem slightly hotter (by about one part in 1,000) in the direction of motion rather than away from it. The magnitude of this effect—the so-called dipole anisotropy—allows astronomers to determine that the Local Group of galaxies is moving at a speed of about 600 km/s in a direction that is 45° from the direction of the Virgo cluster of galaxies. Such motion is not measured relative to the galaxies themselves (the Virgo galaxies have an average velocity of recession of about 1,000 km/s with respect to the Milky Way system) but relative to a local frame of reference in which the cosmic microwave background radiation would appear as a perfect Planck spectrum with a single radiation temperature.
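
      To first order the dipole amplitude is just the ratio of the observer's speed to the speed of light, ΔT/T ≈ v/c. A quick check with the numbers quoted above:

    T0 = 2.735            # background temperature, K
    C_KM_S = 3.0e5        # speed of light, km/s

    v = 600.0             # the Local Group's peculiar speed, km/s
    fraction = v / C_KM_S
    print(fraction)                 # 0.002: of order one part in 1,000
    print(fraction * T0 * 1000.0)   # ~5 mK apparent temperature excess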

      The origin of the “peculiar velocity” of 600 km/s for the Local Group presents an interesting problem. A component of this velocity may be induced by the gravitational attraction of the excess mass above the cosmological mean represented by the Virgo cluster; however, it is now believed that the Virgo component is relatively small, at best 200–300 km/s. A more important contribution may come from the mass of a “Great Attractor” at a distance of 10⁸ light-years connected to the Local Supercluster, but this interpretation is somewhat controversial since much of the supposed grouping lies behind the obscuration of the plane of the Milky Way. In any case, the generation of the large peculiar velocity of the Local Group of galaxies probably requires the gravitational attraction of the observable galaxies to be augmented by dark matter by a factor of roughly 10.

      The COBE satellite carried instrumentation aboard that allowed it to measure small fluctuations in intensity of the background radiation, not just in the sense of a forward-backward asymmetry but also on angular scales in the sky that correspond to distance scales on the order of 10⁹ light-years across (still larger than the largest material structures seen in the universe, such as the enormous grouping of galaxies dubbed the “Great Wall”). In the sky map transmitted by the satellite, showing the intensity pattern in angular projection at a wavelength of 0.57 cm after the subtraction of a uniform background at a temperature of 2.735 K, bright regions at the upper right and dark regions at the lower left displayed the dipole asymmetry, and a bright strip across the middle represented excess thermal emission from the Milky Way. To obtain the fluctuations on smaller angular scales, it was necessary to subtract both the dipole and the galactic contributions. The latter requires a good model for the radio emission from the Galaxy at the relevant wavelengths, for which astronomers possess only incomplete knowledge. Fortunately, the corrections at high galactic latitudes are not very large, and an image was obtained showing the final product after the subtraction. Patches of light and dark represented temperature fluctuations that amount to about one part in 100,000—not much higher than the accuracy of the measurements. Nevertheless, the statistics of the distribution of angular fluctuations appeared different from random noise, and so the members of the COBE investigative team believe that they have found the first evidence for the departure from exact isotropy that theoretical cosmologists have long predicted must be there in order for galaxies and clusters of galaxies to condense from an otherwise structureless universe.

      Apart from the small fluctuations discussed above (one part in 100,000), the observed cosmic microwave background radiation exhibits a high degree of isotropy, a zeroth-order fact that presents both satisfaction and difficulty for a comprehensive theory. On the one hand, it provides a strong justification for the assumption of homogeneity and isotropy that is common to most cosmological models. On the other hand, such homogeneity and isotropy are difficult to explain because of the “light-horizon” problem. In the context of the cosmic microwave background, the problem can be expressed as follows. Consider the background radiation coming to an observer from any two opposite sides of the sky. Clearly, whatever the ultimate sources (hot plasma) of this radiation, the photons, traveling at the speed of light since their emission by the plasma, have only just had time to reach the Earth. The matter on one side of the sky could not have had time to have “communicated” with the matter on the other side (they are beyond each other's light horizon), so how is it possible (with respect to an observer in the right rest frame) that they “know” to have the same temperature to a precision approaching one part in 100,000? What accounts for the high degree of angular isotropy of the cosmic microwave background? Or, for that matter, for the large-scale distribution of galaxies? As will be seen below in the section Cosmological models, a mechanism called “inflation” may offer an attractive way out of this dilemma.

Intergalactic gas
      At one time it was thought that large amounts of mass might exist in the form of gas clouds in the spaces between galaxies. One by one, however, the forms that this intergalactic gas might take were eliminated by direct observational searches until the only possible form that might have escaped early detection was a very hot plasma. Thus, there was considerable excitement and speculation when astronomers found evidence in the early 1970s for a seemingly uniform and isotropic background of hard X radiation (photons with energies greater than 10⁶ electron volts). There also was a diffuse background of soft X rays, but this had a patchy distribution and was definitely of galactic origin—hot gas produced by many supernova explosions inside the Galaxy. The hard X-ray background, in contrast, seemed to be extragalactic, and a uniform plasma at a temperature of roughly 10⁸ K was a possible source. The launch in 1978 of an imaging X-ray telescope aboard the Einstein Observatory (the HEAO 2 satellite), however, showed that a large fraction of the seemingly diffuse background of hard X rays, perhaps all of it, could be accounted for by a superposition of previously unresolved point sources—i.e., quasars and QSOs. Subsequent research demonstrated that the shape of the X-ray spectrum of these objects at low redshifts does not match that of the diffuse background. It is now thought that the residual effect arises from active galactic nuclei at high redshifts (greater than six) and that these objects underwent substantial evolution early in the history of the universe.

      Very hot gas that emits X rays at tens to hundreds of millions of kelvins does indeed reside in the spaces between galaxies in rich clusters, and the amount of this gas seems comparable to that contained in the visible stars of the galaxies; however, because rich clusters are fairly rare in the universe, the total amount of such gas is small compared to the total mass contained in the stars of all galaxies. Moreover, an emission line of iron can frequently be detected in the X-ray spectrum, indicating that the intracluster gas has undergone nuclear processing inside stars and is not of primordial origin.

      About 70 percent of the X-ray clusters show surface brightnesses that are smooth and single-peaked, indicative of distributions of hot gas that rest in quasi-hydrostatic equilibrium in the gravitational potentials of the clusters. Analysis of the data in the better-resolved systems allows astronomers to estimate the total amount of gravitating mass needed to offset the expansive pressure (proportional to the density times the temperature) of the X-ray-emitting gas. These estimates agree with the conclusions from optical measurements of the motions of the member galaxies that galaxy clusters contain about 10 times more dark matter than luminous matter (see below).
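
      One standard way of making such an estimate treats the gas as isothermal and in hydrostatic balance, so that M(<r) = −(kTr/Gμm_p)(dln ρ/dln r), with the density profile parametrized by the widely used “beta model.” The sketch below uses representative values rather than data for any particular cluster:

    K_B = 1.38e-16        # Boltzmann constant, erg/K
    G = 6.67e-8           # gravitational constant, cm^3 g^-1 s^-2
    M_P = 1.67e-24        # proton mass, g
    M_SUN = 2.0e33        # solar mass, g
    MPC = 3.09e24         # megaparsec, cm

    T = 8.0e7             # gas temperature, K (a hot cluster)
    mu = 0.6              # mean molecular weight of a fully ionized plasma
    r, r_core, beta = 1.0 * MPC, 0.25 * MPC, 2.0 / 3.0

    # Logarithmic density slope of the beta model,
    # rho ~ (1 + (r/r_core)**2) ** (-1.5 * beta):
    slope = -3.0 * beta * (r / r_core)**2 / (1.0 + (r / r_core)**2)
    M = -(K_B * T * r / (G * mu * M_P)) * slope
    print(M / M_SUN)      # a few times 1e14 solar masses within 1 Mpc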

      About half of the X-ray clusters with single-peaked distributions have bright galaxies at the centres of the emission. The high central densities of the gas imply radiative cooling times of only 10⁹ years or so. As the gas cools, the central galaxy draws the material inward at inferred rates that often exceed 100 solar masses per year. The ultimate fate of the accreted gas in the “cooling flow” remains unclear.

      Another exciting discovery has been the detection of large clouds of atomic hydrogen gas in intergalactic space unassociated with any known galaxies. These clouds show themselves as unusual absorption lines in the Lyman-alpha transition of atomic hydrogen when they lie as foreground objects to distant quasars. In a few cases they can be mapped by radio techniques at the spin-flip transition of atomic hydrogen (redshifted from the rest wavelength of 21 cm). From the latter studies, some astronomers have inferred that the clouds exist in highly flattened forms (“pancakes”) and may contain up to 10^14 solar masses of gas. In one interpretation these structures are the precursors to large clusters of galaxies (see below).

Low-energy neutrinos
      Another hypothesized component of the Cosmos is a universal sea of very low-energy neutrinos. Although nearly impossible to detect by direct means, the existence of this sea has a strong theoretical basis. This basis rests on the notion that a hot big bang would produce not only a primeval fireball of electromagnetic radiation but also enormous numbers of neutrinos and antineutrinos (both referred to in cosmological discussions as neutrinos for brevity's sake). Estimates suggest that every cubic metre of space in the universe contains about 10^8 low-energy neutrinos. This number considerably exceeds the cosmological density of atomic nuclei (mostly hydrogen) obtained by averaging the known matter in the universe over scales of hundreds of millions of light-years. The latter density amounts to less than one particle per cubic metre of space. Nevertheless, because neutrinos interact with matter only weakly (they do not, for example, emit electromagnetic radiation), they can be detected experimentally by sophisticated instruments only if they have relatively high energies (such as the neutrinos from the Sun or from supernova explosions). The very low-energy neutrinos of cosmological origin cannot be observed by any conventional means known at present.

      Such low-energy neutrinos, nonetheless, attracted considerable astronomical interest during the late 1970s because experiments conducted in the Soviet Union and the United States suggested, contrary to the prevailing belief in particle physics, that neutrinos may possess a nonzero rest mass. Even if the rest mass were very small—say, 10,000 times smaller than the rest mass of the electron, the lightest known particle of matter—the result could be of great potential importance because neutrinos, being so relatively abundant cosmologically, could then be the dominant source of mass in the universe. Unfortunately, later experiments cast doubts on the conclusions of the earlier findings, and theoretical investigations of “massive neutrinos” as the dark matter in the universe turned up as many new difficulties to be explained as possible solutions to old problems. On the other hand, if the solution to the solar-neutrino problem turns out to depend on the existence of neutrino oscillations, massive-neutrino cosmologies may well make a (partial) comeback.

Gravitational waves
      Superficially, there are many similarities between gravity and electricity; for example, Newton's law for the gravitational force between two point masses and Coulomb's law for the electric force between two point charges both vary as the inverse square of the separation distance. Yet, in James Clerk Maxwell's theory for electromagnetism, accelerated charges emit signals (electromagnetic radiation) that travel at the speed of light, whereas in Newton's theory of gravitation accelerated masses transmit information (action at a distance) that travels at infinite speed. This dichotomy is repaired by Einstein's theory of gravitation, wherein accelerated masses also produce signals (gravitational waves) that travel only at the speed of light. And, just as electromagnetic waves can make their presence known by the pushing to and fro of electrically charged bodies, so can gravitational waves be detected, in principle, by the tugging to and fro of massive bodies. However, because the coupling of gravitational forces to masses is intrinsically much weaker than the coupling of electromagnetic forces to charges, the generation and detection of gravitational radiation are much more difficult than those of electromagnetic radiation. Indeed, since the time of Einstein's invention of general relativity in 1916, there has yet to be a single instance of the detection of gravitational waves that is direct and undisputed.

      There are, however, some indirect pieces of evidence that accelerated astronomical masses do emit gravitational radiation. The most convincing concerns radio-timing observations of a pulsar located in a binary star system with an orbital period of 7.75 hours. This object, discovered in 1974, has a pulse period of about 59 milliseconds that varies by about one part in 1,000 every 7.75 hours. Interpreted as Doppler shifts, these variations imply orbital velocities on the order of 1/1000 the speed of light. The non-sinusoidal shape of the velocity curve with time allows a deduction that the orbit is quite noncircular (indeed, an ellipse of eccentricity 0.62 whose long axis precesses in space by 4.2° per year). It is now believed that the system is composed of two neutron stars, each having a mass of about 1.4 solar masses, with a semimajor axis separation of only 2.8 solar radii. According to Einstein's theory of general relativity, such a system ought to be losing orbital energy through the radiation of gravitational waves at a rate that would cause them to spiral together on a time scale of about 3 × 10^8 years. The observed decrease in the orbital period in the years since the discovery of the binary pulsar does indeed indicate that the two stars are spiraling toward one another at exactly the predicted rate.
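      The claimed agreement can be checked against the standard quadrupole formula of general relativity. The sketch below uses the orbital parameters quoted above (two 1.4-solar-mass stars, a 7.75-hour period, an eccentricity of 0.62) together with the eccentricity enhancement factor of Peters (1964); it is offered as an illustrative check, not as the full timing analysis:

```python
# Orbital period decay of the binary pulsar from gravitational radiation
import math

G, c  = 6.674e-11, 2.998e8     # SI units
M_sun = 1.989e30
m1 = m2 = 1.4 * M_sun          # two ~1.4-solar-mass neutron stars
P  = 7.75 * 3600.0             # orbital period, s
e  = 0.62                      # orbital eccentricity

# Peters (1964) enhancement factor for an eccentric orbit
f_e = (1 + (73/24)*e**2 + (37/96)*e**4) / (1 - e**2)**3.5

# Quadrupole-formula rate of change of the orbital period
dP_dt = (-(192*math.pi/5) * G**(5/3) / c**5
         * (P / (2*math.pi))**(-5/3) * f_e
         * m1 * m2 * (m1 + m2)**(-1/3))
print(f"dP/dt = {dP_dt:.2e} s per s")   # comes out near -2.4e-12
```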

      The implosion of the core of a massive star to form a neutron star prior to a supernova explosion, if it takes place in a nonspherically symmetric way, ought to provide a powerful burst of gravitational radiation. Simple estimates yield the release of a fraction of the mass-energy deficit, roughly 10^53 ergs, with the radiation primarily coming out at wave periods between the vibrational period of the neutron star, approximately 0.3 millisecond, and the gravitational-radiation damping time, about 300 milliseconds.

      A cosmic background of gravitational waves is a possibility that has sometimes been discussed. Such a background might be generated if the early universe expanded in a chaotic fashion rather than in the smooth homogeneous fashion that it is currently observed to do. The energy density of the gravitational waves produced, however, is unlikely to exceed the energy density of electromagnetic radiation, and each graviton (the gravitational analogue of the photon) would be susceptible to the same cosmological redshift by the expansion of the universe. A roughly thermal distribution of gravitons at a present temperature of about 1 K would be undetectable by foreseeable technological developments in gravitational-wave astronomy.

      Numerous candidates for the dark matter component in the halos of galaxies and clusters of galaxies have been proposed over the years, but no successful detection of any of them has yet occurred. If the dark matter is not made of the same material as the nuclei of ordinary atoms, then it may consist of exotic particles capable of interacting with ordinary matter only through the gravitational and weak nuclear forces. The latter property lends these hypothetical particles the generic name WIMPs, after weakly interacting massive particles. Even if WIMPs bombarded each square centimetre of the Earth at a rate of one per second (as they would do if they had, for example, individually 100 times the mass of a proton and collectively enough mass to “close” the universe; see below), they would then still be extremely difficult—though not impossible—to detect experimentally.
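      The parenthetical flux figure follows from a back-of-envelope calculation, sketched below. The Hubble constant of 20 km/sec per million light-years adopted later in this article fixes the closure density; the WIMP velocity, taken as a typical galactic value of about 300 km/sec, is an assumption:

```python
# Flux of WIMPs that close the universe (illustrative arithmetic only)
import math

G   = 6.674e-8               # gravitational constant, cgs
m_p = 1.67e-24               # proton mass, g
KM_PER_MLY = 9.46e18         # kilometres in one million light-years

H0 = 20.0 / KM_PER_MLY       # 20 km/sec per million light-years, in 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # closure density, g/cm^3

n = rho_crit / (100 * m_p)   # number density for 100-proton-mass WIMPs
v = 3e7                      # assumed velocity ~300 km/sec, in cm/s
print(f"flux: {n * v:.1f} WIMPs per square centimetre per second")
```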

      Another possibility is that the dark matter is (or was) composed of ordinary matter at a microscopic level but is essentially nonluminous at a meaningful astronomical level. Examples would be brown dwarfs (starlike objects too low in mass to fuse hydrogen in their interiors), dead white dwarfs, neutron stars, and black holes. If the objects are only extremely faint (e.g., brown dwarfs), they can eventually be found by very sensitive searches, perhaps at near-infrared wavelengths. On the other hand, if they emit no light at all, then other strategies will be needed to find them—for example, searching for the “microlensing” of background stars, the temporary amplification of their brightness through the gravitational bending of their light rays by an unseen foreground object in the halo.

Large-scale structure and expansion of the universe
      Hubble inferred a uniformity in the spatial distribution of galaxies through number counts in deep photographic surveys of selected areas of the sky. This inference applies only to scales larger than several times 10^8 light-years. On smaller scales, galaxies tend to bunch together in clusters and superclusters, and Hubble deliberately avoided the more conspicuous examples in order not to bias his results. This clustering did excite debate among both observers and theorists in the earliest discussions of cosmology, particularly over the largest dimensions where there are still appreciable departures from homogeneity and over the ultimate cause of the departures. In the 1950s and early 1960s, however, attention tended to focus on homogeneous cosmological models because of the competing ideas of the big bang and steady state scenarios. Only after the discovery of the cosmic microwave background—which, together with the successes of primordial nucleosynthesis, signaled a clear victory for the hot big bang picture—did the issue of departures from homogeneity in the universe again attract widespread interest.

      From a more pragmatic point of view, clusters and groups of galaxies are important to cosmological studies because they are useful in establishing the extragalactic distance scale. A fundamental problem that recurs over and over again in astronomy is the determination of the distance to an object. Individual stars in star clusters and associations provide an indispensable tool in gauging distances within the Galaxy. The brightest stars—in particular the brightest variable stars among the so-called Cepheid class—allow the distance ladder to be extended to the nearest galaxies; but at distances much larger than 10^7 light-years individual stars become too difficult to resolve, at least from the ground, and astronomers have traditionally resorted to other methods.

Clustering of galaxies
 Clusters of galaxies fall into two morphological categories: regular and irregular. The regular clusters show marked spherical symmetry and have a rich membership. Typically, they contain thousands of galaxies, with a high concentration toward the centre of the cluster. Rich clusters, such as the Coma cluster, are deficient in spiral galaxies and are dominated by ellipticals and S0s. The irregular clusters have less well-defined shapes, and they usually have fewer members, ranging from fairly rich systems such as the Hercules cluster to poor groups that may have only a few members. Galaxies of all types can be found in irregular clusters: spirals and irregulars, as well as ellipticals and S0s. Most galaxies are to be found not in rich clusters but in loose groups. The Galaxy belongs to one such loose group—the Local Group.

The Local Group
      The Local Group contains seven reasonably prominent galaxies and perhaps another two dozen less conspicuous members. The dominant pair in the group is the Milky Way and Andromeda, both giant spirals of Hubble type Sb and luminosity class II. The distance to the Andromeda system was first measured by Hubble, but his estimate was too low by a factor of two because astronomers at that time did not recognize the distinction between variable stars belonging to Population II (like those studied by Shapley) and Population I (those studied by Hubble). Another spiral in the Local Group—M33, Hubble type Sc and luminosity class III—is notable, but the rest are intermediate to dwarf systems, either irregulars or ellipticals. Most of the mass of the Local Group is associated with the Milky Way and Andromeda, and with a few exceptions the smaller systems tend to congregate about one or the other of these galaxies. The size of the Local Group is therefore only about 50 percent greater than the 2 × 10^6 light-years separating the Milky Way system and the Andromeda galaxy, and the centre of mass lies roughly halfway between these two giants.

      The Andromeda galaxy is one of the few galaxies in the universe that actually has a velocity of approach with respect to the centre of the Galaxy. If this approach results from the reversal, by mutual gravitational attraction, of a former recession, then the total mass of the Local Group probably amounts to a few times 10^12 solar masses. This is greater than the mass inferred for the optically visible parts of the galaxies and is another manifestation of the dark matter problem.
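      A simplified version of this argument, omitting the detailed orbital history, is that the kinetic energy of approach must not exceed the gravitational binding energy if the pair is bound. The sketch below uses assumed round values for the approach velocity and separation; the full treatment raises the answer by a further factor of a few:

```python
# Minimum mass for the Milky Way-Andromeda pair to be bound: M > v^2 r / (2G)
G     = 6.674e-8     # gravitational constant, cgs
M_sun = 1.99e33      # solar mass, g
LY    = 9.46e17      # light-year, cm

v = 1.2e7            # approach velocity ~120 km/sec, cm/s (assumed)
r = 2e6 * LY         # present separation of 2 x 10^6 light-years

M_min = v**2 * r / (2 * G)
print(f"minimum bound mass: {M_min / M_sun:.1e} solar masses")
```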

Neighbouring groups and clusters
      Beyond the fringes of the Local Group lie many similar small groups. The best studied of these is the M81 group, whose dominant galaxy is the spiral galaxy M81. Much like the Andromeda and Milky Way systems, M81 is of Hubble type Sb and luminosity class II. The distance to M81, as well as to the outlying galaxy NGC 2403, can be determined from various stellar calibrators to be about 10^7 light-years. It is not known whether NGC 2403 and its companion NGC 2366 are truly bound to M81 or whether they are an independent pair seen by chance to lie near the M81 group. If they are bound to M81, then a measurement of their velocity along the line of sight relative to that of M81 yields, by an argument similar to that used for the Andromeda and Milky Way galaxies, an estimate of the gravitating mass of M81. This estimate equals 2 × 10^12 solar masses and exceeds by an order of magnitude what is deduced from measurements of the rotation curve of M81 inside its optically visible disk.

      The M81 group also has a few normal galaxies with classifications similar to those of galaxies in the Local Group, and it was noticed by some astronomers that the largest H II regions (which are illuminated by many OB stars) in these galaxies had about the same intrinsic linear sizes as their counterparts in the Local Group. This led Allan Sandage and the astronomer Gustav Tammann to the (controversial) technique of using the sizes of H II regions as a distance indicator, because a measurement of their angular sizes, coupled with knowledge of their linear sizes, allows an inference of distance.
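      The logic of the method reduces to the small-angle relation: distance equals linear size divided by angular size (in radians). A minimal sketch, with wholly hypothetical numbers for the calibrated linear size and the measured angle:

```python
# Distance from the angular size of an H II region of known linear size
import math

LY = 9.46e17                    # light-year, cm
D  = 1000 * LY                  # calibrated linear size (hypothetical)
theta_arcsec = 20.0             # measured angular size (hypothetical)

theta = theta_arcsec * math.pi / (180 * 3600)   # arcseconds to radians
d = D / theta                   # small-angle approximation
print(f"inferred distance: {d / LY:.1e} light-years")
```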

      This method can be used, for example, to obtain the distance to the M101 group, whose dominant galaxy M101 is a supergiant spiral—the closest system of Hubble type Sc and luminosity class I. Since Sc I galaxies are the most luminous spiral galaxies, with very large H II regions strung out along their spiral arms, determining the distance to M101 is a crucial step in obtaining the absolute sizes of the giant H II regions of these important systems. The sizes of the H II regions in the companion galaxies of M101 compared with the calibrated values for nearby galaxies of the same class yield a distance to the M101 group of approximately 2 × 10^7 light-years.

      Having calibrated the sizes of the giant H II regions in M101, Sandage and Tammann could then obtain the distances to 50 field Sc I galaxies. Once this had been done, it became possible to measure the absolute brightnesses of Sc I galaxies, and it was ascertained that all such systems have nearly the same luminosity. Since Sc I galaxies like M101 or M51 can be recognized on purely morphological grounds (well-organized spiral structure with massive arms dominated by giant H II regions), they can now be used as “standard candles” to help measure the distances to irregular clusters that contain such galaxies (e.g., the Virgo cluster containing the Sc I galaxy M100).

      The Virgo cluster is the closest large cluster and is located at a distance of about 5 × 10^7 light-years in the direction of the constellation Virgo. About 200 bright galaxies reside in the Virgo cluster, scattered in various subclusters whose largest concentration (near the famous system M87) is about 5 × 10^6 light-years in diameter. Of the galaxies in the Virgo cluster, 68 percent are spirals, 19 percent are ellipticals, and the rest are irregulars or unclassified. Although spirals are more numerous, the four brightest galaxies are giant ellipticals, among them M87. Calibration of the absolute brightnesses of these giant ellipticals allows a leap to the distant regular clusters.

      The nearest rich cluster containing thousands of systems, the Coma cluster, lies about seven times farther than the Virgo cluster in the direction of the constellation Coma Berenices. The main body of the Coma cluster has a diameter of about 2.5 × 10^7 light-years, but enhancements above the background can be traced out to a supercluster of a diameter of about 2 × 10^8 light-years. Ellipticals or S0s constitute 85 percent of the bright galaxies in the Coma cluster; the two brightest ellipticals in Coma are located near the centre of the system and are individually more than 10 times as luminous as the Andromeda galaxy. These galaxies have a swarm of smaller companions orbiting them and may have grown to their bloated sizes by a process of “galactic cannibalism” like that hypothesized to explain the supergiant elliptical cD systems (see above).

      The spatial distribution of galaxies in rich clusters such as the Coma cluster closely resembles what one would expect theoretically for a bound set of bodies moving in the collective gravitational field of the system. Yet, if one measures the dispersion of random velocities of the Coma galaxies about the mean, one finds that it amounts to almost 900 km/sec. For a galaxy possessing this random velocity along a typical line of sight to be gravitationally bound within the known dimensions of the cluster requires Coma to have a total mass of about 5 × 10^15 solar masses. The total luminosity of the Coma cluster is measured to be about 3 × 10^13 solar luminosities; therefore, the mass-to-light ratio in solar units required to explain Coma as a bound system exceeds by an order of magnitude what can be reasonably ascribed to the known stellar populations. A similar situation exists for every rich cluster that has been examined in detail. This dark matter problem for rich clusters was known to the Swiss astronomer Fritz Zwicky as early as 1933. The discovery of X-ray-emitting gas in rich clusters has alleviated the dynamic problem by a factor of about two, but a substantial discrepancy remains.
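      The arithmetic behind this argument can be reproduced in outline. In the sketch below the order-unity prefactors of a proper virial analysis are dropped, and the radius and velocity dispersion are the round values quoted above:

```python
# Rough virial mass for the Coma cluster and the implied mass-to-light ratio
G     = 6.674e-8         # gravitational constant, cgs
M_sun = 1.99e33          # solar mass, g
LY    = 9.46e17          # light-year, cm

sigma = 9e7              # velocity dispersion ~900 km/sec, in cm/s
R     = 1.25e7 * LY      # radius, half the quoted main-body diameter

M = sigma**2 * R / G     # crude estimate; order-unity factors dropped
print(f"virial-type mass: {M / M_sun:.1e} solar masses")

M_total, L_total = 5e15, 3e13    # quoted mass and luminosity, solar units
print(f"mass-to-light ratio: {M_total / L_total:.0f} solar units")
```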

Superclusters
      In 1932 Harlow Shapley and Adelaide Ames introduced a catalog that showed the distributions of galaxies brighter than 13th magnitude to be quite different north and south of the plane of the Galaxy. Their study was the first to indicate that the universe might contain substantial regions that departed from the assumption of homogeneity and isotropy. The most prominent feature in the maps they produced in 1938 was the Virgo cluster, though already apparent at that time were elongated appendages that stretched on both sides of Virgo to a total length exceeding 5 × 10^7 light-years. This configuration is the kernel of what came to be known later—through the work of Erik Holmberg, Gérard de Vaucouleurs, and George O. Abell—as the Local Supercluster, a flattened collection of about 100 groups and clusters of galaxies including the Local Group. The Local Supercluster is centred approximately on the Virgo cluster and has a total extent of roughly 2 × 10^8 light-years. Its precise boundaries, however, are difficult to define inasmuch as the local enhancement in numbers of galaxies above the cosmological average in all likelihood just blends smoothly into the background.

      Also apparent in the Shapley-Ames maps were three independent concentrations of galaxies, separate superclusters viewed from a distance. Astronomers now believe superclusters fill perhaps 10 percent of the volume of the universe. Most galaxies, groups, and clusters belong to superclusters, the space between superclusters being relatively empty. The dimensions of superclusters range up to a few times 10^8 light-years. For larger scales the distribution of galaxies is essentially homogeneous and isotropic—that is, there is no evidence for the clustering of superclusters. This fact can be understood by recognizing that the time it takes a randomly moving galaxy to traverse the long axis of a supercluster is typically comparable to the age of the universe. Thus, if the universe started out homogeneous and isotropic on small scales, there simply has not been enough time for it to become inhomogeneous on scales much larger than superclusters. This interpretation is consistent with the observation that superclusters themselves look dynamically unrelaxed—that is, they lack the regular equilibrium shapes and central concentrations that typify systems well mixed by several crossings.
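      The crossing-time comparison is simple arithmetic, as the following sketch (with assumed round values for the supercluster size and a typical galaxy velocity) shows:

```python
# Time for a galaxy to cross the long axis of a supercluster
LY   = 9.46e17        # light-year, cm
YEAR = 3.16e7         # seconds per year

L = 2e8 * LY          # long axis, ~2 x 10^8 light-years
v = 1e8               # random velocity ~1,000 km/sec, in cm/s (assumed)

t_cross = L / v
print(f"crossing time: {t_cross / YEAR:.1e} years")  # vs. ~1.5e10-year Hubble time
```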

Statistics of clustering
      The description of galaxy clustering given above is qualitative and therefore open to a charge of faulty subjective reasoning. To remove human biases it is possible to take a statistical approach, a path pioneered by the American statisticians Jerzy Neyman and Elizabeth L. Scott and extended by H. Totsuji and T. Kihara in Japan and by P.J.E. Peebles and his coworkers in the United States. Their line of attack begins by considering the correlation of the angular positions of galaxies in the northern sky surveyed by C.D. Shane and C.A. Wirtanen of Lick Observatory, Mount Hamilton, Calif. If the intrinsic distribution in the direction along the line of sight is assumed to be similar to that across it, then it is possible to derive from the analysis the two-point correlation function that expresses the joint probability for finding two galaxies in certain positions separated by a distance r. Of special interest is the enhancement in the probability above a random distribution of locations, well represented, up to scales of about 5 × 10^7 light-years, as a simple power law, (r/r_0)^-1.8, with r_0 equal to about 2 × 10^7 light-years. Beyond 5 × 10^7 light-years, the enhancement drops more quickly with distance than r^-1.8, but the exact way it does this is somewhat controversial.
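      For concreteness, the quoted power law can be evaluated at a few separations. The short sketch below tabulates the enhancement factor (r/r_0)^-1.8 with r_0 = 2 × 10^7 light-years, as given above:

```python
# Two-point correlation enhancement above a random galaxy distribution
r0 = 2e7                               # correlation length, light-years

def xi(r):
    """Enhancement factor at separation r (in light-years)."""
    return (r / r0) ** -1.8

for r in (5e6, 2e7, 5e7):
    print(f"r = {r:.0e} light-years: enhancement = {xi(r):.2f}")
```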

      To summarize, then, when one knows a galaxy to be present, there is a considerable statistical enhancement in the likelihood that other galaxies will be near it for distances of 5 × 10^7 light-years or less, whereas at much larger distances the probability drops off to the expectation for a purely random distribution in space. This result provides a quantitative expression for the phenomenon of galaxy clustering. A similar power-law representation seems to hold for the correlation of galaxy clusters; this provides empirical evidence for the phenomenon of superclustering.

 In addition to angular positions, it is possible to derive empirical information about the large-scale distribution of galaxies in the direction along the line of sight by examining the redshifts of galaxies under the assumption that a larger redshift implies a greater distance in accordance with Hubble's law. A number of groups have carried out such a program, some in fairly restricted areas of the sky and others over larger regions but to shallower depths. A primary finding of such surveys is the existence of huge holes and voids, regions of space measuring hundreds of millions of light-years across where galaxies seem notably deficient or even totally absent. The presence of holes and voids forms, in some sense, a natural complement to the idea of superclusters, but the surprising result is the degree of the density contrast between the large-scale regions where galaxies are found and those where they are not.

Gravitational theories of clustering
      The fact that gravitation affects all masses may explain why the astronomical universe, although not uniform, contains structure. This natural idea, which is the basis of much of the modern theoretical work on the problem, had already occurred to Newton in 1692. Newton wrote to the noted English scholar and clergyman Richard Bentley:

It seems to me, that if the matter of our Sun & Planets & all ye matter in the Universe was eavenly scattered throughout all the heavens, & every particle had an innate gravity towards all the rest & the whole space throughout wch [sic] this matter was scattered was but finite: the matter on ye outside of this space would by its gravity tend towards all ye matter on the inside & by consequence fall down to ye middle of the whole space & there compose one great spherical mass. But if the matter was eavenly diffused through an infinite space, it would never convene into one mass but some of it convene into one mass & some into another so as to make an infinite number of great masses scattered at great distances from one to another throughout all yt infinite space. And thus might ye Sun and Fixt stars be formed supposing the matter were of a lucid nature.

Modes of gravitational instability
      It was the English physicist and mathematician Sir James Jeans who in 1902 first provided a quantitative criterion for the picture of gravitational instability speculated on by Newton. Jeans considered the idealized initial state of a homogeneous static gas of infinite extent and uniform temperature and asked under what conditions the compressed portions of a small sinusoidal fluctuation would continue to contract gravitationally and become denser and denser (eventually to form galaxies and stars presumably) rather than re-expand because of the increased internal pressure. He found that for gravitational instability to occur the wavelength of the density fluctuation had to exceed a certain critical value, now called the Jeans length, which is proportional to the square root of the ratio of temperature to density.
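      A minimal sketch of the criterion, taking the Jeans length as roughly the sound speed times the gravitational free-fall time, so that it indeed scales as the square root of temperature over density; the cloud temperature and density below are wholly illustrative:

```python
# Jeans length for a uniform gas (illustrative values)
import math

G, k_B, m_H = 6.674e-8, 1.38e-16, 1.67e-24   # cgs constants

def jeans_length(T, rho, mu=1.0):
    """Jeans length in cm for temperature T (K) and density rho (g/cm^3)."""
    c_s = math.sqrt(k_B * T / (mu * m_H))    # isothermal sound speed
    return c_s * math.sqrt(math.pi / (G * rho))

lam = jeans_length(T=100.0, rho=1e-22)       # a cool, tenuous cloud (assumed)
print(f"Jeans length: {lam:.1e} cm (~{lam / 9.46e17:.0f} light-years)")
```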

      Two new considerations enter to modify the picture in a universe that begins with a hot big bang: the expansion of the background and the coexistence with matter of a thermal radiation field. The expansion of the background causes the dense portions of unstable small fluctuations to grow much more slowly, at least at first, than they would in the static Jeans theory—as a power of time rather than exponentially. The thermal radiation field causes greater complications.

      First, the existence in the universe of a component other than ordinary matter, namely radiation, means that one has to specify—particularly in the early stages of the expansion when the energy density of radiation dominates that of matter—whether the radiation field fluctuates together with matter or whether it maintains a uniform level inside which matter fluctuates. Density fluctuations of the first type are called adiabatic perturbations, and those of the second type isothermal (isocurvature) perturbations (because the temperature of the radiation field remains uniform in space and the matter temperature locally equals that of the radiation when they are well coupled).

      In the early universe when the radiation temperature was high and matter existed as a highly ionized plasma, neither adiabatic nor isothermal fluctuations could grow, because the intense radiation field resisted compression and, through its strong coupling to ionized matter, prevented the latter also from contracting relative to the overall expansion of the universe. Indeed, the tendency for the excess radiation in the compressed regions of adiabatic fluctuations to try to diffuse out of such regions implies that such fluctuations tend to decay. Therefore, given an arbitrary initial spectrum of adiabatic fluctuations, only those with a large enough scale can survive the decay for the age of the universe up to that point.

      Decoupling between ordinary matter and radiation occurs when the temperature drops low enough for free (hydrogen) ions and electrons to recombine. When electrons become attached to atoms, they have a much smaller cross section for interaction with photons than when they were free. This occurs for reasonable cosmological models at a temperature of about 4,000 K. At this time, by coincidence (but perhaps ultimately one of great physical significance), the energy density of radiation also begins to drop below the rest-energy density of matter, and the universe turns from being radiation-dominated to being matter-dominated. Past the decoupling epoch, the density fluctuations of the type previously labeled isothermal can grow if they satisfy the original Jeans criterion, whereas those previously labeled adiabatic can grow only if they have survived the prior epoch of damping. Calculations indicate that the smallest unstable fragment of the former type has a mass comparable to that of a globular cluster, while that of the latter type has a mass comparable to that of a giant galaxy or of a large cluster of galaxies, depending on various assumptions.

      Among these assumptions is the choice of the form of the dark matter or hidden mass. If the hidden mass is not ordinary matter but instead is contained in exotic forms of elementary particles whose properties have yet to be deciphered, then one needs to specify if and when this hidden mass decouples from the thermal radiation field. Two extremes are often considered: “warm” dark matter and “cold” dark matter. Warm dark matter is typified by such hypothetical particles as neutrinos that have small but nonzero rest mass, which decouple relatively early from the radiation field. Particles of this sort stream freely (nearly at the speed of light in the early universe) and erase initial fluctuations on all scales smaller than a critical coherence length (analogous to but larger than the critical scale introduced by photons for adiabatic fluctuations), above which self-gravity can finally cause growth (when the neutrinos are moving much less rapidly). Cold dark matter is typified by particles that interact only weakly with radiation and ordinary matter and that have sufficient rest mass so as always to possess random thermal motions much less than the speed of light at any stage relevant to the problem of galaxy formation. Density fluctuations of such particles can grow in a fashion similar to that described for isothermal fluctuations of ordinary matter after decoupling; therefore, on the scale of galaxies and larger groups, cold dark matter possesses no coherence length. In either picture, warm or cold, the dark component of the universe supposedly forms a lumpy background into whose concentrations ordinary matter falls eventually to produce galaxies and stars.

Top-down and bottom-up theories
      The scenarios described in the previous subsection turn out, in the extremes, to lead to two different pictures for the origin of large-scale structure in the universe, which can be given the labels “top-down” and “bottom-up.” In top-down theories the regions with the largest scale sizes, comparable to superclusters and clusters, collapse first, yielding flat gaseous “pancakes” of ordinary matter (a description coined by the primary proponent of this theory, the physicist Yakov B. Zeldovich of Russia) from which galaxies condense. In bottom-up theories the regions with the smallest scale sizes, comparable to galaxies or smaller, form first, giving rise to freely moving entities that subsequently aggregate gravitationally (perhaps by a hierarchical process) to produce clusters and superclusters of galaxies. Adiabatic fluctuations of ordinary matter tend to yield a top-down picture, and isothermal fluctuations a bottom-up picture. When hidden mass is added to the calculations, warm dark matter tends to give a top-down picture, and cold dark matter a bottom-up picture.

      To make comparisons with observational data, the spectrum (dependence of amplitude on size scale) of the initial fluctuations is needed as input to numerical simulations on a computer to follow the subsequent growth of structure. The shape of the spectrum is specified by heuristic arguments given first by Zeldovich and the American cosmologist Edward R. Harrison, and the results were later rederived from a first-principles calculation of a quantum origin of the universe involving cosmic inflation (see below). Workers must use, however, measurements of the anisotropy of the cosmic microwave background to obtain (or set limits on) the absolute starting amplitudes. When this is done and models are computed, it is found that top-down theories tend to give a better but still imperfect account of the observed spatial distributions (flattened superclusters and large holes and voids) and streaming motions of galaxies. Unfortunately, in such models cluster formation and galaxy formation take place at a redshift z of less than 1, too recently relative to the present epoch to be compatible with the observational data. The measurements of the anisotropies of the cosmic microwave background severely limit the amount of power that can exist in the starting adiabatic perturbations, and so the growth to observed structures takes too long to complete. Moreover, neutrinos with their large coherence length probably cannot explain the hidden mass that is inferred to reside in the dark halos of individual galaxies.

      Bottom-up theories that include cold dark matter can yield objects with the proper masses (i.e., dark halos), density profiles, and angular momenta to account for the observed galaxies, but they fail to explain the largest-scale structures (on the order of a few times 10^8 light-years) seen in the clustering data. A possible escape from this difficulty lies in the suggestion that the distribution of galaxies (made mostly of ordinary matter) may not trace the distribution of mass (made mostly of cold dark matter). This scheme, called biased galaxy formation, may have a physical basis if it can be argued that galaxies form only from fluctuations that exceed a certain threshold level. Local upward fluctuations in density on a small scale have a better chance to exceed the threshold if they happen to lie in a large region that has somewhat higher than average densities. This bias then produces galaxies with positions that correlate on a large scale better than the underlying distribution of dark matter whose gravitational clustering has no such threshold effect. Unfortunately, simulations of this process show that no amount of biasing can reproduce both the large-scale spatial structure and the magnitude of the observed large-scale streaming motions.

      On the problem of the formation of galaxies and large-scale structure by purely gravitational means, therefore, cosmologists face the following dilemma. The universe in the large appears to require aspects of both top-down and bottom-up theories. Perhaps this implies that the hidden mass consists of roughly equal mixtures of warm dark matter and cold dark matter, but adopting such a solution seems rather artificial without additional supporting evidence.

Unorthodox theories of clustering and galaxy formation
      Given the somewhat unsatisfactory state of affairs with gravitational theories for the origin of large-scale structure in the universe, some cosmologists have abandoned the orthodox approach altogether and have sought alternative mechanisms. One of the first to be considered was primordial turbulence. This idea enjoys little current favour for a variety of reasons, the most severe being the following. Because it tends to decay over time, turbulence of a magnitude sufficient to cause galaxy formation after decoupling would have had to be much larger during earlier epochs. This seems both unlikely and unnatural. Too delicate a balance is required for primordial turbulence to produce galaxies rather than, say, black holes.

      Another suggestion is that energetic galactic explosions due to the formation of a first wave of massive stars may have compressed large shells of intergalactic gas that subsequently became the sites for further galaxy formation and more explosions. Such a picture is attractive because it predicts large holes and voids with galaxies at the interfaces, but it does not avoid the criticism that a “seed” galaxy needs to be formed at the centre of each shell by some other process. If such a process exists, why should it not be the dominant mechanism?

      Finally, there is a suggestion that galaxy and cluster formation might take place by accretion around “cosmic strings.” Cosmic strings, long strands or loops of mass-energy, are a consequence of some theories of elementary particle physics. They are envisaged to arise from phase transitions in the very early universe in a fashion analogous to the way faults can occur in a crystal that suffers dislocations because of imperfect growth from, say, a liquid medium. The dynamic properties of cosmic strings are imperfectly understood, but arguments exist that suggest they may give a clustering hierarchy similar to that observed for galaxies. Unfortunately, the same particle physics that produces cosmic strings also produces magnetic monopoles (isolated magnetic charges), whose possible abundance in the universe can be constrained by observations and experiments to lie below very low limits. Particle physicists like to explain the absence of magnetic monopoles in the Cosmos by invoking for the very early universe the mechanism of inflation (see below). The same mechanism would also inflate away cosmic strings.

      In summary, it can be seen that mechanisms alternative to the growth of small initial fluctuations by self-gravitation all have their own difficulties. Most astronomers hope some dramatic new observation or new idea may yet save the gravitational instability approach, whose strongest appeal has always been the intuitive notion that the force that dominates the astronomical universe, gravity, will automatically promote the growth of irregularities. But, until a complete demonstration is provided, the lack of a simple convincing picture of how galaxies form and cluster will remain one of the prime failings of the otherwise spectacularly successful hot big bang theory.

The extragalactic distance scale and Hubble's constant
      It was noted earlier that the galaxies in the Virgo cluster had an average recession velocity v (as measured by their redshift) of roughly 1,000 km/sec with respect to the Local Group. If the distance r to the Virgo cluster is 5 × 10^7 light-years and if the Virgo galaxies can be assumed to be far enough away to partake in the general Hubble flow, then the application of the Hubble law, v = H_0r, yields Hubble's constant as H_0 = 20 km/sec per million light-years. The reciprocal of Hubble's constant is called the Hubble time; for the value given above, H_0^-1 = 1.5 × 10^10 years.
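      The conversion from the stated value of H_0 to the Hubble time is a one-line computation:

```python
# Hubble time from H_0 = 20 km/sec per million light-years
KM_PER_MLY   = 9.46e18      # kilometres in one million light-years
SEC_PER_YEAR = 3.16e7

H0 = 20.0 / KM_PER_MLY      # Hubble's constant in 1/s
print(f"Hubble time: {(1.0 / H0) / SEC_PER_YEAR:.1e} years")   # ~1.5e10
```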

      The most naive interpretation for the Hubble time is that of a free expansion of the universe, wherein a Hubble time ago the distant galaxies started receding from one another (in particular, from the Milky Way system), reaching a distance r = vH_0^-1 in time H_0^-1 if they fly away at speed v, the fastest receding galaxies getting the farthest away. Rearranging terms yields the Hubble law v = H_0r. The interpretation is naive in two respects: (1) it ignores the role of gravitation in slowing down the expansion, so that Hubble's “constant” does not always have the value it does at the present epoch; and (2) it overlooks the part played by gravitation in regulating the global structure of space-time, so that the interpretation of the “velocity” v and “distance” r is modified when distances or redshifts approach values such that v given by the above formula becomes comparable to or exceeds the speed of light. Nevertheless, as will be seen in the discussion of relativistic cosmologies below, the Hubble time does provide a useful rough estimate for the age of the universe.

      The exact value of Hubble's constant is an issue of great controversy among astronomers. Modern estimates for H_0 range from 15 to 30 km/sec per million light-years. The source of the discrepancy lies partly in the interpretation of the amount of distortion superimposed atop a pure Hubble flow by the gravitational effects of the Local Supercluster in which the Local Group and the Virgo cluster are embedded and partly in the different calibrators used or emphasized by different workers for the distances to various extragalactic objects.

      To avoid the first complication, the interpretation of the velocity field in the Local Supercluster, it is possible to examine the redshift-distance relation implied by Sandage's and Tammann's study of 50 Sc I galaxies. There is little controversy that these distant galaxies do empirically satisfy the idealized linear relationship of the Hubble law. The faintest galaxies in the sample have recession velocities of 9,000 km/sec, and, if they lie at the calibrated distance of 600 million light-years, then H_0 = 15 km/sec per million light-years, the same value as Sandage and Tammann derived from their study of the Virgo cluster. Unfortunately, many workers do not accept the determination of Sandage and Tammann of the distances to the nearest Sc I galaxies (in particular, M101). They regard as suspect the technique using the sizes of H II regions as a distance indicator. These astronomers advocate using the relationship found to exist between the luminosity L of a spiral galaxy and the velocity V of its (flat) rotation curve, L proportional to V^4, as a basis for measuring extragalactic distances, and they obtain values for H_0 that lie on the high end of the range cited above.
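      How a luminosity-velocity relation yields distances can be sketched as follows. The calibrator galaxy, the rotation speeds, and the flux ratio below are all hypothetical; the relation is simply taken as L proportional to V^4, combined with the inverse-square law:

```python
# Relative distance from the luminosity-rotation velocity relation
import math

# Calibrator galaxy: known distance, rotation speed, apparent flux (assumed)
d1, V1, f1 = 1e7, 200.0, 1.0       # light-years, km/sec, arbitrary flux units

# Target galaxy: measured rotation speed and apparent flux (assumed)
V2, f2 = 250.0, 1e-4               # km/sec, same arbitrary flux units

L_ratio = (V2 / V1) ** 4           # L proportional to V^4
d2 = d1 * math.sqrt(L_ratio * f1 / f2)   # inverse-square law
print(f"inferred distance: {d2:.1e} light-years")
```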

      As discussed earlier, the classical means of obtaining the distance to the Virgo cluster (a crucial accomplishment) relies on a bootstrap operation to pull the observer up the extragalactic distance ladder one step at a time. The problem with the method is that errors at one level propagate to the next. For this reason, some astronomers prefer using supernova explosions, which can be seen at great distances, to get from the Local Group to the Virgo cluster in one jump. Two basic methods have been developed, one using supernovas of type Ia and the other employing supernovas of type II.

      Type Ia supernovas are believed to arise in interacting binaries from the thermonuclear explosion of a carbon-oxygen white dwarf pushed beyond the Chandrasekhar limit by mass transfer from a neighbouring companion star. In the process a fixed amount of radioactive nickel-56 is believed to be produced, whose subsequent decay into cobalt-56 and then to stable iron-56 is thought to power the entire light curve in these events. As a consequence of the uniformity of the underlying processes, type Ia supernovas serve, in principle, as excellent “standard candles” to obtain extragalactic distances. In practice, however, the assumed uniformity of the underlying conditions has been questioned and remains controversial.

      Type II supernovas arise when evolved massive stars undergo core collapse, a partial rebound, and an expulsion of the (hydrogen-rich) envelope. Except for a scale factor, the shape of the subsequent light curve allows astronomers to infer a changing size for the rapidly expanding atmosphere. The scale can be obtained by measuring the Doppler shift (yielding the velocity, or time-rate of change of the radius, in kilometres per second) of the same layers of gas. Once the absolute size has been fixed, the absolute brightness can be deduced.

      From the deduced absolute brightness and the measured apparent brightness, the distance to the supernova can then be obtained. In principle, the method could be applied to supernovas of all types; in practice, good knowledge of the opacities is needed to correct for the difference in depth observed in the spectral lines (for the Doppler-shift measurements) and in the continuum light (for the light-curve measurements). Such knowledge is reliable only when the composition of the atmospheric layers is rich in hydrogen.
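      The chain of reasoning in the preceding two paragraphs can be condensed into a few lines. Every number below is hypothetical, the initial radius is neglected, and the photosphere is idealized as a blackbody:

```python
# Expanding-photosphere distance estimate for a type II supernova
import math

sigma_SB = 5.67e-5      # Stefan-Boltzmann constant, cgs
LY       = 9.46e17      # light-year, cm

v = 5e8                 # Doppler-measured expansion speed, cm/s (assumed)
t = 30 * 86400.0        # time since explosion, s (assumed)
T = 6000.0              # photospheric temperature from the spectrum, K (assumed)
F = 1e-12               # measured apparent flux, erg/cm^2/s (assumed)

R = v * t                                   # absolute size of the photosphere
L = 4 * math.pi * R**2 * sigma_SB * T**4    # absolute brightness (blackbody)
d = math.sqrt(L / (4 * math.pi * F))        # inverse-square law
print(f"distance: {d / LY:.1e} light-years")
```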

      The supernova techniques tend to yield values of H_0 toward the low end of the range 15 to 30 km/sec per million light-years. For the sake of definiteness, this article adopts the value H_0 = 20 km/sec per million light-years, but it should be noted that uncertainties of the magnitude discussed still remain. The corollary of this warning is that the distances quoted for extragalactic objects are also uncertain by the same factor.

Cosmological models

Early cosmological ideas
      Immediate issues that arise when anyone contemplates the universe at large are whether space and time are infinite or finite. And after many centuries of thought by some of the best minds, humanity has still not arrived at conclusive answers to these questions. Aristotle's answer was that the material universe must be spatially finite, for if stars extended to infinity, they could not perform a complete rotation around the Earth in 24 hours. Space must then itself also be finite because it is merely a receptacle for material bodies. On the other hand, the heavens must be temporally infinite, without beginning or end, since they are imperishable and cannot be created or destroyed.

      Except for the infinity of time, these views came to be accepted religious teachings in Europe before the period of modern science. The most notable person to publicly express doubts about restricted space was the Italian philosopher-mathematician Giordano Bruno, who asked the obvious question: if there is a boundary or edge to space, what is on the other side? For his advocacy of an infinity of suns and earths, he was burned at the stake in 1600.

      In 1610 Kepler provided a profound reason for believing that the number of stars in the universe had to be finite. If there were an infinity of stars, he argued, then the sky would be completely filled with them and night would not be dark! This point was rediscussed by the astronomers Edmond Halley and Jean-Philippe-Loys de Chéseaux of Switzerland in the 18th century, but it was not popularized as a paradox until Heinrich Wilhelm Olbers of Germany took up the problem in the 19th century. The difficulty became potentially very real with Hubble's measurement of the enormous extent of the universe of galaxies with its large-scale homogeneity and isotropy. His discovery of the systematic recession of the galaxies provided an escape, however. At first people thought that the redshift effect alone would suffice to explain why the sky is dark at night—namely, that the light from the stars in distant galaxies would be redshifted to long wavelengths beyond the visible regime. The modern consensus is, however, that a finite age for the universe is a far more important effect. Even if the universe is spatially infinite, photons from very distant galaxies simply do not have the time to travel to the Earth because of the finite speed of light. There is a spherical surface, the cosmic event horizon (roughly 10^10 light-years in radial distance from the Earth at the current epoch), beyond which nothing can be seen even in principle; and the number (roughly 10^10) of galaxies within this cosmic horizon, the observable universe, is too small to make the night sky bright.

      When one looks to great distances, one is seeing things as they were a long time ago, again because light takes a finite time to travel to Earth. Over such great spans, do the classical notions of Euclid concerning the properties of space necessarily continue to hold? The answer given by Einstein was: No, the gravitation of the mass contained in cosmologically large regions may warp one's usual perceptions of space and time; in particular, the Euclidean postulate that parallel lines never cross need not be a correct description of the geometry of the actual universe. And in 1917 Einstein presented a mathematical model of the universe in which the total volume of space was finite yet had no boundary or edge. The model was based on his theory of general relativity that utilized a more generalized approach to geometry devised in the 19th century by the German mathematician Bernhard Riemann.

Gravitation and the geometry of space-time
      The physical foundation of Einstein's view of gravitation, general relativity, rests on two empirical findings that he elevated to the status of basic postulates. The first postulate is the relativity principle: local physics is governed by the theory of special relativity. The second postulate is the equivalence principle: there is no way for an observer to distinguish locally between gravity and acceleration. The motivation for the second postulate comes from Galileo's observation that all objects—independent of mass, shape, colour, or any other property—accelerate at the same rate in a (uniform) gravitational field.

      Einstein's theory of special relativity, which he developed in 1905, had as its basic premises (1) the notion (also dating back to Galileo) that the laws of physics are the same for all inertial observers and (2) the constancy of the speed of light in a vacuum—namely, that the speed of light has the same value (3 × 10^10 cm/sec) for all inertial observers independent of their motion relative to the source of the light. Clearly, this second premise is incompatible with Euclidean and Newtonian precepts of absolute space and absolute time, resulting in a program that merged space and time into a single structure, with well-known consequences. The space-time structure of special relativity is often called “flat” because, among other things, the propagation of photons is easily represented on a flat sheet of graph paper with equal-sized squares. Let each tick on the vertical axis represent one light-year (9.46 × 10^17 cm) of distance in the direction of the flight of the photon, and each tick on the horizontal axis represent the passage of one year (3.16 × 10^7 sec) of time. The propagation path of the photon is then a 45° line because it flies one light-year in one year (with respect to the space and time measurements of all inertial observers no matter how fast they move relative to the photon).

      The principle of equivalence in general relativity allows the locally flat space-time structure of special relativity to be warped by gravitation, so that (in the cosmological case) the propagation of the photon over thousands of millions of light-years can no longer be plotted on a globally flat sheet of paper. To be sure, the curvature of the paper may not be apparent when only a small piece is examined, thereby giving the local impression that space-time is flat (i.e., satisfies special relativity). It is only when the graph paper is examined globally that one realizes it is curved (i.e., satisfies general relativity).

      In Einstein's 1917 model of the universe, the curvature occurs only in space, with the graph paper being rolled up into a cylinder on its side, a loop around the cylinder at constant time having a circumference of 2πR—the total spatial extent of the universe. Notice that the “radius of the universe” is measured in a “direction” perpendicular to the space-time surface of the graph paper. Since the ringed space axis corresponds to one of three dimensions of the actual world (any will do since all directions are equivalent in an isotropic model), the radius of the universe exists in a fourth spatial dimension (not time) which is not part of the real world. This fourth spatial dimension is a mathematical artifice introduced to represent diagrammatically the solution (in this case) of equations for curved three-dimensional space that need not refer to any dimensions other than the three physical ones. Photons traveling in a straight line in any physical direction have trajectories that go diagonally (at 45° angles to the space and time axes) from corner to corner of each little square cell of the space-time grid; thus, they describe helical paths on the cylindrical surface of the graph paper, making one turn after traveling a spatial distance 2πR. In other words, always flying dead ahead, photons would return to where they started from after going a finite distance without ever coming to an edge or boundary. The distance to the “other side” of the universe is therefore πR, and it would lie in any and every direction; space would be closed on itself.

      Now, except by analogy with the closed two-dimensional surface of a sphere that is uniformly curved toward a centre in a third dimension lying nowhere on the two-dimensional surface, no three-dimensional creature can visualize a closed three-dimensional volume that is uniformly curved toward a centre in a fourth dimension lying nowhere in the three-dimensional volume. Nevertheless, three-dimensional creatures could discover the curvature of their three-dimensional world by performing surveying experiments of sufficient spatial scope. They could draw circles, for example, by tacking down one end of a string and tracing along a single plane the locus described by the other end when the string is always kept taut in between (a straight line) and walked around by a surveyor. In Einstein's universe, if the string were short compared to the quantity R, the circumference of the circle divided by the length of the string (the circle's radius) would nearly equal 2π = 6.2831853 . . . , thereby fooling the three-dimensional creatures into thinking that Euclidean geometry gives a correct description of their world. However, the ratio of circumference to length of string would become less than 2π when the length of string became comparable to R. Indeed, if a string of length πR could be pulled taut to the antipode of a positively curved universe, the ratio would go to zero. In short, at the tacked-down end the string could be seen to sweep out a great arc in the sky from horizon to horizon and back again; yet, to make the string do this, the surveyor at the other end need only walk around a circle of vanishingly small circumference.
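      The surveying experiment can be put in quantitative form: on a three-dimensional space of constant positive curvature with radius R, a circle of geodesic radius s has circumference 2πR sin(s/R). The short sketch below tabulates the ratio of circumference to radius, which falls from 2π toward zero as s approaches πR:

```python
# Circumference-to-radius ratio for circles in a positively curved space
import math

R = 1.0                                   # radius of curvature (arbitrary units)
for s in (0.01, 0.5, 1.5, 3.0, math.pi - 0.01):
    C = 2 * math.pi * R * math.sin(s / R)
    print(f"s/R = {s/R:4.2f}: C/s = {C / s:.4f}")   # 2*pi = 6.2832 for s << R
```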

      To understand why gravitation can curve space (or more generally, space-time) in such startling ways, consider the following thought experiment that was originally conceived by Einstein. Imagine an elevator in free space accelerating upward, from the viewpoint of a woman in inertial space, at a rate numerically equal to g, the gravitational field at the surface of the Earth. Let this elevator have parallel windows on two sides, and let the woman shine a brief pulse of light toward the windows. She will see the photons enter close to the top of the near window and exit near the bottom of the far window because the elevator has accelerated upward in the interval it takes light to travel across the elevator. For her, photons travel in a straight line, and it is merely the acceleration of the elevator that has caused the windows and floor of the elevator to curve up to the flight path of the photons.

      Let there now be a man standing inside the elevator. Because the floor of the elevator accelerates him upward at a rate g, he may—if he chooses to regard himself as stationary—think that he is standing still on the surface of the Earth and is being pulled to the ground by its gravitational field g. Indeed, in accordance with the equivalence principle, without looking out the windows (the outside is not part of his local environment), he cannot perform any local experiment that would inform him otherwise. Let the woman shine her pulse of light. The man sees, just like the woman, that the photons enter near the top edge of one window and exit near the bottom of the other. And just like the woman, he knows that photons propagate in straight lines in free space. (By the relativity principle, they must agree on the laws of physics if they are both inertial observers.) However, since he actually sees the photons follow a curved path relative to himself, he concludes that they must be bent by the force of gravity. The woman tries to tell him there is no such force at work; he is not an inertial observer. Nonetheless, he has the solidity of the Earth beneath him, so he insists on attributing his acceleration to the force of gravity. According to Einstein, they are both right. There is no need to distinguish locally between acceleration and gravity—the two are in some sense equivalent. But if that is the case, then it must be true that gravity—“real” gravity—can actually bend light. And indeed it can, as many experiments have shown since Einstein's first discussion of the phenomenon.

      It was the genius of Einstein to go even further. Rather than speak of the force of gravitation having bent the photons into a curved path, might it not be more fruitful to think of photons as always flying in straight lines—in the sense that a straight line is the shortest distance between two points—and that what really happens is that gravitation bends space-time? In other words, perhaps gravitation is curved space-time, and photons fly along the shortest paths possible in this curved space-time, thus giving the appearance of being bent by a “force” when one insists on thinking that space-time is flat. The utility of taking this approach is that it becomes automatic that all test bodies fall at the same rate under the “force” of gravitation, for they are merely following their natural trajectories in a background space-time that is curved in a certain fashion independent of the test bodies. What was a minor miracle for Galileo and Newton becomes the most natural thing in the world for Einstein.

      To complete the program and to conform with Newton's theory of gravitation in the limit of weak curvature (weak field), the source of space-time curvature would have to be ascribed to mass (and energy). The mathematical expression of these ideas constitutes Einstein's theory of general relativity, one of the most beautiful artifacts of pure thought ever produced. The American physicist John Archibald Wheeler and his colleagues summarized Einstein's view of the universe in these terms:

Curved spacetime tells mass-energy how to move;
mass-energy tells spacetime how to curve.

      Contrast this with Newton's view of the mechanics of the heavens:

Force tells mass how to accelerate;
mass tells gravity how to exert force.

      Notice therefore that Einstein's worldview is not merely a quantitative modification of Newton's picture (which is also possible via an equivalent route using the methods of quantum field theory) but represents a qualitative change of perspective. And modern experiments have amply justified the fruitfulness of Einstein's alternative interpretation of gravitation as geometry rather than as force. His theory would have undoubtedly delighted the Greeks.

Relativistic cosmologies
Einstein's model
      To derive his 1917 cosmological model, Einstein made three assumptions that lay outside the scope of his equations. The first was to suppose that the universe is homogeneous and isotropic in the large (i.e., the same everywhere on average at any instant in time), an assumption that the English astrophysicist Edward A. Milne later elevated to an entire philosophical outlook by naming it the cosmological principle. Given the success of the Copernican revolution, this outlook is a natural one. Newton himself had it implicitly in mind in his letter to Bentley (see above) when he took the initial state of the Cosmos to be everywhere the same before it developed “ye Sun and Fixt stars.”

      The second assumption was to suppose that this homogeneous and isotropic universe had a closed spatial geometry. As described in the previous section, the total volume of a three-dimensional space with uniform positive curvature would be finite but possess no edges or boundaries (to be consistent with the first assumption).

      The third assumption made by Einstein was that the universe as a whole is static—i.e., its large-scale properties do not vary with time. This assumption, made before Hubble's observational discovery of the expansion of the universe, was also natural; it was the simplest approach, as Aristotle had discovered, if one wished to avoid a discussion of a creation event. Indeed, the philosophical attraction of the notion that the universe on average is not only homogeneous and isotropic in space but also constant in time was so appealing that a school of English cosmologists—Hermann Bondi, Fred Hoyle, and Thomas Gold—would call it the perfect cosmological principle and carry its implications in the 1950s to the ultimate refinement in the so-called steady state model.

      To his great chagrin Einstein found in 1917 that with his three adopted assumptions, his equations of general relativity—as originally written down—had no meaningful solutions. To obtain a solution, Einstein realized that he had to add to his equations an extra term, which came to be called the cosmological constant. If one speaks in Newtonian terms, the cosmological constant could be interpreted as a repulsive force of unknown origin that could exactly balance the attraction of gravitation of all the matter in Einstein's closed universe and keep it from moving. The inclusion of such a term in a more general context, however, meant that the universe in the absence of any mass-energy (i.e., consisting of a vacuum) would not have a space-time structure that was flat (i.e., would not have satisfied the dictates of special relativity exactly). Einstein was prepared to make such a sacrifice only very reluctantly, and, when he later learned of Hubble's discovery of the expansion of the universe and realized that he could have predicted it had he only had more faith in the original form of his equations, he regretted the introduction of the cosmological constant as the “biggest blunder” of his life. Ironically, recent theoretical developments in particle physics suggest that in the early universe there may very well have been a nonzero value to the cosmological constant and that this value may be intimately connected with precisely the nature of the vacuum state (see below).

De Sitter's model
      It was also in 1917 that the Dutch astronomer Willem de Sitter recognized that he could obtain a static cosmological model differing from Einstein's simply by removing all matter. The solution remains stationary essentially because there is no matter to move about. If some test particles are reintroduced into the model, the cosmological term would propel them away from each other. Astronomers now began to wonder if this effect might not underlie the recession of the spirals.

Friedmann-Lemaître models
      In 1922 Aleksandr A. Friedmann, a Russian meteorologist and mathematician, and in 1927 Georges Lemaître, the aforementioned Belgian cleric, independently discovered solutions to Einstein's equations that contained realistic amounts of matter. These evolutionary models correspond to big bang cosmologies. Friedmann and Lemaître adopted Einstein's assumption of spatial homogeneity and isotropy (the cosmological principle). They rejected, however, his assumption of time independence and considered both positively curved spaces (“closed” universes) and negatively curved spaces (“open” universes). The difference between the approaches of Friedmann and Lemaître is that the former set the cosmological constant equal to zero, whereas the latter retained the possibility that it might have a nonzero value. To simplify the discussion, only the Friedmann models are considered here.

      The decision to abandon a static model meant that the Friedmann models evolve with time. As such, neighbouring pieces of matter have recessional (or contractional) phases when they separate from (or approach) one another with an apparent velocity that increases linearly with increasing distance. Friedmann's models thus anticipated Hubble's law before it had been formulated on an observational basis. It was Lemaître, however, who had the good fortune of deriving the results at the time when the recession of the galaxies was being recognized as a fundamental cosmological observation, and it was he who clarified the theoretical basis for the phenomenon.
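
      Hubble's law can be written v = H₀d. A minimal sketch, assuming for H₀ the value of 20 kilometres per second per million light-years used later in this article:

    def recession_velocity(d_million_ly, H0=20.0):
        # Hubble's law v = H0 * d, with H0 in km/s per million light-years
        return H0 * d_million_ly   # result in km/s

    for d in (10.0, 100.0, 1000.0):   # distances in millions of light-years
        print(d, recession_velocity(d))
    # Doubling the distance doubles the apparent recession velocity.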

      The geometry of space in Friedmann's closed models is similar to that of Einstein's original model; however, there is a curvature to time as well as one to space. Unlike Einstein's model, where time runs eternally at each spatial point on an uninterrupted horizontal line that extends infinitely into the past and future, there is a beginning and end to time in Friedmann's version of a closed universe when material expands from or is recompressed to infinite densities. These instants are called the instants of the “big bang” and the “big squeeze,” respectively. The global space-time diagram for the middle half of the expansion-compression phases can be depicted as a barrel lying on its side. The space axis corresponds again to any one direction in the universe, and it wraps around the barrel. Through each spatial point runs a time axis that extends along the length of the barrel on its (space-time) surface. Because the barrel is curved in both space and time, the little squares in the grid of the curved sheet of graph paper marking the space-time surface are of nonuniform size, stretching to become bigger when the barrel broadens (universe expands) and shrinking to become smaller when the barrel narrows (universe contracts).

      It should be remembered that only the surface of the barrel has physical significance; the dimension off the surface toward the axle of the barrel represents the fourth spatial dimension, which is not part of the real three-dimensional world. The space axis circles the barrel and closes upon itself after traversing a circumference equal to 2πR, where R, the radius of the universe (in the fourth dimension), is now a function of the time t. In a closed Friedmann model, R starts equal to zero at time t = 0 (not shown in barrel diagram), expands to a maximum value at time t = tₘ (the middle of the barrel), and recontracts to zero (not shown) at time t = 2tₘ, with the value of tₘ dependent on the total amount of mass that exists in the universe.

      Imagine now that galaxies reside on equally spaced tick marks along the space axis. Each galaxy on average does not move spatially with respect to its tick mark in the spatial (ringed) direction but is carried forward horizontally by the march of time. The total number of galaxies on the spatial ring is conserved as time changes, and therefore their average spacing increases or decreases as the total circumference 2πR on the ring increases or decreases (during the expansion or contraction phases). Thus, without in a sense actually moving in the spatial direction, galaxies can be carried apart by the expansion of space itself. From this point of view, the recession of galaxies is not a “velocity” in the usual sense of the word. For example, in a closed Friedmann model, there could be galaxies that started, when R was small, very close to the Milky Way system on the opposite side of the universe. Now, 10¹⁰ years later, they are still on the opposite side of the universe but at a distance much greater than 10¹⁰ light-years away. They reached those distances without ever having had to move (relative to any local observer) at speeds faster than light—indeed, in a sense without having had to move at all. The separation rate of nearby galaxies can be thought of as a velocity without confusion in the sense of Hubble's law, if one wants, but only if the inferred velocity is much less than the speed of light.

      On the other hand, if the recession of the galaxies is not viewed in terms of a velocity, then the cosmological redshift cannot be viewed as a Doppler shift. How, then, does it arise? The answer is contained in the barrel diagram when one notices that, as the universe expands, each small cell in the space-time grid also expands. Consider the propagation of electromagnetic radiation whose wavelength initially spans exactly one cell length (for simplicity of discussion), so that its head lies at a vertex and its tail at one vertex back. Suppose an elliptical galaxy emits such a wave at some time t₁. The head of the wave propagates from corner to corner on the little square grids that look locally flat, and the tail propagates from corner to corner one vertex back. At a later time t₂, a spiral galaxy begins to intercept the head of the wave. At time t₂, the tail is still one vertex back, and therefore the wave train, still containing one wavelength, now spans one current spatial grid spacing. In other words, the wavelength has grown in direct proportion to the linear expansion factor of the universe. Since the same conclusion would have held if n wavelengths had been involved instead of one, all electromagnetic radiation from a given object will show the same cosmological redshift if the universe (or, equivalently, the average spacing between galaxies) was smaller at the epoch of transmission than at the epoch of reception. Each wavelength will have been stretched in direct proportion to the expansion of the universe in between.
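
      The conclusion can be compressed into a one-line rule: 1 + z equals the factor by which the universe has expanded between emission and reception. A minimal sketch:

    def cosmological_redshift(R_emit, R_receive):
        # Wavelengths stretch in proportion to the expansion, so
        # 1 + z = R(reception) / R(emission)
        return R_receive / R_emit - 1.0

    # If the universe has doubled in size since emission, every
    # wavelength has doubled, i.e. z = 1:
    print(cosmological_redshift(1.0, 2.0))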

      A nonzero peculiar velocity for an emitting galaxy with respect to its local cosmological frame can be taken into account by Doppler-shifting the emitted photons before applying the cosmological redshift factor; i.e., the observed redshift would be a product of two factors. When the observed redshift is large, one usually assumes that the dominant contribution is of cosmological origin. When this assumption is valid, the redshift is a monotonic function of both distance and time during the expansional phase of any cosmological model. Thus, astronomers often use the redshift z as a shorthand indicator of both distance and elapsed time. Following from this, the statement “object X lies at z = a” means that “object X lies at a distance associated with redshift a”; the statement “event Y occurred at redshift z = b” means that “event Y occurred a time ago associated with redshift b.”
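
      A sketch of this two-factor composition follows; the use of the radial relativistic Doppler formula for the peculiar-velocity factor is an illustrative assumption:

    import math

    def observed_redshift(z_cosmo, v_peculiar, c=2.998e5):
        # Doppler factor from a radial peculiar velocity (in km/s),
        # multiplied by the cosmological stretch factor 1 + z_cosmo
        doppler = math.sqrt((1 + v_peculiar / c) / (1 - v_peculiar / c))
        return doppler * (1 + z_cosmo) - 1.0

    # A source at cosmological redshift 0.5 receding locally at 600 km/s
    # shows a total redshift slightly larger than 0.5:
    print(observed_redshift(0.5, 600.0))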

      The open Friedmann models differ from the closed models in both spatial and temporal behaviour. In an open universe the total volume of space and the number of galaxies contained in it are infinite. The three-dimensional spatial geometry is one of uniform negative curvature in the sense that, if circles are drawn with very large lengths of string, the ratio of circumference to length of string is greater than 2π. The temporal history begins again with expansion from a big bang of infinite density, but now the expansion continues indefinitely, and the average density of matter and radiation in the universe would eventually become vanishingly small. Time in such a model has a beginning but no end.

The Einstein–de Sitter universe
      In 1932 Einstein and de Sitter proposed that the cosmological constant should be set equal to zero, and they derived a homogeneous and isotropic model that provides the separating case between the closed and open Friedmann models; i.e., Einstein and de Sitter assumed that the spatial curvature of the universe is neither positive nor negative but rather zero. The spatial geometry of the Einstein–de Sitter universe is Euclidean (infinite total volume), but space-time is not globally flat (i.e., not exactly the space-time of special relativity). Time again commences with a big bang and the galaxies recede forever, but the recession rate (Hubble's “constant”) asymptotically coasts to zero as time advances to infinity.

      Because the geometry of space and the gross evolutionary properties are uniquely defined in the Einstein–de Sitter model, many people with a philosophical bent have long considered it the most fitting candidate to describe the actual universe. Around 1980 strong theoretical support for this viewpoint came from considerations of particle physics (the model of inflation to be discussed below), and mounting, though not yet definitive, support also seems to be gathering from astronomical observations.

Bound and unbound universes and the closure density
      The different separation behaviours of galaxies at large time scales in the Friedmann closed and open models and the Einstein–de Sitter model allow a different classification scheme than one based on the global structure of space-time. The alternative way of looking at things is in terms of gravitationally bound and unbound systems: closed models where galaxies initially separate but later come back together again represent bound universes; open models where galaxies continue to separate forever represent unbound universes; the Einstein–de Sitter model where galaxies separate forever but slow to a halt at infinite time represents the critical case.

      The advantage of this alternative view is that it focuses attention on local quantities where it is possible to think in the simpler terms of Newtonian physics—attractive forces, for example. In this picture it is intuitively clear that the feature that should distinguish whether or not gravity is capable of bringing a given expansion rate to a halt depends on the amount of mass (per unit volume) present. This is indeed the case; the Newtonian and relativistic formalisms give the same criterion for the critical, or closure, density (in mass equivalent of matter and radiation) that separates closed or bound universes from open or unbound ones. If Hubble's constant at the present epoch is denoted as H₀, then the closure density (corresponding to an Einstein–de Sitter model) equals 3H₀²/8πG, where G is the universal gravitational constant in both Newton's and Einstein's theories of gravity. If the numerical value of Hubble's constant H₀ is 20 kilometres per second per million light-years, then the closure density equals 8 × 10⁻³⁰ g/cm³, the equivalent of about five hydrogen atoms on average per cubic metre of cosmic space. If the actual cosmic average is greater than this value, the universe is bound (closed) and, though currently expanding, will end in a crush of unimaginable proportion. If it is less, the universe is unbound (open) and will expand forever. The result is intuitively plausible since the smaller the mass density, the smaller the role for gravitation, so the more the universe will approach free expansion (assuming that the cosmological constant is zero).
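
      These figures are easily verified. A short sketch in cgs units, using standard values for the physical constants:

    import math

    G = 6.674e-8              # gravitational constant, cm^3 g^-1 s^-2
    km = 1.0e5                # centimetres per kilometre
    Mly = 9.46e23             # centimetres per million light-years
    H0 = 20.0 * km / Mly      # 20 km/s per million light-years, in 1/s

    rho_c = 3 * H0**2 / (8 * math.pi * G)
    print(rho_c)              # ~8e-30 g/cm^3, the closure density

    m_H = 1.67e-24            # mass of a hydrogen atom, g
    print(rho_c / m_H * 1.0e6)   # ~5 hydrogen atoms per cubic metre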

      The mass in galaxies observed directly, when averaged over cosmological distances, is estimated to be only a few percent of the amount required to close the universe. The amount contained in the radiation field (most of which is in the cosmic microwave background) contributes negligibly to the total at present. If this were all, the universe would be open and unbound. However, the hidden mass that has been deduced from various dynamic arguments multiplies the known amount by factors of a few to 10 or more as one considers phenomena of ever-increasing scale—from galaxies to superclusters. Thus, the total average mass density has been estimated to be 20–40 percent or more of the closure density, and many investigators would like to believe that new observations and refined estimates will eventually bring this number up to 100 percent of closure.

The age of the universe
      An indirect method of inferring whether the universe is bound or unbound involves estimates of the age of the universe. The basic idea is as follows. For a given present rate of expansion (i.e., Hubble's constant), it is clear that the deceleration produced by gravitation must act to make the expansion faster in the past and slower in the future. Thus, the age of the universe (in the absence of a cosmological constant) must always be less than the free expansion age, H₀⁻¹, which equals 1.5 × 10¹⁰ years. The bigger the role for gravity, the smaller the true age compared to the Hubble time H₀⁻¹. Since it can be shown that a matter-dominated Einstein–de Sitter universe has a present age two-thirds that of the Hubble time, or 10¹⁰ years, the actual universe (which has been matter-dominated for a long time) is closed if it can be shown to be younger than 10¹⁰ years and open if older than that critical value.
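
      The same value of H₀ reproduces the ages just quoted:

    H0 = 2.11e-18                   # 20 km/s per million light-years, in 1/s
    sec_per_year = 3.156e7

    hubble_time = 1.0 / (H0 * sec_per_year)
    print(hubble_time)              # ~1.5e10 years, the free-expansion age
    print(2.0 * hubble_time / 3.0)  # ~1.0e10 years, Einstein-de Sitter age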

      As previously noted, estimates of the ages of globular cluster stars and of the ages of formation of the radioactive elements, which must be at least as old as the universe itself, give ranges of values that are roughly consistent with the critical value. The formal estimates for globular cluster ages, however, seem somewhat too large to be entirely compatible with the critical value, and some people have interpreted this to imply that either the universe is unbound or the cosmological constant is not zero. These sentiments may be premature, since the errors in the determinations are not small. Moreover, until astronomers arrive at a better understanding of the discrepancy concerning predicted and observed solar neutrino emission, it cannot be claimed that the knowledge of all physics relevant to the theory of stellar structure and evolution is completely secure.

Global observational tests
      Since neither the local test of average mass density nor the age of the oldest accessible objects in the universe has proved decisive in showing whether the universe is bound or unbound, one might investigate large-scale diagnostics of the global structure of space-time to discriminate between closed and open universes. Conducting surveying experiments by means of space exploration of the scope described earlier is of course out of the question. Fortunately, there exist in the universe accessible natural probes with which to explore the deepest reaches of space and time—namely, photons from distant galaxies. To be able to use these probes effectively as diagnostic tools—say, in the apparent-brightness redshift or the angular-size redshift tests of classical observational cosmology—it is important to know the intrinsic properties of the emitting sources and to examine the objects with the largest possible redshifts (so one is going farthest out into space and farthest back in time). Unfortunately, these two goals yield incompatible requirements.

      The problem is that astronomers know the properties of nearby galaxies best—i.e., galaxies as they appear today. The assumption that more distant (and therefore younger) galaxies look the same as they do now becomes more and more suspect as one probes deeper and deeper sources because of the increasing possibility of evolutionary effects (e.g., stellar populations being younger and galaxies not yet having suffered mergers). The difficulty of disentangling the evolutionary effects from the purely cosmological ones remains the biggest obstacle to this line of research. The use of quasars or QSOs fares even worse because, though they are observable at great distances, they have a very large spread in intrinsic luminosities, and they may also suffer from evolutionary effects.

      The phenomena of gravitational lensing of quasars and galaxies into multiple images, arcs, and rings provide novel cosmological probes. For example, the light forming the different images of a lensed quasar travels different ray paths to reach the observer. Intrinsic time variability will therefore result in one image exhibiting a differential time delay with respect to another. Astronomers have exploited the fact that this differential is proportional to the overall size of the system to obtain provisional estimates for the value of the Hubble constant H₀. The probability of lensing of a quasar at high redshift, to cite another example, increases as the average mass density (mostly dark matter) in the Cosmos capable of the gravitational bending of light increases. Hence, the statistics of lensing at high redshifts could, in principle, discriminate between open and closed models of the universe. Unfortunately, the modeling of the sources is too uncertain and the detected events are too rare at present to offer decisive tests.

The ultimate fate of the universe
      In the absence of definitive observational conclusions, one can only speculate on the possible fate of the actual universe. If the universe is unbound, the cosmological expansion will not halt, and eventually the galaxies and stars will all die, leaving the Cosmos a cold, dark, and virtually empty place. If the universe is bound, the mass-energy content in the distant but finite future will come together again; the cosmic background radiation will be blueshifted, raising the temperature of matter and radiation to incredible levels, perhaps to reforge everything in the fiery crucible of the big squeeze. Because of the development of structure in previous epochs, the big squeeze may not occur simultaneously everywhere at the end of time as its explosive counterpart, the big bang, seems to have done at the beginning of time. Discussions of recurring cycles of expansions and contractions thus remain highly speculative.

The hot big bang
      Given the measured radiation temperature of 2.735 K, the energy density of the cosmic microwave background can be shown to be about 1,000 times smaller than the average rest-energy density of ordinary matter in the universe. Thus, the current universe is matter-dominated. If one goes back in time to redshift z, the average number densities of particles and photons were both bigger by the same factor (1 + z)³ because the universe was more compressed by this factor, and the ratio of these two numbers would have maintained its current value of about one hydrogen nucleus, or proton, for every 10⁹ photons. The wavelength of each photon, however, was shorter by the factor 1 + z in the past than it is now; therefore, the energy density of radiation increases faster by one factor of 1 + z than the rest-energy density of matter. Thus, the radiation energy density becomes comparable to the energy density of ordinary matter at a redshift of about 1,000. At redshifts larger than 10,000, radiation would have dominated even over the dark matter of the universe. Between these two values radiation would have decoupled from matter when hydrogen recombined. It is not possible to use photons to observe redshifts larger than about 1,500, because the cosmic plasma at temperatures above 4,000 K is essentially opaque before recombination. One can think of the spherical surface at which this opacity sets in, the surface of last scattering, as an inverted “photosphere” of the observable universe. This surface probably has slight ripples in it that account for the slight anisotropies observed in the cosmic microwave background today. In any case, the earliest stages of the universe's history—for example, when temperatures were 10⁹ K and higher—cannot be examined by light received through any telescope. Clues must be sought by comparing the matter content with theoretical calculations.
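
      The bookkeeping reduces to a single factor of 1 + z. A minimal sketch, taking the present radiation-to-matter ratio of about 1/1,000 quoted above:

    def radiation_to_matter_ratio(z, ratio_today=1.0e-3):
        # Particle and photon number densities both grow as (1 + z)**3
        # looking back, but each photon's energy also grows as (1 + z),
        # so the radiation-to-matter energy ratio grows by one such factor
        return ratio_today * (1.0 + z)

    print(radiation_to_matter_ratio(0))     # matter-dominated today
    print(radiation_to_matter_ratio(999))   # ~1: equality near z = 1,000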

      For this purpose, fortunately, the cosmological evolution of model universes is especially simple and amenable to computation at redshifts much larger than 10,000 (or temperatures substantially above 30,000 K) because the physical properties of the dominant component, photons, then are completely known. In a radiation-dominated early universe, for example, the radiation temperature T is very precisely known as a function of the age of the universe, the time t after the big bang.
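
      In such a radiation-dominated era the temperature falls as the inverse square root of the age. The normalization below, roughly 1.5 × 10¹⁰ K at t = 1 second, is an assumed round value chosen to be consistent with the benchmarks quoted in the next section (about 10¹⁰ K at an age of a few seconds, about 10⁹ K at 1.5 minutes):

    def radiation_temperature(t_seconds, T_at_1s=1.5e10):
        # Radiation-era scaling T ~ t**(-1/2); the normalization T_at_1s
        # is an assumption for illustration
        return T_at_1s / t_seconds**0.5

    for t in (1.0, 3.0, 90.0):   # ages in seconds
        print(t, radiation_temperature(t))
    # ~1.5e10 K at 1 s, ~9e9 K at 3 s, ~1.6e9 K at 90 s (1.5 minutes)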

Primordial nucleosynthesis
      According to the considerations outlined above, at a time t less than 10⁻⁴ second, the creation of matter-antimatter pairs would have been in thermodynamic equilibrium with the ambient radiation field at a temperature T of about 10¹² K. Nevertheless, there was a slight excess of matter particles (e.g., protons) compared to antimatter particles (e.g., antiprotons) of roughly a few parts in 10⁹. This is known because, as the universe aged and expanded, the radiation temperature would have dropped and each antiproton and each antineutron would have annihilated with a proton and a neutron to yield two gamma rays; and later each antielectron would have done the same with an electron to give two more gamma rays. After annihilation, however, the ratio of the number of remaining protons to photons would be conserved in the subsequent expansion to the present day. Since that ratio is known to be one part in 10⁹, it is easy to work out that the original matter-antimatter asymmetry must have been a few parts per 10⁹.

      In any case, after proton-antiproton and neutron-antineutron annihilation but before electron-antielectron annihilation, it is possible to calculate that for every excess neutron there were about five excess protons in thermodynamic equilibrium with one another through neutrino and antineutrino interactions at a temperature of about 10¹⁰ K. When the universe reached an age of a few seconds, the temperature would have dropped significantly below 10¹⁰ K, and electron-antielectron annihilation would have occurred, liberating the neutrinos and antineutrinos to stream freely through the universe. With no neutrino-antineutrino reactions to replenish their supply, the neutrons would have started to decay with a half-life of 10.6 minutes to protons and electrons (and antineutrinos). However, at an age of 1.5 minutes, well before neutron decay went to completion, the temperature would have dropped to 10⁹ K, low enough to allow neutrons to be captured by protons to form a nucleus of heavy hydrogen, or deuterium. (Before that time, the reaction could still have taken place, but the deuterium nucleus would immediately have broken up under the prevailing high temperatures.) Once deuterium had formed, a very fast chain of reactions set in, quickly assembling most of the neutrons and deuterium nuclei with protons to yield helium nuclei. If the decay of neutrons is ignored, an original mix of 10 protons and two neutrons (one neutron for every five protons) would have assembled into one helium nucleus (two protons plus two neutrons), leaving eight protons (eight hydrogen nuclei). This amounts to a helium-mass fraction of 4/12 = 1/3, i.e., 33 percent. A more sophisticated calculation that takes into account the concurrent decay of neutrons and other complications yields a helium-mass fraction in the neighbourhood of 25 percent and a hydrogen-mass fraction of 75 percent, which are close to the deduced primordial values from astronomical observations. This agreement provides one of the primary successes of hot big bang theory.
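
      The bookkeeping of the simplified estimate can be checked in a few lines; as in the text, neutron decay is ignored:

    neutrons, protons = 2, 10        # one neutron for every five protons
    helium_nuclei = neutrons // 2    # each He-4 takes 2 neutrons + 2 protons
    hydrogen_left = protons - 2 * helium_nuclei

    helium_mass_fraction = 4 * helium_nuclei / (neutrons + protons)
    print(hydrogen_left)             # 8 hydrogen nuclei remain
    print(helium_mass_fraction)      # 4/12 = 1/3, i.e., 33 percent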

The deuterium abundance
      Not all of the deuterium formed by the capture of neutrons by protons would be further reacted to produce helium. A small residual can be expected to remain, the exact fraction depending sensitively on the density of ordinary matter existing in the universe when the universe was a few minutes old. The problem can be turned around: given measured values of the deuterium abundance (corrected for various effects), what density of ordinary matter needs to be present at a temperature of 10⁹ K so that the nuclear reaction calculations will reproduce the measured deuterium abundance? The answer is known, and this density of ordinary matter can be expanded by simple scaling relations from a radiation temperature of 10⁹ K to one of 2.735 K. This yields a predicted present density of ordinary matter and can be compared with the density inferred to exist in galaxies when averaged over large regions. The two numbers are within a factor of a few of each other. In other words, the deuterium calculation implies that a substantial fraction of all of the ordinary matter in the universe, and perhaps all of it, has already been seen in observable galaxies. Ordinary matter cannot be the hidden mass of the universe unless a large change occurs in present ideas.
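
      The “simple scaling relations” amount to the statement that, with the baryon-to-photon ratio fixed, the density of ordinary matter falls as the cube of the radiation temperature. In the sketch below the density at 10⁹ K is an illustrative placeholder, not a figure from the text:

    def density_now(rho_then, T_then, T_now=2.735):
        # With a fixed baryon-to-photon ratio, the matter density
        # scales as the cube of the radiation temperature
        return rho_then * (T_now / T_then) ** 3

    rho_at_1e9_K = 2.0e-5     # g/cm^3 at T = 1e9 K (assumed for illustration)
    print(density_now(rho_at_1e9_K, 1.0e9))   # ~4e-31 g/cm^3 at present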

The very early universe
Inhomogeneous nucleosynthesis
      One possible modification concerns models of so-called inhomogeneous nucleosynthesis. The idea is that in the very early universe (the first microsecond) the subnuclear particles that later made up the protons and neutrons existed in a free state as a quark-gluon plasma. As the universe expanded and cooled, this quark-gluon plasma would undergo a phase transition and become confined to protons and neutrons (three quarks each). In laboratory experiments of similar phase transitions—for example, the solidification of a liquid into a solid—involving two or more substances, the final state may contain a very uneven distribution of the constituent substances, a fact exploited by industry to purify certain materials. Some astrophysicists have proposed that a similar partial separation of neutrons and protons may have occurred in the very early universe. Local pockets where protons abounded may have had few neutrons, and vice versa for pockets where neutrons abounded. Nuclear reactions may then have occurred much less efficiently per proton and neutron nucleus than accounted for by standard calculations, and the average density of matter may be correspondingly increased—perhaps even to the point where ordinary matter can close the present-day universe. Unfortunately, calculations carried out under the inhomogeneous hypothesis seem to indicate that conditions leading to the correct proportions of deuterium and helium-4 produce too much primordial lithium-7 to be compatible with measurements of the atmospheric compositions of the oldest stars.

Matter-antimatter asymmetry
      A curious number that appeared in the above discussion was the few parts in 10⁹ asymmetry initially between matter and antimatter (or equivalently, the ratio 10⁻⁹ of protons to photons in the present universe). What is the origin of such a number—so close to zero yet not exactly zero?

      At one time the question posed above would have been considered beyond the ken of physics, because the net “baryon” number (for present purposes, protons and neutrons minus antiprotons and antineutrons) was thought to be a conserved quantity. Therefore, once it exists, it always exists, into the indefinite past and future. Developments in particle physics during the 1970s, however, suggested that the net baryon number may in fact undergo alteration. It is certainly very nearly maintained at the relatively low energies accessible in terrestrial experiments, but it may not be conserved at the almost arbitrarily high energies with which particles may have been endowed in the very early universe.

      An analogy can be made with the chemical elements. In the 19th century most chemists believed the elements to be strictly conserved quantities; although oxygen and hydrogen atoms can be combined to form water molecules, the original oxygen and hydrogen atoms can always be recovered by chemical or physical means. However, in the 20th century with the discovery and elucidation of nuclear forces, chemists came to realize that the elements are conserved if they are subjected only to chemical forces (basically electromagnetic in origin); they can be transmuted by the introduction of nuclear forces, which enter characteristically only when much higher energies per particle are available than in chemical reactions.

      In a similar manner it turns out that at very high energies new forces of nature may enter to transmute the net baryon number. One hint that such a transmutation may be possible lies in the remarkable fact that a proton and an electron seem at first sight to be completely different entities, yet they have, as far as one can tell to very high experimental precision, exactly equal but opposite electric charges. Is this a fantastic coincidence, or does it represent a deep physical connection? A connection would obviously exist if it can be shown, for example, that a proton is capable of decaying into a positron (an antielectron) plus electrically neutral particles. Should this be possible, the proton would necessarily have the same charge as the positron, for charge is exactly conserved in all reactions. In turn, the positron would necessarily have the opposite charge of the electron, as it is its antiparticle. Indeed, in some sense the proton (a baryon) can even be said to be merely the “excited” version of an antielectron (an “antilepton”).

      Motivated by this line of reasoning, experimental physicists searched hard during the 1980s for evidence of proton decay. They found none and set a lower limit of 10³² years for the lifetime of the proton if it is unstable. This value is greater than what theoretical physicists had originally predicted on the basis of early unification schemes for the forces of nature (see below). Later versions can accommodate the data and still allow the proton to be unstable. Despite the inconclusiveness of the proton-decay experiments, some of the apparatuses were eventually put to good astronomical use. They were converted to neutrino detectors and provided valuable information on the solar neutrino problem, as well as giving the first positive recordings of neutrinos from a supernova explosion (namely, SN 1987A).

      With respect to the cosmological problem of the matter-antimatter asymmetry, one theoretical approach is founded on the idea of a grand unified theory (GUT), which seeks to explain the electromagnetic, weak nuclear, and strong nuclear forces as a single grand force of nature. This approach suggests that an initial collection of very heavy particles, with zero baryon and lepton number, may decay into many lighter particles (baryons and leptons) with the desired average for the net baryon number (and net lepton number) of a few parts per 10⁹. This event is supposed to have occurred at a time when the universe was perhaps 10⁻³⁵ second old.

Superunification and the Planck era
      Why should a net baryon fraction initially of zero be more appealing aesthetically than 10⁻⁹? The underlying motivation here is perhaps the most ambitious undertaking ever attempted in the history of science—the attempt to explain the creation of truly everything from literally nothing. In other words, is the creation of the entire universe from a vacuum possible?

      The evidence for such an event lies in another remarkable fact. It can be estimated that the total number of protons in the observable universe is an integer 80 digits long. No one of course knows all 80 digits, but for the argument about to be presented, it suffices only to know that they exist. The total number of electrons in the observable universe is also an integer 80 digits long. In all likelihood these two integers are equal, digit by digit—if not exactly, then very nearly so. This inference comes from the fact that, as far as astronomers can tell, the total electric charge in the universe is zero (otherwise electrostatic forces would overwhelm gravitational forces). Is this another coincidence, or does it represent a deeper connection? The apparent coincidence becomes trivial if the entire universe was created from a vacuum since a vacuum has by definition zero electric charge. It is a truism that one cannot get something for nothing. The interesting question is whether one can get everything for nothing. Clearly, this is a very speculative topic for scientific investigation, and the ultimate answer depends on a sophisticated interpretation of what “nothing” means.

      The words “nothing,” “void,” and “vacuum” usually suggest uninteresting empty space. To modern quantum physicists, however, the vacuum has turned out to be rich with complex and unexpected behaviour. They envisage it as a state of minimum energy where quantum fluctuations, consistent with the uncertainty principle of the German physicist Werner Heisenberg, can lead to the temporary formation of particle-antiparticle pairs. In flat space-time, destruction follows closely upon creation (the pairs are said to be virtual) because there is no source of energy to give the pair permanent existence. All the known forces of nature acting between a particle and antiparticle are attractive and will pull the pair together to annihilate one another. In the expanding space-time of the very early universe, however, particles and antiparticles may separate and become part of the observable world. In other words, sharply curved space-time can give rise to the creation of real pairs with positive mass-energy, a fact first demonstrated in the context of black holes by the English astrophysicist Stephen W. Hawking.

      Yet Einstein's picture of gravitation is that the curvature of space-time itself is a consequence of mass-energy. Now, if curved space-time is needed to give birth to mass-energy and if mass-energy is needed to give birth to curved space-time, which came first, space-time or mass-energy? The suggestion that they both arose from something still more fundamental raises a new question: What is more fundamental than space-time and mass-energy? What can give rise to both mass-energy and space-time? No one knows the answer to this question, and perhaps some would argue that the answer is not to be sought within the boundaries of natural science.

      Hawking and the American cosmologist James B. Hartle have proposed that it may be possible to avert a beginning to time by making it go imaginary (in the sense of the mathematics of complex numbers) instead of letting it suddenly appear or disappear. Beyond a certain point in their scheme, time may acquire the characteristic of another spatial dimension rather than refer to some sort of inner clock. Another proposal states that, when space and time approach small enough values (the Planck values; see below), quantum effects make it meaningless to ascribe any classical notions to their properties. The most promising approach to describe the situation comes from the theory of “superstrings.”

      Superstrings represent one example of a class of attempts, generically classified as superunification theory, to explain the four known forces of nature—gravitational, electromagnetic, weak, and strong—on a single unifying basis. Common to all such schemes are the postulates that quantum principles and special relativity underlie the theoretical framework. Another common feature is supersymmetry, the notion that particles with half-integer values of the spin angular momentum (fermions) can be transformed into particles with integer spins (bosons).

      The distinguishing feature of superstring theory is the postulate that elementary particles are not mere points in space but have linear extension. The characteristic linear dimension is given as a certain combination of the three most fundamental constants of nature: (1) Planck's constant h (named after the German physicist Max Planck, the founder of quantum physics), (2) the speed of light c, and (3) the universal gravitational constant G. The combination, called the Planck length √(Gh/c³), equals roughly 10⁻³³ cm, far smaller than the distances to which elementary particles can be probed in particle accelerators on the Earth.

      The energies needed to smash particles to within a Planck length of each other were available to the universe at a time equal to the Planck length divided by the speed of light. This time, called the Planck time √(Gh/c⁵), equals approximately 10⁻⁴³ second. At the Planck time, the mass density of the universe is thought to approach the Planck density, c⁵/hG², roughly 10⁹³ g/cm³. Contained within a Planck volume is a Planck mass √(hc/G), roughly 10⁻⁵ g. An object of such mass would be a quantum black hole, with an event horizon close to both its own Compton length (distance over which a particle is quantum mechanically “fuzzy”) and the size of the cosmic horizon at the Planck time. Under such extreme conditions, space-time cannot be treated as a classical continuum and must be given a quantum interpretation.
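
      All four Planck scales follow directly from G, h, and c; a short sketch in cgs units with standard values for the constants:

    import math

    G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
    h = 6.626e-27     # Planck's constant, erg s
    c = 2.998e10      # speed of light, cm/s

    print(math.sqrt(G * h / c**3))   # Planck length, ~4e-33 cm
    print(math.sqrt(G * h / c**5))   # Planck time, ~1.4e-43 s
    print(math.sqrt(h * c / G))      # Planck mass, ~5e-5 g
    print(c**5 / (h * G**2))         # Planck density, ~1e93 g/cm^3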

      The latter is the goal of the superstring theory, which has as one of its features the curious notion that the four space-time dimensions (three space dimensions plus one time dimension) of the familiar world may be an illusion. Real space-time, in accordance with this picture, has 26 or 10 space-time dimensions, but all of these dimensions except the usual four are somehow compacted or curled up to a size comparable to the Planck scale. Thus has the existence of these other dimensions escaped detection. It is presumably only during the Planck era, when the usual four space-time dimensions acquire their natural Planck scales, that the existence of what is more fundamental than the usual ideas of mass-energy and space-time becomes fully revealed. Unfortunately, attempts to deduce anything more quantitative or physically illuminating from the theory have bogged down in the intractable mathematics of this difficult subject. At the present time superstring theory remains more of an enigma than a solution.

      One of the more enduring contributions of particle physics to cosmology is the prediction of inflation by the American physicist Alan Guth and others. The basic idea is that at high energies matter is better described by fields than by classical means. The contribution of a field to the energy density (and therefore the mass density) and the pressure of the vacuum state need not have been zero in the past, even if it is today. During the time of superunification (Planck era, 10⁻⁴³ second) or grand unification (GUT era, 10⁻³⁵ second), the lowest-energy state for this field may have corresponded to a “false vacuum,” with a combination of mass density and negative pressure that results gravitationally in a large repulsive force. In the context of Einstein's theory of general relativity, the false vacuum may be thought of alternatively as contributing a cosmological constant about 10¹⁰⁰ times larger than it can possibly be today. The corresponding repulsive force causes the universe to inflate exponentially, doubling its size roughly once every 10⁻⁴³ or 10⁻³⁵ second. After at least 85 doublings, the temperature, which started out at 10³² or 10²⁸ K, would have dropped to very low values near absolute zero. At low temperatures the true vacuum state may have lower energy than the false vacuum state, in an analogous fashion to how solid ice has lower energy than liquid water. The supercooling of the universe may therefore have induced a rapid phase transition from the false vacuum state to the true vacuum state, in which the cosmological constant is essentially zero. The transition would have released the energy differential (akin to the “latent heat” released by water when it freezes), which reheats the universe to high temperatures. From this temperature bath and the gravitational energy of expansion would then have emerged the particles and antiparticles of noninflationary big bang cosmologies.
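
      The doubling arithmetic is straightforward: each doubling in size halves the temperature. A sketch starting from the GUT-era value of 10²⁸ K; the 120-doubling case goes beyond the text's minimum of 85 purely for illustration:

    T_start = 1.0e28                  # K, GUT-era starting temperature
    for doublings in (85, 120):
        factor = 2.0 ** doublings     # linear expansion factor
        print(doublings, factor, T_start / factor)
    # 85 doublings stretch the universe by ~4e25; by 120 doublings the
    # temperature has fallen to ~1e-8 K, close to absolute zero.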

      Cosmic inflation serves a number of useful purposes. First, the drastic stretching during inflation flattens any initial space curvature, and so the universe after inflation will look exceedingly like an Einstein–de Sitter universe. Second, inflation so dilutes the concentration of any magnetic monopoles appearing as “topological knots” during the GUT era that their cosmological density will drop to negligibly small and acceptable values. Third, inflation provides a mechanism for understanding the overall isotropy of the microwave background because the matter and radiation of the entire observable universe were in good thermal contact (within the cosmic event horizon) before inflation and therefore acquired the same thermodynamic characteristics. Rapid inflation carried different portions outside their individual event horizons. When inflation ended and the universe reheated and resumed normal expansion, these different portions, through the natural passage of time, reappeared on our horizon. And through the observed isotropy of the cosmic microwave background, they are inferred still to have the same temperatures. Finally, slight anisotropies in the cosmic microwave background occurred because of quantum fluctuations in the mass density. The amplitudes of these small (adiabatic) fluctuations remained independent of comoving scale during the period of inflation. Afterward they grew gravitationally by a constant factor until the recombination era. Cosmic microwave photons seen from the last scattering surface should therefore exhibit a scale-invariant spectrum of fluctuations, which is exactly what the COBE investigators claim they observed.

      As influential as inflation has been in guiding modern cosmological thought, it has not resolved all internal difficulties. The most serious concerns the problem of a “graceful exit.” Unless the effective potential describing the effects of the inflationary field during the GUT era corresponds to an extremely gently rounded hill (from whose top the universe rolls slowly in the transition from the false vacuum to the true vacuum), the exit to normal expansion will generate so much turbulence and inhomogeneity (via violent collisions of “domain walls” that separate bubbles of true vacuum from regions of false vacuum) as to make inexplicable the small observed amplitudes for the anisotropy of the cosmic microwave background radiation. Arranging a tiny enough slope for the effective potential requires a degree of fine-tuning that most cosmologists find philosophically objectionable.

Steady state theory and other alternative cosmologies
      Big bang cosmology, augmented by the ideas of inflation, remains the theory of choice among nearly all astronomers, but, apart from the difficulties discussed above, no consensus has been reached concerning the origin in the cosmic gas of fluctuations thought to produce the observed galaxies, clusters, and superclusters. Most astronomers would interpret these shortcomings as indications of the incompleteness of the development of the theory, but it is conceivable that major modifications are needed.

      An early problem encountered by big bang theorists was an apparent large discrepancy between the Hubble time and other indicators of cosmic age. This discrepancy was resolved by revision of Hubble's original estimate for H0, which was about an order of magnitude too large owing to confusion between Population I and II variable stars and between H II regions and bright stars. However, the apparent difficulty motivated Bondi, Hoyle, and Gold to offer the alternative theory of steady state cosmology in 1948.

      By that year, of course, the universe was known to be expanding; therefore, the only way to explain a constant (steady state) matter density was to postulate the continuous creation of matter to offset the attenuation caused by the cosmic expansion. This aspect was physically very unappealing to many people, who consciously or unconsciously preferred to have all creation completed in virtually one instant in the big bang. In the steady state theory the average age of matter in the universe is one-third the Hubble time, but any given galaxy could be older or younger than this mean value. Thus, the steady state theory had the virtue of making very specific predictions, and for this reason it was vulnerable to observational disproof.

      The first blow was delivered by Ryle's counts of extragalactic radio sources during the 1950s and '60s. These counts involved the same methods discussed above for the star counts by Kapteyn and the galaxy counts by Hubble except that radio telescopes were used. Ryle found more radio galaxies at large distances from the Earth than can be explained under the assumption of a uniform spatial distribution no matter which cosmological model was assumed, including that of steady state. This seemed to imply that radio galaxies must evolve over time in the sense that there were more powerful sources in the past (and therefore observable at large distances) than there are at present. Such a situation contradicts a basic tenet of the steady state theory, which holds that all large-scale properties of the universe, including the population of any subclass of objects like radio galaxies, must be constant in time.

      The second blow came in 1965 with the discovery of the cosmic microwave background radiation: a hot big bang naturally accounts for such relic radiation, whereas the steady state theory offered no comparably simple explanation for it. Though it has few adherents today, the steady state theory is credited as having been a useful idea for the development of modern cosmological thought as it stimulated much work in the field.

      At various times, other alternative theories have also been offered as challenges to the prevailing view of the origin of the universe in a hot big bang: the cold big bang theory (to account for galaxy formation), symmetric matter-antimatter cosmology (to avoid an asymmetry between matter and antimatter), variable G cosmology (to explain why the gravitational constant is so small), tired-light cosmology (to explain redshift), and the notion of shrinking atoms in a nonexpanding universe (to avoid the singularity of the big bang). The motivation behind these suggestions is, as indicated in the parenthetical comments, to remedy some perceived problem in the standard picture. Yet, in most cases, the cure offered is worse than the disease, and none of the mentioned alternatives has gained much of a following. The hot big bang theory has ascended to primacy because, unlike its many rivals, it attempts to address not isolated individual facts but a whole panoply of cosmological issues. And, although some sought-after results remain elusive, no glaring weakness has yet been uncovered.

Summary
      The history of human thought on the nature of the Cosmos offers a number of remarkable lessons, the most striking of which is that the architecture of the universe is open to reason. The plan is intricate and subtle, and each glimpse of another layer has led philosophers and scientists to a deeper mental image of the physical world. These images have surprising clarity and coherence—from the view of the Cosmos as geometry by the Greeks to the mechanistic clockwork of the Newtonian universe to the quirky subatomic “dance” of quantum particles and fields to a geometric worldview with a relativistic and quantum twist. Each generation has had members who thought that they had found the path that would penetrate to the centre of innermost truth. The present generation is no different, but is there any real reason to believe that the process has stopped with its conclusions?

      Yet, incomplete though it may be, the scope of modern scientific understanding of the Cosmos is truly dazzling. It envisages that four fundamental forces, along with matter-energy and space itself, emerged in a big bang. Forged in the heat of the primeval fireball were the two simplest elements, hydrogen and helium. As the fireball expanded and cooled, the dominance of gravity over matter led to the birth of galaxies and stars. As the stars evolved, hydrogen and helium were molded into the heavy elements, which were subsequently spewed into interstellar space by titanic explosions that occurred with the death of massive stars. The enriched debris mixed with the gas of interstellar clouds, which collected into cool dense pockets and formed new generations of stars. At the outskirts of a spiral galaxy, the gravitational collapse of a rotating molecular cloud core resulted in the formation of the Sun, surrounded by a spinning disk of gas and dust. The dust, composed of the heavy elements produced inside stars, accumulated to form planetary cores of rock and ice. One such planet was fortunate enough to have water in all three phases; and carbon chemistry in the liquid oceans of that planet gave rise to living organisms that evolved and eventually conquered the land. The most intelligent of these land animals looked up at the sky and saw the planets and the stars, and in wonderment pondered the underlying plan of the Cosmos.

Additional Reading

General works
Review articles on a wide variety of modern astronomy and astrophysics topics written for the scientifically literate are found in Stephen P. Maran (ed.), The Astronomy and Astrophysics Encyclopedia (1992). Topical surveys of more limited scope are available in the Harvard Books on Astronomy series, especially such titles as Lawrence H. Aller, Atoms, Stars, and Nebulae, 3rd ed. (1991); Bart J. Bok and Priscilla F. Bok, The Milky Way, 5th ed. (1981); and Wallace Tucker and Riccardo Giacconi, The X-Ray Universe (1985). There are many introductory astronomy textbooks available that suppose little mathematical sophistication on the part of the reader; one of the most comprehensive is George O. Abell, David Morrison, and Sidney C. Wolff, Exploration of the Universe, 6th ed. (1991). An introduction that begins with the big bang and works forward in time is Donald Goldsmith, The Evolving Universe, 2nd ed. (1985). At a somewhat more advanced level is Frank H. Shu, The Physical Universe: An Introduction to Astronomy (1982).

History of astronomy
The standard reference is A. Pannekoek, A History of Astronomy (1961, reissued 1989; originally published in Dutch, 1951). Excellent accounts of early ideas can be found in J.L.E. Dreyer, A History of Astronomy from Thales to Kepler, 2nd ed. (1953); and Giorgio De Santillana, The Origins of Scientific Thought (1961, reissued 1970). A historical account of our understanding of galaxies and the extragalactic universe is Timothy Ferris, Coming of Age in the Milky Way (1988). William Sheehan, Worlds in the Sky (1992), summarizes our current understanding of the solar system.

Planets
Useful summaries are found in Bruce Murray (ed.), The Planets (1983), a collection of Scientific American articles. Also recommended is J. Kelly Beatty and Andrew Chaikin (eds.), The New Solar System, 3rd ed. (1990). The relationship of the origin of the solar system to theories of star formation is discussed at a technical level in David C. Black and Mildred Shapley Matthews (eds.), Protostars and Planets II (1985).

Stars and other cosmic components
A very readable work on stellar evolution is Robert Jastrow, Red Giants and White Dwarfs, new ed. (1990). Martin Cohen, In Darkness Born: The Story of Star Formation (1988), summarizes the processes of star formation. A classic text is Martin Schwarzschild, Structure and Evolution of the Stars (1958, reissued 1965). Stellar nucleosynthesis is the emphasis of Donald D. Clayton, Principles of Stellar Evolution and Nucleosynthesis (1968, reprinted 1983). Stan Woosley and Tom Weaver, “The Great Supernova of 1987,” Scientific American, 261(2):32–40 (August 1989), is a popular review. The properties of gravitationally compact stellar remnants are discussed by Stuart L. Shapiro and Saul A. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars (1983). Harry L. Shipman, Black Holes, Quasars, and the Universe, 2nd ed. (1980), is a more elementary treatment. Michael W. Friedlander, Cosmic Rays (1989), is an introduction.

Galaxies
Beautiful photographs of galaxies together with nontechnical commentary are contained in Timothy Ferris, Galaxies (1980). Equally enjoyable for the amateur and professional alike are Allan Sandage, The Hubble Atlas of Galaxies (1961); Halton Arp, Atlas of Peculiar Galaxies (1966, reprinted 1978); and Allan Sandage and G.A. Tammann, A Revised Shapley-Ames Catalog of Bright Galaxies, 2nd ed. (1987). An observational account of current ideas on the formation of our own galaxy is found in Sidney van den Bergh and James E. Hesser, “How the Milky Way Formed,” Scientific American, 268(1):72–78 (January 1993). Extragalactic astronomy is discussed at a level appropriate for professionals in Allan Sandage, Mary Sandage, and Jerome Kristian (eds.), Galaxies and the Universe (1975, reprinted 1982); S.M. Fall and D. Lynden-Bell (eds.), The Structure and Evolution of Normal Galaxies (1981); and C. Hazard and Simon Mitton (eds.), Active Galactic Nuclei (1979). The problems of galaxy formation and galaxy clustering are described by Joseph Silk, The Big Bang, rev. and updated ed. (1989); and by P.J.E. Peebles, The Large-Scale Structure of the Universe (1980).

Cosmology
Several excellent semipopular accounts are available: Timothy Ferris, The Red Limit: The Search for the Edge of the Universe, 2nd rev. ed. (1983); Steven Weinberg, The First Three Minutes: A Modern View of the Origin of the Universe, updated ed. (1988); Nigel Calder, Einstein's Universe (1979, reissued 1982); Edward R. Harrison, Cosmology, the Science of the Universe (1981); Robert V. Wagoner and Donald W. Goldsmith, Cosmic Horizons (1982); and John Barrow and Joseph Silk, The Left Hand of Creation: The Origin and Evolution of the Expanding Universe (1983). Michael Rowan-Robinson, The Cosmological Distance Ladder (1985), provides a detailed discussion of how astronomers measure distances to galaxies and quasars. Stephen W. Hawking, A Brief History of Time (1988), is a discussion by a modern scientific icon on gravitation theory, black holes, and cosmology. Standard textbooks on general relativity and cosmology include P.J.E. Peebles, Physical Cosmology (1971); Steven Weinberg, Gravitation and Cosmology (1972); and Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler, Gravitation (1973). The interface between particle physics and cosmology is the concern of G.W. Gibbons, Stephen W. Hawking, and S.T.C. Siklos (eds.), The Very Early Universe (1983). One of the best semipopular introductions to the modern attempts to unify the fundamental forces is P.C.W. Davies, The Forces of Nature, 2nd ed. (1986).

Frank H. Shu

▪ plant genus
      genus of garden plants of the family Asteraceae, containing about 25 species native to tropical America. They have leaves opposite each other on the stem and heads of flowers that are borne on long flower stalks or together in an open cluster.

      The disk flowers are red or yellow; the ray flowers, sometimes notched, may be white, pink, red, purple, or other colours. The common garden cosmos, from which most annual ornamental varieties have been developed, is C. bipinnatus.
